LLM call logging, plan persistence API, quote-to-feedback UX, requirement input improvements

- Add llm_call_log table and per-call timing/token tracking in agent loop (sketched after this list)
- New GET /workflows/{id}/plan endpoint to restore plan from snapshots on page load
- New GET /workflows/{id}/llm-calls endpoint + WS LlmCallLog broadcast (handler pattern sketched below)
- Parse Usage from LLM API response (prompt_tokens, completion_tokens)
- Detailed mode toggle in execution log showing LLM call cards with phase/tokens/latency
- Quote-to-feedback: hover quote buttons on plan steps and log entries, multi-quote chips in comment input
- Requirement input: larger textarea, multi-line display with pre-wrap and scroll
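
A minimal sketch of the per-call tracking from the first bullet. The `LlmCallRecord` shape, column names, and the sqlx/Postgres wiring are illustrative assumptions (as are `LlmClient`/`ChatRequest`, which stand in for the real client types), not the exact code in this commit:

```rust
use std::time::Instant;
use sqlx::PgPool;

// Assumed shape of one llm_call_log row; the real column set may differ.
pub struct LlmCallRecord {
    pub workflow_id: i64,
    pub phase: String, // e.g. "plan", "execute"
    pub prompt_tokens: i64,
    pub completion_tokens: i64,
    pub latency_ms: i64,
}

pub async fn log_llm_call(pool: &PgPool, rec: &LlmCallRecord) -> Result<(), sqlx::Error> {
    sqlx::query(
        "INSERT INTO llm_call_log
           (workflow_id, phase, prompt_tokens, completion_tokens, latency_ms)
         VALUES ($1, $2, $3, $4, $5)",
    )
    .bind(rec.workflow_id)
    .bind(&rec.phase)
    .bind(rec.prompt_tokens)
    .bind(rec.completion_tokens)
    .bind(rec.latency_ms)
    .execute(pool)
    .await?;
    Ok(())
}

// In the agent loop: time the request and read token counts from the parsed Usage.
async fn call_and_log(
    pool: &PgPool,
    client: &LlmClient,    // hypothetical client type
    workflow_id: i64,
    request: &ChatRequest, // hypothetical request type
) -> anyhow::Result<ChatResponse> {
    let started = Instant::now();
    let response = client.chat(request).await?;
    let latency_ms = started.elapsed().as_millis() as i64;

    // Providers may omit usage entirely; fall back to zero counts.
    let (prompt_tokens, completion_tokens) = response
        .usage
        .as_ref()
        .map(|u| (u.prompt_tokens as i64, u.completion_tokens as i64))
        .unwrap_or((0, 0));

    log_llm_call(pool, &LlmCallRecord {
        workflow_id,
        phase: "execute".to_string(),
        prompt_tokens,
        completion_tokens,
        latency_ms,
    })
    .await?;

    Ok(response)
}
```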
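
The plan and llm-calls read endpoints follow the same pattern: look up rows for the workflow and return JSON. A hedged sketch of the llm-calls handler, assuming axum + sqlx; the row shape and route wiring are illustrative, not the actual handler:

```rust
use axum::{extract::{Path, State}, http::StatusCode, Json};
use sqlx::PgPool;

#[derive(serde::Serialize, sqlx::FromRow)]
pub struct LlmCallRow {
    pub id: i64,
    pub phase: String,
    pub prompt_tokens: i64,
    pub completion_tokens: i64,
    pub latency_ms: i64,
}

// Handler for GET /workflows/{id}/llm-calls: return the logged calls for one
// workflow, oldest first, so the UI can render them in execution order.
pub async fn list_llm_calls(
    State(pool): State<PgPool>,
    Path(workflow_id): Path<i64>,
) -> Result<Json<Vec<LlmCallRow>>, StatusCode> {
    sqlx::query_as::<_, LlmCallRow>(
        "SELECT id, phase, prompt_tokens, completion_tokens, latency_ms
           FROM llm_call_log
          WHERE workflow_id = $1
          ORDER BY id",
    )
    .bind(workflow_id)
    .fetch_all(&pool)
    .await
    .map(Json)
    .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)
}
```

GET /workflows/{id}/plan would differ only in reading the latest plan snapshot instead of the call log.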

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-02 09:16:51 +00:00
parent 46424cfbc4
commit 0a8eee0285
14 changed files with 601 additions and 26 deletions


@@ -67,9 +67,20 @@ pub struct ToolCallFunction {
     pub arguments: String,
 }
 
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct Usage {
+    #[serde(default)]
+    pub prompt_tokens: u32,
+    #[serde(default)]
+    pub completion_tokens: u32,
+    #[serde(default)]
+    pub total_tokens: u32,
+}
+
 #[derive(Debug, Deserialize)]
 pub struct ChatResponse {
     pub choices: Vec<ChatChoice>,
+    pub usage: Option<Usage>,
 }
 
 #[derive(Debug, Deserialize)]
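
The `usage` field stays optional because some providers and response modes omit it, and each counter falls back to 0 via `#[serde(default)]`. A small sketch of how downstream code can consume it, assuming the types above:

```rust
// Read token counts from a parsed response; a missing usage block yields zeros.
fn token_counts(resp: &ChatResponse) -> (u32, u32) {
    resp.usage
        .as_ref()
        .map(|u| (u.prompt_tokens, u.completion_tokens))
        .unwrap_or((0, 0))
}
```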