Conversation
…gest under debug

- `analyze` now runs the full ingest pipeline (Drain + semantic labeling + DuckDB storage) before launching the AI agent
- Move `ingest` command under `debug ingest` for step-by-step debugging
- Extract shared pipeline helpers into `cmd/lapp/pipeline.go`
- Remove top-level `templates` command
- Add workspace path constraint to analyzer system prompt to prevent the agent from scanning files outside the workspace directory
- Add Langfuse tracing support with docker-compose for local dev
- Update CLAUDE.md with new CLI structure and code style notes
@codex review please
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 5bd60cfeb6
```diff
  }

- result, err := analyzer.Analyze(cmd.Context(), config, mergedLines, question)
+ result, err := analyzer.Analyze(ctx, config, mergedLines, question)
```
Reuse one Drain parse across ingest and analysis
runAnalyze stores pattern_id values from the first DrainParser, then immediately calls analyzer.Analyze, which reparses the same lines with a new parser that generates fresh UUIDs for clusters. That means template IDs shown in the analyzer workspace/answer can differ from the IDs just written to patterns, so users cannot reliably correlate analysis output with DB records from the same run.
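One way to address this is to run Drain once and thread the resulting template IDs through both the DB write and the analyzer, instead of letting `analyzer.Analyze` construct a second parser. The sketch below uses hypothetical types and function names (`Template`, `parseOnce`, `storePatterns`) — the real `lapp` signatures differ — but shows the invariant: both consumers read IDs from the same parse.

```go
package main

import "fmt"

// Template pairs a stable pattern ID with its cluster text.
// Hypothetical stand-in for the real Drain parser output.
type Template struct {
	ID      string
	Pattern string
}

// parseOnce simulates a single Drain pass that assigns each
// cluster its ID exactly once for the whole run.
func parseOnce(lines []string) []Template {
	out := make([]Template, 0, len(lines))
	for i, l := range lines {
		out = append(out, Template{ID: fmt.Sprintf("tmpl-%d", i), Pattern: l})
	}
	return out
}

// storePatterns and analyze both consume the same slice, so the
// IDs written to the patterns table match the IDs the agent sees.
func storePatterns(ts []Template) []string {
	ids := make([]string, len(ts))
	for i, t := range ts {
		ids[i] = t.ID
	}
	return ids
}

func analyze(ts []Template) []string {
	ids := make([]string, len(ts))
	for i, t := range ts {
		ids[i] = t.ID
	}
	return ids
}

func main() {
	ts := parseOnce([]string{"error A", "error B"})
	// Same parse feeds both paths, so the IDs agree.
	fmt.Println(storePatterns(ts)[0] == analyze(ts)[0])
}
```

In the real code this would mean changing `Analyze` to accept the already-built parser (or its templates) as a parameter rather than reparsing `mergedLines` internally.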
```go
	return errors.Errorf("drain parser: %w", err)
}

s, err := store.NewDuckDBStore(dbPath)
```
Isolate analyze writes from the shared DuckDB file
This command now always ingests into the persistent dbPath before analysis, but there is no per-run/source isolation in this flow. Running lapp analyze repeatedly (especially on different log files) will append and mix datasets in the same DB, which skews downstream counts/queries and makes results from prior analyses indistinguishable from the current run.
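A common fix is to tag every ingested row with a per-invocation run ID and scope all downstream queries to it. The sketch below is illustrative only — `Row`, `ingest`, and `queryRun` are hypothetical names, and an in-memory slice stands in for the DuckDB table — but it shows the isolation property: repeated runs append to the same store without contaminating each other's results.

```go
package main

import "fmt"

// Row tags each ingested line with the run that produced it,
// mimicking a run_id column on the shared DuckDB table.
// Hypothetical schema, not the real lapp store.
type Row struct {
	RunID string
	Line  string
}

// ingest appends lines under the given run ID; prior runs'
// rows remain in the store untouched.
func ingest(db []Row, runID string, lines []string) []Row {
	for _, l := range lines {
		db = append(db, Row{RunID: runID, Line: l})
	}
	return db
}

// queryRun filters to one run, mimicking a
// `WHERE run_id = ?` predicate on downstream queries.
func queryRun(db []Row, runID string) []string {
	var out []string
	for _, r := range db {
		if r.RunID == runID {
			out = append(out, r.Line)
		}
	}
	return out
}

func main() {
	var db []Row
	db = ingest(db, "run-1", []string{"old log"})
	db = ingest(db, "run-2", []string{"new log"})
	// Only the current run's rows feed the analysis.
	fmt.Println(len(queryRun(db, "run-2")))
}
```

The same idea translates to DuckDB as a `run_id` column (populated from a UUID or timestamp generated at the start of `runAnalyze`) plus a `WHERE run_id = ?` filter in every query the agent issues.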