Understanding agent-written code

Review agent-written code with the context intact.

Agents are shipping more of the code in your repo — and the PR alone no longer explains how it got there. LaserOwl captures the working session behind a change: what was asked, how the agent worked, what it tried, what it rejected, and what shipped. So your team can understand the code, not just the diff.

What’s inside: what was asked · how the agent worked · what it rejected · what it shipped
LaserOwl/sessions · LIVE · sx-9f2a

Add rate limiting to billing webhook

northwind/payments-api · sx/rate-limit-billing · 14:22:01 · 3m 14s

Intent · @abigail

Protect /webhooks/stripe from retry storms during incident conditions. Target: token bucket at 50 req/s per source IP, with backoff signaling.

Timeline

@claude-code · agent · 14:22:03
evaluate_plan · confidence 0.71 · noted missing: tests/webhooks/rate_limit_spec.ts

Live session preview
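The intent in the preview names a concrete mechanism: a token bucket at 50 req/s per source IP. A minimal sketch of that shape, purely illustrative — the names, defaults, and per-IP map below are assumptions for this page, not LaserOwl's or the agent's actual implementation:

```typescript
// Per-IP token bucket sketch: capacity 50, refilled at 50 tokens/s.
// Illustrative only — the session intent names the mechanism, not this code.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private readonly capacity = 50,   // burst ceiling
    private readonly ratePerSec = 50, // steady-state refill rate
    now = Date.now(),                 // injectable clock for testing
  ) {
    this.tokens = capacity;
    this.last = now;
  }

  /** True if the request may proceed; false means reject and signal backoff. */
  take(now = Date.now()): boolean {
    const elapsed = Math.max(0, (now - this.last) / 1000);
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.ratePerSec);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// One bucket per source IP, as the intent specifies.
const buckets = new Map<string, TokenBucket>();
function allow(ip: string): boolean {
  let bucket = buckets.get(ip);
  if (!bucket) {
    bucket = new TokenBucket();
    buckets.set(ip, bucket);
  }
  return bucket.take();
}
```

Keeping the ceiling above the observed p95 (as the agent argues later on this page) is the usual trade-off: the capacity absorbs bursts while the refill rate enforces the steady-state limit.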

The problem

The PR shows the result. Not the session that produced it.

When a human wrote the code, the PR and the author were the record. When an agent writes the code, most of the reasoning happens in a session that nobody keeps.

What reviewers see

The PR shows the result.

A diff. A commit message. Maybe a short description. Nothing about how the code came to exist.

What gets dropped

Not the session that produced it.

The intent, tool calls, attempts, and rejected paths lived in an agent session, and that session closed when the PR opened.

What it costs

Reviewers inherit code they can’t interrogate.

“Why this approach?” “What else did it try?” The answers aren’t in the diff, and the author didn’t make every call.

What a session captures

The context behind a change.

Six pieces, captured as the agent works. Together they turn the session into something a reviewer, debugger, or auditor can actually read.

Intent

What the author asked the agent to do.

Tool activity

Every tool call, file read, and edit.

Attempts & rejections

What the agent tried, and what it backed out of.

Commit linkage

Every commit the session produced — linked to the PR.

Discussion

Anchored comments from humans and agents.

Outcome

How the session closed — merged, reverted, abandoned.
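The six pieces above imply a simple machine-readable shape. A hypothetical sketch of such a record — every field name here is illustrative, not LaserOwl's actual schema:

```typescript
// Hypothetical session record covering the six captured pieces.
// Field names are assumptions for illustration; the real schema may differ.
type SessionOutcome = "merged" | "reverted" | "abandoned";

interface ToolCall {
  tool: string;        // e.g. "read_file", "edit", "run_tests"
  at: string;          // ISO-8601 timestamp
  summary: string;
}

interface Attempt {
  description: string;
  rejected: boolean;   // true if the agent backed out of this path
  reason?: string;
}

interface Comment {
  author: string;      // human or agent handle
  at: string;
  anchor?: string;     // file/line or commit the comment is pinned to
  body: string;
}

interface SessionRecord {
  id: string;               // e.g. "sx-9f2a"
  intent: string;           // what the author asked the agent to do
  toolActivity: ToolCall[]; // every tool call, file read, and edit
  attempts: Attempt[];      // tried paths, including rejected ones
  commits: string[];        // commit SHAs the session produced
  discussion: Comment[];    // anchored comments from humans and agents
  outcome: SessionOutcome;  // how the session closed
}
```

A record in this shape is what makes the later claims on this page plausible: reviewers can read it, auditors can query it, and downstream agents can parse it.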

Why teams care

Review. Debug. Audit.

The same session record solves three problems teams already have with agent-written code. Each one is a workflow that pays rent today — not a promise about the future.

01 · Review

Review with context

Open a PR and see the session that produced it — intent, tools, attempts, rejected paths. Reviewers stop guessing why the code looks the way it does.

02 · Debug

Debug with provenance

When an agent-written change breaks later, the trail back to the intent is still there. Root cause in minutes of reading, not an afternoon of archaeology.

03 · Audit

Keep an audit trail

A durable, queryable record of how each change was produced. Enough to answer an internal review board — or a customer asking how an AI-assisted change came to exist.

Shared context for future agents

Downstream

Sessions are machine-readable, so the next agent can read what prior agents tried, rejected, and shipped. Secondary for now — a natural follow-on once the record exists.

An example

A review conversation, with the context intact.

A reviewer questions a rate-limit value on an agent-authored PR. Because the session is linked to the commit, the author, reviewer, and the agent itself can all speak to the same record.

Discussion · 3 replies

Maya · reviewer · 14:14
Token bucket for /v1/context · commit cfa9e2

Are we sure 50 rps is the right ceiling? Our analytics showed 38 rps p95 last week, but there are bursts.

Claude Code · agent · 14:15

The intent said “cap around 50 rps”, so I kept headroom above p95. I tried 40 first and backed out — not enough slack for the burst window. Happy to tighten to 45 if you want a follow-up session.

Abigail · author · 14:16

Good — that’s the reasoning I needed to see. Leaving it at 50 and watching for a week. Merging.

How it fits

Alongside GitHub, not in place of it.

LaserOwl sits quietly next to the tools your team already uses. It captures the session behind each change and links it back to the commits and PRs in GitHub.

Your agent workflow stays the same.

Developers keep using the IDE, CLI, and agents they already work with. LaserOwl listens in — it doesn’t get in the way.

PRs stay in GitHub.

Code review still happens where it happens. Sessions link back to the PR, so reviewers can open the context without switching tools.

LaserOwl adds the missing layer.

Every commit links to the session that produced it. Every session links to the commits it shipped. Two-way, durable, searchable.

Session ⇄ commit · two-way link

LaserOwl session
sx-9f2a · Rate-limit /v1/context · authoring agent · claude-code · outcome · merged
  • intent · tools · rejections · commits · discussion
  • the working record behind the change

GitHub pull request
PR #1208 · commit cfa9e2 · merged by abigail
  • diff · CI · review · merge commit
  • source of truth for code — unchanged by LaserOwl

Working with a few teams

If this problem is live for you, we’d like to talk.

We’re speaking with a small number of engineering teams already dealing with agent-authored code in real workflows, and validating where session-based context actually helps with review, debugging, and traceability.