Over the last month, everyone and their mother has been talking about context graphs. The term exploded after a Foundation Capital post about AI's "trillion-dollar opportunity," with responses from Glean's CEO and debates across LinkedIn and Twitter about whether this is genuinely new or just rebranded knowledge graphs.
We think the concept is real, and more importantly, it applies directly to code reviews.
Your AI tools don’t have your team’s context
Code decisions draw from knowledge that rarely gets written down. A senior developer knows that the team prefers composition over inheritance after the Q3 incident. Another remembers that configuration files get exceptions to the type safety rules. Someone else knows which performance optimizations are worth the readability tradeoff.
This knowledge lives in people's heads, in old PR comments, and in Slack threads about architecture decisions. New team members spend months absorbing these unwritten rules. When someone leaves, some of this context leaves with them. And AI tools? They never learn it at all.
The traditional solution is documentation: style guides, architecture decision records, and wiki pages. But documentation requires constant maintenance and falls out of date anyway. Most importantly, it captures the "what" but struggles with the "why" and the "when does this apply?"
Context graphs: teaching AI all about your org
The enterprise AI world has been grappling with a similar problem. When AI agents try to automate workflows like deal approvals or incident response, they run into decisions that depend on organizational context. Not just "what's the policy?" but "how did we handle a similar case last quarter?" and "why did we make an exception then?"
The emerging solution is called a context graph. Instead of just storing rules, you capture decision traces: records of what happened, why it was approved, who decided, and what context informed the choice. Over time, these traces connect into a graph that makes precedent searchable.
Code review fits this pattern perfectly. When a developer corrects an AI suggestion, that's a decision trace waiting to be captured. "This is fine because of performance constraints" contains multiple pieces of context: the pattern, the exception, the reasoning, and the conditions under which it applies.
Building the graph
Here's how it works in practice. The AI flags an issue during review. You provide feedback; maybe you confirm the issue, or maybe you explain why it's actually fine. We capture that interaction as a structured trace (sketched in code after the list):
- What pattern was involved
- Your correction or confirmation
- The reasoning you provided
- Links to relevant files, past PRs, or related discussions
- Who made the call
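To make that concrete, here's a minimal sketch of what a trace record could look like. The `DecisionTrace` name and its fields are illustrative, not our actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionTrace:
    """One captured review decision. Hypothetical schema for illustration."""
    pattern: str          # what pattern was involved
    verdict: str          # "confirmed" or "overridden"
    reasoning: str        # the explanation the reviewer gave
    conditions: str       # when the exception applies
    links: list[str] = field(default_factory=list)  # related PRs, files, threads
    decided_by: str = ""  # who made the call

trace = DecisionTrace(
    pattern="hand-rolled loop instead of a list comprehension",
    verdict="overridden",
    reasoning="profiling showed the loop avoids an extra allocation",
    conditions="hot paths in request handlers only",
    links=["PR #2891"],
    decided_by="senior-dev",
)
```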
As these traces accumulate, the system learns that your team allows certain practices in specific contexts. That you make exceptions for legacy code. That performance concerns override style preferences in hot paths. That the mobile team has different conventions from the backend team.
The next time similar code appears, the AI agent checks: have we seen this before? What did the team decide? Under what conditions? It can surface relevant precedent: "A similar pattern was approved in PR #2891 for the same performance reasons."
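Here's a rough sketch of that lookup step, reusing the `DecisionTrace` record above. The token-overlap similarity is a stand-in; a real system would use embeddings or AST-level matching:

```python
def similarity(a: str, b: str) -> float:
    # Crude token-overlap score; a placeholder for real pattern matching.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def find_precedent(new_pattern: str, traces: list[DecisionTrace],
                   threshold: float = 0.3) -> list[DecisionTrace]:
    # Rank stored decisions by how closely they resemble the code under review.
    scored = sorted(((similarity(new_pattern, t.pattern), t) for t in traces),
                    key=lambda pair: pair[0], reverse=True)
    return [t for score, t in scored if score >= threshold]

for t in find_precedent("hand-rolled loop in a request handler", [trace]):
    print(f"Precedent: {t.verdict}: {t.reasoning} (see {', '.join(t.links)})")
```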
How context graphs will transform AI agents
We're still early, but the direction is clear. Code review knowledge becomes a durable asset rather than tribal lore. New developers can onboard faster with AI that already knows your conventions, and senior developers' judgment calls are preserved as searchable precedent even after they move on.
But this extends beyond code review. AI coding assistants can understand your actual constraints, knowing your team uses in-memory caching for internal APIs but Redis for customer-facing ones, and why. New developers can ask, "Why is this structured this way?" and get a precedent with reasoning instead of stale documentation. When debugging production, engineers can query why implementations exist and whether similar issues occurred before.
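Building on the same sketch, a "why is it like this?" query could simply assemble an answer from matching traces. The `explain` helper here is hypothetical, not a shipped API:

```python
def explain(question_pattern: str, traces: list[DecisionTrace]) -> str:
    # Turn stored precedent into an answer with the reasoning attached.
    hits = find_precedent(question_pattern, traces)
    if not hits:
        return "No recorded precedent for this pattern."
    top = hits[0]
    return (f"{top.decided_by} decided: {top.reasoning} "
            f"(applies to {top.conditions}; see {', '.join(top.links)})")

print(explain("hand-rolled loop in a request handler", [trace]))
```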
The context graph becomes a living knowledge base for any AI agent working with your code.
What we're building
The early foundation is there: we're capturing feedback, building connections between patterns, and surfacing relevant precedent. But we're just scratching the surface.
We're figuring out how to capture richer context without burdening developers, how to visualize what the AI has learned so that teams can audit it, and how to balance consistency with the flexibility that good judgment requires.
The vision is to build an AI that truly knows your codebase, grounded in how your team thinks, the tradeoffs you’ve made, and the lessons you’ve learned over time. Context graphs are how we get there.
If you're interested in where this is heading, we'd love to hear from you. The future of code reviews is context-aware, and we're figuring out what that means together.