
Your Agents Are Your Fastest Engineers and Your Most Uninformed Ones

AI · building · Engram · opinion

Here's a scene that's playing out on engineering teams everywhere right now. An agent picks up a ticket, writes clean code, opens a PR. The senior engineer reviewing it sighs. The code works fine. It also uses an API the team deprecated last month, ignores the rate-limiting convention the CTO decided on in Slack, and duplicates a shared utility that lives three directories over. The agent didn't make a mistake. It made a perfectly reasonable decision with the information it had, which was almost none.

This is happening ten times a week on teams that are serious about using AI agents. And most engineering leaders are misdiagnosing the problem. They think their agents need better models, more context window, or smarter prompting. They don't. They need the same thing every new hire needs: institutional knowledge.

The Problem Isn't Intelligence, It's Context

Think about what makes a senior engineer on your team effective. It's not that they're fundamentally smarter than a junior hire. It's that they've sat through the postmortems. They've read the Slack threads. They know which parts of the codebase are load-bearing and which are legacy debt that nobody wants to touch. They know that the billing service has a race condition everyone works around, that the team tried microservices for the notification system two years ago and it was a disaster, that the VP of Engineering cares deeply about API consistency.

None of that is documented anywhere. It lives in people's heads, in Slack history, in the collective scar tissue of a team that's been building together.

Your AI agents have zero access to any of it. Every session, every agent on your team starts as a talented engineer who has never attended a standup, never read a PR comment thread, never overheard the conversation at lunch where someone explained why the data pipeline is built the way it is.

Existing Tools Don't Solve This

The obvious response is "just point the agent at our docs." But Confluence pages, Notion wikis, and README files were designed for humans to browse. They're organized for human navigation patterns, written in human-readable prose, and maintained (or more accurately, not maintained) on a human schedule.

Agents don't browse. They need the right piece of context surfaced at the right moment, in a format they can reason with. Telling an agent to "check the wiki" is like handing a new hire a 500-page employee handbook and expecting them to find the one paragraph that's relevant to the code they're writing right now.

RAG over your docs gets you part of the way there, but it's retrieval without understanding. It can find a paragraph that mentions the keyword you searched for. It can't connect the dots between a Slack decision, a postmortem finding, and an architectural convention to surface the context that actually matters for the task at hand.
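The gap can be shown with a toy contrast (all document names, contents, and links here are invented for illustration): keyword retrieval surfaces only the page that literally contains the query, while a knowledge layer that stores edges between a wiki page, the Slack decision behind it, and the related postmortem can pull in the context that never mentions the keyword at all.

```python
# Hypothetical corpus: a wiki page, a Slack decision, a postmortem.
# Only the wiki page contains the literal phrase "rate limit".
docs = {
    "wiki/rate-limiting": "We rate limit public endpoints.",
    "slack/decision-thread": "Decision: all services use the shared token-bucket limiter.",
    "postmortem/outage-7": "Root cause: a service rolled its own limiter.",
}

# Edges a knowledge graph would store; plain keyword RAG has none of these.
links = {
    "wiki/rate-limiting": ["slack/decision-thread", "postmortem/outage-7"],
}

def keyword_search(query: str) -> list[str]:
    """Return doc ids whose text contains the query string."""
    return [doc_id for doc_id, text in docs.items() if query in text.lower()]

def with_linked_context(query: str) -> list[str]:
    """Keyword hits plus every document they are linked to."""
    hits = keyword_search(query)
    expanded = list(hits)
    for doc_id in hits:
        expanded.extend(links.get(doc_id, []))
    return expanded

hits = keyword_search("rate limit")          # finds only the wiki page
context = with_linked_context("rate limit")  # also pulls in the decision and the postmortem
```

The Slack decision and the postmortem never say "rate limit", so retrieval alone will never surface them; only the stored relationships do.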

The Real Cost Is Senior Engineer Time

Here's what's actually happening on most teams. You adopted AI agents to move faster. Your agents write code quickly. But the code requires heavy review because the agents lack context. So your senior engineers, the people whose time is most valuable, are now spending a significant chunk of their week reviewing and correcting agent output. Explaining the same context over and over. Re-teaching decisions that were already made.

You didn't save engineering time. You moved it. The work shifted from "writing code" to "babysitting agents." And the engineers who are best positioned to do high-leverage work are the ones stuck doing it.

Some teams try to solve this with longer and longer system prompts. Project files, CLAUDE.md files, custom instructions stuffed with conventions. That works for a while, until you're managing a 2,000-line context document that's always slightly out of date, differs across team members, and still can't capture the nuanced, interconnected web of decisions that make up real institutional knowledge.

What Actually Solving This Looks Like

The answer isn't better documentation. It's a shared knowledge layer that works the way institutional knowledge actually works: connected, contextual, and always available.

Imagine this. An engineer on your frontend team is working with an agent, and in the course of their session the agent learns that your team recently migrated from REST to GraphQL for all new endpoints. That knowledge enters a shared graph. The next day, a different engineer on the backend team asks their agent to scaffold a new service. That agent already knows about the GraphQL decision. Not because someone updated a wiki page. Not because the engineer remembered to mention it. Because the knowledge propagated through the team's shared context automatically.
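The propagation step above can be sketched in a few lines. This is a minimal illustration, not Engram's actual design; every class, field, and tag name here is invented. The point is only the shape of the idea: one agent records a fact into a shared store, and a different agent's later query surfaces it without anyone updating a doc.

```python
from dataclasses import dataclass

@dataclass
class Fact:
    subject: str       # what the fact is about
    statement: str     # the knowledge itself
    source: str        # where it was learned (session, Slack, postmortem)
    tags: frozenset    # topics used for matching later queries

class SharedKnowledgeStore:
    """Hypothetical team-wide store: any agent can record or query facts."""

    def __init__(self) -> None:
        self._facts: list[Fact] = []

    def record(self, fact: Fact) -> None:
        self._facts.append(fact)

    def relevant_to(self, topics: set) -> list[Fact]:
        """Return facts whose tags overlap the topics of the current task."""
        return [f for f in self._facts if f.tags & topics]

# Day one: a frontend engineer's agent learns about the migration
# and records it for the whole team.
store = SharedKnowledgeStore()
store.record(Fact(
    subject="api-conventions",
    statement="All new endpoints use GraphQL, not REST (recent team decision).",
    source="frontend agent session",
    tags=frozenset({"api", "graphql", "conventions"}),
))

# Day two: a backend engineer's agent scaffolds a new service and
# checks the shared store before writing any code.
context = store.relevant_to({"api", "scaffolding"})
```

A real system would need provenance, conflict resolution, and much smarter matching than tag overlap, but the compounding effect described above falls out of exactly this loop: record once, retrieve everywhere.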

That's the difference between a team where every agent is perpetually on day one and a team where agents build on each other's understanding over time. The first team scales linearly. The second compounds.

This Is What I'm Building

I've been thinking about this problem for months, and it's what led me to build Engram. Engram started as a memory protocol for individual agents, but the real unlock turned out to be shared knowledge across teams. A knowledge graph where decisions, conventions, and context connect to each other. Where one agent's learning becomes every agent's context. Where the institutional knowledge that makes your senior engineers effective is available to every agent on the team, at the moment it's relevant.

It's early. But the teams I'm working with are already seeing the shift: fewer "obvious" mistakes in agent-generated PRs, less time re-explaining context, senior engineers getting back to actual engineering work.

If your team is using AI agents seriously and running into this wall, I'd love to talk.
