runshift

Eight agents. No coordination. Here's what that looks like in practice.

2026-04-09

A healthcare startup was running 8 AI coding agents simultaneously with no coordination layer. A shared config file got modified by two agents that didn't know about each other. What followed was an hour of manual reconstruction that should have been product time.

coordination · claude code · cursor · healthcare · multi-agent · runshift

tl;dr — a healthcare startup was running 8 agents simultaneously across Claude Code and Cursor sessions. no coordination layer. a shared config file got modified by two agents that didn't know about each other, neither following existing rules. what followed was not a build error. it was an hour of manual reconstruction and code repair that should have been product time.



A design partner — a healthcare startup — came to runshift running somewhere between eight and ten AI coding agents simultaneously. Claude Code in multiple terminal sessions. Cursor open in the editor. All of them working on the same codebase, none of them aware of the others.

That is not unusual. That is how most teams running AI agents actually work right now. Spin up a session, point it at a task, let it run. Repeat.

The problem is not that the agents are bad at their jobs. The problem is that no agent knows what the others are doing.


The shared config problem

Config files are where parallel agent sessions break first.

They are shared by definition. Every agent that touches infrastructure, environment, or application behavior has a reason to read or write them. And in a healthcare context — where configuration decisions touch access controls, data handling, and compliance boundaries — the stakes of a silent conflict are not just technical.

Two sessions modified the same config file. Neither knew the other had been there, and neither followed the rules already established in the codebase — the conventions, the constraints, the decisions that had been made deliberately and documented for exactly this reason.

The result was not a merge conflict. Merge conflicts are visible. This was a silent divergence. The file looked fine. The build passed. The problem surfaced later, sideways, in behavior that did not match expectations.


What happened next

The founder spent an hour reconstructing what had happened.

Not reviewing logs in a dashboard. Manually. Tracing back through agent outputs, terminal histories, file diffs, trying to understand which session had done what, in what order, and what the cumulative effect was.

That is the part that does not show up in the pitch for AI coding agents. The agents saved time on the tasks they were given. The coordination failure cost more time than the agents saved — and the time lost was not writing code. It was fixing code that should never have been broken, and understanding what happened well enough to fix it correctly.

In a healthcare codebase, that last part matters more than most. You cannot guess at what changed. You need to know.


What runshift changed

The coordination layer is the thing that was missing.

Before an agent modifies a shared file, runshift checks what else is in flight. If another session has touched that file — or is currently working in a scope that depends on it — a gate fires. The agent waits. The operator sees what is being proposed, what the conflict is, and decides how to sequence the work.
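The shape of that check can be sketched in a few lines. This is illustrative only — the class and method names below are made up for the example, not runshift's actual API:

```python
# Sketch of a pre-write gate over shared files. All names here are
# hypothetical; this only shows the shape of the check described above.
from dataclasses import dataclass, field

@dataclass
class Session:
    session_id: str
    files_in_flight: set[str] = field(default_factory=set)

@dataclass
class Gate:
    sessions: list[Session] = field(default_factory=list)

    def check_write(self, session_id: str, path: str) -> str:
        """Return 'proceed' if no other session holds the file, else 'wait'."""
        for s in self.sessions:
            if s.session_id != session_id and path in s.files_in_flight:
                # Conflict: surface it to the operator instead of writing.
                return "wait"
        # No conflict: record the claim so later sessions see it.
        for s in self.sessions:
            if s.session_id == session_id:
                s.files_in_flight.add(path)
        return "proceed"

gate = Gate(sessions=[Session("agent-1"), Session("agent-2")])
gate.check_write("agent-1", "config/app.yaml")  # first writer proceeds
gate.check_write("agent-2", "config/app.yaml")  # second writer waits
```

The point is not the data structure. It is that the write is checked against every other session's in-flight scope before it lands, which is exactly the step that was missing when the two sessions diverged silently.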

The existing rules in the codebase are not suggestions. They are enforced. An agent that is about to violate an established convention gets stopped before the violation lands, not after.

And when something does go wrong — when a conflict slips through or a decision needs to be reconstructed — the audit trail is not just logs. It is a recoverable record. Every agent action, every file modification scope, every gate decision, timestamped and queryable. Not so you can understand what happened after an hour of manual reconstruction. So you can understand it in thirty seconds and get back to building.
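A record like that is, at minimum, an ordered list of timestamped events you can filter by file. A minimal sketch — the field names and schema here are assumptions for illustration, not runshift's actual format:

```python
# Illustrative sketch of a queryable audit trail: timestamped records
# of agent actions, filterable by file. Schema is hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    ts: datetime
    session_id: str
    action: str   # e.g. "write", "gate_fired", "operator_approved"
    path: str

class AuditTrail:
    def __init__(self) -> None:
        self.events: list[AuditEvent] = []

    def record(self, session_id: str, action: str, path: str) -> None:
        self.events.append(
            AuditEvent(datetime.now(timezone.utc), session_id, action, path)
        )

    def history(self, path: str) -> list[AuditEvent]:
        """Every action that touched this file, in order."""
        return [e for e in self.events if e.path == path]

trail = AuditTrail()
trail.record("agent-3", "write", "config/app.yaml")
trail.record("agent-7", "gate_fired", "config/app.yaml")
trail.record("agent-7", "operator_approved", "config/app.yaml")
```

One query over `history("config/app.yaml")` answers "which session did what, in what order" — the question that took an hour of terminal archaeology to answer by hand.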


Why this matters more in healthcare

Eight agents on a fintech codebase running without coordination is a productivity problem. Eight agents on a healthcare codebase running without coordination is a compliance problem.

The config files that got modified were not PHI. But they were adjacent to the systems that handle it. The rules that were not followed were not arbitrary style conventions. They were constraints that existed for a reason — access patterns, data handling boundaries, behavior that needed to be predictable and auditable.

The audit question in a regulated environment is not just "what went wrong." It is "who approved this, when, and what did they know when they approved it." Without a coordination layer, that question does not have an answer. The agents acted. Nobody was in the loop.

runshift puts someone in the loop. Not for every action — most of what eight agents do in a day is routine and fast and does not need a human in the path. But for the decisions that touch shared state, established rules, and compliance-sensitive configuration, the gate exists to make sure a human saw it before it landed.
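Selective gating of this kind reduces to a routing decision: does this path fall inside a sensitive scope or not? A sketch of that decision, with made-up path patterns standing in for whatever a team would actually configure:

```python
# Illustrative sketch of selective gating: most writes pass straight
# through; only shared or compliance-sensitive paths require a human
# decision. The patterns below are invented for the example.
import fnmatch

SENSITIVE_PATTERNS = ["config/*", "infra/*", "*.env"]  # hypothetical scopes

def needs_human_review(path: str) -> bool:
    """True if a write to this path should wait for operator approval."""
    return any(fnmatch.fnmatch(path, pattern) for pattern in SENSITIVE_PATTERNS)

needs_human_review("src/ui/button.tsx")  # routine: no gate
needs_human_review("config/app.yaml")    # sensitive: human sees it first
```

The design choice is the narrowness of the gate: routine work stays fast, and the human is only in the path for the writes that can create the kind of silent divergence this post describes.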


The time math

An hour of reconstruction and repair, once a week, across a team running eight agents simultaneously, is not a rounding error. It is a structural cost that compounds as agent usage scales.

The agents are not the problem. The gap between what the agents can do and what the operator can see, in real time, across all of them at once — that is the problem.

Coordination is not overhead. It is the thing that makes the speed sustainable.