Here's my thinking: multi-agent systems are just wider transformers, with siloed attention.
Each 'agent' is just another instantiation of the same transformer block applied to different tokens. The only thing that makes them separate is that they can't attend to each other's context outside of the deliberate mixing step.
In theory, if you had an infinitely large context, you wouldn't need multiple agents. But you can never have this because of the communication cost. So all of these multi-agent coordination systems are really just approximate versions of attention.
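To make that framing concrete, here's a toy illustration (my own, not from the paper): the same attention layer with a block-diagonal mask behaves like two independent agents, and "communication" is just unmasking a few cross-block positions.

```python
# Toy illustration of "multi-agent = siloed attention" (my framing, not the paper's).
# One attention layer, two masks: block-diagonal = two independent agents;
# unmasking a cross-block entry = the deliberate mixing/communication step.
import numpy as np

def attention(Q, K, V, mask):
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores = np.where(mask, scores, -1e9)                 # masked positions are unreachable
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
T, d = 8, 16                                              # 8 tokens, split 4/4 between two "agents"
Q, K, V = rng.normal(size=(3, T, d))

siloed = np.zeros((T, T), dtype=bool)                     # each agent attends only to its own tokens
siloed[:4, :4] = True
siloed[4:, 4:] = True

mixed = siloed.copy()
mixed[4:, 3] = True                                       # agent 2 may now read agent 1's last token

out_siloed, out_mixed = attention(Q, K, V, siloed), attention(Q, K, V, mixed)
print(np.allclose(out_siloed[:4], out_mixed[:4]))         # True: agent 1 is untouched by the mixing
```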
The "alignment principle" vs "sequential penalty" finding mirrors my production experience exactly.
I run a multi-agent system where specialized agents handle different business functions (customer support, code review, deployment monitoring). The key insight: task decomposability determines architecture.
Parallelizable tasks (analyzing independent customer tickets, running separate test suites) show massive gains with independent agents. Sequential workflows (debugging a specific issue that requires following a chain of logic) degrade with coordination overhead.
The "tool-use bottleneck" is real. We hit it around 12-15 tools per agent. The coordination tax becomes severe. Solution: role-based tool access. Support agents get 5 tools, deployment agents get 8, code review agents get 6. Overlap is minimal.
One counter-intuitive finding: persistent memory per agent beats centralized knowledge. Each agent has AGENTS.md (instructions), TOOLS.md (available actions), and memory/ directory (session logs). Agents learn from their own mistakes without polluting each other's context.
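The persistence is just a directory convention; here is a minimal sketch of how an agent's context gets assembled from its own files, assuming the layout above (the loading logic itself is illustrative):

```python
# Per-agent persistent memory as a directory convention (sketch), assuming
#   agents/<name>/AGENTS.md, TOOLS.md, memory/<timestamp>.md as described above.
from datetime import datetime, timezone
from pathlib import Path

def load_agent_context(agent_dir: Path, recent_sessions: int = 3) -> str:
    """Assemble an agent's prompt context from its own files only - no shared store."""
    parts = [(agent_dir / "AGENTS.md").read_text(),
             (agent_dir / "TOOLS.md").read_text()]
    logs = sorted((agent_dir / "memory").glob("*.md"))[-recent_sessions:]
    parts += [log.read_text() for log in logs]            # the agent sees only its own past sessions
    return "\n\n---\n\n".join(parts)

def append_session_log(agent_dir: Path, summary: str) -> None:
    """Write this session's lessons where only this agent will ever read them."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    (agent_dir / "memory" / f"{stamp}.md").write_text(summary)
```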
The error amplification metric (17.2x for independent vs 4.4x for centralized) explains why we use a hub-and-spoke model with human checkpoints at handoff boundaries.
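In code the hub-and-spoke loop is very small; the sketch below uses placeholder run_agent/human_approves functions, with the human gate sitting at each handoff boundary:

```python
# Hub-and-spoke with a human checkpoint at every handoff boundary (sketch;
# run_agent and human_approves are placeholders for your own harness).
def run_agent(role: str, task: dict) -> dict:
    # Placeholder: the real version calls the model with the role's context and tools.
    return {"id": task.get("id"), f"{role}_notes": f"<{role} result for {task.get('id')}>"}

def human_approves(role: str, result: dict) -> bool:
    ans = input(f"[checkpoint] accept {role} output for task {result.get('id')}? [y/N] ")
    return ans.strip().lower() == "y"

def hub(task: dict, pipeline=("support", "code_review", "deployment")) -> dict:
    """Spokes never hand off to each other; every result returns to the hub."""
    state = dict(task)
    for role in pipeline:
        result = run_agent(role, state)
        if not human_approves(role, result):              # stop before errors compound downstream
            raise RuntimeError(f"handoff rejected after {role}")
        state.update(result)                              # hub merges results; no shared agent context
    return state

# hub({"id": "TICKET-123", "body": "deploy failed after merge"})
```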
Documented these patterns at howtoopenclawfordummies.com for anyone building similar systems.
"Master Open Claw in Hours, Not Months"
How old is openClaw again?
But your webpage is delicious. Eleven blog posts today alone. Did you write them all yourselves?
> We found that independent multi-agent systems (agents working in parallel without talking) amplified errors by 17.2x
The paper sounds too shallow. The error data doesn't seem to have a rationale or any correlation with the architecture. Specifically, what makes the SAS architecture have the lowest error rates while the similar architecture with independent agents has the highest? The conclusion doesn't seem well-grounded in reasoning.
I’ve been building a lot of agent workflows at my day job. Something that I’ve found a lot of success with when deciding on an orchestration strategy is to ask the agent what it recommends as part of the planning phase. This technique of using the agent to help you improve its performance has been a game changer for me in leveraging this tech effectively. YMMV, of course. I mostly use Claude Code, so who knows with the others.
This is a neat idea but there are so many variables here that it's hard to make generalizations.
Empirically, a top-level orchestrator that calls out to a planning committee, then generates a task DAG from the plan, which gets executed in parallel where possible, is the approach I've seen produce the best results in various heterogeneous environments. As models evolve, crosstalk may become less of a liability.
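A bare-bones sketch of that pattern (the planner output and the per-task worker are stand-ins; only the DAG scheduling is shown): topologically sort the plan into ready sets and run each set concurrently.

```python
# Sketch of "plan -> task DAG -> execute in parallel where possible".
# The planner output and per-task worker are stand-ins; only the scheduling is real.
import asyncio
from graphlib import TopologicalSorter

async def run_task(name: str, dag: dict, results: dict) -> str:
    upstream = {d: results[d] for d in dag.get(name, ())}  # outputs this task may read
    await asyncio.sleep(0)                                  # stand-in for an agent/tool call
    return f"{name} done (saw {sorted(upstream)})"

async def orchestrate(dag: dict) -> dict:
    ts = TopologicalSorter(dag)
    ts.prepare()
    results: dict = {}
    while ts.is_active():
        ready = list(ts.get_ready())                        # all tasks whose dependencies are finished
        outputs = await asyncio.gather(*(run_task(t, dag, results) for t in ready))
        results.update(zip(ready, outputs))
        ts.done(*ready)
    return results

# A plan the "planning committee" might emit: research fans out, synthesis joins.
dag = {"research_a": set(), "research_b": set(),
       "draft": {"research_a", "research_b"}, "review": {"draft"}}
print(asyncio.run(orchestrate(dag)))
```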
Reasoning is recursive - you cannot isolate where it should be symbolic and where it should be LLM-based (fuzzy/neural). This is the idea that started https://github.com/zby/llm-do - there is also RLM: https://alexzhang13.github.io/blog/2025/rlm/ - RLM is simpler, but my approach also has some advantages.
I only agree with that statement if you're drawing from the set of all possible problems a priori. For any individual domain I think it's likely you can bound your analysis. This ties into the no-free-lunch theorem.
Gonna read this with a grain of salt because I've been rather unimpressed with Google's AI products, save direct API calls to Gemini.
The rest is trash they are forcing down our throats
Yeah, AlphaGo and AlphaZero were lame. The Earth foundation model - that's just ridiculous.
That's sarcasm
---
Your "direct Gemini calls" is maybe the least impressive
edit: This paper is mostly a sort of "quantitative survey". Nothing to get too excited about, nor anything that requires a grain of salt.
The underlying models are impressive, be it Gemini (via direct API calls, as opposed to the app or search); I would include AlphaGo/AlphaFold/etc. in that classification.
The products they build, where the agentic stuff lives, are what I find unimpressive. The quality is low, the UX is bad, and they are forced into every product. Some notable examples: search in GCloud, gemini-cli, and Antigravity (not theirs technically - a $2B whitelabel deal with Windsurf, iirc).
So yes, I see it as perfectly acceptable to be more skeptical of Google's take on agentic systems when I find their real world applications lackluster
I agree with you in general re "agentic systems". Though they might deliberately not be trying to compete in the "agent harness" space yet.
The Antigravity experiment, yes, was via Windsurf - probably nobody expected that to take off, but maybe it surfaced some lessons worth learning from.
My hunch is that Google is past its prime, all the good PMs are gone, and now it looks like a chicken hydra with all the heads cut off, trying to run in multiple directions.
There is no clear vision, coherence, or confidence that the products will be around in another year.
Kind of a weird take given they are one of the strongest AI providers and the most vertically integrated one. Sure, maybe the company isn’t as healthy as it once was, but none of them are - late-stage capitalism is rotting most foundations.