12 Topology Patterns for Multi-Agent Systems
Sequential, Parallel, Hierarchical, DAG, Debate, Mesh, Star, Grid, Forest, Circular, Mixture-of-Agents, Maker — when to use each, with ASCII diagrams.
Most frameworks give you one topology. We built twelve.
When engineers build multi-agent systems, they typically default to a single execution pattern: chain agents together sequentially and call it a day. But agent orchestration is a graph problem, and different tasks demand fundamentally different graph shapes. A code review pipeline has different execution semantics than a debate between reasoning models. A research workflow has different coordination needs than a parallel test suite.
This is why Qualixar OS ships with 12 topology patterns out of the box. Each one is a first-class citizen with full execution semantics, error handling, and observability.
Why Topology Matters
If you have worked with network topologies, the analogy is direct. A star network has a single point of failure at the hub. A mesh network is resilient but noisy. A ring network has predictable latency but breaks if one node fails. The same trade-offs apply to agent orchestration.
The topology you choose determines three things: latency (how fast results arrive), quality (how good results are), and cost (how many tokens you burn). Pick wrong and you either waste money on unnecessary parallelism or bottleneck your pipeline on sequential steps that could run concurrently.
Here are the twelve patterns Qualixar OS supports, when to use each, and what they look like in practice.
1. Sequential
Agents run one after another. Each agent receives the output of the previous one.
[A] --> [B] --> [C] --> [D]
When to use: When each step depends on the previous step's output and order matters. Classic pipeline processing.
Real-world example: A document processing pipeline where Agent A extracts text, Agent B classifies the document type, Agent C applies domain-specific rules, and Agent D generates a summary. Each step needs the full context of the previous one.
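In code, a sequential topology is just a fold over the agent list. A minimal sketch with plain callables standing in for real LLM-backed agents (the lambdas are hypothetical, not Qualixar OS's actual API):

```python
from typing import Callable, List

Agent = Callable[[str], str]

def run_sequential(agents: List[Agent], payload: str) -> str:
    # Each agent receives the full output of the previous one.
    for agent in agents:
        payload = agent(payload)
    return payload

# Hypothetical stand-ins for real agents:
extract   = lambda doc: f"text({doc})"
classify  = lambda txt: f"class({txt})"
summarize = lambda cls: f"summary({cls})"

result = run_sequential([extract, classify, summarize], "invoice.pdf")
# result == "summary(class(text(invoice.pdf)))"
```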
2. Parallel
All agents run simultaneously. Results are merged at the end.
      +--> [A] --+
[IN] -+--> [B] --+--> [MERGE]
      +--> [C] --+
When to use: When agents perform independent work on the same input and you want to minimize latency. The merge strategy (concatenate, vote, rank) is configurable.
Real-world example: Running security analysis, performance analysis, and code style checks on the same codebase simultaneously. Each analyzer is independent; results are combined into a single report.
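A parallel topology is a fan-out followed by a configurable merge. A minimal sketch using a thread pool (the analyzer lambdas are hypothetical placeholders for real agents):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List

def run_parallel(agents: List[Callable[[str], str]], payload: str,
                 merge: Callable[[List[str]], str]) -> str:
    # Fan out to all agents at once; result order matches agent order.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda agent: agent(payload), agents))
    return merge(results)

# Hypothetical analyzers; a real system would call LLMs here.
security = lambda code: "security: ok"
perf     = lambda code: "performance: ok"
style    = lambda code: "style: ok"

report = run_parallel([security, perf, style], "repo/", merge="\n".join)
```

Swapping `merge` for a voting or ranking function changes the merge strategy without touching the fan-out.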
3. Hierarchical
A manager agent delegates tasks to worker agents, collects results, and synthesizes a final output.
    [Manager]
    /   |   \
 [W1]  [W2]  [W3]
When to use: When a complex task needs to be decomposed dynamically at runtime. The manager decides which subtasks to create based on the input, unlike Parallel where the fan-out is fixed at design time.
Real-world example: A research agent that receives a broad question, breaks it into sub-questions, dispatches specialized workers (one for academic papers, one for web search, one for internal docs), and synthesizes their findings into a coherent answer.
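The defining feature is that the manager's plan is computed from the input, so the fan-out is decided at runtime. A minimal sketch (planner, workers, and synthesizer are all hypothetical stand-ins):

```python
def run_hierarchical(plan, workers, synthesize, question):
    # The manager decides the fan-out at runtime, not at design time.
    subtasks = plan(question)                       # name -> subtask
    findings = {name: workers[name](task) for name, task in subtasks.items()}
    return synthesize(findings)

# Hypothetical manager and workers:
plan = lambda q: {"papers": f"papers on {q}", "web": f"web search for {q}"}
workers = {
    "papers": lambda t: f"[academic: {t}]",
    "web":    lambda t: f"[web: {t}]",
}
synthesize = lambda findings: " ".join(findings[k] for k in sorted(findings))

answer = run_hierarchical(plan, workers, synthesize, "RAG eval")
```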
4. DAG (Directed Acyclic Graph)
Agents execute based on a dependency graph. An agent runs only when all its dependencies have completed.
[A] --> [B] --> [D]
[A] --> [C] --> [D]
When to use: When your workflow has complex dependencies that are neither purely sequential nor purely parallel. DAG is the general case that subsumes both Sequential and Parallel as special cases.
Real-world example: A CI/CD pipeline where linting and type-checking run in parallel after checkout, unit tests run after linting passes, integration tests run after both unit tests and a database migration step complete, and deployment runs only after all tests pass.
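A naive DAG executor repeatedly runs every node whose dependencies are satisfied. A minimal sketch for the diamond shape above (node functions are hypothetical; a production engine would also parallelize ready nodes):

```python
def run_dag(nodes, deps, payload):
    """nodes: name -> agent(list_of_inputs); deps: name -> prerequisite names."""
    done = {}
    while len(done) < len(nodes):
        progressed = False
        for name, agent in nodes.items():
            prereqs = deps.get(name, [])
            if name not in done and all(p in done for p in prereqs):
                # Roots receive the raw payload; others their dependencies' outputs.
                done[name] = agent([done[p] for p in prereqs] or [payload])
                progressed = True
        if not progressed:
            raise ValueError("dependency cycle detected")
    return done

# Hypothetical diamond: A fans out to B and C, which join at D.
nodes = {
    "A": lambda xs: "a",
    "B": lambda xs: xs[0] + "b",
    "C": lambda xs: xs[0] + "c",
    "D": lambda xs: "".join(xs) + "d",
}
outputs = run_dag(nodes, {"B": ["A"], "C": ["A"], "D": ["B", "C"]}, "input")
# outputs["D"] == "abacd"
```

Note that an empty `deps` dict degenerates to Parallel and a linear chain degenerates to Sequential, which is the "general case" claim above in executable form.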
5. Debate
Two or more agents argue opposing positions. A judge agent evaluates the arguments and picks a winner or synthesizes a consensus.
[Pro] <---> [Con]
    \       /
     [Judge]
When to use: When you need high-quality reasoning on ambiguous problems. Debate forces the system to consider multiple perspectives before converging on an answer. It costs more tokens but produces more robust outputs.
Real-world example: Architecture decision records where one agent argues for microservices, another argues for a monolith, and a judge agent evaluates both arguments against the project's specific constraints (team size, deployment frequency, latency requirements) to make a recommendation.
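The control flow is a fixed number of rebuttal rounds followed by a judgment. A minimal sketch (the debaters and the length-based judge are hypothetical toys; real judges would be a critic model):

```python
def run_debate(pro, con, judge, question, rounds=2):
    pro_arg, con_arg = pro(question, ""), con(question, "")
    for _ in range(rounds - 1):
        # Each side rebuts the opponent's latest argument.
        pro_arg, con_arg = pro(question, con_arg), con(question, pro_arg)
    return judge(question, pro_arg, con_arg)

# Hypothetical debaters; this judge just prefers the longer argument.
pro   = lambda q, rebut: f"for {q}" + (f" (vs {rebut})" if rebut else "")
con   = lambda q, rebut: f"against {q}"
judge = lambda q, p, c: p if len(p) >= len(c) else c

verdict = run_debate(pro, con, judge, "monolith")
```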
6. Mesh
Every agent can communicate with every other agent. There is no central coordinator.
[A] <--> [B]
 ^  \  /  ^
 |   \/   |
 |   /\   |
 v  /  \  v
[C] <--> [D]
When to use: When agents need to share information dynamically and the communication pattern is not predictable at design time. Mesh is the most flexible but also the most expensive in terms of coordination overhead.
Real-world example: A multi-agent simulation where agents represent different departments in an organization. Sales discovers a customer need, engineering evaluates feasibility, finance checks budget, and legal reviews compliance. Any agent might need to consult any other agent based on what they discover.
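One way to implement a mesh is a round-based message bus: every agent reads its inbox and may address messages to any peer. A minimal sketch (the department agents and their trigger words are hypothetical):

```python
from collections import defaultdict

def run_mesh(agents, seed, rounds=3):
    """agents: name -> fn(inbox) -> list of (recipient, message) pairs."""
    inboxes = defaultdict(list)
    for name in agents:                  # every agent sees the initial task
        inboxes[name].append(seed)
    for _ in range(rounds):
        outgoing = []
        for name, agent in agents.items():
            outgoing.extend(agent(inboxes.pop(name, [])))
        for recipient, message in outgoing:
            inboxes[recipient].append(message)
    return dict(inboxes)                 # messages still undelivered at the end

# Hypothetical departments that react to specific messages:
agents = {
    "sales":       lambda inbox: [("engineering", "need")] if "task" in inbox else [],
    "engineering": lambda inbox: [("finance", "feasible")] if "need" in inbox else [],
    "finance":     lambda inbox: [("sales", "approved")] if "feasible" in inbox else [],
}
final = run_mesh(agents, "task")
# final == {"sales": ["approved"]}
```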
7. Star
A central coordinator connects to all spoke agents. Spokes do not communicate with each other directly.
      [Hub]
    /  |  |  \
 [A] [B] [C] [D]
When to use: When you want the simplicity of centralized control with the parallelism of distributed execution. The hub manages state and routing; spokes are stateless workers. Easier to debug than Mesh, more flexible than Parallel.
Real-world example: A customer support system where the hub agent triages incoming tickets and routes them to specialized agents (billing, technical, account management). The hub tracks state and handles escalation; spoke agents are domain experts.
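The hub is just a routing function plus a table of spokes. A minimal sketch (the keyword router and domain experts are hypothetical; real routing would be model-driven):

```python
def run_star(route, spokes, ticket):
    # The hub picks exactly one spoke; spokes never talk to each other.
    name = route(ticket)
    return name, spokes[name](ticket)

# Hypothetical keyword router and domain experts:
route = lambda t: "billing" if "invoice" in t else "technical"
spokes = {
    "billing":   lambda t: f"billing handled: {t}",
    "technical": lambda t: f"tech handled: {t}",
}
assignee, reply = run_star(route, spokes, "wrong invoice amount")
# assignee == "billing"
```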
8. Grid
Agents are arranged in a 2D lattice. Each agent communicates with its neighbors (up, down, left, right).
[A1]-[A2]-[A3]
  |    |    |
[B1]-[B2]-[B3]
  |    |    |
[C1]-[C2]-[C3]
When to use: When your problem domain has spatial or matrix-like structure. Grid topology naturally maps to problems where local interactions between adjacent cells produce emergent global behavior.
Real-world example: A large document analysis where rows represent sections and columns represent analysis dimensions (factual accuracy, tone, clarity). Each agent analyzes one cell and shares context with its neighbors to maintain consistency across the document.
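A grid runs in synchronous passes: each cell updates from its own state plus its four neighbors' previous states. A minimal sketch (the numeric cell function is a hypothetical stand-in for a context-sharing agent):

```python
def run_grid(cells, init, passes=2):
    """cells: 2D list of fns(own_state, neighbor_states) -> new_state."""
    rows, cols = len(cells), len(cells[0])
    state = [[init] * cols for _ in range(rows)]
    for _ in range(passes):
        nxt = [[None] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                # Up, down, left, right neighbors, clipped at the edges.
                neighbors = [state[r + dr][c + dc]
                             for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                             if 0 <= r + dr < rows and 0 <= c + dc < cols]
                nxt[r][c] = cells[r][c](state[r][c], neighbors)
        state = nxt
    return state

# Hypothetical cell: accumulate own state plus neighbor context.
cell = lambda own, neighbors: own + sum(neighbors) + 1
grid = run_grid([[cell] * 3 for _ in range(2)], init=0, passes=1)
# after one pass every cell is 1 (own 0 + neighbor sum 0 + 1)
```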
9. Forest
Multiple independent trees running in parallel. Each tree has its own root and hierarchy, but trees do not communicate with each other.
   [R1]       [R2]     [R3]
   /  \       /  \       |
[A]    [B] [C]    [D]   [E]
When to use: When you have multiple independent hierarchical workflows that should run concurrently without cross-contamination. Results from all trees are collected at the end.
Real-world example: Evaluating multiple vendor proposals simultaneously. Each tree independently analyzes one proposal (root decomposes into cost analysis, technical evaluation, reference checks). Trees run in parallel; final comparison happens after all trees complete.
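Since trees never communicate, a forest reduces to running whole trees in parallel. A minimal sketch (the per-proposal tree function is hypothetical; each tree would internally be a Hierarchical workflow):

```python
from concurrent.futures import ThreadPoolExecutor

def run_forest(trees, inputs):
    # Each tree is an independent hierarchical workflow; no cross-talk.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda pair: pair[0](pair[1]), zip(trees, inputs)))

# Hypothetical tree: root decomposes a proposal, workers evaluate it.
def make_tree():
    def tree(proposal):
        return {
            "cost":      f"cost analysis of {proposal}",
            "technical": f"tech review of {proposal}",
        }
    return tree

results = run_forest([make_tree(), make_tree()], ["vendor-A", "vendor-B"])
```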
10. Circular
Agents form a ring. Each agent processes the input and passes it to the next. The output of the last agent feeds back to the first for iterative refinement.
[A] --> [B] --> [C]
 ^               |
 |_______________|
When to use: When iterative refinement improves quality and you want each pass to build on the previous one. The loop terminates based on a convergence criterion or a maximum iteration count.
Real-world example: A writing pipeline where Agent A drafts, Agent B critiques, Agent C rewrites based on the critique, and the cycle repeats. Each iteration sharpens the output. The loop exits when the critique agent finds no significant issues or after three rounds.
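The ring is a loop over passes with an exit test after each full revolution. A minimal sketch (the string-appending agents and length-based convergence check are hypothetical stand-ins):

```python
def run_circular(agents, draft, max_loops=3, converged=lambda d: False):
    for _ in range(max_loops):
        for agent in agents:             # one full pass around the ring
            draft = agent(draft)
        if converged(draft):             # convergence criterion ends the loop
            break
    return draft

# Hypothetical draft/critique/rewrite ring:
drafter  = lambda d: d + "D"
critic   = lambda d: d + "C"
rewriter = lambda d: d + "R"
out = run_circular([drafter, critic, rewriter], "",
                   converged=lambda d: len(d) >= 6)
# out == "DCRDCR" -- the second pass satisfies the convergence criterion
```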
11. Mixture-of-Agents
Multiple models (or the same model with different temperatures/prompts) process the same input. Their outputs are aggregated through voting, ranking, or a meta-model that selects the best response.
[Model-A] \
[Model-B] --> [Aggregator]
[Model-C] /
When to use: When reliability matters more than cost. Different models have different failure modes; combining them reduces the probability of any single model's blind spot affecting the output. Inspired by ensemble methods in machine learning.
Real-world example: A content moderation system where GPT-4, Claude, and Gemini each classify a piece of content. An aggregator uses majority voting for clear cases and escalates to human review when models disagree. This catches edge cases that any single model would miss.
12. Maker
One agent creates, another evaluates. They iterate until the output meets a quality threshold. This is a two-agent generate-and-test loop.
[Maker] --> [Evaluator]
   ^             |
   |__(revise)___|
When to use: When the task has clear acceptance criteria that can be evaluated programmatically or by a specialized critic. Maker topology separates generation from evaluation, preventing the common failure mode where a single agent is both creator and critic of its own work.
Real-world example: Code generation where the Maker agent writes a function and the Evaluator agent runs the test suite. If tests fail, the Evaluator sends the failure output back to the Maker with instructions to fix. The loop continues until all tests pass or a retry limit is hit.
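The generate-and-test loop threads the evaluator's feedback back into the maker's next attempt. A minimal sketch (the maker and evaluator here are hypothetical toys; a real evaluator would run the test suite):

```python
def run_maker(make, evaluate, spec, max_retries=3):
    feedback = ""
    for _ in range(max_retries):
        candidate = make(spec, feedback)
        ok, feedback = evaluate(candidate)   # e.g. run the test suite
        if ok:
            return candidate
    raise RuntimeError("retry limit reached")

# Hypothetical maker that only succeeds once it sees failure output:
make = lambda spec, fb: f"{spec}+fix" if fb else spec
evaluate = lambda code: (True, "") if code.endswith("+fix") else (False, "tests failed")

accepted = run_maker(make, evaluate, "def f(): ...")
# accepted == "def f(): ...+fix" -- second attempt passes
```

Keeping generation and evaluation in separate agents is what prevents the self-grading failure mode described above.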
How to Choose: The Decision Matrix
Choosing the right topology comes down to four questions:
Are your tasks independent? If yes, start with Parallel. If they have dependencies, use DAG. If the dependencies are purely linear, Sequential is simpler.
Do you need iteration? If the output improves with multiple passes, use Circular for multi-agent refinement or Maker for generate-and-test. If you need adversarial reasoning, use Debate.
How complex is coordination? For simple fan-out/fan-in, Star is clean. For dynamic decomposition, Hierarchical gives the manager agent control. For unrestricted communication, Mesh is the most flexible but hardest to debug. Grid works when your problem has spatial structure. Forest handles multiple independent hierarchies.
Is reliability the priority? Use Mixture-of-Agents to ensemble across models and reduce single-model failure modes.
In practice, production systems often compose multiple topologies. A DAG might contain a Debate node at a critical decision point. A Hierarchical workflow might use Parallel execution within each worker. Qualixar OS supports this nesting natively through its topology engine.
The Full Picture
These twelve topologies are not theoretical. They are implemented with full execution semantics in Qualixar OS, including retry policies, timeout handling, streaming results, and observability hooks. Every topology reports execution traces, token usage, and latency breakdowns.
The key insight is that topology is an architectural decision, not an implementation detail. The right topology can cut latency by 80% (Parallel vs Sequential), improve output quality by 40% (Debate vs single-agent), or reduce cost by 60% (Star vs Mesh). Picking the wrong one and trying to compensate with better prompts is like optimizing a bubble sort instead of switching to quicksort.
If you want to dig deeper into the execution semantics, fault tolerance model, and benchmark results for each topology, check out the Qualixar OS paper on arXiv:2604.06392.