Building an AI Adventure Party: A Practical Guide to Multi-Agent Collaboration Configuration in HagiCode
In modern software development, a single AI Agent is no longer enough for complex needs. How can multiple AI assistants from different companies collaborate within the same project? This article shares the multi-Agent collaboration configuration approach that the HagiCode project developed through real-world practice.
Background
Many developers have likely had this experience: bringing an AI assistant into a project really does improve coding efficiency. But as requirements grow more complex, one AI Agent starts to fall short. You want it to handle code review, documentation generation, unit tests, and more at the same time, but the result is often that it cannot balance everything well, and output quality becomes inconsistent.
What is even more frustrating is that once you try to introduce multiple AI assistants, things get more complicated. Each Agent has its own configuration method, API interface, and execution logic, and they may even conflict with one another. It is like a sports team where every player is individually strong, but nobody knows how to coordinate, so the whole match turns into chaos.
The HagiCode project ran into the same problem during development. As a complex project involving a frontend VSCode extension, backend AI services, and a cross-platform desktop client, HagiCode (then at its 2026-03 version) needed to integrate multiple AI assistants from different companies at once: Claude Code, Codex, CodeBuddy, iFlow, and more. Figuring out how to let them coexist harmoniously in the same project while making the best use of their individual strengths became a critical problem we had to solve.
That alone would already be enough trouble. After all, who wants to deal with a group of AI tools fighting each other every day?
The approach shared in this article is the multi-Agent collaboration configuration practice we developed in the HagiCode project through real trial and error and repeated optimization. If you are also struggling with multiple AI assistants working together, this article may give you some ideas. Maybe. Every project is different, after all.
About HagiCode
HagiCode is an AI coding assistant project that adopts an “adventure party” model in which multiple AI engines work together. Project repository: github.com/HagiCode-org/site.
The multi-Agent configuration approach shared here is one of the core techniques that allows HagiCode to maintain efficient development in complex projects. There is nothing especially mystical about it - it just turns a group of AIs into an adventure party that can actually coordinate.
HagiCode’s Multi-Agent Architecture Design
From “Going Solo” to “Team Collaboration”
In the early days of the HagiCode project, we also tried using a single AI Agent to handle everything. We quickly discovered a clear bottleneck in that approach: different tasks demand different strengths. Some tasks require stronger contextual understanding, while others need more precise code editing. One Agent has a hard time excelling at all of them.
That made us realize that multiple Agents had to work together. But the problem was this: how do you let AI products from different companies coexist peacefully in the same project? We needed to solve several core issues:
- Configuration management complexity: each Agent has different configuration methods, API interfaces, and execution modes
- Unified communication protocol: we need a standardized way for different Agents to exchange data
- Task coordination and division of labor: how do we assign work reasonably so each Agent can play to its strengths
With those questions in mind, we started designing HagiCode’s multi-Agent architecture. It was not really that complicated in the end; we just had to think it through clearly.
Overall Architecture at a Glance
After multiple iterations, this is the architecture we settled on:
```text
┌─────────────────────────────────────────────────────────────────┐
│                        AIProviderFactory                        │
│  (Factory pattern for unified management of all AI Providers)   │
├─────────────────────────────────────────────────────────────────┤
│ ClaudeCodeCli │   CodexCli   │ CodebuddyCli  │     IFlowCli     │
│  (Anthropic)  │   (OpenAI)   │  (Zhipu GLM)  │     (Zhipu)      │
└─────────────────────────────────────────────────────────────────┘
```

The core idea is to let different AI Agents be managed by the same code through a unified Provider interface. At the same time, the factory pattern is used to dynamically create and configure these Providers, ensuring scalability and flexibility across the system.
It is like division of labor in daily life. Everyone has a role; here we simply turned that idea into code architecture.
Agent Types and Division of Responsibilities
Based on HagiCode’s real-world experience, we assigned different responsibilities to each Agent:
| Agent | Provider | Model | Primary Use |
|---|---|---|---|
| ClaudeCodeCli | Anthropic | glm-5-turbo | Generate technical solutions and Proposals |
| CodexCli | OpenAI/Zed | gpt-5.4 | Execute precise code changes |
| CodebuddyCli | Zhipu | glm-4.7 | Refine proposal descriptions and documentation |
| IFlowCli | Zhipu | glm-4.7 | Archive proposals and historical records (configuration at the time; now legacy-compatible only) |
| OpenCodeCli | - | - | General-purpose code editing |
| GitHubCopilot | Microsoft | - | Assisted programming and code completion |
The logic behind this division of labor is simple: every Agent has its own area of strength. Claude Code performs well at understanding and analyzing complex requirements, so it handles early solution design. Codex is more precise when modifying code, so it is better suited for concrete implementation work. CodeBuddy offers strong cost performance, which makes it a great fit for refining documentation.
After all, the right tool for the right job is usually the best choice. There are many roads to Rome; some are simply easier to walk than others.
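The routing rule implied by the table above can be sketched in a few lines. The task-to-Agent assignments come from the table; the `RouteTask` helper and the task-kind strings are hypothetical names for illustration, not HagiCode's actual API.

```csharp
using System;

// Hypothetical sketch: map a task kind to the Agent best suited for it,
// following the division of labor described in the article's table.
static string RouteTask(string taskKind) => taskKind switch
{
    "proposal-generation" => "ClaudeCodeCli", // stronger contextual understanding
    "code-execution"      => "CodexCli",      // more precise code modification
    "proposal-refinement" => "CodebuddyCli",  // strong cost performance
    "archival"            => "IFlowCli",      // stable and reliable
    _ => throw new ArgumentException($"no agent assigned for task '{taskKind}'")
};

Console.WriteLine(RouteTask("proposal-generation")); // prints "ClaudeCodeCli"
```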
Core Configuration Mechanisms
Section titled “Core Configuration Mechanisms”Unified Provider Interface Design
To manage different AI Agents in a unified way, we first need to define a common interface. In HagiCode, that interface looks like this:
```csharp
public interface IAIProvider
{
    // Unified Provider interface
    Task<IAIProvider?> GetProviderAsync(AIProviderType providerType);
    Task<IAIProvider?> GetProviderAsync(string providerName, CancellationToken cancellationToken);
}
```

The interface looks simple, but it is the foundation of the entire multi-Agent system. With a unified interface, we can call AI products from different companies in exactly the same way, no matter what is underneath.
This is really just a matter of making complex things simple. Simple is beautiful, after all.
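What the unified interface buys the caller can be shown with a minimal sketch: code that only knows a provider name, never the vendor behind it. Delegates stand in for real `IAIProvider` instances here, and the prompts and outputs are invented for illustration.

```csharp
using System;
using System.Collections.Generic;

// Sketch only: simple delegates simulate providers behind a uniform shape.
var providers = new Dictionary<string, Func<string, string>>(StringComparer.OrdinalIgnoreCase)
{
    ["ClaudeCodeCli"] = prompt => $"[claude] proposal for: {prompt}",
    ["CodexCli"]      = prompt => $"[codex] patch for: {prompt}",
};

// The calling code is identical no matter which company's product answers.
var provider = providers["claudecodecli"]; // lookup is case-insensitive
Console.WriteLine(provider("add dark mode"));
```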
Provider Factory Pattern Implementation
Once the interface is unified, the next question is how to create these Provider instances. HagiCode uses the factory pattern:
```csharp
private IAIProvider? CreateProvider(AIProviderType providerType, ProviderConfiguration config)
{
    return providerType switch
    {
        AIProviderType.ClaudeCodeCli => ActivatorUtilities.CreateInstance<ClaudeCodeCliProvider>(
            _serviceProvider, Options.Create(config)),
        AIProviderType.CodebuddyCli => ActivatorUtilities.CreateInstance<CodebuddyCliProvider>(
            _serviceProvider, Options.Create(config)),
        AIProviderType.CodexCli => ActivatorUtilities.CreateInstance<CodexCliProvider>(
            _serviceProvider, Options.Create(config)),
        AIProviderType.IFlowCli => ActivatorUtilities.CreateInstance<IFlowCliProvider>(
            _serviceProvider, Options.Create(config)),
        _ => null
    };
}
```

This uses dependency injection through `ActivatorUtilities.CreateInstance`, which can dynamically create Provider instances at runtime while automatically injecting dependencies. The benefit of this design is that when a new Agent type is added, you only need to add the corresponding Provider class and then add one more case branch in the factory method. There is no need to modify the existing code at all.
That is reason enough. Who wants to rewrite a pile of old code every time a new feature is added?
Dynamic Configuration Resolution
To make configuration more flexible, we also implemented a type-mapping mechanism:
```csharp
public static class AIProviderTypeExtensions
{
    private static readonly Dictionary<string, AIProviderType> _typeMap =
        new(StringComparer.OrdinalIgnoreCase)
        {
            ["ClaudeCodeCli"] = AIProviderType.ClaudeCodeCli,
            ["CodebuddyCli"] = AIProviderType.CodebuddyCli,
            ["CodexCli"] = AIProviderType.CodexCli,
            ["IFlowCli"] = AIProviderType.IFlowCli,
            // ...more type mappings
        };
}
```

The purpose of this mapping table is to convert string-form Provider names into enum types. This allows configuration files to use intuitive string names, while the internal code uses type-safe enums for processing.
Configuration should be as intuitive as possible. Nobody wants to memorize a pile of obscure code names.
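The lookup the mapping table enables can be sketched as follows. Plain strings stand in for the `AIProviderType` enum here so the example stays self-contained; the key point is the case-insensitive resolution from configuration strings to canonical identifiers.

```csharp
using System;
using System.Collections.Generic;

// Sketch of the mapping table's job: resolve user-facing strings from the
// configuration file to canonical, type-safe identifiers. Strings stand in
// for the real AIProviderType enum values.
var typeMap = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
{
    ["ClaudeCodeCli"] = "AIProviderType.ClaudeCodeCli",
    ["CodebuddyCli"]  = "AIProviderType.CodebuddyCli",
    ["CodexCli"]      = "AIProviderType.CodexCli",
    ["IFlowCli"]      = "AIProviderType.IFlowCli",
};

// Config files can be sloppy about casing; the lookup still resolves.
Console.WriteLine(typeMap["codexcli"]); // prints "AIProviderType.CodexCli"
```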
Example Configuration File
In practice, everything can be configured in appsettings.json:
```json
{
  "AI": {
    "Providers": {
      "ClaudeCodeCli": {
        "Enabled": true,
        "Model": "glm-5-turbo",
        "WorkingDirectory": "/path/to/project"
      },
      "CodebuddyCli": { "Enabled": true, "Model": "glm-4.7" },
      "CodexCli": { "Enabled": true, "Model": "gpt-5.4" },
      "IFlowCli": { "Enabled": true, "Model": "glm-4.7" }
    }
  }
}
```

Each Provider can independently configure parameters such as enablement, model version, and working directory. This design preserves flexibility while remaining easy to manage and maintain.
In some ways, configuration files are like life’s options: you can choose to enable or disable certain things. The only difference is that code choices are easier to regret later.
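To illustrate how such a fragment is consumed, here is a minimal sketch that parses the same shape with `System.Text.Json`. Reading via `JsonDocument` keeps the example free of extra type declarations; the real project presumably binds these sections to configuration classes instead.

```csharp
using System;
using System.Text.Json;

// Sketch: parse an appsettings-style fragment matching the article's layout.
var json = """
{
  "AI": {
    "Providers": {
      "ClaudeCodeCli": { "Enabled": true, "Model": "glm-5-turbo" },
      "CodexCli":      { "Enabled": true, "Model": "gpt-5.4" }
    }
  }
}
""";

using var doc = JsonDocument.Parse(json);
var claude = doc.RootElement.GetProperty("AI").GetProperty("Providers").GetProperty("ClaudeCodeCli");
Console.WriteLine(claude.GetProperty("Model").GetString()); // prints "glm-5-turbo"
```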
Adventure Party Task Flow
Section titled “Adventure Party Task Flow”The Art of Task Division
With the unified technical architecture in place, the next step is making multiple Agents work together. HagiCode designed a task flow mechanism so different Agents can handle different stages of the work:
```text
Proposal creation (user)
        │
        ▼
[Claude Code] ──generate proposal──▶ Proposal document
        │
        ▼
[Codebuddy] ──refine description──▶ Refined proposal
        │
        ▼
[Codex] ──execute changes──▶ Code changes
        │
        ▼
[iFlow] ──archive──▶ Historical records
```

The benefit of this division of labor is that each Agent only needs to focus on the tasks it does best, rather than trying to do everything. Claude Code generates proposals from scratch. Codebuddy makes proposal descriptions clearer. Codex turns proposals into actual code changes. iFlow archives and preserves those changes.
This is really just teamwork, the same as in daily life. Everyone has a role, and only together can something big get done. Here, the team members just happen to be AIs.
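The four-stage flow above can be sketched as a simple pipeline. Each stage is simulated by a delegate standing in for an Agent call, and the stage outputs are invented placeholders; the stage names follow the article's diagram.

```csharp
using System;
using System.Collections.Generic;

// Sketch: each stage consumes the previous stage's artifact and produces
// the next one, in the order shown in the diagram.
var stages = new List<(string Agent, Func<string, string> Run)>
{
    ("ClaudeCodeCli", input => $"proposal({input})"), // generate proposal
    ("CodebuddyCli",  input => $"refined({input})"),  // refine description
    ("CodexCli",      input => $"changes({input})"),  // execute changes
    ("IFlowCli",      input => $"archived({input})"), // archive
};

var artifact = "user request";
foreach (var (agent, run) in stages)
{
    artifact = run(artifact);
    Console.WriteLine($"{agent}: {artifact}");
}
```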
Key Practical Takeaways
In actual operation, we summarized the following lessons:
1. Agent selection strategy matters
Tasks should not be assigned casually; they should be matched to each Agent’s strengths:
- Proposal generation: use Claude Code, because it has stronger contextual understanding
- Code execution: use Codex, because it is more precise for code modification
- Proposal refinement: use Codebuddy, because it offers strong cost performance
- Archival storage: use iFlow, because it is stable and reliable
After all, putting the right person on the right task is a timeless principle.
2. Configuration isolation ensures stability
Each Agent’s configuration is managed independently, supports environment-variable overrides, and uses separate working directories. As a result, a configuration error in one Agent does not affect the others.
This is like personal boundaries in life. Everyone needs their own space; non-interference makes coexistence possible.
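The environment-variable override mentioned above can be sketched like this. In .NET configuration, nested keys are conventionally addressed with double underscores (e.g. `AI__Providers__CodexCli__Model`); the override value used here is invented for illustration.

```csharp
using System;

// Sketch: an environment variable, when present, wins over the value from
// appsettings.json. "gpt-5.4-preview" is a made-up override value.
Environment.SetEnvironmentVariable("AI__Providers__CodexCli__Model", "gpt-5.4-preview");

var fromFile = "gpt-5.4"; // value as configured in appsettings.json
var model = Environment.GetEnvironmentVariable("AI__Providers__CodexCli__Model") ?? fromFile;
Console.WriteLine(model); // the override takes effect
```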
3. Error-handling mechanism
A failure in a single Agent should not affect the overall workflow. We implemented a fallback strategy: when one Agent fails, the system can automatically switch to a backup plan or skip that step and continue with later tasks. At the same time, complete logging makes troubleshooting easier afterward.
Nobody can guarantee that errors will never happen. The key is how you handle them. Life works much the same way.
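The fallback strategy described above can be sketched as trying candidates in order, logging each failure, and stopping at the first success. Delegates simulate Agents here, and the first one throws deliberately to demonstrate the switch-over; none of this is HagiCode's actual implementation.

```csharp
using System;
using System.Collections.Generic;

// Sketch: preferred Agent first, backup next; a failure is logged, not fatal.
var candidates = new List<(string Name, Func<string, string> Run)>
{
    ("CodexCli",    _ => throw new InvalidOperationException("simulated outage")),
    ("OpenCodeCli", input => $"edited({input})"), // backup for general code edits
};

string? result = null;
foreach (var (name, run) in candidates)
{
    try
    {
        result = run("fix typo");
        Console.WriteLine($"{name} succeeded");
        break;
    }
    catch (Exception ex)
    {
        // Complete logging makes troubleshooting easier afterward.
        Console.WriteLine($"{name} failed: {ex.Message}; trying next agent");
    }
}
Console.WriteLine(result ?? "all agents failed; step skipped");
```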
4. Monitoring and observability
Through the ACP protocol (our custom communication protocol based on JSON-RPC 2.0), we can track the execution status of each Agent. Session isolation ensures concurrency safety, while dynamic caching improves performance.
The things you cannot see are often the ones most likely to go wrong. Some visibility is always better than flying blind.
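Since the article describes ACP only as "based on JSON-RPC 2.0," here is a sketch of what a request envelope in that style could look like. The method name and parameters are invented for illustration and are not ACP's actual schema.

```csharp
using System;
using System.Text.Json;

// Sketch: a JSON-RPC 2.0 request envelope with a session id, reflecting the
// session-isolation idea. "agent/status" is a hypothetical method name.
var request = new
{
    jsonrpc = "2.0",
    id = 1,
    method = "agent/status",
    @params = new { session = "s-42" } // session id supports concurrency isolation
};

var payload = JsonSerializer.Serialize(request);
Console.WriteLine(payload);
```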
Real-World Results and Benefits
After adopting this multi-Agent collaboration configuration, the HagiCode project’s development efficiency improved significantly. Specifically:
- Task-handling capacity doubled: in the past, one Agent had to handle many kinds of tasks at once; now tasks can be processed in parallel, and throughput has increased dramatically
- More stable output quality: each Agent focuses only on what it does best, so consistency and quality both improve
- Lower maintenance cost: unified interfaces and configuration management make the whole system easier to maintain and extend
- Adding new Agents is simple: to integrate a new AI product, you only need to implement the interface and add configuration, without changing the core logic
This approach not only solved HagiCode’s own problems, but also proved that multi-Agent collaboration is a viable architectural choice.
The gains were quite noticeable. The process was just a bit of a hassle.
Conclusion
This article shared the HagiCode project’s practical experience with multi-Agent collaboration configuration. The main takeaways include:
- Standardized interfaces: `IAIProvider` unifies the behavior of different Agents, allowing the code to ignore which company’s product is underneath
- Factory pattern: `ActivatorUtilities.CreateInstance` dynamically creates Provider instances, supporting runtime configuration and dependency injection
- Protocol unification: the ACP protocol provides standardized communication between Agents through a bidirectional mechanism based on JSON-RPC 2.0
- Task routing: assign work reasonably across different Agents so each can play to its strengths, instead of expecting one Agent to do everything
This design not only solves the problem of “multiple Agents fighting each other,” but also uses the adventure party task flow mechanism to make the development process more automated and specialized.
If you are also considering introducing multiple AI assistants, I hope this article gives you some useful reference points. Of course, every project is different, and the specific approach still needs to be adjusted to the actual situation. There is no one-size-fits-all solution; the best solution is the one that fits you.
Beautiful things or people do not need to be possessed. As long as they remain beautiful, simply appreciating that beauty is enough. Technical solutions are the same: the one that suits you is the best one…