Learning by studying and reproducing real projects is becoming mainstream, but scattered learning materials and broken context make it hard for AI assistants to deliver their full value. This article introduces the Vault system design in the HagiCode project: through a unified storage abstraction layer, AI assistants can understand and access all learning resources, enabling true cross-project knowledge reuse.
In fact, in the AI era, the way we learn new technologies is quietly changing. Traditional approaches like reading books and watching videos still matter, but “studying and reproducing projects” - deeply researching and learning from the code, architecture, and design patterns of excellent open source projects - is clearly becoming more efficient. Running and modifying high-quality open source projects directly is one of the fastest ways to understand real-world engineering practice.
But this approach also brings new challenges.
Learning materials are too scattered. Notes might live in Obsidian, code repositories may be spread across different folders, and an AI assistant’s conversation history becomes a separate data island. Every time you need AI help analyzing a project, you have to manually copy code snippets and organize context, which is quite tedious.
Context keeps getting lost. AI assistants cannot directly access local learning resources, so every conversation starts with re-explaining background information. The code repositories you study update quickly, and manual synchronization is error-prone. Worse still, knowledge is hard to share across multiple learning projects - the design patterns learned in project A are completely unknown to the AI when it works on project B.
At the core, these issues are all forms of “data islands.” If there were a unified storage abstraction layer that let AI assistants understand and access all learning resources, the problem would be solved.
To address these pain points, we made a key design decision while developing HagiCode: build a Vault system as a unified knowledge storage abstraction layer. The impact of that decision may be even greater than you expect - more on that shortly.
The approach shared in this article comes from practical experience in the HagiCode project. HagiCode is an AI coding assistant based on the OpenSpec workflow. Its core idea is that AI should not only be able to “talk,” but also be able to “do” - directly operate on code repositories, execute commands, and run tests. GitHub: github.com/HagiCode-org/site
During development, we found that AI assistants need frequent access to many kinds of user learning resources: code repositories, notes, configuration files, and more. If users had to provide everything manually each time, the experience would be terrible. That led us to design the Vault system.
HagiCode’s Vault system supports four types, each corresponding to different usage scenarios:
| Type | Purpose | Typical Scenario |
| --- | --- | --- |
| folder | General-purpose folder type | Temporary learning materials, drafts |
| coderef | Designed specifically for studying code projects | Systematically learning an open source project |
| obsidian | Integrates with the Obsidian note-taking software | Reusing an existing notes library |
| system-managed | Managed automatically by the system | Project configuration, prompt templates, and more |
Among them, the coderef type is the most commonly used in HagiCode. It provides a standardized directory structure and AI-readable metadata descriptions for code-study projects. Why design this type specifically? Because studying an open source project is not as simple as “downloading code.” You also need to manage the code itself, learning notes, configuration files, and other content at the same time, and coderef standardizes all of that.
This design may look simple, but it was carefully considered:
Simple and reliable. JSON is human-readable, making it easy to debug and modify manually. When something goes wrong, you can open the file directly to inspect the state or even repair it by hand - especially useful during development.
Reduced dependencies. File system storage avoids the complexity of a database. There is no need to install and configure an extra database service, which reduces system complexity and maintenance cost.
Concurrency-safe. SemaphoreSlim is used to guarantee thread safety. In an AI coding assistant scenario, multiple operations may access the Vault registry at the same time, so concurrency control is necessary.
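The registry access pattern described above can be sketched in TypeScript. This is an illustrative analogue, not HagiCode's actual C# implementation: the `AsyncMutex` class below plays the role of SemaphoreSlim(1, 1), and the entry shape is assumed.

```typescript
// Illustrative sketch: a vault registry whose mutations are serialized
// through a simple async mutex (standing in for SemaphoreSlim(1, 1)).
class AsyncMutex {
  private queue: Promise<void> = Promise.resolve();
  runExclusive<T>(fn: () => Promise<T> | T): Promise<T> {
    const result = this.queue.then(() => fn());
    // Keep the chain alive whether fn succeeded or failed.
    this.queue = result.then(() => undefined, () => undefined);
    return result;
  }
}

interface VaultEntry { id: string; type: string; physicalPath: string; }

class VaultRegistry {
  private readonly mutex = new AsyncMutex();
  private entries: VaultEntry[] = [];

  register(entry: VaultEntry): Promise<void> {
    // All mutations go through the mutex, so concurrent callers cannot
    // interleave read-modify-write cycles on the registry.
    return this.mutex.runExclusive(() => {
      this.entries = [...this.entries.filter(e => e.id !== entry.id), entry];
    });
  }

  list(): Promise<VaultEntry[]> {
    return this.mutex.runExclusive(() => [...this.entries]);
  }
}
```

The same id can safely be re-registered from concurrent call sites: the mutex guarantees the last registration wins without corrupting the list.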
This allows the AI assistant to automatically understand which learning resources are available, without requiring the user to provide context manually every time. It makes the HagiCode experience feel especially natural - tell the AI, “Help me analyze React concurrent rendering,” and it can automatically find the previously registered React learning Vault instead of asking you to paste code over and over again.
The system distinguishes two access types for each Vault:

reference (read-only): AI can only use the content for analysis and understanding, without modifying it

editable (modifiable): AI can modify the content as needed for the task
This distinction tells the AI which content is “read-only reference” and which content it is allowed to modify, reducing the risk of accidental changes. For example, if you register an open source project’s Vault as learning material, you definitely do not want AI casually editing the code inside it - so mark it as reference. But if it is your own project Vault, you can mark it as editable and let AI help modify the code.
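As a minimal illustration of how that distinction can be enforced (types and function names here are assumed, not HagiCode's actual API), a guard checked before any write is enough:

```typescript
// Hypothetical write guard for the reference/editable distinction.
type VaultAccessType = 'reference' | 'editable';

interface VaultInfo {
  id: string;
  accessType: VaultAccessType;
}

// Throws before any file mutation if the vault was registered as
// read-only reference material.
function assertWritable(vault: VaultInfo): void {
  if (vault.accessType !== 'editable') {
    throw new Error(
      `Vault '${vault.id}' is registered as reference material and cannot be modified.`
    );
  }
}
```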
For coderef Vaults, the system provides a standardized directory structure:
```
my-coderef-vault/
├── index.yaml   # vault metadata description
├── AGENTS.md    # operating guide for AI assistants
├── docs/        # stores learning notes and documentation
└── repos/       # manages referenced code repositories through Git submodules
```
What is the design philosophy behind this structure?
docs/ stores learning notes, using Markdown to record your understanding of the code, architecture analysis, and lessons from debugging. These notes are not only for you - AI can understand them too, and will automatically reference them when handling related tasks.
repos/ manages the studied repositories through Git submodules rather than by copying code directly. This has two benefits: first, it stays in sync with upstream, and a single git submodule update fetches the latest code; second, it saves space, because multiple Vaults can reference different versions of the same repository.
index.yaml contains Vault metadata so the AI assistant can quickly understand its purpose and contents. It is essentially a “self-introduction” for the Vault, so the AI knows what it is for the first time it sees it.
AGENTS.md is a guide written specifically for AI assistants, explaining how to handle the content inside the Vault. You can tell the AI things like: “When analyzing this project, focus on code related to performance optimization” or “Do not modify test files.”
Scenario 1: Systematically studying open source projects
Create a CodeRef Vault, manage the target repository through Git submodules, and record learning notes in the docs/ directory. AI can access both the code and the notes at the same time, providing more accurate analysis. Notes written while studying a module are automatically referenced by the AI when it later analyzes related code - like having an “assistant” that remembers your previous thinking.
Scenario 2: Reusing an Obsidian notes library
If you are already using Obsidian to manage notes, just register your existing Vault in HagiCode directly. AI can access your knowledge base without manual copy-paste. This feature is especially practical because many people have years of accumulated notes, and once connected, AI can “read” and understand that knowledge system.
Scenario 3: Cross-project knowledge reuse
Multiple AI proposals can reference the same Vault, enabling knowledge reuse across projects. For example, you can create a “design patterns learning Vault” that contains notes and code examples for many design patterns. No matter which project the AI is analyzing, it can refer to the content in that Vault - knowledge does not need to be accumulated repeatedly.
"Vault file paths must stay inside the registered vault root.");
}
return combinedPath;
}
This ensures all file operations stay within the Vault root directory and prevents malicious path access. Security is not something to take lightly. If an AI assistant is going to operate on the file system, the boundaries must be clearly defined.
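For illustration, the same containment check can be sketched with Node's path module (the function name is assumed):

```typescript
import * as path from 'node:path';

// Resolve a caller-supplied relative path and reject any result that
// escapes the vault root (e.g. via '..' segments).
function resolveInsideVault(vaultRoot: string, relativePath: string): string {
  const root = path.resolve(vaultRoot);
  const combined = path.resolve(root, relativePath);
  if (combined !== root && !combined.startsWith(root + path.sep)) {
    throw new Error('Vault file paths must stay inside the registered vault root.');
  }
  return combined;
}
```

Comparing resolved absolute paths, rather than inspecting the raw input string, is what makes the check robust against tricks like `docs/../../etc`.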
When using the HagiCode Vault system, there are several things to pay special attention to:
Path safety: Make sure custom paths stay within the allowed scope, otherwise the system will reject the operation. This prevents accidental misuse and potential security risks.
Git submodule management: CodeRef Vaults are best managed with Git submodules instead of directly copying code. The benefits were covered earlier - keeping in sync and saving space. That said, submodules have their own workflow, so first-time users may need a little time to get familiar with them.
File preview limits: The system limits file size (256KB) and quantity (500 files), so oversized files need to be handled in batches. This limit exists for performance reasons. If you run into very large files, you can split them manually or process them another way.
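A pre-filter that mirrors these documented limits might look like this (the shape of the file descriptor is assumed):

```typescript
// Documented preview limits: 256 KB per file, 500 files per pass.
const MAX_PREVIEW_BYTES = 256 * 1024;
const MAX_PREVIEW_FILES = 500;

interface FileStat {
  path: string;
  bytes: number;
}

// Keep only files small enough to preview, and cap the batch size;
// anything filtered out has to be handled in a later batch or with
// a dedicated search tool.
function selectPreviewBatch(files: FileStat[]): FileStat[] {
  return files
    .filter(f => f.bytes <= MAX_PREVIEW_BYTES)
    .slice(0, MAX_PREVIEW_FILES);
}
```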
Diagnostic information: Creating a Vault returns diagnostic information that can be used for debugging on failure. Check the diagnostics first when you run into issues - in most cases, that is where you will find the clue.
The HagiCode Vault system is fundamentally solving a simple but profound problem: how to let AI assistants understand and use local knowledge resources.
Through a unified storage abstraction layer, a standardized directory structure, and automated context injection, it delivers a knowledge management model of “register once, reuse everywhere.” Once a Vault is created, AI can automatically access and understand learning notes, code repositories, and documentation resources.
The experience improvement from this design is obvious. There is no longer any need to manually copy code snippets or repeatedly explain background information - the AI assistant becomes more like a teammate who truly understands the project and can provide more valuable help based on existing knowledge.
The Vault system shared in this article is a solution shaped through real trial and error and real optimization during HagiCode development. If you think this design is valuable, that says something about the engineering behind it - and HagiCode itself is worth checking out as well.
Thank you for reading. If you found this article useful, please like, save, and share it.
This content was created with AI-assisted collaboration, and the final version was reviewed and confirmed by the author.
In the MonoSpecs project management system, DESIGN.md carries the architectural design and technical decisions of a project. But the traditional editing workflow forces users to jump out to an external editor. That fragmented experience is like being interrupted in the middle of reading a poem: the inspiration is gone, and so is the mood. This article shares the solution we put into practice in the HagiCode project: editing DESIGN.md directly in the web interface, with support for importing templates from an online design site. After all, who does not enjoy the feeling of completing everything in one flow?
As the core carrier of project design documents, DESIGN.md holds key information such as architecture design, technical decisions, and implementation guidance. However, the traditional editing approach requires users to switch to an external editor such as VS Code, manually locate the physical path, and then edit the file. It is not especially complicated, but after repeating the process a few times, it becomes tiring.
The problems mainly show up in the following ways:
Fragmented workflow: users must constantly switch between the web management interface and a local editor, breaking the continuity of their workflow, much like having the music cut out in the middle of a song.
Hard to reuse: the design site already publishes a rich library of design templates, but they cannot be integrated directly into the project editing workflow. The good stuff exists, but you still cannot use it where you need it.
Missing experience loop: there is no closed loop for “preview-select-import,” so users must copy and paste manually, which increases the risk of mistakes.
Collaboration friction: keeping design documents and code implementation in sync becomes a high-friction process, which hurts team efficiency.
To solve these pain points, we decided to add direct editing support for DESIGN.md in the web interface and allow one-click template import from an online design site. It was not some earth-shaking decision. We simply wanted to make the development experience smoother.
The solution shared in this article comes from our hands-on experience in the HagiCode project. HagiCode is an AI-driven coding assistant project, and during development we frequently need to maintain project design documents. To help the team collaborate more efficiently, we explored and implemented this online editing and import solution. There is nothing mysterious about it. We ran into a problem and worked out a way to solve it.
This solution uses a frontend-backend separated architecture with a same-origin proxy, mainly composed of the following layers. In practice, the design can be summed up as “each part doing its own job”:
We use a single global drawer instead of local pop-up layers, with state managed through layoutSlice, which gives users a consistent experience across views (classic and kanban). No matter which view the user opens the editor from, they get the same interaction model. A consistent experience makes people feel more at ease instead of getting disoriented when they switch views.
We mounted DESIGN.md-related endpoints under ProjectController, reusing the existing project permission boundary and avoiding the complexity of adding a separate controller. This makes permission handling clearer and also aligns with RESTful resource organization. Sometimes reuse is more meaningful than creating something new from scratch.
We derive an opaque version from the file system’s LastWriteTimeUtc, which gives us lightweight optimistic concurrency control. When multiple users edit the same file at once, the system can detect conflicts and prompt the user to refresh. This design does not block editing, while still protecting data consistency.
We use IHttpClientFactory to proxy external design-site resources, avoiding both cross-origin issues and SSRF risks. This keeps the system secure while also simplifying frontend calls. You can hardly be too careful with security.
The backend is mainly responsible for path resolution, file read/write, and version management. These tasks are basic, but indispensable, like the foundation of a house:
The version design is interesting in its simplicity: we use the file’s last modified time and size to generate a unique version identifier. It is lightweight and reliable, with no extra version database to maintain. Simple solutions are often the most effective.
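The scheme can be sketched as follows. The exact token format HagiCode uses is not shown in the article, so the concatenation here is an assumption; what matters is that any change to either component yields a different token:

```typescript
// Derive an opaque version token from a file's last-write time and size.
function computeVersion(lastWriteTimeUtcMs: number, sizeBytes: number): string {
  return `${Math.trunc(lastWriteTimeUtcMs)}-${sizeBytes}`;
}

// Optimistic concurrency: a save is accepted only while the client's
// expected version still matches what is currently on disk.
function isConflict(currentVersion: string, expectedVersion: string): boolean {
  return currentVersion !== expectedVersion;
}
```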
On the frontend, we implement dirty-state detection and save logic. This design helps users understand whether their changes have been saved and reduces the anxiety of “what if I lose it?”:
```typescript
import { useCallback, useState } from 'react';

// Dirty-state detection and save logic
const [draft, setDraft] = useState('');
const [savedDraft, setSavedDraft] = useState('');
const isDirty = draft !== savedDraft;

const handleSave = useCallback(async () => {
  const result = await saveProjectDesignMdDocument({
    ...activeTarget,
    content: draft,
    expectedVersion: document.version, // optimistic concurrency control
  });
  setSavedDraft(draft); // update saved state
}, [activeTarget, document, draft]);
```
In this implementation, we maintain two pieces of state: draft is the content currently being edited, while savedDraft is the saved content. Comparing them tells us whether there are unsaved changes. The design is simple, but it gives people peace of mind. Nobody wants the thing they worked hard on to disappear.
The index file on the design site defines all available templates. With this index, users can choose the template they want as easily as ordering from a menu:
Each entry includes the template’s basic information and download links. The backend reads the list of available templates from this index and presents them for the user to choose from. That makes selection intuitive instead of forcing people to feel their way around in the dark.
We do two things here: first, we validate the slug format strictly with a regular expression to prevent path traversal attacks; second, we cache preview images to reduce pressure on the external site. The former is protection, the latter is optimization, and both matter.
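A strict allow-list pattern of this kind might look as follows. The exact pattern is an assumption; the article only states that slugs are validated by regular expression:

```typescript
// Allow only lowercase letters, digits, dots, and dashes, never a
// leading dot, so path-traversal inputs like '../x' can never match.
const SLUG_PATTERN = /^[a-z0-9][a-z0-9.-]{0,63}$/;

function isValidSlug(slug: string): boolean {
  // Belt-and-braces: reject consecutive dots even though the pattern
  // already forbids slashes.
  return SLUG_PATTERN.test(slug) && !slug.includes('..');
}
```

Validating against an allow-list (what is permitted) rather than a deny-list (what is forbidden) is the safer default for anything that ends up in a file path or URL.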
```typescript
// Handler name and parameter type reconstructed for completeness;
// the opening of this callback was elided in the original.
const handleTemplateImport = useCallback(async (entry: DesignSiteCatalogEntry) => {
  const result = await getProjectDesignMdSiteImportDocument(
    activeTarget.projectId,
    entry.slug
  );
  setDraft(result.content); // replace editor text only, do not save automatically
  setIsImportDrawerOpen(false);
}, [activeTarget?.projectId]);
```
The import flow follows a “user confirmation” principle: after import, only the editor content is updated, and nothing is saved automatically. Users can inspect the imported content and save it manually only after confirming it looks right. The final decision should stay in the hands of the user.
When DESIGN.md does not exist, the backend returns a virtual document state. This lets the frontend avoid special handling for the “file does not exist” case, and a unified API simplifies the code logic:
```csharp
    return new SaveProjectDesignDocumentResultDto { Created = !exists };
}
```
The benefit of this design is that the frontend does not need special-case logic for missing files. By hiding that complexity in the backend, the frontend can focus more easily on user experience.
Scenario 2: Import a Template from the Design Site
After the user selects the “Linear” design template in the import drawer, the system fetches the DESIGN.md content through the backend proxy. The whole process is transparent to the user: they only choose a template, and the system handles the network requests and data transformation automatically.
```
// 1. The system fetches DESIGN.md content through the backend proxy
// 2. The backend validates the slug and fetches content from upstream (C#)
var entry = FindDesignSiteEntry(catalog, "linear.app");
using var upstreamResponse = await httpClient.SendAsync(request);
var content = await upstreamResponse.Content.ReadAsStringAsync();

// 3. The frontend replaces the editor text (TypeScript)
setDraft(result.content);
// 4. The user reviews it and then saves it manually to disk
```
That is the experience we want: simple, but powerful.
When multiple users edit the same DESIGN.md at the same time, the system detects version conflicts. This optimistic concurrency control mechanism preserves data consistency without blocking the user’s edits:
```csharp
if (!string.Equals(currentVersion, expectedVersion, StringComparison.Ordinal))
{
    throw new BusinessException(
        ProjectDesignDocumentErrorCodes.VersionConflict,
        $"DESIGN.md at '{targetPath}' changed on disk.");
}
```
The frontend catches this error and prompts the user:
```tsx
// Frontend prompts the user to refresh and retry
<Alert>
  <AlertTitle>Version conflict</AlertTitle>
  <AlertDescription>
    The file was modified by another process. Please refresh to get the
    latest version and try again.
  </AlertDescription>
</Alert>
```
This optimistic concurrency control mechanism keeps data consistent without blocking users while they work. Conflicts are unavoidable, but at least users should know what happened instead of silently losing their changes.
The system degrades gracefully when the upstream design site is unavailable: even if the external dependency fails, the core editing functionality keeps working. A system should be resilient instead of collapsing the moment something goes wrong.
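One common shape for that degradation (names and types assumed) is to swallow the upstream failure and fall back to an empty catalog, so only the template import feature, not the editor itself, is affected:

```typescript
interface TemplateEntry {
  slug: string;
  title: string;
}

// If the upstream design-site catalog cannot be fetched, return an
// empty list: the template picker shows nothing, but local editing
// of DESIGN.md keeps working.
async function loadCatalogSafe(
  fetchCatalog: () => Promise<TemplateEntry[]>
): Promise<TemplateEntry[]> {
  try {
    return await fetchCatalog();
  } catch {
    return [];
  }
}
```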
Markdown enhancement: we currently use a basic Textarea, but we could upgrade to CodeMirror for syntax highlighting and keyboard shortcuts. When the editor feels better, writing documentation feels better too.
Preview mode: add real-time Markdown preview to improve the editing experience. What-you-see-is-what-you-get always gives people more confidence.
Diff merge: implement an intelligent merge algorithm instead of simple full-text replacement. Conflicts are inevitable, but the conflict-resolution process does not have to be painful.
Local caching: cache design.json in the database to reduce dependency on the external site. The fewer dependencies a system has, the more stable it tends to be.
In the HagiCode project, we implemented a complete online editing and import solution for DESIGN.md through frontend-backend collaboration. The core value of this solution lies in the following points:
Higher efficiency: no need to switch tools; editing and importing design documents can happen in one unified web interface.
Lower barrier to entry: one-click design template import helps new projects get started quickly.
Secure and reliable: path validation, version conflict detection, and graceful degradation mechanisms keep the system stable.
Better user experience: the global drawer, dirty-state detection, and confirmation dialogs refine the overall interaction experience.
This solution is already running in the HagiCode project and has solved the team’s pain points around design document management. If you are facing similar problems, I hope this article gives you some useful ideas. There is no particularly profound theory here, only the practical work of running into a problem and finding a way to solve it.
If this article helped you, feel free to give the project a Star on GitHub. The public beta has already started, and you can join the experience right after installing it. Open-source projects always need more feedback and encouragement, and if you found this useful, it is worth helping more people discover it.
“Beautiful things or people do not have to belong to you. As long as they remain beautiful, it is enough to quietly appreciate that beauty.”
The same goes for a DESIGN.md editor. It does not need to be overly complex. If it helps you work efficiently, that is already enough.
Thank you for reading. If you found this article useful, please consider liking, bookmarking, and sharing it.
This content was created with AI-assisted collaboration, and the final version was reviewed and approved by the author.
In the era of AI-assisted development, how can we help AI assistants better understand our learning resources? The HagiCode project built the Vault system as a unified knowledge storage abstraction layer that AI can understand, greatly improving the efficiency of learning through project reproduction.
In the AI era, the way developers learn new technologies and architectures is changing profoundly. “Reproducing projects” - that is, deeply studying and learning from the code, architecture, and design patterns of excellent open source projects - has become an efficient way to learn. Compared with traditional methods like reading books or watching videos, directly reading and running high-quality open source projects helps you understand real-world engineering practices much faster.
Still, this learning method comes with quite a few challenges.
Learning materials are too scattered. Your notes may live in Obsidian, code repositories may be scattered across different folders, and your AI assistant’s conversation history becomes yet another isolated data island. When you want AI to help analyze a project, you have to manually copy code snippets and organize context, which is rather tedious.
What is even more troublesome is the broken context. AI assistants cannot directly access your local learning resources, so you have to provide background information again in every conversation. On top of that, reproduced code repositories update quickly, manual syncing is error-prone, and knowledge is hard to share across multiple learning projects.
At the root, all of these problems come from “data islands.” If there were a unified storage abstraction layer that allowed AI assistants to understand and access all your learning resources, the problem would be solved neatly.
The Vault system shared in this article is exactly the solution we developed while building HagiCode. HagiCode is an AI coding assistant project, and in our daily development work we often need to study and refer to many different open source projects. To help AI assistants better understand these learning resources, we designed Vault, a cross-project persistent storage system.
This solution has already been validated in HagiCode in real use. If you are facing similar knowledge management challenges, I hope these experiences can offer some inspiration. After all, once you’ve fallen into a few pits yourself, you should leave something behind for the next person.
The core idea of the Vault system is simple: create a unified knowledge storage abstraction layer that AI can understand. From an implementation perspective, the system has several key characteristics.
Among them, the coderef type is the most commonly used in HagiCode. It is specifically designed for reproduced code projects, providing a standardized directory structure and AI-readable metadata descriptions.
The advantage of this design is that it is simple and reliable. JSON is human-readable, which makes debugging and manual editing easier; filesystem storage avoids the complexity of a database and reduces system dependencies. After all, sometimes the simplest option really is the best one.
This enables an important capability: AI assistants can automatically understand the available learning resources without users manually providing context. You could say that counts as a kind of tacit understanding.
The system automatically creates and manages the following vaults:
hagiprojectdata: project data storage used to save project configuration and state
personaldata: personal data storage used to save user preferences
hbsprompt: a prompt template library used to manage commonly used AI prompts
These vaults are initialized automatically when the system starts, so users do not need to configure them manually. Some things are simply better left to the system instead of humans worrying about them.
An important part of the design is access control. The system divides vaults into two access types:
```typescript
export interface VaultForText {
  id: string;
  name: string;
  type: string;
  physicalPath: string;
  accessType: 'read' | 'write'; // Key: distinguish read-only from editable
}
```
reference (read-only): AI may only use the content for analysis and understanding and cannot modify it. Suitable for referenced open source projects, documents, and similar materials

editable (modifiable): AI can modify the content as needed for the task. Suitable for your own notes, drafts, and similar materials
This distinction matters. It tells AI which content is “read-only reference” and which content is “safe to edit,” reducing the risk of accidental changes. After all, nobody wants their hard work to disappear because of an unintended edit.
```typescript
// Tail of the vault-creation call; the opening was elided in the original.
// Behind the scenes the system will:
// 1. Clone the React repository into vault/repos/react
// 2. Create the docs/ directory for notes
// 3. Generate the index.yaml metadata
// 4. Create the AGENTS.md guide file
  return response;
};
```
This API call completes a series of actions: creating the directory structure, initializing Git submodules, generating metadata files, and more. You only need to provide the basic information and let the system handle the rest. It is honestly a fairly worry-free approach.
```typescript
// Tail of the session-creation call; the opening was elided in the original.
  quickRequestText: "Focus on the Fiber architecture and scheduler implementation"
});
```
The system automatically injects vault information into the AI context, letting AI know which learning resources are available. When AI can understand what you have in mind, that kind of tacit understanding is hard to come by.
"Vault file paths must stay inside the registered vault root.");
}
return combinedPath;
}
This is important. If you customize a vault path, make sure it stays within the allowed range, otherwise the system will reject the operation. You really cannot overemphasize security.
CodeRef Vault recommends Git submodules instead of directly copying code:
```csharp
private static string BuildCodeRefAgentsContent()
{
    return """
        # CodeRef Vault Guide

        Repositories under `repos/` should be maintained through Git submodules
        rather than copied directly into the vault root.
        Keep this structure stable so assistants and tools can understand the vault quickly.
        """ + Environment.NewLine;
}
```
This brings several advantages: keeping code synchronized with upstream, saving disk space, and making it easier to manage multiple versions of the code. After all, who wants to download the same thing again and again?
If your vault contains a large number of files or very large files, preview performance may be affected. In that case, you can consider processing files in batches or using specialized search tools. Sometimes when something gets too large, it becomes harder to handle, not easier.
If creation fails, you can inspect the diagnostic information to understand the specific cause. When something goes wrong, checking the diagnostics is often the most direct way forward.
Through a unified storage abstraction layer, the Vault system solves several core pain points of reproducing projects in the AI era:
Centralized knowledge management: all learning resources are gathered in one place instead of scattered everywhere
Automatic AI context injection: AI assistants can automatically understand the available learning resources without manual context setup
Cross-project knowledge reuse: knowledge can be shared and reused across multiple learning projects
Standardized directory structure: a consistent directory layout lowers the learning curve
This solution has already been validated in the HagiCode project. If you are also building tools related to AI-assisted development, or facing similar knowledge management problems, I hope these experiences can serve as a useful reference.
In truth, the value of a technical solution does not lie in how complicated it is, but in whether it solves real problems. The core idea of the Vault system is very simple: build a unified knowledge storage layer that AI can understand. Yet it is precisely this simple abstraction that improved our development efficiency quite a bit.
Sometimes the simple approach really is the best one. After all, complicated things often hide even more pitfalls…
If this article helped you, feel free to give the project a Star on GitHub, or visit the official website to learn more about HagiCode. The public beta has already started, and you can experience the full AI coding assistant features as soon as you install it.
Thank you for reading. If you found this article useful, feel free to like, bookmark, and share it.
This content was created with AI-assisted collaboration, and the final content was reviewed and confirmed by the author.
In AI product design, the quality of user input often determines the quality of the output. This article shares a “progressive disclosure” interaction approach we practiced in the HagiCode project. Through step-by-step guidance, intelligent completion, and instant feedback, it turns users’ brief and vague inputs into structured technical proposals, significantly improving human-computer interaction efficiency.
Anyone building AI products has probably seen this situation: a user opens your app and enthusiastically types a one-line request, only for the AI to return something completely off target. It is not that the AI is not smart enough. The user simply did not provide enough information. Mind-reading is hard for anyone.
This issue became especially obvious while we were building HagiCode. HagiCode is an AI-driven coding assistant where users describe requirements in natural language to create technical proposals and sessions. In actual use, we found that user input often has these problems:
Inconsistent input quality: some users type only a few words, such as “optimize login” or “fix bug”, without the necessary context
Inconsistent technical terminology: different users use different terms for the same thing; some say “frontend” while others say “FE”
Missing structured information: there is no project background, repository scope, or impact scope, even though these are critical details
Repeated problems: the same types of requests appear again and again, and each time they need to be explained from scratch
The direct result is predictable: the AI has a harder time understanding the request, proposal quality becomes unstable, and the user experience suffers. Users think, “This AI is not very good,” while we feel unfairly blamed. If you give me only one sentence, how am I supposed to guess what you really want?
In truth, this is understandable. Even people need time to understand one another, and machines are no exception.
To solve these pain points, we made a bold decision: introduce the design principle of “progressive disclosure” to improve human-computer interaction. The changes this brought were probably larger than you would imagine. To be honest, we did not expect it to be this effective at the time.
The approach shared in this article comes from our practical experience in the HagiCode project. HagiCode is an open-source AI coding assistant designed to help developers complete tasks such as code writing, technical proposal generation, and code review through natural language interaction. Project link: github.com/HagiCode-org/site.
We developed this progressive disclosure approach through multiple rounds of iteration and optimization during real product development. If you find it valuable, that at least suggests our engineering is doing something right. In that case, HagiCode itself may also be worth a look. Good tools are meant to be shared.
“Progressive disclosure” is a design principle from the field of HCI (human-computer interaction). Its core idea is simple: do not show users all information and options at once. Instead, reveal only what is necessary step by step, based on the user’s actions and needs.
This principle is especially suitable for AI products because AI interaction is naturally progressive. The user says a little, the AI understands a little, then the user adds more, and the AI understands more. It is very similar to how people communicate with each other: understanding usually develops gradually.
In HagiCode’s scenario, we applied progressive disclosure in four areas:
1. Description optimization mechanism: let AI help you say things more clearly
When a user enters a short description, we do not send it directly to the AI for interpretation. Instead, we first trigger a “description optimization” flow. The core of this flow is “structured output”: converting the user’s free text into a standard format. It is like stringing loose pearls into a necklace so everything becomes easier to understand.
The optimized description must include the following standard sections:
Background: the problem background and context
Analysis: technical analysis and reasoning
Solution: the solution and implementation steps
Practice: concrete code examples and notes
At the same time, we automatically generate a Markdown table showing information such as the target repository, paths, and edit permissions, making subsequent AI operations easier. A clear directory always makes things easier to find.
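As a rough illustration of this structured-output step, the sketch below wraps free text into the four standard sections and appends the repository table. The function and field names are hypothetical; only the section names come from the article:

```typescript
// Hypothetical sketch of the structured-output step described above.
interface RepoInfo {
  repo: string;
  paths: string[];
  canEdit: boolean;
}

function buildStructuredDescription(userText: string, repos: RepoInfo[]): string {
  // Markdown table listing target repository, paths, and edit permissions.
  const table = [
    "| Repository | Paths | Edit |",
    "| --- | --- | --- |",
    ...repos.map((r) => `| ${r.repo} | ${r.paths.join(", ")} | ${r.canEdit ? "yes" : "no"} |`),
  ].join("\n");

  return [
    "## Background",
    userText, // the user's original input is kept as context
    "## Analysis",
    "(filled in by the optimization model)",
    "## Solution",
    "(filled in by the optimization model)",
    "## Practice",
    "(filled in by the optimization model)",
    "## Target Repositories",
    table,
  ].join("\n");
}
```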
Here is the actual implementation:
// Core method in ProposalDescriptionMemoryService.cs
The key to this flow is “memory injection”. We inject historical context such as project conventions, similar cases, and negative patterns into the prompt, allowing the AI to reference past experience during optimization. Experience should not go to waste.
Notes:
Make sure the current input takes priority over historical memory, so explicitly specified user information is not overridden
HagIndex references must be treated as factual sources and must not be altered by historical cases
Low-confidence correction suggestions should not be injected as strong constraints
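The three rules above can be sketched as a simple prompt-assembly order. This is not HagiCode's actual implementation; the types and thresholds here are illustrative assumptions:

```typescript
// Illustrative sketch of the memory-injection priority rules.
interface MemoryItem {
  text: string;
  confidence: number;   // 0..1
  isHagIndexFact: boolean;
}

function assemblePrompt(currentInput: string, memory: MemoryItem[]): string {
  const facts = memory.filter((m) => m.isHagIndexFact);  // factual sources, never altered
  const strong = memory.filter((m) => !m.isHagIndexFact && m.confidence >= 0.8);
  const weak = memory.filter((m) => !m.isHagIndexFact && m.confidence < 0.8);

  return [
    "## Current input (highest priority, must not be overridden)",
    currentInput,
    "## HagIndex facts (treat as ground truth)",
    ...facts.map((m) => m.text),
    "## Conventions and similar cases",
    ...strong.map((m) => m.text),
    "## Low-confidence hints (suggestions only, not constraints)",
    ...weak.map((m) => m.text),
  ].join("\n");
}
```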
2. Voice input capability: speaking is more natural than typing
In addition to text input, we also support voice input. This is especially useful for describing complex requirements. Typing a technical request can take minutes, while saying it out loud may take only a few dozen seconds.
The key design focus for voice input is “state management”. Users must clearly know what state the system is currently in. We defined the following states:
Idle: the system is ready and recording can start
Waiting upstream: the system is connecting to the backend service
There is an important detail here: we use a “fingerprint set” to manage deletion synchronization. When speech recognition returns multiple results, users may delete some of them. We store the fingerprints of deleted content so that if the same content appears again later, it is skipped automatically. It is essentially a way to remember what the user has already rejected.
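The fingerprint-set idea can be sketched in a few lines. The normalization used for fingerprinting here is a toy stand-in, not HagiCode's actual hashing:

```typescript
// Sketch of the "fingerprint set": remember what the user deleted so that
// re-recognized duplicates are skipped automatically.
function fingerprint(text: string): string {
  // Normalize so trivial whitespace/case differences map to one fingerprint.
  return text.trim().toLowerCase().replace(/\s+/g, " ");
}

class DeletedSet {
  private deleted = new Set<string>();

  markDeleted(text: string): void {
    this.deleted.add(fingerprint(text));
  }

  // Keep only recognition results the user has not previously rejected.
  filterIncoming(results: string[]): string[] {
    return results.filter((r) => !this.deleted.has(fingerprint(r)));
  }
}
```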
3. Prompt management system: externalize the AI’s “brain”
For complex tasks, such as first-time installation and configuration, we use a multi-step wizard design. Each step requests only the necessary information and provides clear progress indicators. Large tasks become much more manageable when handled one step at a time.
The wizard state model:
interface WizardState {
  currentStep: number;   // 0-3, corresponding to 4 steps
  steps: WizardStep[];
  canGoNext: boolean;
  canGoBack: boolean;
  isLoading: boolean;
  error: string | null;
}

interface WizardStep {
  id: number;
  title: string;
  description: string;
  completed: boolean;
}
// Step navigation logic
const goToNextStep = () => {
  if (wizardState.currentStep < wizardState.steps.length - 1) {
    // Advance one step (the original body is not shown; a setWizardState
    // updater is assumed here)
    setWizardState({ ...wizardState, currentStep: wizardState.currentStep + 1 });
  }
};
Each step has its own validation logic, and completed steps receive clear visual markers. Canceling opens a confirmation dialog to prevent users from losing progress through accidental actions.
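Per-step validation is what drives the navigation flags. A sketch, with types simplified from the `WizardState` shape above:

```typescript
// Sketch: derive canGoNext/canGoBack from per-step completion state.
interface Step {
  completed: boolean;
}

function computeNavFlags(steps: Step[], currentStep: number) {
  return {
    // Can advance only if the current step's own validation passed.
    canGoNext: currentStep < steps.length - 1 && steps[currentStep].completed,
    canGoBack: currentStep > 0,
  };
}
```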
Looking back at HagiCode’s progressive disclosure practice, we can summarize several core principles:
Step-by-step guidance: break complex tasks into smaller steps and request only the necessary information at each stage
Intelligent completion: use historical context and project knowledge to fill in information automatically
Instant feedback: give every action clear visual feedback and status hints
Fault-tolerance mechanisms: allow users to undo and reset so mistakes do not lead to irreversible loss
Input diversity: support multiple input methods such as text and voice
In HagiCode, the practical result of this approach was clear: the average length of user input increased from fewer than 20 characters to structured descriptions of 200-300 characters, the quality of AI-generated proposals improved significantly, and user satisfaction increased along with it.
This is not surprising. The more information users provide, the more accurately the AI can understand them, and the better the results it can return. In that sense, it is not very different from communication between people.
If you are also building AI-related products, I hope these experiences offer some useful inspiration. Remember: users do not necessarily refuse to provide information. More often, the product has not yet asked the right questions in the right way. The core of progressive disclosure is finding the best timing and form for those questions.
If this article helped you, feel free to give us a Star on GitHub and follow the continued development of the HagiCode project. Public beta has already started, and you can experience the full feature set right now by installing it:
Thank you for reading. If you found this article useful, feel free to like, save, and share it.
This content was created with AI-assisted collaboration, and the final content was reviewed and confirmed by the author.
In AI application development, token consumption directly affects cost. In the HagiCode project, we implemented an “ultra-minimal Classical Chinese output mode” through the SOUL system. Without sacrificing information density, it reduces output tokens by roughly 30-50%. This article shares the implementation details of that approach and the lessons we learned using it.
In AI application development, token consumption is an unavoidable cost issue. This becomes especially painful in scenarios where the AI needs to produce large amounts of content. How do you reduce output tokens without sacrificing information density? The more you think about it, the more frustrating the problem can get.
Traditional optimization ideas mostly focus on the input side: trimming system prompts, compressing context, or using more efficient encoding. But these methods eventually hit a ceiling. Push compression too far, and you start hurting the AI’s comprehension and output quality. That is basically just deleting content, which is not very meaningful.
So what about the output side? Could we get the AI to express the same meaning more concisely?
The question sounds simple, but there is quite a bit hidden beneath it. If you directly ask the AI to “be concise,” it may really give you only a few words. If you add “keep the information complete,” it may drift back to the original verbose style. Constraints that are too strong hurt usability; constraints that are too weak do nothing. Where exactly is the balance point? No one can say for sure.
To solve these pain points, we made a bold decision: start from language style itself and design a configurable, composable constraint system for expression. The impact of that decision may be even larger than you expect. I will get into the details shortly, and the result may surprise you a little.
The approach shared in this article comes from our practical experience in the HagiCode project.
HagiCode is an open-source AI coding assistant that supports multiple AI models and custom configuration. During development, we discovered that AI output token usage was too high, so we designed a solution for it. If you find this approach valuable, that probably says something good about our engineering work. And if that is the case, HagiCode itself may also be worth your attention. Code does not lie.
The full name of the SOUL system is Soul Oriented Universal Language. It is the configuration system used in the HagiCode project to define the language style of an AI Hero. Its core idea is simple: by constraining how the AI expresses itself, it can output content in a more concise linguistic form while preserving informational completeness.
It is a bit like putting a linguistic mask on the AI… though honestly, it is not quite that mystical.
Ultra-minimal Classical Chinese mode is the most representative token-saving strategy in the SOUL system. Its core principle is to use the high semantic density of Classical Chinese to compress output length while preserving complete information.
Semantic compression: the same meaning can be expressed with fewer characters.
Redundancy removal: Classical Chinese naturally omits many conjunctions and particles common in modern Chinese.
Concise structure: each sentence carries high information density, making it well suited as a vehicle for AI output.
Here is a concrete example:
Modern Chinese output (about 80 characters):
Based on your code analysis, I found several issues. First, on line 23, the variable name is too long and should be shortened. Second, on line 45, you did not handle null values and should add conditional logic. Finally, the overall code structure is acceptable, but it can be further optimized.
Ultra-minimal Classical Chinese output (about 35 characters, saving 56%):
Code reviewed: line 23 variable name verbose, abbreviate; line 45 lacks null handling, add checks. Overall structure acceptable; minor tuning suffices.
The gap is large enough to make you stop and think.
"name": "Ultra-Minimal Classical Chinese Output Mode",
"summary": "Use relatively readable Classical Chinese to compress semantic density, convey the meaning with as few words as possible, and retain only conclusions, judgments, and necessary actions, thereby significantly reducing output tokens.",
"soul": "Your persona core comes from the \"Ultra-Minimal Classical Chinese Output Mode\": use relatively readable Classical Chinese to compress semantic density, convey the meaning with as few words as possible, and retain only conclusions, judgments, and necessary actions, thereby significantly reducing output tokens.\nMaintain the following signature language traits: 1. Prefer concise Classical Chinese sentence patterns such as \"can\", \"should\", \"do not\", \"already\", \"however\", and \"therefore\", while avoiding obscure and difficult wording;\n2. Compress each sentence to 4-12 characters whenever possible, removing preamble, pleasantries, repeated explanation, and ineffective modifiers;\n3. Do not expand arguments unless necessary; if the user does not ask a follow-up, provide only conclusions, steps, or judgments;\n4. Do not alter the core persona of the main Catalog; only compress the expression into restrained, classical, ultra-minimal short sentences."
}
There are several key points in this template design:
Clear constraints: 4-12 characters per sentence, remove redundancy, prioritize conclusions.
Avoid obscurity: use concise Classical Chinese sentence patterns and avoid rare, difficult wording.
Preserve persona: only change the mode of expression, not the core persona.
When you keep adjusting configuration, it all comes down to a few parameters in the end.
No modal particles, exclamation marks, or reduplication throughout
Short fragmented muttering mode (soul-orth-01):
Keep sentences within 1-5 characters
Simulate fragmented self-talk
Weaken explicit logic and prioritize emotional transmission
Guided Q&A mode (soul-orth-03):
Use questions to guide the user’s thinking
Reduce direct output content
Lower token usage through interaction
Each of these modes emphasizes a different design direction, but the core goal is the same: reduce output tokens while preserving information quality. There are many roads to Rome; some are simply easier to walk than others.
One powerful feature of the SOUL system is support for cross-combining main Catalogs and orthogonal dimensions:
50 main Catalog groups: define the base persona (such as healing style, top-student style, aloof style, and so on)
10 orthogonal dimensions: define the mode of expression (such as Classical Chinese, telegraph-style, Q&A style, and so on)
Combination effect: can generate 500+ unique language-style combinations
For example, you can combine “Professional Development Engineer” with “Ultra-Minimal Classical Chinese Output Mode” to create an AI assistant that is both professional and concise. This flexibility allows the SOUL system to adapt to many different scenarios. You can mix and match however you like; there are more combinations than you are likely to exhaust.
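The combination mechanism can be sketched as simple config composition. The field names here are assumptions based on the JSON template shown earlier, not the actual SOUL schema:

```typescript
// Illustrative sketch: compose a main Catalog persona with an orthogonal
// expression dimension. The dimension only constrains expression; the
// catalog persona stays intact.
interface SoulConfig {
  name: string;
  soul: string;
}

function combineSoul(catalog: SoulConfig, dimension: SoulConfig): SoulConfig {
  return {
    name: `${catalog.name} × ${dimension.name}`,
    soul: `${catalog.soul}\n\nExpression constraints:\n${dimension.soul}`,
  };
}

// 50 main Catalogs × 10 orthogonal dimensions = 500 combinations.
const combinations = 50 * 10;
```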
According to real test data from the project, the results after enabling ultra-minimal Classical Chinese mode are as follows:
| Scenario             | Original output tokens | Classical Chinese mode | Savings |
| -------------------- | ---------------------- | ---------------------- | ------- |
| Code review          | 850                    | 420                    | 51%     |
| Technical Q&A        | 620                    | 380                    | 39%     |
| Solution suggestions | 1100                   | 680                    | 38%     |
| Average              | -                      | -                      | 30-50%  |
The data comes from actual usage statistics in the HagiCode project, and exact results vary by scenario. Still, the saved tokens add up, and your wallet will appreciate it.
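As a quick sanity check, the savings column follows directly from the token counts:

```typescript
// savings = (original - compressed) / original, rounded to a whole percent
function savingsPercent(original: number, compressed: number): number {
  return Math.round(((original - compressed) / original) * 100);
}
```

For example, the code-review row gives `(850 - 420) / 850 ≈ 51%`, matching the table.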
The HagiCode SOUL system offers an innovative way to optimize AI output: reduce token consumption by constraining expression rather than compressing the information itself. As its most representative approach, ultra-minimal Classical Chinese mode has delivered 30-50% token savings in real-world use.
The core value of this approach lies in the following:
Preserve information quality: instead of simply truncating output, it expresses the same content more efficiently.
Flexible and composable: supports 500+ combinations of personas and expression styles.
Easy to use: Soul Builder provides a visual interface, so no coding is required.
Production-grade stability: validated in the project and capable of large-scale use.
If you are also building AI applications, or if you are interested in the HagiCode project, feel free to reach out. The meaning of open source lies in progressing together, and we also look forward to seeing your own innovative uses. The saying may be old, but it remains true: one person may go fast, but a group goes farther.
Thank you for reading. If you found this article useful, you are welcome to like, bookmark, and share it.
This content was created with AI-assisted collaboration, and the final version was reviewed and confirmed by the author.
AI coding assistants are powerful, but they often generate code that does not match real requirements or violates project conventions. This article shares how the HagiCode project uses the OpenSpec workflow to implement specification-driven development and significantly reduce the risk of AI hallucinations through a structured proposal mechanism.
Anyone who has used GitHub Copilot or ChatGPT to write code has probably had this experience: the code generated by AI looks polished, but once you actually use it, problems show up everywhere. Maybe it uses the wrong component from the project, maybe it ignores the team’s coding standards, or maybe it writes a large chunk of logic based on assumptions that do not even exist.
This is the so-called “AI hallucination” problem. In programming, it appears as code that seems reasonable on the surface but does not actually fit the real state of the project.
There is also something a bit frustrating about this. As AI coding assistants become more widespread, the problem becomes more serious. After all, AI lacks an understanding of project history, architectural decisions, and coding conventions, and when given too much freedom it can “creatively” generate code that does not match reality. It is a bit like writing an article: without structure, it is easy to wander off into imagination, even though the real situation is far more grounded.
To solve these pain points, we made a bold decision: instead of trying to make AI smarter, we put it inside a “specification” cage. The change this decision brought was probably bigger than you might expect, and I will explain that shortly.
The approach shared in this article comes from our practical experience in the HagiCode project. HagiCode is an open-source AI coding assistant project dedicated to solving real problems in AI programming through structured engineering practices.
Before diving into the solution, let us first look at where the problem actually comes from. After all, if you understand both yourself and your opponent, you can fight a hundred battles without defeat. Applied to AI, that saying is still surprisingly fitting.
AI models are trained on public code repositories, but your project has its own history, conventions, and architectural decisions. AI cannot directly access this kind of “implicit knowledge,” so the code it generates is often disconnected from the actual project.
This is not entirely the AI’s fault. It has never lived inside your project, so how could it know all of your unwritten rules? Like a brand-new intern, not understanding the local customs is normal. The only issue is that the cost can be rather high.
When you ask AI, “Help me implement a user authentication feature,” it may generate code in almost any form. Without clear constraints, AI will implement things in the way it “thinks” is reasonable instead of following your project’s requirements.
That is like asking someone who has never learned your project standards to improvise freely. How could that not cause trouble? It is not even that the AI is being irresponsible; it just has no idea what responsibility means in this context.
After AI generates code, if there is no structured review process, code based on false assumptions can go directly into the repository. By the time the problem is discovered in testing or even in production, the cost is already far too high.
That is like trying to mend the pen after the sheep are already gone. The principle is obvious, but in practice people often still find the extra work bothersome. Before things go wrong, who really wants to spend more time up front?
OpenSpec: The Answer to Specification-driven Development
HagiCode chose OpenSpec as the solution. The core idea is simple: all code changes must go through a structured proposal workflow, turning abstract ideas into executable implementation plans.
That may sound grand, but in plain terms it just means making AI write the requirements document before writing the code. As the old saying goes, preparation leads to success, and lack of preparation leads to failure.
OpenSpec is an npm-based command-line tool (@fission-ai/openspec) that defines a standard proposal file structure and validation mechanism. Put simply, it makes AI “write the requirements document” before it writes code.
OpenSpec ensures proposal quality through a three-step workflow:
Step 1: Initialize the proposal - Set the session state to Openspecing
Step 2: Intermediate processing - Keep the Openspecing state while gradually refining the artifacts
Step 3: Complete the proposal - Transition to the Reviewing state
There is a clever detail in this design: the first step uses the ProposalGenerationStart type, and completing it does not trigger a state transition. This ensures that the review stage is not entered too early before the entire multi-step workflow is finished.
This detail is actually quite interesting. It is like cooking: if you lift the lid before the heat is right, nothing will turn out well. Only by moving step by step with a bit of patience can you end up with a good dish.
// Implementation in the HagiCode project
public enum MessageAssociationType
{
    ProposalGeneration = 2,
    ProposalExecution = 3,

    /// <summary>
    /// Marks the start of the three-step proposal generation workflow.
    /// Does not transition to the Reviewing state when completed.
    /// </summary>
    ProposalGenerationStart, // explicit value not shown in the original excerpt
}
Every OpenSpec proposal follows the same directory structure:
openspec/
├── changes/ # Active and archived changes
│ ├── {change-name}/
│ │ ├── proposal.md # Proposal description
│ │ ├── design.md # Design document
│ │ ├── specs/ # Technical specifications
│ │ └── tasks.md # Executable task list
│ └── archive/ # Archived changes
└── specs/ # Standalone specification library
According to statistics from the HagiCode project, there are already more than 4,000 archived changes and over 150,000 lines of specification files. This historical accumulation not only gives AI clear guidance to follow, but also provides the team with a valuable knowledge base.
It is a bit like the classics left behind by earlier generations. Read enough of them and patterns begin to emerge. The only difference is that these classics are stored in files instead of written on bamboo slips.
The system implements multiple layers of validation to ensure proposal quality:
// Validate that required files exist
ValidateProposalFiles()
// Validate prerequisites before execution
ValidateExecuteAsync()
// Validate start conditions
ValidateStartAsync()
// Validate archive conditions
ValidateArchiveAsync()
// Validate proposal name format (kebab-case)
ValidateNameFormat()
These validations are like gatekeepers at multiple checkpoints. Only truly qualified proposals can pass through. It may look tedious, but it is still much better than letting poor code enter the repository.
When AI runs inside HagiCode, it uses predefined Handlebars templates. These templates contain explicit step-by-step instructions and protective guardrails. For example:
Do not continue before understanding the user’s intent
Do not generate unvalidated code
Require the user to provide the name again if it is invalid
If the change already exists, suggest using the continue command instead of recreating it
This way of “dancing in shackles” actually helps AI focus more on understanding requirements and generating code that follows standards. Constraints are not always a bad thing. Sometimes too much freedom is exactly what creates chaos.
The openspec/ folder structure will be created automatically in the project root.
There is not much mystery in this step. It is just tool installation, which everyone understands. Just remember to use @fission-ai/openspec@1; newer versions may have pitfalls, and stability matters most.
In the HagiCode conversation interface, use the shortcut command:
/opsx:new
Or specify a change name and target repository:
/opsx:new "add-user-auth" --repos "repos/web"
Creating a proposal is like outlining an article before writing it. Once you have an outline, the rest becomes much easier. Many people prefer to jump straight into writing, only to realize halfway through that the idea does not hold together. That is when the real headache begins.
Use /opsx:continue to generate the required artifacts step by step:
proposal.md - Describes the purpose and scope of the change
# Proposal: Add User Authentication
## Why
The current system lacks user authentication and cannot protect sensitive APIs.
## What Changes
- Add JWT authentication middleware
- Implement login/registration APIs
- Update frontend integration
design.md - Detailed technical design
# Design: Add User Authentication
## Context
The system currently uses public APIs, so anyone can access them...
## Decisions
1. Choose JWT instead of Session...
2. Use the HS256 algorithm...
## Risks
- Risk of token leakage...
- Mitigation measures...
specs/ - Technical specifications and test scenarios
# user-auth Specification
## Requirements
### Requirement: JWT Token Generation
The system SHALL use the HS256 algorithm to generate JWT tokens.
#### Scenario: Valid login
- WHEN the user provides valid credentials
- THEN the system SHALL return a valid JWT token
tasks.md - Executable task list
# Tasks: Add User Authentication
## 1. Backend Changes
- [ ] 1.1 Create AuthController
- [ ] 1.2 Implement JWT middleware
- [ ] 1.3 Add unit tests
These artifacts are a lot like drafts for an article. Once the draft is complete, the main text flows naturally. Many people dislike writing drafts because they think it wastes time, but in reality that is often where the clearest thinking happens.
AI will read all context files and execute tasks step by step according to the checklist in tasks.md. At this point, because the specification is already clear, the quality of the generated code is much higher.
By this stage, half the work is already done. Once there is a clear task list, the rest is simply executing it step by step. The problem is that many people skip the earlier steps and jump straight here, and then quality naturally becomes hard to guarantee.
Move the completed change into the archive/ directory so it can be reviewed and reused later.
Archiving matters. It is like carefully storing away a finished article. When a similar problem appears in the future, looking back through old records may provide the answer. Many people find it troublesome, but these accumulated materials are often the most valuable assets.
Use kebab-case, start with a letter, and include only lowercase letters, numbers, and hyphens:
✅ add-user-auth
❌ AddUserAuth
❌ add--user-auth
Naming rules may seem minor, but consistency is always worth something. In software, consistency matters even when people do not always pay attention to it.
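The rules above fit in a single pattern. The exact pattern OpenSpec uses is not shown here; this sketch simply matches the examples above:

```typescript
// kebab-case: starts with a lowercase letter; only lowercase letters,
// digits, and single hyphens between segments.
const KEBAB = /^[a-z][a-z0-9]*(-[a-z0-9]+)*$/;

function validateNameFormat(name: string): boolean {
  return KEBAB.test(name);
}
```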
Using the wrong type in step 1 of the three-step workflow - This causes the state to transition too early
Forgetting to trigger the state transition in the final step - This leaves the workflow stuck in the Openspecing state
Skipping review and executing directly - You should validate that all artifacts are complete first
These mistakes are all common for beginners. Experienced people naturally know how to avoid them. Still, everyone becomes experienced eventually, and taking a few detours is part of the process. The only hope is to avoid taking too many.
OpenSpec supports managing multiple proposals at the same time, which is especially useful for large features:
# View all active changes
openspec list

# Switch to a specific change
openspec apply "add-user-auth"

# View change status
openspec status --change "add-user-auth"
Managing multiple changes is like writing several articles at once. It takes some technique and patience, but once you get used to it, it becomes natural enough.
Reviewing: Under review (artifacts can be revised repeatedly)
Executing: In execution (applying tasks.md)
A state machine is, in the end, just a set of rules. Rules can feel annoying at times, but more often they are useful. As the saying goes, without rules, nothing can be accomplished properly.
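The session states mentioned in this article (Openspecing, Reviewing, Executing, plus archiving) can be sketched as a small transition table. The allowed transitions here are simplified assumptions based on the workflow described above, not the actual HagiCode state machine:

```typescript
// Sketch of the proposal session state flow.
type SessionState = "Openspecing" | "Reviewing" | "Executing" | "Archived";

const transitions: Record<SessionState, SessionState[]> = {
  Openspecing: ["Openspecing", "Reviewing"], // intermediate steps keep the same state
  Reviewing: ["Openspecing", "Executing"],   // review may send artifacts back for edits
  Executing: ["Archived"],
  Archived: [],
};

function canTransition(from: SessionState, to: SessionState): boolean {
  return transitions[from].includes(to);
}
```

Note how skipping review (`Openspecing → Executing`) is simply not a legal transition, which is exactly the guardrail the workflow relies on.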
Faster collaboration - Archived changes provide references for future development
Traceability - Every change has a complete record of proposal, design, specification, and tasks
This approach does not make AI smarter. It puts AI inside a “specification” cage. Practice has shown that dancing in shackles can actually lead to a better performance.
The principle is simple. Constraints are not necessarily bad. Like writing, having a format to follow often makes it easier to produce good work. Many people dislike constraints because they think constraints limit creativity, but creativity also needs the right soil to grow.
If you are also using AI coding assistants and have run into similar problems, give OpenSpec a try. Specification-driven development may seem to add extra steps, but that early investment pays back many times over in code quality and maintenance efficiency.
Sometimes slowing down a little is actually the fastest way forward. Many people just do not realize it yet.
If this article helped you, feel free to give us a Star on GitHub. The HagiCode public beta has already started, and you can join the experience by installing it now.
That is about enough for this article. There is nothing especially profound here, just a summary of a few practical lessons. I hope it is useful to everyone. Sharing is a good thing: you learn something yourself, and others learn something too.
Still, an article is only an article. Practice is what really matters. Knowledge from the page always feels shallow until you apply it yourself.
Thank you for reading. If you found this article useful, feel free to like, bookmark, and share it.
This content was created with AI-assisted collaboration, and the final content was reviewed and approved by the author.
This article introduces two major recent updates to the HagiCode platform: full support for the Zhipu AI GLM-5.1 model and the successful integration of Gemini CLI as the tenth Agent CLI. Together, these updates further strengthen the platform’s multi-model capabilities and multi-CLI ecosystem.
Time really does move fast. Large language models have been springing up like bamboo shoots after rain. Not long ago, we were still cheering for “an AI that can write code.” Now we are already in an era of multi-model collaboration and multi-tool integration. Is that exciting? Perhaps. After all, what developers need has never been just the tool itself, but the ease of adapting to different scenarios and switching flexibly when needed.
As an AI-assisted coding platform, HagiCode has recently welcomed two important developments: first, the full integration of Zhipu AI’s GLM-5.1 model; second, the official addition of Gemini CLI as the tenth supported Agent CLI. These two updates may not sound earth-shaking, but they are unquestionably good news for the platform’s continued maturation.
GLM-5.1 is Zhipu AI’s latest flagship model. Compared with GLM-5.0, it offers stronger reasoning, deeper code understanding, and smoother tool calling. More importantly, it is the first GLM model to support image input. What does that mean? It means users can let the AI look directly at a screenshot instead of struggling to describe the problem in words. Once you’ve used that convenience, you immediately understand its value.
At the same time, through the HagiCode.Libs.Providers architecture, HagiCode successfully integrated Gemini CLI into the platform. This is now the tenth Agent CLI. To be honest, getting to this point does bring a modest sense of accomplishment.
It is also worth mentioning that HagiCode’s image upload feature lets users communicate with AI directly through screenshots. Even when running GLM 4.7, the platform still works well and has already helped complete many important build tasks. As for GLM-5.1, naturally, it goes one step further.
The approach shared in this article comes from our practical experience in the HagiCode project. HagiCode is an open-source AI-assisted coding platform designed to provide developers with a flexible and powerful AI programming assistant through a multi-model, multi-CLI architecture. Project repository: github.com/HagiCode-org/site
One of HagiCode’s core strengths is its support for multiple AI programming CLI tools through a unified abstraction layer. The advantage of this design is actually quite simple: new tools can come in, old tools can stay, and the codebase does not turn into chaos. To be fair, that is how everyone would like life to work.
The platform defines supported CLI provider types through the AIProviderType enum:
public enum AIProviderType
{
    ClaudeCodeCli = 0,  // Claude Code CLI
    CodexCli = 1,       // GitHub Copilot Codex
    GitHubCopilot = 2,  // GitHub Copilot
    CodebuddyCli = 3,   // Codebuddy CLI
    OpenCodeCli = 4,    // OpenCode CLI
    IFlowCli = 5,       // IFlow CLI
    HermesCli = 6,      // Hermes CLI
    QoderCli = 7,       // Qoder CLI
    KiroCli = 8,        // Kiro CLI
    KimiCli = 9,        // Kimi CLI
    GeminiCli = 10,     // Gemini CLI (new)
}
As you can see, Gemini CLI joins this family as the tenth member. Each CLI has its own distinct characteristics and usage scenarios, so users can choose flexibly based on their needs. After all, many roads lead to Rome; some are simply easier than others.
HagiCode.Libs.Providers provides a unified Provider interface that makes each CLI integration standardized and concise. Taking Gemini CLI as an example:
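The original integration snippet is omitted here, but the alias idea it leads into can be sketched roughly as follows. Note that the names (`providerAliases`, `resolveProviderAlias`) are illustrative assumptions, not HagiCode's actual API:

```typescript
// Hypothetical sketch of CLI alias resolution; names are illustrative,
// not the actual HagiCode.Libs.Providers implementation.
const providerAliases: Record<string, string> = {
  "gemini": "GeminiCli",
  "gemini-cli": "GeminiCli",
  "claude": "ClaudeCodeCli",
  "claude-code": "ClaudeCodeCli",
};

// Normalize user input and map it to a canonical provider id.
function resolveProviderAlias(name: string): string | undefined {
  return providerAliases[name.trim().toLowerCase()];
}
```

Under this sketch, both spellings resolve to the same provider, which matches the behavior described below.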
This means users can invoke Gemini CLI with either gemini or gemini-cli, and the system will recognize it automatically. It is like a friend with both a formal name and a nickname - either way, people know who you mean.
GLM-5.0 carries the “Legacy” label because it is an old version identifier retained for backward compatibility. GLM-5.1, by contrast, is a brand-new standalone version with no historical burden. Some things stay in the past; others travel lighter and move faster.
HagiCode’s image support is implemented through the SupportsImage property on SecondaryProfession:
public class HeroSecondaryProfessionSettingDto
{
    public bool SupportsImage { get; set; }
}
In the Secondary Professions Catalog, the GLM-5.1 configuration looks like this:
{
"id": "secondary-glm-5-1",
"supportsImage": true
}
This means users can upload screenshots directly for AI analysis, such as:
Screenshots of error messages
Problems in a UI screen
Data visualization charts
Code execution results
There is no longer any need to describe everything manually - just upload the screenshot. The convenience of this feature is obvious once you have used it. Sometimes one look says more than a long explanation.
Gemini CLI supports a rich set of configuration options:
public class GeminiOptions
{
    public string? ExecutablePath { get; set; }
    public string? WorkingDirectory { get; set; }
    public string? SessionId { get; set; }
    public string? Model { get; set; }
    public string? AuthenticationMethod { get; set; }
    public string? AuthenticationToken { get; set; }
    public Dictionary<string, string?> AuthenticationInfo { get; set; }
    public Dictionary<string, string?> EnvironmentVariables { get; set; }
    public string[] ExtraArguments { get; set; }
    public TimeSpan? StartupTimeout { get; set; }
    public CliPoolSettings? PoolSettings { get; set; }
}
These options cover everything from basic setup to advanced features, giving users the flexibility to configure the CLI around their own needs. Everyone’s workflow is different, so a little flexibility is always welcome.
Gemini CLI supports the ACP (Agent Communication Protocol), which is HagiCode’s unified CLI communication standard. Through ACP, different CLIs can interact with the platform in a consistent way, greatly simplifying integration work. In short, it standardizes the complicated parts so everyone can work more easily.
Once configured, HagiCode can call the GLM-5.1 model normally. It is neither especially hard nor especially easy - you just need to follow the setup as intended.
Speaking of real-world practice, the best example is the HagiCode platform’s own build workflow. HagiCode’s development process has already made full use of AI capabilities.
HagiCode’s platform design is well optimized, so it can still provide a good development experience even with GLM 4.7. The platform has already helped complete multiple important build projects, including:
Integration of multiple CLI Providers
Implementation of the image upload feature
Documentation generation and content publishing
That is actually a good thing. Not everyone needs the newest thing all the time. What suits you best is often what matters most.
HagiCode.Libs.Providers provides a unified mechanism for registration and usage:
services.AddHagiCodeLibs();

var gemini = serviceProvider.GetRequiredService<ICliProvider<GeminiOptions>>();
var codebuddy = serviceProvider.GetRequiredService<ICliProvider<CodebuddyOptions>>();
var hermes = serviceProvider.GetRequiredService<ICliProvider<HermesOptions>>();
This dependency injection design keeps usage across different CLIs very concise and also makes unit testing and mocking more convenient. Clean code is a way of being responsible to yourself.
With full support for GLM-5.1 and the successful integration of Gemini CLI, HagiCode further strengthens its capabilities as a multi-model, multi-CLI AI programming platform. These updates not only give users more choices, but also demonstrate HagiCode’s forward-looking architecture and scalability.
GLM-5.1’s image support, combined with HagiCode’s screenshot upload feature, makes it possible to let the AI “understand from the image” and greatly reduces the cost of describing problems. And with support for ten CLIs, users can flexibly choose the AI programming assistant that best fits their preferences and scenarios. More choice is almost always a good thing.
Most importantly, HagiCode’s own build practice proves that the platform can already run well and complete complex tasks even with GLM 4.7, while upgrading to GLM-5.1 can further improve development efficiency. Life is often like that too: you do not always need the absolute best, only what suits you. Of course, if what suits you can become even better, then so much the better.
If you are interested in a multi-model, multi-CLI AI programming platform, give HagiCode a try - open source, free, and still evolving. Trying it costs nothing, and it may turn out to be exactly what you need.
Thank you for reading. If you found this article useful, feel free to like, bookmark, and share it to show your support.
This content was created with AI-assisted collaboration, and the final content was reviewed and confirmed by the author.
In the HagiCode project, users can choose from multiple CLI tools to drive AI programming assistants, including Claude Code CLI, GitHub Copilot, OpenCode CLI, Codebuddy CLI, Hermes CLI, and more. These CLI tools are general-purpose AI programming tools on their own, but through HagiCode's abstraction layer, they can flexibly connect to different AI model providers.
Zhipu AI (ZAI) provides an interface compatible with the Anthropic Claude API, allowing these CLI tools to directly use domestic GLM series models. Among them, GLM-5.1 is Zhipu’s latest large language model release, with significant improvements over GLM-5.0.
Through a unified abstraction layer, HagiCode enables flexible integration between GLM-5.1 and multiple CLIs. Developers can choose the CLI tool that best fits their preferences and usage scenarios, then use the latest GLM-5.1 model through simple configuration.
As Zhipu’s latest model version, GLM-5.1 offers clear improvements over GLM-5.0:
An independent version identifier with no legacy burden
Stronger reasoning and code understanding
Broad multi-CLI compatibility
Flexible reasoning level configuration
With the correct environment variables and Hero equipment configured, users can fully unlock the power of GLM-5.1 across different CLI environments.
Thank you for reading. If you found this article useful, feel free to like, bookmark, and share it to show your support.
This content was created with AI-assisted collaboration, and the final content was reviewed and confirmed by the author.
I held this article back for a long time before finally writing it, and I am still not sure whether it reads well. Technical writing is easy enough to produce, but hard to make truly engaging. Then again, I am no great literary master, so I might as well just set down this plain explanation.
Teams building desktop applications will all run into the same headache sooner or later: how do you distribute large files?
It is an awkward problem. Traditional HTTP/HTTPS direct downloads can still hold up when files are small and the number of users is limited. But time is rarely kind. As a project keeps growing, the installation packages grow with it: Desktop ZIP packages, portable packages, web deployment archives, and more. Then the issues start to surface:
Download speed is limited by origin bandwidth: no matter how much bandwidth a single server has, it still struggles when everyone downloads at once.
Resume support is nearly nonexistent: if an HTTP download is interrupted, you often have to start over from the beginning. That wastes both time and bandwidth.
The origin server takes all the pressure: all traffic flows back to a central server, bandwidth costs keep rising, and scalability becomes a real problem.
The HagiCode Desktop project was no exception. When we designed the distribution system, we kept asking ourselves: can we introduce a hybrid distribution approach without changing the existing index.json control plane? In other words, can we use the distributed nature of P2P networks to accelerate downloads while still keeping HTTP origin fallback so the system remains usable in constrained environments such as enterprise networks?
The impact of that decision turned out to be larger than you might expect. Let us walk through it step by step.
The approach shared in this article comes from our real-world experience in the HagiCode project. HagiCode is an open-source AI coding assistant project focused on helping development teams improve engineering efficiency. The project spans multiple subsystems, including the frontend, backend, desktop launcher, documentation, build pipeline, and server deployment.
The Desktop hybrid distribution architecture is exactly the kind of solution HagiCode refined through real operational experience and repeated optimization. If this design proves useful, then perhaps it also shows that HagiCode itself is worth paying attention to.
The project’s GitHub repository is HagiCode-org/site. If it interests you, feel free to give it a Star and save it for later.
At its heart, the hybrid distribution model can be summarized in a single sentence: P2P first, HTTP fallback.
The key lies in the word “hybrid.” This is not about simply adding BitTorrent and calling it a day. The point is to make the two delivery methods work together and complement each other:
The P2P network provides distributed acceleration. The more people download, the more peers join, and the faster the transfer becomes.
WebSeed/HTTP fallback guarantees availability, so downloads can still work in enterprise firewalls and internal network environments.
The control plane remains simple. We do not change the core logic of index.json; we only add a few optional metadata fields.
The real benefit is straightforward: users feel that “downloads are faster,” while the engineering team does not have to shoulder too much extra complexity. After all, the BT protocol is already mature, and there is little reason to reinvent the wheel.
Let us start with the overall architecture diagram to build a high-level mental model:
┌─────────────────────────────────────┐
│ Renderer (UI layer) │
├─────────────────────────────────────┤
│ IPC/Preload (bridge layer) │
├─────────────────────────────────────┤
│ VersionManager (version manager) │
├─────────────────────────────────────┤
│ HybridDownloadCoordinator (coord.) │
│ ├── DistributionPolicyEvaluator │
│ ├── DownloadEngineAdapter │
│ ├── CacheRetentionManager │
│ └── SHA256 Verifier │
├─────────────────────────────────────┤
│ WebTorrent (download engine) │
└─────────────────────────────────────┘
As the diagram shows, the system uses a layered design. The reason for separating responsibilities this clearly is simple: testability and replaceability.
The UI layer is responsible for displaying download progress and the sharing acceleration toggle. It is the surface.
The coordination layer is the core. It contains policy evaluation, engine adaptation, cache management, and integrity verification.
The engine layer encapsulates the concrete download implementation. At the moment, it uses WebTorrent.
The engine layer is abstracted behind the DownloadEngineAdapter interface. If we ever want to swap in a different BT engine later, or move the implementation into a sidecar process, that becomes much easier.
HagiCode Desktop keeps index.json as the sole control plane, and that design is critical. The control plane is responsible for version discovery, channel selection, and centralized policy, while the data plane is where the actual file transfer happens.
All of these fields are optional. If they are missing, the client falls back to the traditional HTTP download mode. The advantage of this design is backward compatibility: older clients are completely unaffected.
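As a rough sketch of that fallback behavior (the field names here are assumptions for illustration, not the real index.json schema):

```typescript
// Illustrative sketch: if the optional P2P metadata is absent from a
// version entry, the client simply takes the plain HTTP path, so older
// clients and older index.json files keep working unchanged.
interface VersionEntry {
  url: string;        // HTTP origin download URL (always present)
  magnetUri?: string; // optional: P2P swarm metadata
  sha256?: string;    // optional: integrity hash for the hard gate
}

function selectDownloadMode(entry: VersionEntry): "hybrid" | "http" {
  // Hybrid distribution only when both optional fields are present.
  return entry.magnetUri && entry.sha256 ? "hybrid" : "http";
}
```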
    throw new Error(`sha256 verification failed for ${version.id}`);
    }

    // 4. Mark as trusted cache and begin controlled seeding
    await this.cacheRetentionManager.markTrusted({
      versionId: version.id,
      cachePath,
      cacheSize,
    }, settings);

    return { cachePath, policy, verified };
  }
}
There is one especially important point here: SHA256 verification is a hard gate. A downloaded file must pass verification before it can enter the installation flow. If verification fails, the cache is discarded to ensure that an incorrect file never causes installation problems.
A pluggable engine design makes future optimization much easier. For example, V2 could run the engine in a helper process to avoid bringing down the main process if the engine crashes.
At the UI layer, the thing users care about most is simple: “am I currently downloading through P2P or through HTTP fallback?” InProcessTorrentEngineAdapter determines that by checking the types inside torrent.wires:
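The detection code itself is not reproduced above, so here is a minimal sketch of the idea. It assumes WebTorrent's convention of tagging HTTP web-seed connections with `wire.type === "webSeed"`, which should be verified against the WebTorrent version in use:

```typescript
// Sketch of distinguishing real P2P peers from WebSeed (HTTP) connections
// in a torrent.wires array. The "webSeed" tag is an assumption about
// WebTorrent's wire typing, not confirmed HagiCode code.
interface WireLike {
  type?: string; // e.g. "webrtc", "tcpIncoming", "webSeed"
}

function summarizeWires(wires: WireLike[]): { peers: number; webSeeds: number } {
  let peers = 0;
  let webSeeds = 0;
  for (const w of wires) {
    if (w.type === "webSeed") webSeeds++;
    else peers++;
  }
  return { peers, webSeeds };
}

// UI label: "sharing acceleration" when real peers exist,
// otherwise "origin backfilling" over HTTP.
function downloadStateLabel(wires: WireLike[]): string {
  return summarizeWires(wires).peers > 0
    ? "sharing acceleration"
    : "origin backfilling";
}
```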
The logic looks simple, but it is a key part of the user experience. Users can clearly see whether the current state is “sharing acceleration” or “origin backfilling,” which makes the behavior easier to understand.
This implementation is especially friendly for large files. Imagine downloading a 2 GB installation package and then trying to load the whole thing into memory just to verify it. Streaming solves that cleanly.
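A minimal sketch of the streaming-hash idea, using Node's built-in crypto module. In the real verifier the chunks would come from a file read stream rather than an in-memory iterable:

```typescript
import { createHash } from "node:crypto";

// Streaming SHA256: feed the file chunk by chunk instead of buffering a
// multi-gigabyte package in memory. In practice, pipe the chunks from
// fs.createReadStream(filePath) into the same hash object.
function sha256OfChunks(chunks: Iterable<string | Uint8Array>): string {
  const hash = createHash("sha256");
  for (const chunk of chunks) {
    hash.update(chunk); // memory usage stays bounded by the chunk size
  }
  return hash.digest("hex");
}
```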
The flow is very clear end to end, and every step has a well-defined responsibility. When something goes wrong, it is much easier to pinpoint the failing stage.
Even the best technical design will fall flat if the user experience is poor. HagiCode Desktop invested a fair amount of effort in productizing this capability.
When new users launch the desktop app for the first time, they see a wizard page introducing sharing acceleration:
To improve download speed, we share the portions you have already downloaded with other users while your own download is in progress. This is completely optional, and you can turn it off at any time in Settings.
It is enabled by default, but users are given a clear way to opt out. If enterprise users do not want it, they can simply disable it during onboarding.
The settings page exposes three tunable parameters:

Parameter         Default    Description
Upload limit      2 MB/s     Prevents excessive upstream bandwidth usage
Cache limit       10 GB      Controls disk space consumption
Retention days    7 days     Automatically cleans old cache after this period

These parameters all have sensible defaults. Most users never need to change them, while advanced users can adjust them based on their own network environment.
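As an illustration of how such defaults might be merged with user overrides (field names are assumptions for the sketch, not the actual settings schema):

```typescript
// Sketch of applying the documented defaults (2 MB/s upload, 10 GB cache,
// 7-day retention) over a partially specified settings object.
interface SharingSettings {
  uploadLimitBytesPerSec: number;
  cacheLimitBytes: number;
  retentionDays: number;
}

const DEFAULTS: SharingSettings = {
  uploadLimitBytesPerSec: 2 * 1024 * 1024, // 2 MB/s
  cacheLimitBytes: 10 * 1024 ** 3,         // 10 GB
  retentionDays: 7,                        // days before cache cleanup
};

// User-provided fields win; everything else falls back to the defaults.
function withDefaults(partial: Partial<SharingSettings>): SharingSettings {
  return { ...DEFAULTS, ...partial };
}
```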
Why not start with a sidecar or helper process right away? The reason is simple: ship quickly. An in-process design has a shorter development cycle and is easier to debug. The first priority is to get the feature running, then improve stability afterward.
Of course, this decision comes with a cost: if the engine crashes, it can affect the main process. We reduce that risk through adapter boundaries and timeout controls, and we also keep a migration path open so V2 can move into a separate process more easily.
We use SHA256 instead of MD5 or CRC32 because SHA256 is more secure. The collision cost for MD5 and CRC32 is too low. If someone maliciously crafted a fake installation package, the consequences could be severe. SHA256 costs more to compute, but the security gain is worth it.
Scenarios such as GitHub downloads and local folder sources do not use hybrid distribution. This is not a technical limitation; it is about avoiding unnecessary complexity. BT protocols add limited value inside private network scenarios and would only increase code complexity.
There is no perfect solution, and hybrid distribution is no exception. These are the main trade-offs:
Crash isolation is weaker than a sidecar: V1 uses an in-process engine, so an engine crash can affect the main process. Adapter boundaries and timeout controls reduce the risk, but they are not a fundamental fix. V2 includes a planned migration path to a helper process.
Enabled-by-default resource usage: the default settings of 2 MB/s upload, 10 GB cache, and 7-day retention do consume some machine resources. User expectations are managed through onboarding copy and transparent settings.
Enterprise network compatibility: automatic WebSeed/HTTPS fallback preserves usability in enterprise networks, but it can reduce the acceleration gains from P2P. This is an intentional trade-off that prioritizes availability.
Backward-compatible metadata: all new fields are optional. If they are missing, the system falls back to HTTP mode. Older clients are completely unaffected, making upgrades smooth.
This article walked through the hybrid distribution architecture used in the HagiCode Desktop project. The key takeaways are:
Layered architecture: the control plane and data plane are separated, and the engine is abstracted behind a pluggable interface for easier testing and extension.
Policy-driven behavior: not every file uses P2P. Hybrid distribution is enabled only for large files that meet the required conditions.
Integrity verification: SHA256 serves as a hard gate, and streaming verification avoids memory pressure.
Productized presentation: BT terminology is hidden behind the phrase “sharing acceleration,” and the feature is enabled by default during onboarding.
User control: upload limits, cache limits, retention days, and other parameters remain user-adjustable.
This architecture has already been implemented in the HagiCode Desktop project. If you try it out, we would love to hear your feedback after installation and real-world use.
Public beta is now open, and you are welcome to install and try it
Maybe we are all just ordinary people making our way through the world of technology, but that is fine. Ordinary people can still be persistent, and that persistence matters.
Thank you for reading. If you found this article useful, feel free to like, save, and share it.
This content was created with AI-assisted collaboration, with the final version reviewed and approved by the author.
Integrating AI coding tools like Claude Code, Codex, and OpenCode into containerized environments sounds simple, but there are hidden complexities everywhere. This article takes a deep dive into how the HagiCode project solves core challenges in Docker deployments, including user permissions, configuration persistence, and version management, so you can avoid the common pitfalls.
When we decided to run AI coding CLI tools inside Docker containers, the most intuitive thought was probably: “Aren’t containers just root? Why not install everything directly and call it done?” In reality, that seemingly simple idea hides several core problems that must be solved.
Security restrictions are the first hurdle. Take Claude CLI as an example: it explicitly refuses to run as the root user. This is a mandatory security check; if root is detected, the CLI will not start. You might think: can't I just switch users with the USER directive? It is not that simple. There is still a mapping problem between the non-root user inside the container and the user permissions on the host machine.
State persistence is the second trap. Claude Code requires login, Codex has its own configuration, and OpenCode also has a cache directory. If you have to reconfigure everything every time the container restarts, the whole idea of "automation" loses its meaning. We need these configurations to persist beyond the lifecycle of the container.
The third problem is permission consistency. Can processes inside the container access configuration files created by the host user? UID/GID mismatches often cause file permission errors, and this is extremely common in real deployments.
These problems may look independent, but in practice they are tightly connected. During HagiCode’s development, we gradually worked out a practical solution. Next, I will share the technical details and the lessons learned from those pitfalls.
The solution shared in this article comes from our practical experience in the HagiCode project. HagiCode is an open-source AI-assisted programming platform that integrates multiple mainstream AI coding assistants, including Claude Code, Codex, and OpenCode. As a project that needs cross-platform and highly available deployment, HagiCode has to solve the full range of challenges involved in containerized deployment.
If you find the technical solution in this article valuable, that is a sign HagiCode has something real to offer in engineering practice. In that case, the HagiCode official website and GitHub repository are both worth following.
There is a common misunderstanding here: Docker containers run as root by default, so why not just install the tools as root? If you think that way, Claude CLI will quickly teach you otherwise.
# Run Claude CLI directly as root? No.
docker run --rm -it --user root myimage claude
# Output: Error: This command cannot be run as root user
This is a hard security restriction in Claude CLI. The reason is simple: these CLI tools read and write sensitive user configuration, including API tokens, local caches, and even scripts written by the user. Running them with root privileges introduces too much risk.
So the question becomes: how can we satisfy the CLI’s security requirements while keeping container management flexible? We need to change the way we think about it: instead of switching users at runtime, create a dedicated user during the image build stage.
Creating a dedicated user: more than just changing a name
But this only solves the built-in user inside the image. What if the host user is UID 1001? You still need to support dynamic mapping when the container starts.
docker-entrypoint.sh contains the key logic:
if [ -n "$PUID" ] && [ -n "$PGID" ]; then
  if ! id hagicode >/dev/null 2>&1; then
    groupadd -g "$PGID" hagicode
    useradd -u "$PUID" -g "$PGID" -s /bin/bash -m hagicode
  fi
fi
The advantage of this design is clear: use the default UID 1000 at image build time, then adjust dynamically at runtime through the PUID and PGID environment variables. No matter what UID the host user has, ownership of configuration files remains correct.
Pay attention to the user field here: PUID and PGID are injected through environment variables to ensure that processes inside the container run with an identity that matches the host user. This detail matters because permission issues are painful to debug once they appear.
Version management: baked-in versions with runtime overrides
Pinning Docker image versions is essential for reproducibility. But in real development, we often need to test a newer version or urgently fix a bug. If we had to rebuild the image every time, the workflow would be far too inefficient.
HagiCode’s strategy is fixed versions as the default, with runtime overrides as an extension mechanism. It is a pragmatic engineering compromise between stability and flexibility.
Dockerfile.template pins versions here:
USER hagicode
WORKDIR /home/hagicode
# Configure the global npm install path
RUN mkdir -p /home/hagicode/.npm-global && \
npm config set prefix '/home/hagicode/.npm-global'
# Install CLI tools using pinned versions
RUN npm install -g @anthropic-ai/claude-code@2.1.71 && \
In addition to configuring CLI tools manually, some scenarios require automatic configuration injection. The most typical example is an API token.
if [ -n "$ANTHROPIC_AUTH_TOKEN" ]; then
  mkdir -p /home/hagicode/.claude
  cat > /home/hagicode/.claude/settings.json <<EOF
{
  "env": {
    "ANTHROPIC_AUTH_TOKEN": "${ANTHROPIC_AUTH_TOKEN}"
  }
}
EOF
  chown -R hagicode:hagicode /home/hagicode/.claude
fi
Two things matter here: pass sensitive information through environment variables instead of hard-coding it into the image, and make sure the ownership of configuration files is set correctly, otherwise the CLI tools will not be able to read them.
This is the easiest trap to fall into. The host user has UID 1001, while the container uses 1000, so files created on one side cannot be accessed on the other.
# Correct approach: make the container match the host user
docker run \
  -e PUID=$(id -u) \
  -e PGID=$(id -g) \
  myimage
This issue is very common, and it can be frustrating the first time you run into it.
Running AI CLI tools in Docker containers involves three core challenges: user permissions, configuration persistence, and version management. By combining dedicated users, named-volume isolation, and environment-variable-based overrides, the HagiCode project built a deployment architecture that is both secure and flexible.
Key design points:
User isolation: Create a dedicated user during the image build stage, with runtime support for dynamic PUID/PGID mapping.
Persistence strategy: Each CLI tool gets its own named volume, so restarts do not affect configuration.
Version flexibility: Fixed defaults ensure reproducibility, while runtime overrides provide room for testing.
Automated configuration: Sensitive configuration can be injected automatically through environment variables.
This solution has been running stably in the HagiCode project for some time, and I hope it offers useful reference points for developers with similar needs.
Thank you for reading. If you found this article useful, you are welcome to like, bookmark, and share it.
This content was created with AI-assisted collaboration, and the final content was reviewed and confirmed by the author.
Writing technical articles is not really such a grand thing. It is mostly just a matter of organizing the pitfalls you have run into and the detours you have taken. We have all been inexperienced before, after all. This article takes an in-depth look at the design philosophy, architectural evolution, and core technical implementation of Soul in the HagiCode project, and explores how an independent platform can provide a more focused experience for creating and sharing Agent personas.
In the practice of building AI Agents, we often run into a question that looks simple but is actually crucial: how do we give different Agents stable and distinctive language styles and personality traits?
It is a slightly frustrating question, honestly. In the early Hero system of HagiCode, different Heroes (Agent instances) were mainly distinguished through profession settings and generic prompts. That approach came with some fairly obvious pain points, and anyone who has tried something similar has probably felt the same.
First, language style was difficult to keep consistent. The same “developer engineer” role might sound professional and rigorous one day, then casual and loose the next. This was not a model problem so much as the absence of an independent personality configuration layer to constrain and guide the output style.
Second, the sense of character was generally weak. When we described an Agent’s traits, we often had to rely on vague adjectives like “friendly,” “professional,” or “humorous,” without concrete language rules to support those abstract descriptions. Put plainly, it sounded nice in theory, but there was little to hold onto in practice.
Third, persona configurations were almost impossible to reuse. Suppose we carefully designed the speaking style of a “catgirl waitress” and wanted to reuse that expression style in another business scenario. In practice, we would almost have to configure it again from scratch. Sometimes you do not want to possess something beautiful, only reuse it a little… and even that turns out to be hard.
To solve those real problems, we introduced the Soul mechanism: an independent language style configuration layer separate from equipment and descriptions. Soul can define an Agent’s speaking habits, tone preferences, and wording boundaries, can be shared and reused across multiple Heroes, and can also be injected into the system prompt automatically on the first Session call.
Some people might say that this is just configuring a few prompts. But sometimes the real question is not whether something can be done; it is how to do it more elegantly. As Soul matured, we realized it had enough depth to develop independently. A dedicated Soul platform could let users focus on creating, sharing, and browsing interesting persona configurations without being distracted by the rest of the Hero system. That is how the standalone platform at soul.hagicode.com came into being.
HagiCode is an open-source AI coding assistant project built with a modern technology stack and aimed at giving developers a smooth intelligent programming experience. The Soul platform approach shared in this article comes from our own hands-on exploration while building HagiCode to solve the practical problem of Agent persona management. If you find the approach valuable, then it probably means we have accumulated a certain amount of engineering judgment in practice, and the HagiCode project itself may also be worth a closer look.
The earliest Soul implementation existed as a functional module inside the Hero workspace. We added an independent SOUL editing area to the Hero UI, supporting both preset application and text fine-tuning.
Preset application let users choose from classic persona templates such as “professional developer engineer” and “catgirl waitress.” Text fine-tuning let users personalize those presets further. On the backend, the Hero entity gained a Soul field, with SoulCatalogId used to identify its source.
This stage solved the question of whether the capability existed at all, and it grew forward somewhat awkwardly, like anything young does. But as Soul content became richer, the limitations of an architecture tightly coupled with the Hero system started to show.
To provide a better Soul discovery and reuse experience, we built a SOUL Marketplace catalog page with support for browsing, searching, viewing details, and favoriting.
At this stage, we introduced a combinatorial design built from 50 main Catalogs (base roles) and 10 orthogonal rules (expression styles). The main Catalogs defined the Agent’s core persona, with abstract character settings such as “Mistport Traveler” and “Night Hunter.” The orthogonal rules defined how the Agent expressed itself, with language style traits such as “Concise & Professional” and “Verbose & Friendly.”
50 x 10 = 500 possible combinations gave users a wide configuration space for personas. It is not an overwhelming number, but it is not small either. There are many roads to Rome, after all; some are simply easier to walk than others. On the backend, the full SOUL catalog was generated through catalog-sources.json, while the frontend presented those catalog entries as an interactive card list.
The in-site Marketplace was a good transitional solution, but only that: transitional. It was still attached to the main system, and for users who only wanted Soul functionality, the access path remained too deep. Not everyone wants to take the scenic route just to do something simple.
In the end, we decided to move Soul into an independent repository (repos/soul). The Marketplace in the original main system was changed into an external jump guide, while the new platform adopted a Builder-first design philosophy: the homepage is the creation workspace by default, so users can start building their own persona configuration the moment they open the site.
The technology stack was also comprehensively upgraded in this stage: Vite 8 + React 19 + TypeScript 5.9, a unified design language through the shadcn/ui component system, and Tailwind CSS 4 theme variables. The improvement in frontend engineering laid a solid foundation for future feature iteration.
Everything faded away… no, actually, everything was only just beginning.
One core design principle of the Soul platform is local-first. That means the homepage must remain fully functional without a backend, and failure to load remote materials must never block page entry.
There is nothing especially miraculous about that. It simply means thinking one step further when designing the system. Using a local snapshot as the baseline and remote data as enhancement lets the product remain basically usable under any network condition. Concretely, we implemented a two-layer material architecture:
Local materials come from build-time snapshots of the main system documentation and include the complete data for 50 base roles and 10 expression rules. Remote materials come from Souls published by users and fetched through the Marketplace API. Together, they give users a full spectrum of materials, from official templates to community creativity. If that sounds dramatic, it really is just local plus remote.
The group field distinguishes fragment types: the main catalog defines the character core, orthogonal rules define expression style, and user-published Souls are marked as published-soul. The localized field supports multilingual presentation, allowing the same fragment to display different titles and descriptions in different language environments. Internationalization is something you really want to think about early, and in this case we actually did.
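As a rough illustration of that design, a fragment might be modeled like this. Only the `group` and `localized` fields come from the description above; every other name here is an assumption:

```typescript
// Hypothetical fragment shape illustrating the group/localized design.
// The group values mirror the article; other field names are illustrative.
type FragmentGroup = "main-catalog" | "orthogonal-rule" | "published-soul"

type SoulFragment = {
  id: string
  group: FragmentGroup
  content: string
  // Per-locale presentation: the same fragment can show different
  // titles and descriptions depending on the active language.
  localized: Record<string, { title: string; description: string }>
}

// Pick the presentation for the active locale, falling back to a default.
function describe(f: SoulFragment, locale: string, fallback = "en"): string {
  const loc = f.localized[locale] ?? f.localized[fallback]
  return loc ? `${loc.title}: ${loc.description}` : f.id
}
```

The fallback path matters: a fragment that has not yet been translated still renders something sensible instead of an empty card.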
The Builder draft state encapsulates the user’s current editing state:
export type SoulBuilderDraft = {
  draftId: string
  name: string
  selectedMainFragmentId: string | null
  selectedRuleFragmentId: string | null
  inspirationSoulId: string | null
  mainSlotText: string
  ruleSlotText: string
  customPrompt: string
  previewText: string
  updatedAt: string
}
Each fragment selected in the editor has its content concatenated into the corresponding slot, forming the final preview text. mainSlotText corresponds to the main role content, ruleSlotText corresponds to the expression rule content, and customPrompt is the user’s additional instruction text.
Preview compilation is the core capability of Soul Builder. It assembles user-selected fragments and custom text into a system prompt that can be copied directly:
// Assembly logic: main role + expression rule + inspiration reference + custom content
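A minimal sketch of that assembly order might look like the following. The function name, the slot for inspiration text, and the blank-line separator are all assumptions, not HagiCode's actual implementation:

```typescript
// Sketch of preview compilation: concatenate the filled slots in a fixed
// order (main role, expression rule, inspiration, custom), skipping empties.
type Slots = {
  mainSlotText: string
  ruleSlotText: string
  inspirationText: string
  customPrompt: string
}

function compilePreview(slots: Slots): string {
  return [slots.mainSlotText, slots.ruleSlotText, slots.inspirationText, slots.customPrompt]
    .map((s) => s.trim())
    .filter((s) => s.length > 0)
    .join("\n\n")
}
```

Skipping empty slots keeps the copied prompt clean when the user has only filled in one or two sections.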
The compilation result is shown in the central preview panel, where users can see the final effect in real time and copy it to the clipboard with one click. It sounds simple, and it is. But simple things are often the most useful.
Frontend state management in Soul Builder follows one important principle: clear separation of state boundaries. More specifically, drawer state is not persisted and does not write directly into the draft. Only explicit Builder actions trigger meaningful state changes.
// Domain state (useSoulBuilder)
export function useSoulBuilder() {
  // Material loading and caching
  // Slot aggregation and preview compilation
  // Copy actions and feedback messages
  // Locale-safe descriptors
}

// Presentation state (useHomeEditorState)
export function useHomeEditorState() {
  // activeSlot, drawerSide, drawerOpen
  // default focus behavior
}
That separation ensures both edit-state safety and responsive UI behavior. Opening and closing the drawer is purely a UI interaction and should not trigger complicated persistence logic. It may sound obvious, but it matters: UI state and business state should be separated clearly so interface interactions do not pollute the core data model.
Soul Builder uses a single-drawer mode: only one slot drawer may be open at a time. Clicking the mask, pressing the ESC key, or switching slots automatically closes the current drawer. This simplifies state management and also matches common drawer interaction patterns on mobile.
Closing the drawer does not clear the current editing content, so when users come back, their context is preserved. This kind of “lightweight” drawer design avoids interrupting the user’s flow. Nobody wants carefully written content to disappear because of one accidental click.
Internationalization is an important capability of the Soul platform. System copy fully supports bilingual switching, while user draft text is never rewritten when the language changes, because draft text is user-authored free input rather than system-translated content.
Official inspiration cards (Marketplace Souls) keep the upstream display name while also providing a best-effort English summary. For Souls with Chinese names, we generate English versions through predefined mapping rules:
// English name mapping for main roles
const mainNameEnglishMap = {
"雾港旅人": "Mistport Traveler",
"夜航猎手": "Night Hunter",
// ...
}
// English name mapping for orthogonal rules
const ruleNameEnglishMap = {
"简洁干练": "Concise & Professional",
"啰嗦亲切": "Verbose & Friendly",
// ...
}
The mapping table itself looks simple enough, but keeping it in good shape still takes care. There are 50 main roles and 10 orthogonal rules, which means 500 combinations in total. That is not huge, but it is enough to deserve respect.
Soul snapshot assembly follows a fixed template format that combines the main role core, signature traits, expression rule core, and output constraints.
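A sketch of what such a template might look like follows. The section labels and markdown framing are assumptions; only the four-part ordering comes from the description above:

```typescript
// Hypothetical snapshot template: main role core, signature traits,
// expression rule core, output constraints, in that fixed order.
type SnapshotParts = {
  mainCore: string
  signatureTraits: string[]
  ruleCore: string
  outputConstraints: string[]
}

function assembleSnapshot(p: SnapshotParts): string {
  return [
    `## Role\n${p.mainCore}`,
    `## Signature traits\n${p.signatureTraits.map((t) => `- ${t}`).join("\n")}`,
    `## Expression style\n${p.ruleCore}`,
    `## Output constraints\n${p.outputConstraints.map((c) => `- ${c}`).join("\n")}`,
  ].join("\n\n")
}
```

Keeping the template as one pure function makes the snapshot deterministic: the same parts always produce the same prompt text.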
After splitting Soul from the main system into an independent platform, one important challenge was handling existing user data. It is a familiar problem: splitting things apart is easy, migration is not. We adopted three safeguards:
Backward compatibility protection. Previously saved Hero SOUL snapshots remain visible, and historical snapshots can still be previewed even if they no longer have a Marketplace source ID. In other words, none of the user’s prior configurations are lost; only where they appear has changed.
Main system API deprecation. The in-site Marketplace API returns HTTP status 410 Gone together with a migration notice that guides users to soul.hagicode.com.
Hero SOUL form refactoring. A migration notice block was added to the Hero Soul editing area to clearly tell users that the Soul platform is now independent and to provide a one-click jump button.
Looking back at the development of the Soul platform as a whole, there are a few practical lessons worth sharing. They are not grand principles, just things learned from real mistakes.
Local-first runtime assumptions. When designing features that depend on remote data, always assume the network may be unavailable. Using local snapshots as the baseline and remote data as enhancement ensures the product remains basically usable under any network condition.
Clear separation of state boundaries. UI state and business state should be distinguished clearly so interface interactions do not pollute the core data model. Drawer toggles are purely UI state and should not be mixed with draft persistence.
Design for internationalization early. If your product has multilingual requirements, it is best to think about them during the data model design phase. The localized field adds some structural complexity, but it greatly reduces the long-term maintenance cost of multilingual content.
Automate the material synchronization workflow. Local materials for the Soul platform come from the main system documentation. When upstream documentation changes, there needs to be a mechanism to sync it into frontend snapshots. We designed the npm run materials:sync script to automate that process and keep materials aligned with upstream.
Based on the current architecture, the Soul platform could move in several directions in the future. These are only tentative ideas, but perhaps they can be useful as a starting point.
Community sharing ecosystem. Support user uploads and sharing of custom Souls, with rating, commenting, and recommendation mechanisms so excellent Soul configurations can be discovered and reused by more people.
Multimodal expansion. Beyond text style, the platform could also support dimensions such as voice style configuration, emoji usage preferences, and code style and formatting rules. It sounds attractive in theory; implementation may tell a more complicated story.
Intelligent assistance. Automatically recommend Souls based on usage scenarios, support style transfer and fusion, and even run A/B tests on the real-world effectiveness of different Souls. There is no better way to know than to try.
Cross-platform synchronization. Support importing persona configurations from other AI platforms, provide a standardized Soul export format, and integrate with mainstream Agent frameworks.
This article shares the full evolution of the HagiCode Soul platform from its earliest emerging need to an independent platform. We discussed why a Soul mechanism is needed to solve Agent persona consistency, analyzed the three stages of architectural evolution (embedded configuration, in-site Marketplace, and independent platform), examined the core data model, state management, preview compilation, and internationalization design in depth, and summarized practical migration lessons.
The essence of Soul is an independent persona configuration layer separated from business logic. It makes the language style of AI Agents definable, reusable, and shareable. From a technical perspective, the design itself is not especially complicated, but the problem it solves is real and broadly relevant.
If you are also building AI Agent products, it may be worth asking whether your persona configuration solution is flexible enough. The Soul platform’s practical experience may offer a few useful ideas.
Perhaps one day you will run into a similar problem as well. If this article can help a little when that happens, that is probably enough.
If you found this article helpful, feel free to give the project a Star on GitHub. The public beta has already started, and you are welcome to install it and try it out.
Thank you for reading. If you found this article useful, likes, bookmarks, and shares are all appreciated.
This content was created with AI-assisted collaboration, and the final content was reviewed and confirmed by the author.
This article takes an in-depth look at the architecture and implementation of the Skill management system in the HagiCode project, covering the technical details behind four core capabilities: local global management, marketplace search, intelligent recommendations, and trusted provider management.
In the field of AI coding assistants, how to extend the boundaries of AI capabilities has always been a core question. Claude Code itself is already strong at code assistance, but different development teams and different technology stacks often need specialized capabilities for specific scenarios, such as handling Docker deployments, database optimization, or frontend component generation. That is exactly where a Skill system becomes especially important.
During the development of the HagiCode project, we ran into a similar challenge: how do we let Claude Code “learn” new professional skills like a person would, while still maintaining a solid user experience and good engineering maintainability? This problem is both hard and simple in its own way. Around that question, we designed and implemented a complete Skill management system.
This article walks through the technical architecture and core implementation of the system in detail. It is intended for developers interested in AI extensibility and command-line tool integration. It might be useful to you, or it might not, but at least it is written down now.
The approach shared in this article comes from our practical experience in the HagiCode project. HagiCode is an open-source AI coding assistant project designed to help development teams improve engineering efficiency. The project’s stack includes ASP.NET Core, the Orleans distributed framework, a TanStack Start + React frontend, and the Skill management subsystem introduced in this article.
The GitHub repository is HagiCode-org/site. If you find the technical approach in this article valuable, feel free to give it a Star. More Stars tend to improve the mood, after all.
The Skill system uses a frontend-backend separated architecture. There is nothing especially mysterious about that.
Frontend uses TanStack Start + React to build the user interface, with Redux Toolkit managing state. The four main capabilities map directly to four Tab components: Local Skills, Skill Gallery, Intelligent Recommendations, and Trusted Providers. In the end, the design is mostly about making the user experience better.
Backend is based on ASP.NET Core + ABP Framework, using Orleans Grain for distributed state management. The online API client wraps the IOnlineApiClient interface to communicate with the remote skill catalog service.
The overall architectural principle is to separate command execution from business logic. Through the adapter pattern, the implementation details of npm/npx command execution are hidden inside independent modules. After all, nobody really wants command-line calls scattered all over the codebase.
Local global management is the most basic module. It is responsible for listing installed skills and supporting uninstall operations. There is nothing overly complicated here; it is mostly about doing the basics well.
The implementation lives in LocalSkillsTab.tsx and LocalSkillCommandAdapter.cs. The core idea is to wrap the npx skills command, parse its JSON output, and convert it into internal data structures. It sounds simple, and in practice it mostly is.
The data flow is very clear: the frontend sends a request -> SkillGalleryAppService receives it -> LocalSkillCommandAdapter executes the npx command -> the JSON result is parsed -> a DTO is returned. Each step follows naturally from the previous one.
Skill uninstallation uses the npx skills remove -g <skillName> -y command, and the system automatically handles dependencies and cleanup. Installation metadata is stored in managed-install.json inside the skill directory, recording information such as install time and source version for later updates and auditing. Some things are simply worth recording.
Several key design patterns are used here: the reference normalizer converts different input formats, such as tanweai/pua and @opencode/docker-skill, into a unified internal representation; the installation lock mechanism ensures only one installation operation can run for the same skill at a time; and streaming output pushes installation progress to the frontend in real time through Server-Sent Events, so users can watch terminal-like logs as they happen.
In the end, all of these patterns are there for one purpose: to keep the system simpler to use and maintain.
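To make the reference normalizer concrete, here is one plausible sketch. The internal `{ scope, name }` shape is an assumption; only the two input formats come from the text above:

```typescript
// Sketch of the reference normalizer: "tanweai/pua" and
// "@opencode/docker-skill" collapse into one internal representation.
type SkillRef = { scope: string; name: string }

function normalizeSkillRef(input: string): SkillRef {
  const trimmed = input.trim()
  // "@scope/name" and "scope/name" both split on the first slash.
  const body = trimmed.startsWith("@") ? trimmed.slice(1) : trimmed
  const slash = body.indexOf("/")
  if (slash < 0) throw new Error(`invalid skill reference: ${input}`)
  return { scope: body.slice(0, slash), name: body.slice(slash + 1) }
}
```

Once everything downstream works with the normalized shape, the rest of the system never has to care which spelling the user typed.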
Marketplace search lets users discover and install skills from the community. One person’s ability is always limited; collective knowledge goes much further.
The search feature relies on the online API https://api.hagicode.com/v1/skills/search. To improve response speed, the system implements caching. Cache is a bit like memory: if you keep useful things around, you do not have to think so hard the next time.
Search results support filtering by trusted sources, so users only see skill sources they trust. Seed queries such as popular and recent are used to initialize the catalog, allowing users to see recommended popular skills the first time they open it. First impressions still matter.
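The caching mentioned above could be as simple as a TTL map keyed by the query string. This is a generic sketch, not HagiCode's actual cache layer:

```typescript
// Minimal TTL cache sketch for search responses. Entries expire after
// ttlMs; the clock is injectable so the behavior is testable.
class SearchCache<T> {
  private entries = new Map<string, { value: T; expiresAt: number }>()
  constructor(private readonly ttlMs: number, private readonly now: () => number = Date.now) {}

  get(key: string): T | undefined {
    const e = this.entries.get(key)
    if (!e) return undefined
    if (this.now() > e.expiresAt) {
      // Lazily evict stale entries on read.
      this.entries.delete(key)
      return undefined
    }
    return e.value
  }

  set(key: string, value: T): void {
    this.entries.set(key, { value, expiresAt: this.now() + this.ttlMs })
  }
}
```

Even a short TTL pays off for seed queries like popular and recent, which many users trigger with identical parameters.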
Intelligent recommendations are the most complex part of the system. They can automatically recommend the most suitable skills based on the current project context. Complex as it is, it is still worth building.
The full recommendation flow is divided into five stages:
1. Build project context
↓
2. AI generates search queries
↓
3. Search the online catalog in parallel
↓
4. AI ranks the candidates
↓
5. Return the recommendation list
First, the system analyzes characteristics such as the project’s technology stack, programming languages, and domain structure to build a “project profile.” That profile is a bit like a resume, recording the key traits of the project.
Then an AI Grain is used to generate targeted search queries. This design is actually quite interesting: instead of directly asking the AI, “What skills should I recommend?”, we first ask it to think about “What search terms are likely to find relevant skills?” Sometimes the way you ask the question matters more than the answer itself:
var queryGeneration = await aiGrain.GenerateSkillRecommendationQueriesAsync(
    projectContext,       // Project context
    locale,               // User language preference
    maxQueries,           // Maximum number of queries
    effectiveSearchHero); // AI model selection
Next, those search queries are executed in parallel to gather a candidate skill list. Parallel processing is, at the end of the day, just a way to save time.
Finally, another AI Grain ranks the candidate skills. This step considers factors such as skill relevance to the project, trust status, and user historical preferences:
var ranking = await aiGrain.RankSkillRecommendationsAsync(/* candidates, project relevance, trust status, user preferences */);
AI models can respond slowly or become temporarily unavailable. Even the best systems stumble sometimes. For that reason, the system includes a deterministic fallback mechanism: when the AI service is unavailable, it uses a rule-based heuristic algorithm to generate recommendations, such as inferring likely required skills from dependencies in package.json.
Put plainly, this fallback mechanism is simply a backup plan for the system.
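A rule-based fallback of that kind can be sketched as a lookup from dependency names to search queries. The mapping table below is purely illustrative, not HagiCode's actual heuristic:

```typescript
// Sketch of the deterministic fallback: when the AI Grain is unavailable,
// infer search queries from package.json dependencies via a static table.
const dependencyHints: Record<string, string> = {
  react: "react component generation",
  dockerode: "docker deployment",
  pg: "postgres database optimization",
}

function fallbackQueries(packageJson: { dependencies?: Record<string, string> }): string[] {
  const deps = Object.keys(packageJson.dependencies ?? {})
  // Keep only dependencies we have a hint for; order follows the manifest.
  return deps.flatMap((d) => (dependencyHints[d] ? [dependencyHints[d]] : []))
}
```

The output is deterministic for a given manifest, which is exactly the property you want from a backup path: no surprises when the primary path is already failing.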
Trusted provider management allows users to control which skill sources are considered trustworthy. Trust is still something users should be able to define for themselves.
Built-in trusted providers include well-known organizations and projects such as Vercel, Azure, anthropics, Microsoft, and browser-use. Custom providers can be added through configuration files by specifying a provider ID, display name, badge label, matching rules, and more. The world is large enough that only trusting a few built-ins would never be enough.
public Task<TrustedSkillProviderSnapshot> GetConfigurationAsync()
{
    return Task.FromResult(State.Snapshot);
}
The benefit of this approach is that configuration changes are automatically synchronized across all nodes, without any need to refresh caches manually. Automation is, ultimately, about letting people worry less.
The Skill system needs to execute various npx commands. If that logic were scattered everywhere, the code would quickly become difficult to maintain. That is why we designed an adapter interface. Design patterns, in the end, exist to make code easier to maintain.
Different commands have different executor implementations, but all of them implement the same interface, making testing and replacement straightforward.
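The adapter idea can be sketched in a language-neutral way. The real implementation is C# (the LocalSkillCommandAdapter mentioned earlier); the TypeScript below, including the interface name and method shape, is an assumption used only to show the pattern:

```typescript
// Sketch of the command-adapter pattern: every executor implements the
// same interface, so the service layer never touches npx directly.
// (Synchronous for brevity; a real executor would be async.)
interface SkillCommandExecutor {
  run(args: string[]): { exitCode: number; stdout: string }
}

// A fake executor makes the surrounding logic testable without npx.
class FakeExecutor implements SkillCommandExecutor {
  constructor(private readonly canned: string) {}
  run(_args: string[]): { exitCode: number; stdout: string } {
    return { exitCode: 0, stdout: this.canned }
  }
}

// The service only knows the interface: run the command, parse the JSON.
function listSkills(exec: SkillCommandExecutor): string[] {
  const result = exec.run(["skills", "list", "--json"])
  return JSON.parse(result.stdout) as string[]
}
```

Swapping FakeExecutor for a process-spawning one is invisible to listSkills, which is the whole point of the pattern.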
On the frontend, users can see terminal-like output in real time, which makes the experience very intuitive. Real-time feedback helps people feel at ease.
Take installing the pua skill as an example (it is a popular community skill):
Open the Skills drawer and switch to the Skill Gallery tab
Enter pua in the search box
Click the search result to view the skill details
Click the Install button
Switch to the Local Skills tab to confirm the installation succeeded
The installation command is npx skills add tanweai/pua -g -y, and the system handles all the details automatically. There are not really that many steps once you take them one by one.
If your team has its own skill repository, you can add it as a trusted source:
providerId: "my-team"
displayName: "My Team Skills"
badgeLabel: "MyTeam"
isEnabled: true
sortOrder: 100
matchRules:
  - matchType: "prefix"
    value: "my-team/"
  - matchType: "exact"
    value: "my-team/special-skill"
This way, all skills from your team will display a trusted badge, making users more comfortable installing them. Labels and signals do help people feel more confident.
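Evaluating those match rules against a skill reference is straightforward. A minimal sketch, assuming the two matchType values shown in the config above:

```typescript
// Sketch of trust-rule evaluation: a skill reference is trusted if any
// rule matches it, either by prefix or exactly.
type MatchRule = { matchType: "prefix" | "exact"; value: string }

function isTrusted(skillRef: string, rules: MatchRule[]): boolean {
  return rules.some((r) =>
    r.matchType === "prefix" ? skillRef.startsWith(r.value) : skillRef === r.value
  )
}
```

The prefix rule covers a whole namespace at once, while the exact rule lets you whitelist a single skill from an otherwise untrusted source.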
This article introduced the complete implementation of the Skill management system in the HagiCode project. Through a frontend-backend separated architecture, the adapter pattern, Orleans-based distributed state management, and related techniques, the system delivers:
Local global management: a unified skill management interface built by wrapping npx skills commands
Marketplace search: rapid discovery of community skills through the online API and caching mechanisms
Intelligent recommendations: AI-powered skill recommendations based on project context
Trust management: a flexible configuration system that lets users control trust boundaries
This design approach is not only applicable to Skill management. It is also useful as a reference for any scenario that needs to integrate command-line tools while balancing local storage and online services.
If this article helped you, feel free to give us a Star on GitHub: github.com/HagiCode-org/site. You can also visit the official site to learn more: hagicode.com.
You may think this system is well designed, or you may not. Either way, that is fine. Once code is written, someone will use it, and someone will not.
Quantifying AI replacement risk with data: a deep dive into how the HagiCode team uses six core formulas to redefine how knowledge workers evaluate their competitiveness.
With AI technology advancing at breakneck speed, every knowledge worker is facing an urgent question: In the AI era, will I be replaced?
It sounds a little alarmist, but plenty of people are quietly uneasy about it. You just finish learning a new framework, and AI is already telling you your role might be automated away; you finally master a language, and then discover that someone using AI is producing three times as much as you. If you are reading this, you have probably felt at least some of that anxiety.
And honestly, that anxiety is not irrational. No one wants to admit that the skills they spent years building could be outperformed by a single ChatGPT session. Still, anxiety is one thing; life goes on.
Traditional discussions usually start from the question of “what AI can do,” but that framing misses two critical dimensions:
The business perspective: whether a company is willing to equip an employee with AI tools depends on whether AI costs make economic sense relative to labor costs. It is not enough for AI to be capable of replacing a role; the company also has to run the numbers. Capital is not a charity, and every dollar has to count.
The efficiency perspective: AI-driven productivity gains need to be quantified instead of being reduced to the vague claim that “using AI makes you stronger.” Maybe your efficiency doubles with AI, but someone else gets a 5x improvement. That gap matters. It is like school: everyone sits in the same class, but some score 90 while others barely pass.
So the real question is: how do we turn this fuzzy anxiety into measurable indicators?
It is always better to know where you stand than to fumble around in the dark. That is what we are talking about today: the design logic behind the AI productivity calculator built by the HagiCode team.
HagiCode is an open-source AI coding assistant project built to help developers code more efficiently.
What is interesting is that while building their own product, the HagiCode team accumulated a lot of hands-on experience around AI productivity. They realized that the value of an AI tool cannot be assessed in isolation from a company’s employment costs. Based on that insight, the team decided to build a productivity calculator to help knowledge workers evaluate their competitiveness in the AI era more scientifically.
Plenty of people could build something like this. The difference is that very few are willing to do it seriously. The HagiCode team spent time on it as a way of giving something back to the developer community.
The design shared in this article is a summary of HagiCode’s experience applying AI in real engineering work. If you find this evaluation framework valuable, it suggests that HagiCode really does have something to offer in engineering practice. In that case, the HagiCode project itself is also worth paying attention to.
A company’s real cost for an employee is far more than salary alone. A lot of people only realize this when changing jobs: you negotiate a monthly salary of 20,000 CNY, but take home only 14,000. On the company side, the spend is not just 20,000 either. Social insurance, housing fund contributions, training, and recruiting costs all have to be included.
According to the implementation in calculate-ai-risk.ts:
Total annual employment cost = Annual salary x (1 + city coefficient) + Annual salary / 12
The city coefficient reflects differences in hiring and retention costs across cities:
| City tier | Representative cities | Coefficient |
| --- | --- | --- |
| Tier 1 | Beijing / Shanghai / Shenzhen / Guangzhou | 0.4 |
| New Tier 1 | Hangzhou / Chengdu / Suzhou / Nanjing | 0.3 |
| Tier 2 | Wuhan / Xi'an / Tianjin / Zhengzhou | 0.2 |
| Other | Yichang / Luoyang and others | 0.1 |
A Tier 1 city coefficient of 0.4 means the company needs to pay roughly 40% extra in recruiting, training, insurance, and similar overhead. The all-in cost of hiring someone in Beijing really is much higher than in a Tier 2 city.
The cost of living in major cities is high too. You could think of it as another version of a “drifter tax.”
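The employment-cost formula can be transcribed directly. The function and table names below are my own; the coefficients and the extra salary/12 term come from the formula and table above:

```typescript
// Total annual employment cost = salary * (1 + city coefficient) + salary / 12.
// The salary/12 term approximates one extra month of hidden overhead.
const cityCoefficients: Record<string, number> = {
  tier1: 0.4,
  newTier1: 0.3,
  tier2: 0.2,
  other: 0.1,
}

function totalAnnualEmploymentCost(annualSalaryCny: number, cityTier: string): number {
  const coeff = cityCoefficients[cityTier] ?? 0.1
  return annualSalaryCny * (1 + coeff) + annualSalaryCny / 12
}
```

For a 400,000 CNY salary in a Tier 1 city, this works out to roughly 593,000 CNY: the company's real spend is almost 50% above the headline salary.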
Different AI models have separate input and output pricing, and the gap can be huge. In coding scenarios, the input/output ratio is roughly 3:1. You might give the AI a block of code to review, while its analysis is usually much shorter than the input.
The blended unit price formula is:
Blended unit price = (input-output ratio x input price + output price) / (input-output ratio + 1)
Take GPT-5 as an example:
Input: $2.5/1M tokens
Output: $15/1M tokens
Blended = (3 x 2.5 + 15) / 4 = $5.625/1M tokens
For models priced in USD, you also need to convert using an exchange rate. The HagiCode team currently sets that rate to 7.25 and updates it as the market changes.
Exchange rates are like the stock market: no one can predict them exactly. You just follow the trend.
Average daily AI cost = Average daily token demand (M) x blended unit price (CNY/1M)
Annual AI cost = Average daily AI cost x 264 working days
264 = 22 days/month x 12 months, which is the number of working days in a standard year. Why not use 365? Because you have to account for weekends, holidays, sick leave, and so on.
We are not robots, after all. AI may not need rest, but people still need room to breathe.
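The pricing formulas above fit in a few lines. The 7.25 exchange rate and 264 working days mirror the article's values; the function names are my own:

```typescript
// Blended unit price = (ratio * inputPrice + outputPrice) / (ratio + 1).
function blendedUnitPrice(ratio: number, inputPrice: number, outputPrice: number): number {
  return (ratio * inputPrice + outputPrice) / (ratio + 1)
}

// USD-priced models need an exchange-rate conversion before comparison.
function toCnyPer1M(usdPer1M: number, fxRate = 7.25): number {
  return usdPer1M * fxRate
}

// Annual AI cost = daily token demand (M) * blended price * 264 working days.
function annualAiCost(dailyTokensM: number, blendedCnyPer1M: number, workingDays = 264): number {
  return dailyTokensM * blendedCnyPer1M * workingDays
}
```

For the GPT-5 example above, blendedUnitPrice(3, 2.5, 15) gives exactly 5.625 USD per 1M tokens, matching the hand calculation.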
This is the heart of the whole evaluation system, and also where the HagiCode team’s insight shows most clearly.
Affordable workflow count = Total annual employment cost / Annual AI cost
Affordability ratio = min(affordable workflow count, 1)
Equivalent headcount = 1 + (productivity multiplier - 1) x affordability ratio
That formula looks a little abstract, so let me unpack it.
The traditional view would simply say, “your efficiency improved by 2x.” But this formula introduces a crucial constraint: is the company’s AI budget sustainable?
For example, Xiao Ming improves his efficiency by 3x, but his annual AI usage costs 300,000 CNY while the company is only paying him a salary of 200,000 CNY. In that case, his personal productivity may be impressive, but it is not sustainable. No company is going to lose money just to keep him operating at peak efficiency.
That is what the affordability ratio means. If the company can only afford 0.5 of an AI workflow, then Xiao Ming’s equivalent headcount is 1 + (3 - 1) x 0.5 = 2 people, not 3.
The key insight: what matters is not just how large your productivity multiplier is, but whether the company can afford the AI investment required to sustain that multiplier.
The logic is simple once you see it. Most people just do not think from that angle. We are used to looking at the world from our own side, not from the boss’s side, where money does not come out of thin air either.
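The three formulas above transcribe into one function, which makes the capping effect of the affordability ratio easy to see:

```typescript
// Equivalent headcount, as defined above: the affordability ratio caps
// how much of the productivity multiplier the company can actually buy.
function equivalentHeadcount(
  totalEmploymentCost: number,
  annualAiCost: number,
  productivityMultiplier: number
): number {
  const affordableWorkflows = totalEmploymentCost / annualAiCost
  const affordabilityRatio = Math.min(affordableWorkflows, 1)
  return 1 + (productivityMultiplier - 1) * affordabilityRatio
}
```

Plugging in the Xiao Ming scenario (AI costs double what the company can cover, multiplier 3) yields 2 people rather than 3, exactly as described above.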
AI cost ratio = Annual AI cost / Total annual employment cost
Productivity gain = Productivity multiplier - 1
Cost-benefit ratio = Productivity gain / AI cost ratio
Cost-benefit ratio < 1: the AI investment is not worth it; the productivity gain does not justify the cost
Cost-benefit ratio 1-2: barely worth it
Cost-benefit ratio > 2: high return, strongly recommended
This metric is especially useful for managers because it helps them quickly judge whether a given role is worth equipping with AI tools.
At the end of the day, ROI is what matters. You can talk about higher efficiency all you want, but if the cost explodes, no one is going to buy the argument.
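The cost-benefit calculation and its verdict bands can be sketched directly from the definitions above; only the band labels are paraphrased:

```typescript
// Cost-benefit ratio = (multiplier - 1) / (annual AI cost / employment cost).
function costBenefitRatio(
  annualAiCost: number,
  totalEmploymentCost: number,
  productivityMultiplier: number
): number {
  const aiCostRatio = annualAiCost / totalEmploymentCost
  return (productivityMultiplier - 1) / aiCostRatio
}

// Verdict bands from the thresholds above.
function verdict(ratio: number): string {
  if (ratio < 1) return "not worth it"
  if (ratio <= 2) return "barely worth it"
  return "high return"
}
```

For instance, a role whose AI spend is 25% of employment cost with a 2x multiplier scores a ratio of 4, landing firmly in the high-return band.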
Risk is categorized according to equivalent headcount:

| Equivalent headcount | Risk level | Conclusion |
| --- | --- | --- |
| >= 2.0 | High risk | If your coworkers gain the same conditions, they become a serious threat to you |
| 1.5 - 2.0 | Warning | Coworkers have begun to build a clear productivity advantage |
| < 1.5 | Safe | For now, you can still maintain a gap |
After seeing that table, you probably have a rough sense of where you stand. Still, there is no point in panicking. Anxiety does not solve problems. It is better to think about how to raise your own productivity multiplier.
To make the results more fun, the calculator introduces a system of seven special titles. These titles are persisted through localStorage, allowing users to unlock and display their own “achievements.”
| Title ID | Name | Unlock condition |
| --- | --- | --- |
| craftsman-spirit | Craftsman Spirit | Average daily token usage = 0 |
| prompt-alchemist | Prompt Alchemist | Daily tokens <= 20M and productivity multiplier >= 6 |
| all-in-operator | All-In Operator | Daily tokens >= 150M and productivity multiplier >= 3 |
| minimalist-runner | Minimalist Runner | Daily tokens <= 5M and productivity multiplier >= 2 |
| cost-tamer | Cost Tamer | Cost-benefit ratio >= 2.5 and AI cost ratio <= 15% |
| danger-oracle | Danger Oracle | Equivalent headcount >= 2.5 or entering the high-risk zone |
| budget-coordinator | Budget Coordinator | Affordable workflow count >= 8 |
Each title also carries a hidden meaning:
| Title | Hidden meaning |
| --- | --- |
| Craftsman Spirit | You can still do fine without AI, but you need unique competitive strengths |
| Prompt Alchemist | You achieve high output with very few tokens; a classic power-user profile |
| All-In Operator | High input, high output; suitable for high-frequency scenarios |
| Minimalist Runner | Lightweight AI usage; suitable for light-assistance scenarios |
| Cost Tamer | Extremely high ROI; the kind of employee companies love |
| Danger Oracle | You are already, or soon will be, in a high-risk group |
| Budget Coordinator | You can operate multiple AI workflows at the same time |
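The unlock conditions boil down to predicates over the computed metrics. A sketch, with conditions transcribed from the tables above; the Metrics shape and rule-table layout are assumptions:

```typescript
// Each title is a predicate over the calculator's output metrics.
type Metrics = {
  dailyTokensM: number
  productivityMultiplier: number
  costBenefitRatio: number
  aiCostRatio: number // as a fraction, e.g. 0.15 for 15%
  equivalentHeadcount: number
  affordableWorkflows: number
}

const titleRules: Record<string, (m: Metrics) => boolean> = {
  "craftsman-spirit": (m) => m.dailyTokensM === 0,
  "prompt-alchemist": (m) => m.dailyTokensM <= 20 && m.productivityMultiplier >= 6,
  "all-in-operator": (m) => m.dailyTokensM >= 150 && m.productivityMultiplier >= 3,
  "minimalist-runner": (m) => m.dailyTokensM <= 5 && m.productivityMultiplier >= 2,
  "cost-tamer": (m) => m.costBenefitRatio >= 2.5 && m.aiCostRatio <= 0.15,
  // ">= 2.5, or already in the high-risk zone (>= 2.0)" collapses to >= 2.0.
  "danger-oracle": (m) => m.equivalentHeadcount >= 2.0,
  "budget-coordinator": (m) => m.affordableWorkflows >= 8,
}

function unlockedTitles(m: Metrics): string[] {
  return Object.keys(titleRules).filter((id) => titleRules[id](m))
}
```

A flat rule table like this is easy to extend: adding an eighth title is one line, and the localStorage persistence layer never has to change.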
Gamification is really just a way to make dry data a little more entertaining. After all, who does not like collecting achievements? Like badges in a game, they may not have much practical value, but they still feel good to earn.
This data is updated regularly, with the latest refresh on 2026-03-19.
Data only matters when it is current. Once it is outdated, it stops being useful. On that front, the HagiCode team has been quite responsible about keeping things updated.
Suppose you are a developer in Beijing with an annual salary of 400,000 CNY, using Claude Sonnet 4.6, consuming 50M tokens per day on average, and estimating that AI gives you a 3x productivity boost. The simulated input looks like this:
```javascript
const input = {
  annualIncomeCny: 400000,
  cityTier: "tier1", // Beijing
  modelId: "claude-sonnet-4-6",
  performanceMultiplier: 3.0,
  dailyTokenUsageM: 50,
}

// Calculation process
// Total annual employment cost = 400k x (1 + 0.4) + 400k/12 ~= 593.3k
// Equivalent headcount = 1 + (3 - 1) x 1 = 3 people
```
Conclusion: if one of your coworkers has the same conditions, their output would be equivalent to three people. You are already in the high-risk zone.
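That arithmetic is easy to verify with a small script. The 0.4 social-insurance overhead, the extra month of salary, and the risk bands follow the formula and table quoted earlier; the helper names here are mine, not HagiCode's.

```typescript
// Worked example: total employment cost, equivalent headcount, risk band.
// Constants follow the article's formula; names are illustrative only.
function totalAnnualCost(annualIncomeCny: number): number {
  // annual salary × (1 + 0.4 overhead) + one extra month of pay
  return annualIncomeCny * 1.4 + annualIncomeCny / 12;
}

function equivalentHeadcount(multiplier: number, affordabilityRatio: number): number {
  // core formula: 1 + (productivity multiplier - 1) × affordability ratio
  return 1 + (multiplier - 1) * affordabilityRatio;
}

function riskLevel(headcount: number): string {
  if (headcount >= 2.0) return "High risk";
  if (headcount >= 1.5) return "Warning";
  return "Safe";
}

const cost = totalAnnualCost(400_000);        // ≈ 593,333 CNY
const heads = equivalentHeadcount(3.0, 1.0);  // = 3, i.e. "High risk"
```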
If you discover that your current AI usage is “not worth it” (cost-benefit ratio < 1), you can consider:
Reducing token usage: use more efficient prompts and cut down ineffective requests
Choosing a more cost-effective model: for example, DeepSeek-V3 (priced in CNY and cheaper)
Increasing your productivity multiplier: learn advanced Agent usage techniques and truly turn AI into productivity
In the end, all of this comes down to the art of balance. Use too much and you waste money; use too little and nothing changes. The key is finding the sweet spot.
When designing this calculator, the HagiCode team made several engineering decisions worth learning from:
Pure frontend computation: all calculations run in the browser, with no backend API dependency, which protects user privacy
Configuration-driven: all formulas, pricing, and role data are centralized in configuration files, so future updates do not require changing core code logic
Multilingual support: supports both Chinese and English
Instant feedback: results update in real time as soon as the user changes inputs
Detailed formula display: every result includes the full calculation formula to help users understand it
This design makes the calculator easy to maintain and extend, while also serving as a reference template for similar data-driven applications.
Good architecture, like good code, takes time to build up. The HagiCode team put real thought into it.
The core value of the AI productivity calculator is that it turns the vague anxiety of an “AI replacement threat” into metrics that can be quantified and compared.
The equivalent headcount formula, 1 + (productivity multiplier - 1) x affordability ratio, is the core innovation of the entire framework. It considers not only productivity gains, but also whether a company can afford the AI cost, making the evaluation much closer to reality.
This framework tells us one thing clearly: in the AI era, not knowing where you stand is the most dangerous position of all.
Instead of worrying, let the data speak.
A lot of fear comes from the unknown. Once you quantify everything, the situation no longer feels quite so terrifying. At worst, you improve yourself or change tracks. Life is long, and there is no need to stake everything on a single path.
In the end, a line of poetry came to mind: “This feeling might have become a thing to remember, yet even then one was already lost.”
The AI era is much the same. Instead of waiting until you are replaced and filled with regret, it is better to start taking action now…
Thank you for reading. If you found this article useful, likes, bookmarks, and shares are all welcome.
This content was created with AI-assisted collaboration, and the final version was reviewed and confirmed by the author.
When building an AI-assisted coding platform, choosing the right Agent core directly determines the upper limit of the system’s capabilities. Some things simply cannot be forced; pick the wrong framework, and no amount of effort will make it feel right. This article shares the thinking behind HagiCode’s technical selection and our hands-on experience integrating Hermes Agent.
When building an AI-assisted coding product, one of the hardest parts is choosing the underlying Agent framework. There are actually quite a few options on the market, but some are too limited in functionality, some are overly complex to deploy, and others simply do not scale well enough. What we needed was a solution that could run on a $5 VPS while also being able to connect to a GPU cluster. That requirement may not sound extreme, but it is enough to scare plenty of teams away.
In practice, many so-called “all-in-one Agents” either only run in the cloud or require absurdly high local deployment costs. After spending two weeks researching different approaches, we made a bold decision: rebuild the entire Agent core around Hermes as the underlying engine for our integrated Agent.
Everything that followed may simply have been fate.
The approach shared in this article comes from real-world experience in the HagiCode project. HagiCode is an AI-assisted coding platform that provides developers with an intelligent coding assistant through a VSCode extension, a desktop client, and web services. You may have used similar tools before and felt they were just missing that final touch; we understand that feeling well.
Before diving into Hermes itself, it helps to explain why HagiCode needed something like it in the first place. Things rarely work exactly the way you want, so you need a practical reason to commit to a technical direction.
As an AI coding assistant, HagiCode needs to support several usage scenarios at the same time:
Local development environments: developers want to run it on their own machines so data never leaves the local environment. These days, data security is never a trivial concern.
Team collaboration environments: small teams should be able to share an Agent deployment running on a server. Saving money matters, and everyone has limits.
Elastic cloud expansion: when handling complex tasks, the system should automatically scale out to a GPU cluster. It is always better to be prepared.
This “we want everything at once” requirement is what led us to Hermes. Whether it was the perfect choice, I cannot say for sure, but at the time we did not see a better option.
Hermes Agent is an autonomous AI Agent created by Nous Research. Some readers may not be familiar with Nous Research; they are the lab behind open-source large models such as Hermes, Nomos, and Psyché. They have built many excellent things, even if they are still more underappreciated than they deserve.
Unlike traditional IDE coding assistants or simple API chat wrappers, Hermes has a defining trait: the longer it runs, the more capable it becomes. It is not designed to complete a task once and stop; it keeps learning and accumulating experience over long-running operation. In that sense, it feels a little like a person.
Several of Hermes’s core capabilities happen to align very closely with HagiCode’s needs.
This means HagiCode can choose the most suitable deployment model based on each user’s scenario: individuals run it locally, teams deploy it on servers, and complex tasks use GPU resources. One codebase handles all of it. In a world this busy, saving one layer of complexity is already a win.
Multi-platform messaging gateway
Hermes natively supports Telegram, Discord, Slack, WhatsApp, and more. For HagiCode, this means we can support AI assistants on those channels much more easily in the future. More paths forward are always welcome.
Rich tool system
Hermes comes with 40+ built-in tools and supports MCP (Model Context Protocol) extensions. This is essential for a coding assistant: executing shell commands, working with the file system, and calling Git all depend on tool support. An Agent without tools is like a bird without wings.
Cross-session memory
Hermes includes a persistent memory system and uses FTS5 full-text search to recall historical conversations. That allows the Agent to remember prior context instead of “losing its memory” every time. Sometimes people wish they could forget things that easily, but reality is usually less generous.
The capability declaration survives here only as a fragment; reconstructed with a hypothetical surrounding type name, it reads roughly:

```csharp
return new ProviderCapabilities
{
    SupportsSystemMessages = true, // Supports system prompts
    SupportsArtifacts = false
};
```
This abstraction layer allows HagiCode to switch seamlessly between different AI Providers. Whether the backend is OpenAI, Claude, or Hermes, the upper-layer calling pattern stays exactly the same. In plain terms, it keeps things simple.
Hermes communicates through ACP (Agent Communication Protocol). This protocol is designed specifically for Agent communication, and its main methods include:
| Method | Description |
| --- | --- |
| initialize | Initialize the connection and obtain the protocol version and client capabilities |
| authenticate | Handle authentication and support multiple authentication methods |
| session/new | Create a new session and configure the working directory and MCP servers |
| session/prompt | Send a prompt and receive a response |
HagiCode implements the ACP transport layer through StdioAcpTransport, launching a Hermes subprocess and communicating with it over standard input and output. It may sound complicated, but in practice it is manageable as long as you have enough patience.
In practice it also pays to run a simple health check against the subprocess from time to time. In some ways, people are not so different: it helps to check in occasionally, even if no one tells us exactly what to look for.
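The stdio handshake can be pictured as newline-delimited JSON-RPC 2.0 envelopes written to the subprocess's stdin. The method names below come from the table above, but the parameter shapes are illustrative assumptions, not the actual ACP schema.

```typescript
// Build newline-delimited JSON-RPC 2.0 requests for an ACP-style stdio
// transport. Method names follow the table above; params are illustrative.
let nextId = 0;

function acpRequest(method: string, params: unknown): string {
  return JSON.stringify({ jsonrpc: "2.0", id: ++nextId, method, params }) + "\n";
}

// Typical startup sequence: initialize, then open a session with a
// working directory (the Hermes subprocess would read these from stdin).
const initMsg = acpRequest("initialize", { protocolVersion: 1 });
const sessionMsg = acpRequest("session/new", {
  cwd: "/path/to/project",
  mcpServers: [],
});
```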
Hermes supports multiple authentication methods, including API keys and tokens, so you need to choose based on the actual deployment scenario. Misconfiguration can cause connection failures, and the resulting error messages are not always intuitive. Sometimes the reported error is far away from the real root cause, which means slow and careful debugging is unavoidable.
Each session must specify a working directory so Hermes can access project files correctly. In multi-project scenarios, the working directory needs to switch dynamically. It sounds straightforward, but there are more edge cases than you might think.
Hermes responses may be split across session/update notifications and the final result, so they must be merged correctly. Otherwise, content may be lost.
Runtime errors should be returned explicitly instead of silently falling back to another Provider. That way, users know the issue came from Hermes rather than wondering why the system suddenly switched models behind the scenes.
HagiCode’s decision to use Hermes as its integrated Agent core was not a casual impulse. It was a careful choice based on practical requirements and the technical characteristics of the framework. Whether it proves to be the perfect long-term answer is still too early to say, but so far it has been serving us well.
Hermes gives HagiCode the flexibility to adapt to a wide range of scenarios. Its powerful tool system and MCP support allow the AI assistant to do real work, while the ACP protocol and Provider abstraction layer keep the integration process clear and controllable.
If you are choosing an Agent framework for your own AI project, I hope this article offers a useful reference. Picking the right underlying architecture can make everything that follows much easier.
Thank you for reading. If you found this article useful, you are welcome to support it with a like, bookmark, or share.
This content was created with AI-assisted collaboration, and the final content was reviewed and confirmed by the author.
In modern software development, a single AI Agent is no longer enough for complex needs. How can multiple AI assistants from different companies collaborate within the same project? This article shares the multi-Agent collaboration configuration approach that the HagiCode project developed through real-world practice.
Many developers have likely had this experience: bringing an AI assistant into a project really does improve coding efficiency. But as requirements grow more complex, one AI Agent starts to fall short. You want it to handle code review, documentation generation, unit tests, and more at the same time, but the result is often that it cannot balance everything well, and output quality becomes inconsistent.
What is even more frustrating is that once you try to introduce multiple AI assistants, things get more complicated. Each Agent has its own configuration method, API interface, and execution logic, and they may even conflict with one another. It is like a sports team where every player is individually strong, but nobody knows how to coordinate, so the whole match turns into chaos.
The HagiCode project ran into the same problem during development. As a complex project involving a frontend VSCode extension, backend AI services, and a cross-platform desktop client, it needed (as of the 2026-03 version) to integrate multiple AI assistants from different companies at once: Claude Code, Codex, CodeBuddy, iFlow, and more. Figuring out how to let them coexist harmoniously in the same project while making the best use of their individual strengths became a critical problem we had to solve.
That alone would already be enough trouble. After all, who wants to deal with a group of AI tools fighting each other every day?
The approach shared in this article is the multi-Agent collaboration configuration practice we developed in the HagiCode project through real trial and error and repeated optimization. If you are also struggling with multiple AI assistants working together, this article may give you some ideas. Maybe. Every project is different, after all.
HagiCode is an AI coding assistant project that adopts an “adventure party” model in which multiple AI engines work together. Project repository: github.com/HagiCode-org/site.
The multi-Agent configuration approach shared here is one of the core techniques that allows HagiCode to maintain efficient development in complex projects. There is nothing especially mystical about it - it just turns a group of AIs into an adventure party that can actually coordinate.
In the early days of the HagiCode project, we also tried using a single AI Agent to handle everything. We quickly discovered a clear bottleneck in that approach: different tasks demand different strengths. Some tasks require stronger contextual understanding, while others need more precise code editing. One Agent has a hard time excelling at all of them.
That made us realize that multiple Agents had to work together. But the problem was this: how do you let AI products from different companies coexist peacefully in the same project? We needed to solve several core issues:
Configuration management complexity: each Agent has different configuration methods, API interfaces, and execution modes
Unified communication protocol: we need a standardized way for different Agents to exchange data
Task coordination and division of labor: how do we assign work reasonably so each Agent can play to its strengths
With those questions in mind, we started designing HagiCode’s multi-Agent architecture. It was not really that complicated in the end; we just had to think it through clearly.
The core idea is to let different AI Agents be managed by the same code through a unified Provider interface. At the same time, the factory pattern is used to dynamically create and configure these Providers, ensuring scalability and flexibility across the system.
It is like division of labor in daily life. Everyone has a role; here we simply turned that idea into code architecture.
Based on HagiCode’s real-world experience, we assigned different responsibilities to each Agent:
| Agent | Provider | Model | Primary Use |
| --- | --- | --- | --- |
| ClaudeCodeCli | Anthropic | glm-5-turbo | Generate technical solutions and Proposals |
| CodexCli | OpenAI/Zed | gpt-5.4 | Execute precise code changes |
| CodebuddyCli | Zhipu | glm-4.7 | Refine proposal descriptions and documentation |
| IFlowCli | Zhipu | glm-4.7 | Archive proposals and historical records (configuration at the time; now legacy-compatible only) |
| OpenCodeCli | - | - | General-purpose code editing |
| GitHubCopilot | Microsoft | - | Assisted programming and code completion |
The logic behind this division of labor is simple: every Agent has its own area of strength. Claude Code performs well at understanding and analyzing complex requirements, so it handles early solution design. Codex is more precise when modifying code, so it is better suited for concrete implementation work. CodeBuddy offers strong cost performance, which makes it a great fit for refining documentation.
After all, the right tool for the right job is usually the best choice. There are many roads to Rome; some are simply easier to walk than others.
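As a rough TypeScript analogue (HagiCode's actual interface is C# and not reproduced here; every name below is illustrative), the unified Provider contract boils down to a handful of methods that every engine implements:

```typescript
// Minimal sketch of a unified provider interface: every engine, whatever
// the vendor, is driven through the same few methods.
interface AgentProvider {
  readonly name: string;
  isEnabled(): boolean;
  sendPrompt(prompt: string): Promise<string>;
}

// A trivial stand-in implementation to show the calling pattern.
class EchoProvider implements AgentProvider {
  readonly name = "EchoProvider";
  isEnabled(): boolean { return true; }
  async sendPrompt(prompt: string): Promise<string> {
    return `echo: ${prompt}`;
  }
}
```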
The interface looks simple, but it is the foundation of the entire multi-Agent system. With a unified interface, we can call AI products from different companies in exactly the same way, no matter what is underneath.
This is really just a matter of making complex things simple. Simple is beautiful, after all.
This uses dependency injection through ActivatorUtilities.CreateInstance, which can dynamically create Provider instances at runtime while automatically injecting dependencies. The benefit of this design is that when a new Agent type is added, you only need to add the corresponding Provider class and then add one more case branch in the factory method. There is no need to modify the existing code at all.
That is reason enough. Who wants to rewrite a pile of old code every time a new feature is added?
The purpose of this mapping table is to convert string-form Provider names into enum types. This allows configuration files to use intuitive string names, while the internal code uses type-safe enums for processing.
Configuration should be as intuitive as possible. Nobody wants to memorize a pile of obscure code names.
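The mapping table and factory described above can be sketched in TypeScript like this (the provider names and default models mirror the division-of-labor table earlier; the helper names and structure are my own sketch, not the C# code):

```typescript
// String provider names (as they appear in config) mapped to a type-safe
// enum, plus a factory with one case branch per provider.
enum ProviderKind { ClaudeCodeCli, CodexCli, CodebuddyCli, IFlowCli }

const providerNameMap: Record<string, ProviderKind> = {
  ClaudeCodeCli: ProviderKind.ClaudeCodeCli,
  CodexCli: ProviderKind.CodexCli,
  CodebuddyCli: ProviderKind.CodebuddyCli,
  IFlowCli: ProviderKind.IFlowCli,
};

function createProvider(name: string): { kind: ProviderKind; model: string } {
  const kind = providerNameMap[name];
  if (kind === undefined) throw new Error(`Unknown provider: ${name}`);
  switch (kind) {
    case ProviderKind.ClaudeCodeCli: return { kind, model: "glm-5-turbo" };
    case ProviderKind.CodexCli: return { kind, model: "gpt-5.4" };
    case ProviderKind.CodebuddyCli: return { kind, model: "glm-4.7" };
    case ProviderKind.IFlowCli: return { kind, model: "glm-4.7" };
  }
}
```

Adding a new Agent then means one new enum member, one map entry, and one case branch; nothing else changes.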
In practice, everything can be configured in appsettings.json:
```yaml
AI:
  Providers:
    Providers:
      ClaudeCodeCli:
        Enabled: true
        Model: glm-5-turbo
        WorkingDirectory: /path/to/project
      CodebuddyCli:
        Enabled: true
        Model: glm-4.7
      CodexCli:
        Enabled: true
        Model: gpt-5.4
      IFlowCli:
        Enabled: true
        Model: glm-4.7
```
Each Provider can independently configure parameters such as enablement, model version, and working directory. This design preserves flexibility while remaining easy to manage and maintain.
In some ways, configuration files are like life’s options: you can choose to enable or disable certain things. The only difference is that code choices are easier to regret later.
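A quick sketch of how such per-provider settings might be consumed (the shape mirrors the configuration example above; the types and function are illustrative, not HagiCode's loader):

```typescript
// Filter enabled providers out of an appsettings-style configuration.
interface ProviderConfig {
  Enabled: boolean;
  Model?: string;
  WorkingDirectory?: string;
}

const config: Record<string, ProviderConfig> = {
  ClaudeCodeCli: { Enabled: true, Model: "glm-5-turbo", WorkingDirectory: "/path/to/project" },
  CodexCli: { Enabled: true, Model: "gpt-5.4" },
  IFlowCli: { Enabled: false, Model: "glm-4.7" }, // disabled without touching other entries
};

function enabledProviders(cfg: Record<string, ProviderConfig>): string[] {
  return Object.entries(cfg)
    .filter(([, c]) => c.Enabled)
    .map(([name]) => name);
}
```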
With the unified technical architecture in place, the next step is making multiple Agents work together. HagiCode designed a task flow mechanism so different Agents can handle different stages of the work:
```
[Claude Code] ──generate──▶ Proposal
       │
       ▼
 [Codebuddy] ──refine──▶ Polished proposal
       │
       ▼
    [Codex] ──execute──▶ Code changes
       │
       └──────────────────────▶ [iFlow] ──archive──▶ Historical records
```
The benefit of this division of labor is that each Agent only needs to focus on the tasks it does best, rather than trying to do everything. Claude Code generates proposals from scratch. Codebuddy makes proposal descriptions clearer. Codex turns proposals into actual code changes. iFlow archives and preserves those changes.
This is really just teamwork, the same as in daily life. Everyone has a role, and only together can something big get done. Here, the team members just happen to be AIs.
In actual operation, we summarized the following lessons:
1. Agent selection strategy matters
Tasks should not be assigned casually; they should be matched to each Agent’s strengths:
Proposal generation: use Claude Code, because it has stronger contextual understanding
Code execution: use Codex, because it is more precise for code modification
Proposal refinement: use Codebuddy, because it offers strong cost performance
Archival storage: use iFlow, because it is stable and reliable
After all, putting the right person on the right task is a timeless principle.
2. Configuration isolation ensures stability
Each Agent’s configuration is managed independently, supports environment-variable overrides, and uses separate working directories. As a result, a configuration error in one Agent does not affect the others.
This is like personal boundaries in life. Everyone needs their own space; non-interference makes coexistence possible.
3. Error-handling mechanism
A failure in a single Agent should not affect the overall workflow. We implemented a fallback strategy: when one Agent fails, the system can automatically switch to a backup plan or skip that step and continue with later tasks. At the same time, complete logging makes troubleshooting easier afterward.
Nobody can guarantee that errors will never happen. The key is how you handle them. Life works much the same way.
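The fallback idea itself fits in a few lines. This is a sketch of the strategy described above, not HagiCode's actual code, and the names are mine:

```typescript
// Fallback strategy: try the primary agent; on failure, log the error and
// switch to the backup instead of silently dying mid-workflow.
function runWithFallback<T>(
  primary: () => T,
  backup: () => T,
  log: (msg: string) => void,
): T {
  try {
    return primary();
  } catch (err) {
    log(`primary agent failed: ${String(err)}; switching to backup`);
    return backup();
  }
}

const logs: string[] = [];
const result = runWithFallback(
  () => { throw new Error("claude timeout"); },  // simulated primary failure
  () => "handled by backup agent",
  (m) => logs.push(m),                           // complete logging for later troubleshooting
);
```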
4. Monitoring and observability
Through the ACP protocol (our custom communication protocol based on JSON-RPC 2.0), we can track the execution status of each Agent. Session isolation ensures concurrency safety, while dynamic caching improves performance.
The things you cannot see are often the ones most likely to go wrong. Some visibility is always better than flying blind.
After adopting this multi-Agent collaboration configuration, the HagiCode project’s development efficiency improved significantly. Specifically:
Task-handling capacity doubled: in the past, one Agent had to handle many kinds of tasks at once; now tasks can be processed in parallel, and throughput has increased dramatically
More stable output quality: each Agent focuses only on what it does best, so consistency and quality both improve
Lower maintenance cost: unified interfaces and configuration management make the whole system easier to maintain and extend
Adding new Agents is simple: to integrate a new AI product, you only need to implement the interface and add configuration, without changing the core logic
This approach not only solved HagiCode’s own problems, but also proved that multi-Agent collaboration is a viable architectural choice.
The gains were quite noticeable. The process was just a bit of a hassle.
Protocol unification: the ACP protocol provides standardized communication between Agents through a bidirectional mechanism based on JSON-RPC 2.0
Task routing: assign work reasonably across different Agents so each can play to its strengths, instead of expecting one Agent to do everything
This design not only solves the problem of “multiple Agents fighting each other,” but also uses the adventure party task flow mechanism to make the development process more automated and specialized.
If you are also considering introducing multiple AI assistants, I hope this article gives you some useful reference points. Of course, every project is different, and the specific approach still needs to be adjusted to the actual situation. There is no one-size-fits-all solution; the best solution is the one that fits you.
Beautiful things or people do not need to be possessed. As long as they remain beautiful, simply appreciating that beauty is enough. Technical solutions are the same: the one that suits you is the best one…
In modern software development, a single AI Agent is no longer enough to meet complex requirements. How can multiple AI assistants from different companies collaborate within the same project? This article shares the multi-Agent collaboration configuration approach that the HagiCode project developed through real-world practice.
Many developers have probably had this experience: after introducing an AI assistant into a project, productivity really does improve. But as requirements become more and more complex, one AI Agent starts to feel insufficient. You want it to handle code review, documentation generation, unit testing, and other tasks at the same time, but the result is often that it cannot keep everything balanced, and the output quality becomes inconsistent.
What is even more frustrating is that once you try to bring in multiple AI assistants, the problem becomes more complicated. Each Agent has its own configuration method, API interface, and execution logic, and they may even conflict with one another. It is like a sports team in which every player is talented, but nobody knows how to work together, so the match turns into a mess.
The HagiCode project ran into the same challenge during development. As a complex project involving a frontend VSCode extension, backend AI services, and a cross-platform desktop client, we needed to connect multiple AI assistants from different companies at the same time: Claude Code, Codex, CodeBuddy, iFlow, and more. How to let them coexist harmoniously in the same project and make the most of their strengths became a key problem we had to solve.
That alone would already be enough trouble. After all, who wants to deal with a bunch of fighting AIs every day?
The approach shared in this article is the multi-Agent collaboration configuration practice that we developed in the HagiCode project through real trial and error and repeated optimization. If you are also struggling with multiple AI assistants working together, this article may give you some inspiration. Maybe. Every project is different, after all.
HagiCode is an AI coding assistant project that adopts an “adventure party” model in which multiple AI engines work together. Project repository: github.com/HagiCode-org/site.
The multi-Agent configuration approach shared in this article is one of the core technologies that allows HagiCode to maintain efficient development in complex projects. There is nothing especially magical about it; it simply turns a group of AIs into an adventure party that can actually coordinate.
In the early days of the HagiCode project, we also tried using a single AI Agent to handle every task. We soon discovered a clear bottleneck in that approach: different tasks require different strengths. Some tasks need stronger contextual understanding, while others need more precise code modification capabilities. One Agent has a hard time excelling at everything.
That made us realize that multiple Agents had to work together. But the problem was this: how do you let AI products from different companies coexist peacefully in the same project? We needed to solve several core issues:
Configuration management complexity: each Agent has different configuration methods, API interfaces, and execution modes
Unified communication protocol: we need a standardized way for different Agents to exchange data
Task coordination and division of labor: how do we assign work reasonably so each Agent can play to its strengths
With those questions in mind, we started designing HagiCode’s multi-Agent architecture. It was not actually that complicated; we just had to think it through clearly.
The core idea is to let different AI Agents be managed by the same set of code through a unified Provider interface. At the same time, the factory pattern is used to dynamically create and configure these Providers, ensuring scalability and flexibility across the system.
It is like division of labor in everyday life. Everyone has their own role; here we simply turned that idea into code architecture.
Based on HagiCode’s real-world experience, we assigned different responsibilities to each Agent:
Agent
Provider
Model
Primary Use
ClaudeCodeCli
Anthropic
glm-5-turbo
Generate technical solutions and Proposals
CodexCli
OpenAI/Zed
gpt-5.4
Execute precise code changes
CodebuddyCli
Zhipu
glm-4.7
Refine proposal descriptions and documentation
IFlowCli
Zhipu
glm-4.7
Archive proposals and historical records
OpenCodeCli
-
-
General-purpose code editing
GitHubCopilot
Microsoft
-
Assisted programming and code completion
The logic behind this division of labor is simple: every Agent has its own area of strength. Claude Code performs well at understanding and analyzing complex requirements, so it handles early solution design. Codex is more precise when modifying code, so it is better suited for concrete implementation work. CodeBuddy offers strong cost performance, which makes it ideal for refining proposal text and documentation.
After all, the right tool for the right job is the best choice. There are many roads to Rome; some are simply easier to walk than others.
The interface looks simple, but it is the foundation of the entire multi-Agent system. With a unified interface, we can call AI products from different companies in the same way regardless of which company is behind them.
This is really just about making complex things simple. Simple is beautiful, after all.
This uses dependency injection through ActivatorUtilities.CreateInstance, which can dynamically create Provider instances at runtime while automatically injecting dependencies. The benefit of this design is that when a new Agent type is added, you only need to add the corresponding Provider class and then add one more case branch in the factory method. There is no need to modify the existing code at all.
That is reason enough. Who wants to rewrite a pile of old code every time a new feature is added?
The purpose of this mapping table is to convert string-form Provider names into enum types. This allows configuration files to use intuitive string names, while the internal code uses type-safe enums for processing.
Configuration should be as intuitive as possible. Nobody wants to memorize a pile of complicated code names.
In practice, everything can be configured in appsettings.json:
AI:
Providers:
Providers:
ClaudeCodeCli:
Enabled: true
Model: glm-5-turbo
WorkingDirectory: /path/to/project
CodebuddyCli:
Enabled: true
Model: glm-4.7
CodexCli:
Enabled: true
Model: gpt-5.4
IFlowCli:
Enabled: true
Model: glm-4.7
Each Provider can independently configure parameters such as enablement, model version, and working directory. This design preserves flexibility while remaining easy to manage and maintain.
Configuration files are a bit like life’s options: you can choose to enable or disable certain things. The only difference is that code choices are easier to regret later.
With the unified technical architecture in place, the next step is making multiple Agents work together. HagiCode designed a task flow mechanism so different Agents can handle different stages of the work:
└──────────────────────▶ [iFlow] ──archive──▶ Historical records
The benefit of this division of labor is that each Agent only needs to focus on the tasks it does best, rather than trying to do everything. Claude Code is responsible for generating proposals from scratch. Codebuddy makes proposal descriptions clearer. Codex turns proposals into actual code changes. iFlow archives and preserves those changes.
This is really just teamwork, much like in everyday life. Everyone has their own role, and only together can something big get done. The only difference is that the team members here happen to be AIs.
In actual operation, we summarized the following lessons:
1. Agent selection strategy matters
Tasks should not be assigned casually; they should be matched to each Agent’s strengths:
Proposal generation: use Claude Code, because it has stronger contextual understanding
Code execution: use Codex, because it is more precise for code modification
Proposal refinement: use Codebuddy, because it offers strong cost performance
Archival storage: use iFlow, because it is stable and reliable
After all, putting the right person on the right task is a timeless principle.
2. Configuration isolation ensures stability
Each Agent’s configuration is managed independently, supports environment-variable overrides, and uses separate working directories. As a result, a configuration error in one Agent does not affect the others.
This is like personal boundaries in life. Everyone needs their own space; non-interference makes harmonious coexistence possible.
3. Error-handling mechanism
A failure in a single Agent should not affect the overall workflow. We implemented a fallback strategy: when one Agent fails, the system can automatically switch to a backup plan or skip that step and continue with later tasks. At the same time, complete logging makes troubleshooting easier afterward.
Nobody can guarantee that errors will never happen. The key is how you handle them. Life works much the same way.
4. Monitoring and observability
Through the ACP protocol (our custom communication protocol based on JSON-RPC 2.0), we can track the execution status of each Agent. Session isolation ensures concurrency safety, while dynamic caching improves performance.
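As an illustration of what a status check over such a protocol looks like: only the JSON-RPC 2.0 envelope below (`jsonrpc`, `id`, `method`, `params`, `result`) is standard; the method name and payload fields are assumptions, since ACP is HagiCode's own protocol:

```typescript
// Illustrative ACP-style status query over JSON-RPC 2.0.
const statusRequest = {
  jsonrpc: '2.0' as const,
  id: 42,
  method: 'agent/status',            // assumed method name
  params: { sessionId: 'sess-01' },  // assumed params
};

const statusResponse = {
  jsonrpc: '2.0' as const,
  id: 42, // matches the request id, so concurrent sessions stay isolated
  result: { state: 'running', lastHeartbeat: '2024-01-01T00:00:00Z' },
};
```

Matching responses to requests by `id` is what makes session isolation and concurrency safety tractable.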
The things you cannot see are often the ones most likely to go wrong. Some visibility is always better than flying blind.
After adopting this multi-Agent collaboration configuration, the HagiCode project’s development efficiency improved significantly. Specifically:
Task-handling capacity doubled: in the past, one Agent had to handle many kinds of tasks at once; now tasks can be processed in parallel, and throughput has increased dramatically
More stable output quality: each Agent focuses only on what it does best, so consistency and quality both improve
Lower maintenance cost: unified interfaces and configuration management make the whole system easier to maintain and extend
Adding new Agents is simple: to integrate a new AI product, you only need to implement the interface and add configuration, without changing the core logic
This approach not only solved HagiCode’s own problems, but also proved that multi-Agent collaboration is a viable architectural choice.
The gains were quite noticeable, even if the process was a bit of a hassle. To recap, the two key mechanisms are:
Protocol unification: the ACP protocol provides standardized communication between Agents through a bidirectional mechanism based on JSON-RPC 2.0
Task routing: assign work reasonably across different Agents so each can play to its strengths, instead of expecting one Agent to do everything
This design not only solves the problem of “multiple Agents fighting each other,” but also uses the adventure party task flow mechanism to make the development process more automated and specialized.
If you are also considering introducing multiple AI assistants, I hope this article gives you some useful reference points. Of course, every project is different, and the specific approach still needs to be adjusted to the actual situation. There is no one-size-fits-all solution; the best solution is the one that fits you.
Beautiful things or people do not need to be possessed. As long as they remain beautiful, simply appreciating that beauty is enough. Technical solutions are the same: the one that suits you is the best one…
If this article was helpful to you, feel free to give the project a Star on GitHub. Your support is what keeps us sharing more. The public beta has already started, and you are welcome to install it and give it a try.
Thank you for reading. If you found this article useful, please click the like button below so more people can discover it.
This content was created with AI-assisted collaboration, reviewed by me, and reflects my own views and position.
Traditional AI coding tools are actually quite powerful; they just lack a bit of warmth. When we were building HagiCode, we thought: if we are going to write code anyway, why not turn it into a game?
Anyone who has used an AI coding assistant has probably had this experience: at first it feels fresh and exciting, but after a while it starts to feel like something is missing. The tool itself is powerful, capable of code generation, autocomplete, and bug fixes, but… it does not feel very warm, and over time it can become monotonous and dull.
That alone is enough to make you wonder who wants to stare at a cold, impersonal tool every day.
It is a bit like playing a game. If all you do is finish a task list, with no character growth, no achievement unlocks, and no team coordination, it quickly stops being fun. Beautiful things and people do not need to be possessed to be appreciated; their beauty is enough on its own. Programming tools do not even offer that kind of beauty, so it is easy to lose heart.
We ran into exactly this problem while developing HagiCode. As a multi-AI assistant collaboration platform, HagiCode needs to keep users engaged over the long term. But in reality, even a great tool is hard to stick with if it lacks any emotional connection.
To solve this pain point, we made a bold decision: turn programming into a game. Not the superficial kind with a simple points leaderboard, but a true role-playing gamified experience. The impact of that decision may be even bigger than you imagine.
After all, people need a bit of ritual in their lives.
The ideas shared in this article come from our practical experience on the HagiCode project. HagiCode is a multi-AI assistant collaboration platform that supports Claude Code, Codex, Copilot, OpenCode, and other AI assistants working together. If you are interested in multi-AI collaboration or gamified programming, visit github.com/HagiCode-org/site to learn more.
There is nothing especially mysterious about it. We simply turned programming into an adventure.
The essence of gamification is not just “adding a leaderboard.” It is about building a complete incentive system so users can feel growth, achievement, and social recognition while doing tasks.
HagiCode’s gamification design revolves around one core idea: every AI assistant is a “Hero,” and the user is the captain of this Hero team. You lead these Heroes to conquer various “Dungeons” (programming tasks). Along the way, Heroes gain experience, level up, unlock abilities, and your team earns achievements as well.
This is not a gimmick. It is a design grounded in human behavioral psychology. When tasks are given meaning and progress feedback, people’s engagement and persistence increase significantly.
As the old saying goes, “This feeling can become a memory, though at the time it left us bewildered.” We bring that emotional experience into the tool, so programming is no longer just typing code, but a journey worth remembering.
Hero is the core concept in HagiCode’s gamification system. Each Hero represents one AI assistant. For example, Claude Code is a Hero, and Codex is also a Hero.
A Hero has three equipment slots, and the design is surprisingly elegant:
CLI slot (main class): Determines the Hero’s base ability, such as whether it is Claude Code or Codex
Model slot (secondary class): Determines which model is used, such as Claude 4.5 or Claude 4.6
Style slot (style): Determines the Hero’s behavior style, such as “Fengluo Strategist” or another style
The combination of these three slots creates unique Hero configurations. Much like equipment builds in games, you choose the right setup based on the task. After all, what suits you best is what matters most. Life is similar: many roads lead to Rome, but some are smoother than others.
For example, Hero levels map to growth stages (the wrapper below is reconstructed for readability; the article's source shows only the branch logic, so the function name is an assumption):

const getGrowthStage = (normalizedLevel: number) => {
  if (normalizedLevel <= 100) return 'rookieSprint';   // Rookie sprint
  if (normalizedLevel <= 300) return 'growthRun';      // Growth run
  if (normalizedLevel <= 700) return 'veteranClimb';   // Veteran climb
  return 'legendMarathon';                             // Legend marathon
};
From “rookie” to “legend,” this growth path gives users a clear sense of direction and achievement. It mirrors personal growth in life, from confusion to maturity, only made more tangible here.
To create a Hero, you need to configure three slots:
const heroDraft: HeroDraft = {
name: 'Athena',
icon: 'hero-avatar:storm-03',
description: 'A brilliant strategist',
executorType: AIProviderType.CLAUDE_CODE_CLI,
slots: {
cli: {
id: 'profession-claude-code',
parameters: { /* CLI-related parameters */ }
},
model: {
id: 'secondary-claude-4-sonnet',
parameters: { /* Model-related parameters */ }
},
style: {
id: 'fengluo-strategist',
parameters: { /* Style-related parameters */ }
}
}
};
Every Hero has a unique avatar, description, and professional identity, which gives what would otherwise be a cold AI assistant more personality and warmth. After all, who wants to work with a tool that has no character?
For example, you can use Athena for generating proposals because it is good at strategy, and Apollo for implementing code because it is good at execution. That way, every Hero can play to its strengths. It is like forming a band: each person has an instrument, and together they create something beautiful.
Dungeon uses fixed scriptKey values to identify different workflows:
// Script keys map to different workflows
const dungeonScripts = {
'proposal.generate': 'Proposal Generation',
'proposal.execute': 'Proposal Execution',
'proposal.archive': 'Proposal Archive'
};
The task state flow is: queued (waiting) -> dispatching (being assigned) -> dispatched (assigned). The whole process is automated and requires no manual intervention. That is also part of our lazy side, because who wants to manage this stuff by hand?
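The state flow above can be sketched as a tiny transition table (the states come from the article; the helper is illustrative):

```typescript
// queued → dispatching → dispatched, as described above.
type TaskState = 'queued' | 'dispatching' | 'dispatched';

const nextState: Record<TaskState, TaskState | null> = {
  queued: 'dispatching',
  dispatching: 'dispatched',
  dispatched: null, // terminal state
};

function advance(state: TaskState): TaskState {
  const next = nextState[state];
  if (next === null) throw new Error(`'${state}' is a terminal state`);
  return next;
}
```

Encoding the flow as data keeps the automation simple: there is exactly one legal next state, so no manual intervention is needed.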
XP is the core feedback mechanism in the gamification system. Users gain XP by completing tasks, XP levels up Heroes, and leveling up unlocks new abilities, forming a positive feedback loop.
In HagiCode, XP can be earned through the following activities:
Completing code execution
Successfully calling tools
Generating proposals
Session management operations
Project operations
Every time a valid action is completed, the corresponding Hero gains XP. Just like growth in life, every step counts, only here that growth is quantified.
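A minimal sketch of this feedback loop, with made-up XP values and a hypothetical linear level curve (the real numbers and curve are not shown in the article):

```typescript
// Each valid action grants XP; accumulated XP determines the level.
const xpByAction: Record<string, number> = {
  codeExecution: 50,      // completing code execution
  toolCall: 10,           // successfully calling a tool
  proposalGenerated: 30,  // generating a proposal
};

// Hypothetical linear curve: every 100 XP is one level.
function levelFromXp(totalXp: number): number {
  return Math.floor(totalXp / 100) + 1;
}

let heroXp = 0;
for (const action of ['codeExecution', 'codeExecution', 'proposalGenerated']) {
  heroXp += xpByAction[action]; // 50 + 50 + 30 = 130 XP
}
```

With 130 XP the Hero sits at level 2, and the visible remainder toward level 3 is exactly the progress feedback the design relies on.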
Users can always see each Hero’s level and progress, and that immediate feedback is the key to gamification design. People need feedback, otherwise how would they know they are improving?
When an achievement is unlocked, a celebration animation is triggered. That kind of positive reinforcement gives users the satisfying feeling of “I did it” and motivates them to keep going. Small rewards in life work the same way: they may be small, but the happiness can last a long time.
The MVP is the best-performing Hero of the day and is highlighted in the report. This is not just data statistics, but a form of honor and recognition. After all, who does not want to be recognized?
HagiCode’s gamification system uses a modern technology stack and design patterns. There is nothing especially magical about it; we just chose tools that fit the job.
Framer Motion handles all animation effects, shadcn/ui provides the foundational UI components, and Redux Toolkit manages the complex gamification state. Good tools make good work.
Framer Motion is used to create smooth entrance animations:
<motion.div
animate={{ opacity: 1, y: 0 }}
initial={{ opacity: 0, y: 18 }}
transition={{ duration: 0.35, ease: 'easeOut', delay: index * 0.08 }}
className="card"
>
{/* Card content */}
</motion.div>
Each card enters one after another with a delay of 0.08 seconds, creating a fluid visual effect. Smooth animation improves the experience. That part is hard to argue with.
Gamification data is stored using the Grain storage system to ensure state consistency. Even fine-grained data like accumulated Hero XP can be persisted accurately. No one wants to lose the experience they worked hard to earn.
Gamification is not just a simple points leaderboard, but a complete incentive system. Through the Hero system, Dungeon system, XP and level system, achievement system, and Battle Report, HagiCode transforms programming work into a heroic journey full of adventure.
The core value of this system lies in:
Emotional connection: Giving cold AI assistants personality
Positive feedback: Every action produces immediate feedback
Long-term goals: Levels and achievements provide a growth path
Team identity: A sense of collaboration within Dungeon teams
Honor and recognition: Battle Report and MVP showcases
Gamification design makes programming no longer dull, but an interesting adventure. While completing coding tasks, users also experience the fun of character growth, team collaboration, and achievement unlocking, which improves retention and activity.
At its core, programming is already an act of creation. We just made the creative process a little more fun.
This article shares the technical approach we used under the Orleans Grain architecture to integrate two AI tools, iflow and OpenCode, through a unified IAIProvider interface, and compares the implementation differences between WebSocket and HTTP communication in detail.
There is nothing especially mysterious about it. While building HagiCode, we ran into a very practical problem: users wanted to work with different AI tools. That is hardly surprising, since everyone has their own habits. Some prefer Claude Code, some love GitHub Copilot, and some teams use tools they developed themselves.
Our initial solution was simple and direct: write dedicated integration code for each AI tool. But the drawbacks showed up quickly. The codebase filled up with if-else branches, every change required testing in multiple places, and every new tool meant writing another pile of logic from scratch.
Later, I realized it would be better to create a unified IAIProvider interface and abstract the capabilities shared by all AI providers. That way, no matter which tool is used underneath, the upper layers can call it in the same way.
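The real IAIProvider lives in HagiCode's C# backend; as an illustrative TypeScript analogue (the member names are assumptions for this sketch), the shape of the abstraction looks roughly like this:

```typescript
// Illustrative analogue of a unified provider interface.
interface AIProvider {
  readonly name: string;
  ping(): Promise<boolean>;
  sendPrompt(sessionId: string, prompt: string): Promise<string>;
}

// A WebSocket-backed and an HTTP-backed provider can both satisfy the
// same interface, so callers never branch on the transport. This toy
// implementation just echoes, to keep the sketch self-contained.
class EchoProvider implements AIProvider {
  constructor(readonly name: string) {}
  async ping(): Promise<boolean> { return true; }
  async sendPrompt(_sessionId: string, prompt: string): Promise<string> {
    return `${this.name}: ${prompt}`;
  }
}

const providers: AIProvider[] = [new EchoProvider('iflow'), new EchoProvider('opencode')];
```

Upper layers iterate over `providers` without knowing which transport sits underneath, which is exactly the property the unified interface buys.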
Recently, the project needed to integrate two new tools: iflow and OpenCode. Both support the ACP protocol, but their communication styles are different. iflow uses WebSocket, while OpenCode uses an HTTP API. That became a useful architectural test: adapt two different transport modes behind one unified interface.
The approach shared in this article comes from our practical experience in the HagiCode project. HagiCode is an AI-assisted development platform built on the Orleans Grain architecture. It integrates with different AI providers through a unified IAIProvider interface, allowing users to flexibly choose the AI tools they prefer.
OpenCode provides its service through an HTTP API, so the architecture is slightly different:
OpenCodeCliProvider → OpenCodeRuntimeManager → OpenCodeClient → OpenCode HTTP API
                              ↓
                  OpenCodeProcessManager → opencode process management
A notable feature of OpenCode is that it uses an SQLite database to persist session bindings, which makes session recovery and prompt-response recovery possible.
Which approach you choose depends mainly on your needs. WebSocket is better for scenarios with high real-time requirements, while an HTTP API is simpler and easier to debug.
The executable path is validated at startup, but runtime issues can still happen. PingAsync is a useful tool for verifying whether the configuration is correct:
// Check at startup
var provider = await _providerFactory.GetProviderAsync(providerType);
var result = await provider.PingAsync(cancellationToken);
if (!result.Success)
{
    _logger.LogError("Provider {ProviderType} is unavailable: {Error}", providerType, result.ErrorMessage);
}
This article shares the technical approach used by the HagiCode platform when integrating the two AI tools iflow and OpenCode. Through a unified IAIProvider interface, we adapted different communication styles, WebSocket and HTTP, while keeping the upper-layer calling pattern consistent.
The core idea is actually quite simple:
Define a unified interface abstraction.
Build adapter layers for different implementations.
Manage everything uniformly through the factory pattern.
That gives the system good extensibility. When a new AI tool needs to be integrated later, all we need to do is implement the IAIProvider interface without changing too much existing code.
If you are also working on multi-AI-tool integration, I hope this article is helpful.
In the modern developer-tooling ecosystem, developers often need to use different AI coding assistants to support their work. Anthropic’s Claude Code CLI and OpenAI’s Codex CLI each have their own strengths: Claude is known for outstanding code understanding and long-context handling, while Codex excels at code generation and tool usage.
This article takes an in-depth look at how the HagiCode project achieves seamless switching and interoperability across multiple AI providers, including the core architectural design, key implementation details, and practical considerations.
Session state persistence: Database bindings ensure session continuity
Desktop integration: AgentCliManager supports user selection and configuration
The advantages of this architecture are:
Extensibility: Adding a new AI provider only requires implementing the IAIProvider interface
Testability: Providers can be tested and mocked independently
Maintainability: Each provider implementation is isolated and has a single responsibility
User-friendliness: Support both scenario-based automatic selection and manual switching
With this design, HagiCode successfully enables seamless switching and interoperability between Claude Code CLI and Codex CLI, giving developers a flexible and powerful AI coding assistant experience.
This article explains in detail how to implement hotword support for Doubao speech recognition in the HagiCode project. By using both custom hotwords and platform hotword tables, you can significantly improve recognition accuracy for domain-specific vocabulary.
Speech recognition technology has developed for many years, yet one problem has consistently bothered developers. General-purpose speech recognition models can cover everyday language, but they often fall short when it comes to professional terminology, product names, and personal names. Think about it: a voice assistant in the medical field needs to accurately recognize terms like “hypertension,” “diabetes,” and “coronary heart disease”; a legal system needs to precisely capture terms such as “cause of action,” “defense,” and “burden of proof.” In these scenarios, a general-purpose model is trying its best, but that is often not enough.
We ran into the same challenge in the HagiCode project. As a multifunctional AI coding assistant, HagiCode needs to handle speech recognition for a wide range of technical terminology. However, the Doubao speech recognition API, in its default configuration, could not fully meet our accuracy requirements for specialized terms. It is not that Doubao is not good enough; rather, every domain has its own terminology system. After some research and technical exploration, we found that the Doubao speech recognition API actually provides hotword support. With a straightforward configuration, it can significantly improve the recognition accuracy of specific vocabulary. In a sense, once you tell it which words to pay attention to, it listens for them more carefully.
What this article shares is the complete solution we used in the HagiCode project to implement Doubao speech recognition hotwords. Both modes, custom hotwords and platform hotword tables, are available, and they can also be combined. With this solution, developers can flexibly configure hotwords based on business scenarios so the speech recognition system can better “recognize” professional, uncommon, yet critical vocabulary.
The solution shared in this article comes from our practical experience in the HagiCode project. HagiCode is an open-source AI coding assistant project with a modern technology stack, designed to provide developers with an intelligent programming assistance experience. As a complex multilingual, multi-platform project, HagiCode needs to handle speech recognition scenarios involving many technical terms, which in turn drove our research into and implementation of the hotword feature.
If you are interested in HagiCode’s technical implementation, you can visit the GitHub repository for more details, or check out our official documentation for the complete installation and usage guide.
The Doubao speech recognition API provides two ways to configure hotwords, and each one has its own ideal use cases and advantages.
Custom hotword mode lets us pass hotword text directly through the corpus.context field. This approach is especially suitable for scenarios where you need to quickly configure a small number of hotwords, such as temporarily recognizing a product name or a person’s name. In HagiCode’s implementation, we parse the multi-line hotword text entered by the user into a list of strings, then format it into the context_data array required by the Doubao API. This approach is very direct: you simply tell the system which words to pay attention to, and it does exactly that.
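The parsing step described above can be sketched as follows: multi-line hotword text becomes a list of strings, then the array passed via `corpus.context`. The inner `{ text }` field shape is an assumption for illustration; consult the official Doubao documentation for the authoritative `context_data` schema:

```typescript
// Multi-line hotword text → trimmed, non-empty lines → context entries.
function buildContextData(contextText: string): { text: string }[] {
  return contextText
    .split('\n')
    .map(line => line.trim())
    .filter(line => line.length > 0) // blank lines are ignored
    .map(text => ({ text }));
}

const entries = buildContextData('高血压\n糖尿病\n\n冠心病');
// three entries; the empty line is dropped
```

One word per line keeps the user-facing input trivial while the formatting burden stays in code.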
Platform hotword table mode uses the corpus.boosting_table_id field to reference a preconfigured hotword table in the Doubao self-learning platform. This approach is suitable for scenarios where you need to manage a large number of hotwords. We can create and maintain hotword tables on the Doubao self-learning platform, then reference them by ID. For a project like HagiCode, where specialized terms need to be continuously updated and maintained, this mode offers much better manageability. Once the number of hotwords grows, having a centralized place to manage them is far better than entering them manually every time.
Interestingly, these two modes can also be used together. The Doubao API supports including both custom hotwords and a platform hotword table ID in the same request, with the combination strategy controlled by the combine_mode parameter. This flexibility allows HagiCode to handle a wide range of complex professional terminology recognition needs. Sometimes, combining multiple approaches produces better results.
In HagiCode’s frontend implementation, we defined a complete set of hotword configuration types and validation logic. The first part is the type definition:
export interface HotwordConfig {
  contextText: string;      // Multi-line hotword text
  boostingTableId: string;  // Doubao platform hotword table ID
  combineMode: boolean;     // Whether to use both together
}
This simple interface contains all configuration items for the hotword feature. Among them, contextText is the part users interact with most directly: we allow users to enter one hotword phrase per line, which is very intuitive. Asking users to enter one term per line is much easier than making them understand a complicated configuration format.
Next comes the validation function. Based on the Doubao API limitations, we defined strict validation rules: at most 100 lines of hotword text, up to 50 characters per line, and no more than 5000 characters in total; boosting_table_id can be at most 200 characters and may contain only letters, numbers, underscores, and hyphens. These limits are not arbitrary; they come directly from the official Doubao documentation. API limits are API limits, and we have to follow them.
// Function signature reconstructed; the article shows only the body
function validateBoostingTableId(boostingTableId: string): { isValid: boolean; errors: string[] } {
  if (!boostingTableId || boostingTableId.trim().length === 0) {
    return { isValid: true, errors: [] };
  }
  const errors: string[] = [];
  if (boostingTableId.length > 200) {
    errors.push(`boosting_table_id cannot exceed 200 characters; current count is ${boostingTableId.length}`);
  }
  if (!/^[a-zA-Z0-9_-]+$/.test(boostingTableId)) {
    errors.push('boosting_table_id can contain only letters, numbers, underscores, and hyphens');
  }
  return { isValid: errors.length === 0, errors };
}
These validation functions run immediately when the user configures hotwords, ensuring that problems are caught as early as possible. From a user experience perspective, this kind of instant feedback is very important. It is always better for users to know what is wrong while they are typing rather than after they submit.
In HagiCode’s frontend implementation, we chose to use the browser’s localStorage to store hotword configuration. There were several considerations behind this design decision. First, hotword configuration is highly personalized, and different users may have different domain-specific needs. Second, this approach simplifies the backend implementation because it does not require extra database tables or API endpoints. Finally, after users configure it once in the browser, the settings can be loaded automatically on subsequent uses, which is very convenient. Put simply, it is the easiest approach.
The logic in this code is straightforward and clear. We read from localStorage when loading configuration, and write to localStorage when saving it. We also provide a default configuration so the system can still work properly when no configuration exists yet. There has to be a sensible default, after all.
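A minimal sketch of that load/save logic. The storage key and helper names are assumptions; the config shape matches the `HotwordConfig` interface shown earlier, and the storage is abstracted behind a tiny interface that the browser's `localStorage` happens to satisfy:

```typescript
interface HotwordConfig {
  contextText: string;
  boostingTableId: string;
  combineMode: boolean;
}

// Minimal storage shape; in the browser, localStorage satisfies this.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const STORAGE_KEY = 'hagicode.hotwordConfig'; // hypothetical key

const defaultConfig: HotwordConfig = { contextText: '', boostingTableId: '', combineMode: false };

function loadHotwordConfig(storage: KVStore): HotwordConfig {
  const raw = storage.getItem(STORAGE_KEY);
  // Spread over defaults so missing fields still get sensible values
  return raw ? { ...defaultConfig, ...JSON.parse(raw) } : defaultConfig;
}

function saveHotwordConfig(storage: KVStore, config: HotwordConfig): void {
  storage.setItem(STORAGE_KEY, JSON.stringify(config));
}
```

Spreading the parsed value over the defaults also makes old stored configs forward-compatible when new fields are added.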
In HagiCode’s backend implementation, we needed to add hotword-related properties to the SDK configuration class. Taking C# language characteristics and usage patterns into account, we used List<string> to store custom hotword contexts:
// Class declaration reconstructed for readability; the class name is an
// assumption — the article shows only these two properties
public class DoubaoSpeechConfig
{
    /// <summary>
    /// Custom hotword contexts (one phrase per entry)
    /// </summary>
    public List<string>? HotwordContexts { get; set; }

    /// <summary>
    /// Doubao platform hotword table ID
    /// </summary>
    public string? BoostingTableId { get; set; }
}
The design of this configuration class follows HagiCode’s usual concise style. HotwordContexts is a nullable list type, and BoostingTableId is a nullable string, so when there is no hotword configuration, these properties have no effect on the request at all. If you are not using the feature, it should stay out of the way.
Payload construction is the core of the entire hotword feature. Once we have hotword configuration, we need to format it into the JSON structure required by the Doubao API. This process happens before the SDK sends the request:
    // Add corpus to the request only when it is not empty
    if (corpus.Count > 0)
    {
        request["corpus"] = corpus;
    }
}
This code shows how to dynamically construct the corpus field based on configuration. The key point is that we add the corpus field only when hotword configuration actually exists. This design ensures backward compatibility: when no hotwords are configured, the request structure remains exactly the same as before. Backward compatibility matters; adding a feature should not disrupt existing logic.
Between the frontend and backend, hotword parameters are passed through WebSocket control messages. HagiCode is designed so that when the frontend starts recording, it loads the hotword configuration from localStorage and sends it to the backend through a WebSocket message.
const controlMessage = {
type: 'control',
payload: {
command: 'StartRecognition',
contextText: '高血压\n糖尿病\n冠心病',
boosting_table_id: 'medical_table',
combineMode: false
}
};
There is one detail to note here: the frontend passes multi-line text separated by newline characters, and the backend needs to parse it. The backend WebSocket handler parses these parameters and passes them to the SDK:
private async Task HandleControlMessageAsync(
    string connectionId,
    DoubaoSession session,
    ControlMessage message)
{
    if (message.Payload is SessionControlRequest controlRequest)
    {
        // Parse the hotword parameters and pass them on to the SDK
        // ...
    }
}
With this design, passing hotword configuration from frontend to backend becomes clear and efficient. There is nothing especially mysterious about it; the data is simply passed through layer by layer.
In real usage, configuring custom hotwords is very simple. Open the speech recognition settings page in HagiCode and find the “Hotword Configuration” section. In the “Custom Hotword Text” input box, enter one hotword phrase per line.
For example, if you are developing a medical-related application, you could configure it like this:
高血压
糖尿病
冠心病
心绞痛
心肌梗死
心力衰竭
After you save the configuration, these hotwords are automatically passed to the Doubao API every time speech recognition starts. In our tests, once hotwords were configured, the recognition accuracy for related professional terms improved noticeably. The improvement is real, and clearly better than before.
If you need to manage a large number of hotwords, or if the hotwords need frequent updates, the platform hotword table mode is a better fit. First, create a hotword table on the Doubao self-learning platform and obtain the generated boosting_table_id, then enter this ID on the HagiCode settings page.
The Doubao self-learning platform provides capabilities such as bulk import and categorized management for hotwords, which is very practical for teams that need to manage large sets of specialized terminology. By managing hotwords on the platform, you can maintain them centrally and roll out updates consistently. Once the hotword list becomes large, having a single place to manage it is much more practical than manual entry every time.
In some complex scenarios, you may need to use both custom hotwords and a platform hotword table at the same time. In that case, simply configure both in HagiCode and enable the “Combination Mode” switch.
In combination mode, the Doubao API considers both hotword sources at the same time, so recognition accuracy is usually higher than using either source alone. However, it is worth noting that combination mode increases request complexity, so it is best to decide whether to enable it after practical testing. More complexity is only worth it if the real-world results justify it.
There are several points that deserve special attention when implementing and using the hotword feature.
First is the character limit. The Doubao API has strict restrictions on hotwords, including line count, characters per line, and total character count. If any limit is exceeded, the API returns an error. In HagiCode’s frontend implementation, we check these constraints during user input through validation functions, which prevents invalid configurations from being sent to the backend. Catching problems early is always better than waiting for the API to fail.
Second is the format of boosting_table_id. This field allows only letters, numbers, underscores, and hyphens, and it cannot contain spaces or other special characters. When creating a hotword table on the Doubao self-learning platform, be sure to follow the naming rules. That kind of strict format validation is common for APIs.
Third is backward compatibility. Hotword parameters are entirely optional. If no hotwords are configured, the system behaves exactly as it did before. This design ensures that existing users are not affected in any way, and it also makes gradual migration and upgrades easier. Adding a feature should not disrupt the previous logic.
Finally, there is error handling. When hotword configuration is invalid, the Doubao API returns corresponding error messages. HagiCode’s implementation records detailed logs to help developers troubleshoot issues. At the same time, the frontend displays validation errors in the UI to help users correct the configuration. Good error handling naturally leads to a better user experience.
Through this article, we have provided a detailed introduction to the complete solution for implementing Doubao speech recognition hotwords in the HagiCode project. This solution covers the entire process from requirement analysis and technical selection to code implementation, giving developers a practical example they can use for reference.
The key points can be summarized as follows. First, the Doubao API supports both custom hotwords and platform hotword tables, and they can be used independently or in combination. Second, the frontend uses localStorage to store configuration in a simple and efficient way. Third, the backend passes hotword parameters by dynamically constructing the corpus field, preserving strong backward compatibility. Fourth, comprehensive validation logic ensures configuration correctness and avoids invalid requests. Overall, the solution is not complicated; it simply follows the API requirements carefully.
Implementing the hotword feature further strengthens HagiCode’s capabilities in the speech recognition domain. By flexibly configuring business-related professional terms, developers can help the speech recognition system better understand content from specific domains and therefore provide more accurate services. Ultimately, technology should serve real business needs, and solving practical problems is what matters most.
If you found this article helpful, feel free to give HagiCode a Star on GitHub. Your support motivates us to keep sharing technical practice and experience. In the end, writing and sharing technical content that helps others is a pleasure in itself.
In the software development process, committing code is a routine task every programmer faces every day. But have you ever run into this situation: at the end of a workday, you open Git, see dozens of unstaged modified files, and have no idea how to organize them into sensible commits?
The traditional approach is to manually stage files in batches, commit them one by one, and write commit messages. This process is both time-consuming and error-prone. We often waste quite a bit of time on this, and after all, nobody wants to worry about these tedious chores late at night when they are already tired.
In the HagiCode project, we introduced a new feature - AI Compose Commit - designed to completely transform this workflow. By using AI to intelligently analyze all uncommitted changes in the working tree, it automatically groups them into multiple logical commits and performs standards-compliant commit operations. In this article, we will take a deep dive into the implementation principles, technical architecture, and the challenges and solutions we encountered in practice.
As a version control system, Git gives developers powerful code management capabilities. But in real-world usage, committing often becomes a bottleneck in the development workflow:
- **Manual grouping is time-consuming**: When there are many file changes, developers need to inspect each file one by one and decide which changes belong to the same feature. That takes a lot of mental effort.
- **Inconsistent commit message quality**: Writing commit messages that follow the Conventional Commits specification requires experience and skill, and beginners often produce non-standard commits.
- **Complex multi-repository management**: In a monorepo environment, switching between different repositories adds operational complexity.
- **Interrupted workflow**: Committing code interrupts your train of thought and hurts coding efficiency.
These issues are especially obvious in large projects and collaborative team environments. A good development tool should let developers focus on core coding work instead of getting bogged down in a cumbersome commit workflow.
In recent years, AI has been used more and more widely in software development. From code completion and bug detection to automatic documentation generation, AI is gradually reaching every stage of the development process. In Git workflows, while some tools already support commit message generation, most are limited to single-commit scenarios and lack the ability to intelligently analyze and group changes across the entire working tree.
HagiCode encountered these pain points during development as well. We tried many tools, but each had one limitation or another. Either the functionality was incomplete, or the user experience was not good enough. That is why we ultimately decided to implement AI Compose Commit ourselves.
HagiCode’s AI Compose Commit feature was created to fill that gap. It does not just generate commit messages - it takes over the entire process from file analysis to commit execution.
While implementing AI Compose Commit, we faced several technical challenges:
- **File semantic understanding**: The AI needs to understand semantic relationships between file changes and decide which files belong to the same functional module. This requires deep analysis of file content, directory structure, and change context.
- **Commit grouping strategy**: How should a reasonable grouping standard be defined? By feature, by module, or by file type? Different projects may need different strategies.
- **Real-time feedback and asynchronous processing**: Git operations can take a long time, especially when handling a large number of files. How can we complete complex operations while preserving a good user experience?
- **Multi-repository support**: In a monorepo architecture, operations need to be routed correctly between the main repository and sub-repositories.
- **Error handling and rollback**: If one commit fails, how should already executed commits be handled? Do already staged files need to be rolled back?
- **Commit message consistency**: Generated commit messages need to match the project's existing style and remain consistent with historical commits.
AI processing over a large number of file changes consumes significant time and compute resources. We needed to optimize in the following areas:
- Reduce unnecessary AI calls
- Optimize how file context is constructed
- Implement efficient Git operation batching
These issues all appeared in real HagiCode usage, and we only arrived at a relatively complete solution through repeated iteration and optimization. If you are building a similar tool, we hope our experience gives you some inspiration.
GitController provides the POST /api/git/auto-compose-commit endpoint as the entry point. To optimize user experience, we adopted a fire-and-forget asynchronous pattern:
- After the client sends a request, the server immediately returns HTTP 202 Accepted
- The actual AI processing runs asynchronously in the background
- When processing finishes, the client is notified through SignalR
This design ensures that even if AI processing takes several minutes, users still get an immediate response and do not feel that the system is frozen.
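The pattern can be sketched framework-agnostically: the handler registers a background job and returns 202 immediately, while completion is pushed later (via SignalR in HagiCode; here just a callback). All names below (`composeJobs`, `runAiCompose`) are illustrative, not HagiCode's actual identifiers.

```typescript
// Minimal fire-and-forget sketch: accept the request, start background
// work, and return HTTP 202 immediately. Names are illustrative.
type JobStatus = "processing" | "succeeded" | "failed";

const composeJobs = new Map<string, JobStatus>();

// Placeholder for the long-running AI analysis + commit step.
async function runAiCompose(projectId: string): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 10)); // simulate work
}

function handleAutoComposeCommit(
  projectId: string,
  notify: (projectId: string, status: JobStatus) => void = () => {},
): { status: number } {
  composeJobs.set(projectId, "processing");

  // Kick off the work without awaiting it; errors update job state
  // instead of failing the HTTP response.
  runAiCompose(projectId)
    .then(() => composeJobs.set(projectId, "succeeded"))
    .catch(() => composeJobs.set(projectId, "failed"))
    .finally(() => notify(projectId, composeJobs.get(projectId)!));

  return { status: 202 }; // 202 Accepted: work continues in the background
}
```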
The background processing method sets a 20-minute timeout to handle large change sets. In real-world HagiCode usage, we found that some projects can involve hundreds of changed files in a single pass, requiring more processing time.
Through the abstract IAIService interface, we implemented a pluggable AI service architecture. We currently use the Claude Helper service, but it can be easily switched to other AI providers.
The AI needs to understand the state of each file before it can make intelligent decisions. We build file context through the BuildFileChangesXml method:
```csharp
/// <summary>
/// Build an XML representation of file changes to provide the AI with complete file context information
/// </summary>
/// <param name="stagedFiles">List of staged files, including file path, status, and old path (for rename operations)</param>
/// <returns>A formatted XML string containing metadata for all files</returns>
```
This XML-based context includes file paths, statuses, and old paths for rename operations, giving the AI complete metadata. With a structured XML format, we ensure that the AI can accurately understand the state and change type of each file.
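The method itself is not shown in full above; a minimal TypeScript rendering of the same idea might look like this. The element and attribute names (`fileChanges`, `path`, `status`, `oldPath`) are assumptions based on the doc comment, not HagiCode's exact output.

```typescript
// Sketch of the file-context XML described above. Element/attribute
// names are assumptions, not HagiCode's actual serialization.
interface StagedFile {
  path: string;
  status: "added" | "modified" | "deleted" | "renamed";
  oldPath?: string; // present only for renames
}

function escapeXml(s: string): string {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

function buildFileChangesXml(stagedFiles: StagedFile[]): string {
  const items = stagedFiles.map((f) => {
    const oldPath = f.oldPath ? ` oldPath="${escapeXml(f.oldPath)}"` : "";
    return `  <file path="${escapeXml(f.path)}" status="${f.status}"${oldPath} />`;
  });
  return `<fileChanges>\n${items.join("\n")}\n</fileChanges>`;
}
```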
```csharp
Prompt = prompt,                                      // Complete prompt template, including task instructions and constraints
WorkingDirectory = projectPath ?? GetTempDirectory(), // Working directory, ensuring the AI runs in the correct project context
AllowedTools = allowedTools,                          // Allowed tool set
PermissionMode = PermissionMode.bypassPermissions,    // Bypass permission checks so Git operations can run directly
LanguagePreference = languagePreference               // Language preference setting, ensuring commit messages match user expectations
};
```
Here we use PermissionMode.bypassPermissions, which allows the AI to execute Git commands directly without user confirmation. This is central to the feature design, but it also requires strict input validation to prevent abuse. In HagiCode’s production deployment, we ensured the safety of this mechanism through backend parameter validation and log monitoring.
The lock has a 20-minute timeout, matching the timeout used for AI operations. If the operation fails or times out, the system automatically releases the lock to avoid permanent blocking. In real HagiCode usage, we found this lock mechanism to be extremely important, especially in collaborative environments where multiple developers may trigger AI Compose Commit at the same time.
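A minimal sketch of a timeout-bearing lock consistent with this behavior follows. HagiCode's actual implementation is not shown in the article, so all names here are illustrative; the key property is that an expired lock is treated as released, which is what prevents permanent blocking.

```typescript
// Illustrative repository lock with a timeout: a lock auto-expires so a
// crashed or timed-out operation can never block the repository forever.
const LOCK_TIMEOUT_MS = 20 * 60 * 1000; // matches the 20-minute AI timeout

interface RepoLock { holder: string; expiresAt: number; }
const locks = new Map<string, RepoLock>();

function tryAcquireLock(repo: string, holder: string, now = Date.now()): boolean {
  const existing = locks.get(repo);
  // An expired lock counts as released — this is the "automatic release".
  if (existing && existing.expiresAt > now) return false;
  locks.set(repo, { holder, expiresAt: now + LOCK_TIMEOUT_MS });
  return true;
}

function releaseLock(repo: string, holder: string): void {
  // Only the current holder may release, so a late finisher cannot
  // accidentally release someone else's lock.
  if (locks.get(repo)?.holder === holder) locks.delete(repo);
}
```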
```csharp
try
{
    // (SignalR notification call elided)
    logger.LogInformation(
        "Auto compose commit notification sent for project {ProjectId}: {SuccessCount}/{TotalCount} succeeded",
        projectId, successCount, totalCount);
}
catch (Exception ex)
{
    // Log notification errors without affecting the main operation flow;
    // a notification failure should not cause the entire operation to fail
    logger.LogError(ex, "Failed to send auto compose commit notification for project {ProjectId}", projectId);
}
```
After the frontend receives the notification, it can update the UI to show whether the commit succeeded or failed, improving the user experience. This real-time feedback mechanism has been very well received by HagiCode users, who can clearly see when the operation finishes and what the outcome is.
AI behavior is entirely determined by the prompt, so we carefully designed the prompt template for Auto Compose Commit. Taking the Chinese version as an example (auto-compose-commit.zh-CN.hbs):
At the beginning of the prompt, we explicitly declare support for non-interactive execution mode, which is a critical requirement for CI/CD and automation scripts:
```text
**Important Note**: This prompt may run in a non-interactive environment (such as CI/CD or automation scripts).

**Non-Interactive Mode**:
- Do not use AskUserQuestion or any interactive tools
- When user input is required:
  - Use sensible defaults (for example, use feat as the commit type)
  - Skip optional confirmation steps
  - Record any assumptions made
```
This design ensures that AI Compose Commit can be used not only in interactive IDE environments, but also integrated into CI/CD pipelines to deliver a fully automated commit workflow.
To prevent the AI from executing dangerous operations, we added strict branch protection rules to the prompt:
```text
**Branch Protection**:
- Do not perform any branch switching operations (git checkout, git switch)
- All `git commit` commands must run on the current branch
- Do not create, delete, or rename branches
- Do not modify untracked files or unstaged changes
- If branch switching is required to complete the operation, return an error instead of executing it
```
By constraining the AI’s tool usage scope, these rules ensure operational safety. In HagiCode’s practical testing, we verified the effectiveness of these constraints: when the AI encounters a situation that would require a branch switch, it safely returns an error instead of taking dangerous action.
The prompt defines the decision logic for file grouping in detail:
```text
**File Grouping Decision Tree**:
├── Is it a configuration file (package.json, tsconfig.json, .env, etc.)?
│   ├── Yes -> separate commit (type: chore or build)
│   └── No -> continue
├── Is it a documentation file (README.md, *.md, docs/**)?
│   ├── Yes -> separate commit (type: docs)
│   └── No -> continue
├── Is it related to the same feature?
│   ├── Yes -> merge into the same commit
│   └── No -> commit separately
└── Is it a cross-module change?
    ├── Yes -> group by module
    └── No -> group by feature
```
This decision tree gives the AI clear grouping logic, ensuring the generated commits remain semantically reasonable. In real HagiCode usage, we found that this decision tree can handle the vast majority of common scenarios, and the grouping results match developer expectations.
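In HagiCode the decision tree lives in the prompt and is executed by the AI, not by code; purely for illustration, its first two branches can be rendered as a deterministic function. The function name and labels are ours, not HagiCode's.

```typescript
// Deterministic rendering of the decision tree's first branches, for
// illustration only — in HagiCode the AI applies this logic via the prompt.
function classifyFile(path: string): "chore" | "docs" | "feature" {
  const name = path.split("/").pop() ?? path;

  // Configuration files -> separate chore/build commit
  const configNames = ["package.json", "tsconfig.json", ".env"];
  if (configNames.includes(name)) return "chore";

  // Documentation files -> separate docs commit
  if (name.endsWith(".md") || path.startsWith("docs/")) return "docs";

  // Everything else is grouped by feature/module in a later step
  return "feature";
}
```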
To keep commit messages consistent with project history, the prompt requires the AI to analyze recent commit history before generation:
```text
**Historical Format Consistency**: Before generating commit messages, you **must** analyze the current repository's commit history to match the existing style.

1. Use `git log -n 15 --pretty=format:"%H|%s|%b%n---%n"` to get the recent commit history
2. Analyze the commits to identify:
   - Structural patterns: does the project use multi-paragraph messages? Are there `Changes:` or `Capabilities:` sections?
   - Language patterns: are commit messages in English, Chinese, or mixed?
   - Common types: which commit types are most often used (`feat`, `fix`, `docs`, etc.)?
   - Special formatting: are there `Co-Authored-By` lines? Any other project-specific conventions?
3. Generate commit messages that follow the detected patterns
```
This analysis ensures that AI-generated commit messages do not feel out of place, but instead remain stylistically aligned with the project’s history. In HagiCode’s multilingual projects, this feature is especially important because it can automatically choose the appropriate language and format based on commit history.
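As a rough sketch of what this history analysis involves: given commit subjects from `git log`, detect the dominant Conventional Commits type and whether messages use CJK characters. In HagiCode this analysis is performed by the AI following the prompt; the code below is only an illustration with invented names.

```typescript
// Sketch of the history-analysis step: detect the dominant commit type
// and the message language from recent subject lines. Illustrative only.
function detectCommitStyle(subjects: string[]): { dominantType: string; usesCjk: boolean } {
  const typeCounts = new Map<string, number>();
  let cjkCount = 0;

  for (const s of subjects) {
    const m = /^([a-z]+)(\([^)]*\))?:/.exec(s); // e.g. "feat(ui): ..."
    if (m) typeCounts.set(m[1], (typeCounts.get(m[1]) ?? 0) + 1);
    if (/[\u4e00-\u9fff]/.test(s)) cjkCount++; // CJK Unified Ideographs
  }

  let dominantType = "feat"; // sensible default, per the non-interactive rules
  let best = 0;
  for (const [t, n] of typeCounts) {
    if (n > best) { best = n; dominantType = t; }
  }
  return { dominantType, usesCjk: cjkCount > subjects.length / 2 };
}
```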
Every commit must include Co-Authored-By information:
```text
**Important**: Every commit must include Co-Authored-By information
- Use the following format: `git commit -m "type(scope): subject" -m "" -m "Co-Authored-By: Hagicode <noreply@hagicode.com>"`
- Or include the `Co-Authored-By` line directly in the commit message
```
This is not only for contribution compliance, but also for tracing AI-assisted commit history. HagiCode treats this as a mandatory rule to ensure that all AI-generated commits carry a clear source marker.
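The quoted command format can be assembled programmatically; the sketch below builds the argument list for `git commit` with the mandatory trailer. The function name is illustrative.

```typescript
// Build `git commit` arguments with the mandatory Co-Authored-By trailer,
// following the format quoted above.
function buildCommitArgs(type: string, scope: string, subject: string): string[] {
  const header = scope ? `${type}(${scope}): ${subject}` : `${type}: ${subject}`;
  return [
    "commit",
    "-m", header,
    "-m", "", // blank line separating the header from the trailer
    "-m", "Co-Authored-By: Hagicode <noreply@hagicode.com>",
  ];
}
```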
The full AI Compose Commit workflow is as follows:

1. **User trigger**: The user clicks the "AI Auto Compose Commit" button in the Git Status panel or Quick Actions Zone.
2. **API request**: The frontend sends a POST request to the `/api/git/auto-compose-commit` endpoint.
3. **Immediate response**: The server returns HTTP 202 Accepted without waiting for processing to finish.
4. **Background processing**:
   - GitAppService acquires the repository lock
   - Calls AIGrain.AutoComposeCommitAsync
   - Builds the file context XML
   - Executes the AI prompt so the AI can analyze and perform commits
5. **AI execution**:
   - Uses Git commands to obtain all unstaged changes
   - Reads file contents to understand the nature of the changes
   - Groups files by semantic relationship
   - Executes `git add` and `git commit` for each group
6. **Result parsing**: Parses the execution results returned by the AI.
7. **Notification delivery**: Notifies the frontend through SignalR.
8. **Lock release**: Releases the repository lock whether the operation succeeds or fails.
This workflow is designed so that users can continue with other work immediately after initiating the operation, without waiting for the AI to finish. Feedback from HagiCode users shows that this asynchronous processing model greatly improves the workflow experience.
If an error occurs during AI processing, the system performs a rollback operation and unstages files that were already staged, preventing an inconsistent state from being left behind. In real HagiCode usage, this mechanism saved us from multiple unexpected interruptions and ensured repository state integrity.
The 20-minute timeout ensures that long-running operations do not block resources indefinitely. After a timeout, the system releases the lock and notifies the user that the operation failed. In real HagiCode usage, we found that most operations complete within 2 to 5 minutes, and only extremely large change sets approach the timeout limit.
Although AI-powered intelligent grouping is powerful, developers should still review the generated commits:
- Check whether the grouping matches expectations
- Verify the accuracy of commit messages
- Confirm that no files were omitted or incorrectly included
If you find an unreasonable grouping, you can use `git reset --soft HEAD~N` to undo it and regroup. HagiCode's experience shows that even when AI grouping is smart, manual review is still valuable, especially for important feature commits.
Begin with single commit message generation, then gradually expand to multi-commit grouping. This makes it easier to validate and iterate. HagiCode followed the same path: early versions only supported single commits, and later expanded to intelligent grouping across multiple commits.
Do not implement AI invocation logic from scratch. Using an existing SDK reduces development time and potential bugs. We used the Claude Helper service, which provides a stable interface and robust error handling.
AI operations can fail for many reasons, such as network issues, API rate limits, or content moderation. Make sure your system can handle these errors gracefully and provide meaningful error information.
Do not automate everything completely. Leave users in control. Provide options to review grouping results, adjust groups, and manually edit commit messages to balance automation and flexibility. Although HagiCode supports automatic execution, it still preserves preview and adjustment capabilities.
Cache project commit history analysis results to avoid re-analyzing them every time. Historical format preferences can be stored in configuration files to reduce AI calls.
AI Compose Commit represents a deep application of AI technology in software development tools. By intelligently analyzing file changes, automatically grouping commits, and generating standards-compliant commit messages, it significantly improves the efficiency of Git workflows and allows developers to focus more on core coding work.
During implementation, we learned several important lessons:
- **User feedback is critical**: Early versions used synchronous waiting, and users reported a poor experience. After switching to a fire-and-forget model, satisfaction improved significantly.
- **Prompt design determines quality**: A carefully designed prompt does more to guarantee AI output quality than a complex algorithm.
- **Safety always comes first**: Granting the AI permission to execute Git commands directly improves efficiency, but it must be paired with strict constraints and validation.
- **Progressive improvement works best**: Starting with simple scenarios and gradually increasing complexity is more likely to succeed than trying to implement everything at once.
In the future, we plan to further optimize AI Compose Commit, including:
- Supporting more commit grouping strategies (by time, by developer, and so on)
- Integrating code review workflows to trigger review automatically before commits
- Supporting custom commit message templates to meet the personalized needs of different projects
If you find the approach shared in this article valuable, give HagiCode a try and experience how this feature works in real development. After all, practice is the only criterion for testing truth.
Thank you for reading. If you found this article helpful, please click the like button below so more people can discover it.
This content was created with AI-assisted collaboration, reviewed by me, and reflects my own views and positions.
This article documents the full process of integrating GitHub Issues into the HagiCode platform. We will explore how to use a “frontend direct connection + minimal backend” architecture to achieve secure OAuth authentication and efficient issue synchronization while keeping the backend lightweight.
As an AI-assisted development platform, HagiCode’s core value lies in connecting ideas with implementation. But in actual use, we found that after users complete a Proposal in HagiCode, they often need to manually copy the content into GitHub Issues for project tracking.
This creates several obvious pain points:
- **Fragmented workflow**: Users need to switch back and forth between two systems. The experience is not smooth, and key information can easily be lost during copy and paste.
- **Inconvenient collaboration**: Other team members are used to checking tasks on GitHub and cannot directly see proposal progress inside HagiCode.
- **Repeated manual work**: Every time a proposal is updated, someone has to manually update the corresponding issue on GitHub, adding unnecessary maintenance cost.
To solve this problem, we decided to introduce the GitHub Issues Integration feature, connecting HagiCode sessions with GitHub repositories to enable “one-click sync.”
We are building HagiCode — an AI-powered coding assistant that makes development smarter, easier, and more enjoyable.
- **Smarter** — AI assists throughout the entire journey, from idea to code, multiplying development efficiency.
- **Easier** — Multi-threaded concurrent operations make full use of resources and keep the development workflow smooth.
- **More enjoyable** — Gamification and an achievement system make coding less tedious and more rewarding.
The project is iterating quickly. If you are interested in technical writing, knowledge management, or AI-assisted development, feel free to check us out on GitHub.
Technical Choice: Frontend Direct Connection vs Backend Proxy
When designing the integration approach, we had two options in front of us: the traditional “backend proxy model” and the more aggressive “frontend direct connection model.”
In the traditional backend proxy model, every request from the frontend must first go through our backend, which then calls the GitHub API. This centralizes the logic, but it also puts a significant burden on the backend:
- **Bloated backend**: We would need to write a dedicated GitHub API client wrapper and also handle the complex OAuth state machine.
- **Token risk**: The user's GitHub token would have to be stored in the backend database. Even with encryption, this still increases the security surface.
- **Development cost**: We would need database migrations to store tokens and an additional synchronization service to maintain.
The frontend direct connection model is much lighter. In this approach, we use the backend only for the most sensitive "secret exchange" step (the OAuth callback). After obtaining the token, we store it directly in the browser's localStorage. Later operations such as creating issues and updating comments are sent directly from the frontend to GitHub over HTTPS.
| Comparison Dimension | Backend Proxy Model | Frontend Direct Connection Model |
| --- | --- | --- |
| Backend complexity | Requires a full OAuth service and GitHub API client | Only needs an OAuth callback endpoint |
| Token management | Must be encrypted and stored in the database, with leakage risk | Stored in the browser and visible only to the user |
| Implementation cost | Requires database migrations and multi-service development | Primarily frontend work |
| User experience | Centralized logic, but server latency may be slightly higher | Extremely fast response with direct GitHub interaction |
Because we wanted rapid integration and minimal backend changes, we ultimately chose the “frontend direct connection model”. It is like giving the browser a “temporary pass.” Once it gets the pass, the browser can go handle things on GitHub by itself without asking the backend administrator for approval every time.
After settling on the architecture, we needed to design the specific data flow. The core of the synchronization process is how to obtain the token securely and use it efficiently.
The key point is: only one small step in OAuth (exchanging the code for a token) needs to go through the backend. After that, the heavy lifting (creating issues) is handled directly between the frontend and GitHub.
Security is always the top priority when integrating third-party services. We made the following considerations:
- **Defend against CSRF attacks**: Generate a random state parameter during the OAuth redirect and store it in sessionStorage. Strictly validate the state in the callback to prevent forged requests.
- **Isolated token storage**: The token is stored only in the browser's localStorage. Thanks to the Same-Origin Policy, only HagiCode scripts can read it, avoiding the risk of a server-side database leak affecting users.
- **Error boundaries**: We designed dedicated handling for common GitHub API errors (such as 401 expired token, 422 validation failure, and 429 rate limiting), so users receive friendly feedback.
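The CSRF state check can be sketched as follows. A real browser implementation would use sessionStorage directly; here the storage is injected so the logic stays environment-independent. The function names (`beginOAuth`, `validateCallback`) are illustrative, not HagiCode's actual code.

```typescript
// Sketch of the CSRF-protecting OAuth state check described above: a
// random state is generated before the redirect and verified (one-time
// use) in the callback. Storage is injected for testability.
import { randomBytes } from "node:crypto";

interface StateStore {
  setItem(k: string, v: string): void;
  getItem(k: string): string | null;
  removeItem(k: string): void;
}

function beginOAuth(store: StateStore): string {
  const state = randomBytes(16).toString("hex");
  store.setItem("github_oauth_state", state);
  return state; // appended to the authorize URL as ?state=...
}

function validateCallback(store: StateStore, returnedState: string): boolean {
  const expected = store.getItem("github_oauth_state");
  store.removeItem("github_oauth_state"); // one-time use
  return expected !== null && expected === returnedState;
}
```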
- **High development efficiency**: Backend changes are minimal, mainly one extra database field and a simple OAuth callback endpoint. Most logic is completed on the frontend.
- **Strong security**: The token does not pass through the server database, reducing leakage risk.
- **Great user experience**: Requests are initiated directly from the frontend, so response speed is fast and there is no need for backend forwarding.
There are a few pitfalls to keep in mind during real deployment:
- **OAuth App settings**: Remember to enter the correct Authorization callback URL in your GitHub OAuth App settings (usually http://localhost:3000/settings?tab=github&oauth=callback).
- **Rate limits**: GitHub API limits unauthenticated requests quite strictly, but with a token the quota is usually sufficient (5000 requests/hour).
- **URL parsing**: Users enter all kinds of repository URLs, so make sure your regex can match .git suffixes, SSH formats, and similar cases.
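A tolerant parser covering the cases mentioned (https URLs, `.git` suffixes, SSH form) might look like this; the regexes and function name are a sketch, not HagiCode's exact implementation.

```typescript
// Sketch of tolerant repository URL parsing: accepts https URLs with or
// without a .git suffix or trailing slash, and the git@ SSH form.
function parseRepoUrl(input: string): { owner: string; repo: string } | null {
  const patterns = [
    /^https?:\/\/github\.com\/([^/]+)\/([^/]+?)(?:\.git)?\/?$/, // https form
    /^git@github\.com:([^/]+)\/([^/]+?)(?:\.git)?$/,            // SSH form
  ];
  for (const re of patterns) {
    const m = re.exec(input.trim());
    if (m) return { owner: m[1], repo: m[2] };
  }
  return null; // unrecognized format — let the UI ask the user to fix it
}
```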
The current feature is still one-way synchronization (HagiCode -> GitHub). In the future, we plan to implement two-way synchronization through GitHub Webhooks. For example, if an issue is closed on GitHub, the session state on the HagiCode side could also update automatically. That will require us to expose a webhook endpoint on the backend, which will be an interesting next step.
We hope this article gives you a bit of inspiration for your own third-party integration development. If you have questions, feel free to open an issue on HagiCode's GitHub for discussion.