claude-code

2 posts with the tag “claude-code”

Hagicode and GLM-5.1 Multi-CLI Integration Guide

In the Hagicode project, users can choose from multiple CLI tools to drive AI programming assistants, including Claude Code CLI, GitHub Copilot, OpenCode CLI, Codebuddy CLI, Hermes CLI, and more. These CLI tools are general-purpose AI programming tools on their own, but through Hagicode’s abstraction layer, they can flexibly connect to different AI model providers.

Zhipu AI (ZAI) provides an interface compatible with the Anthropic Claude API, allowing these CLI tools to directly use domestic GLM series models. Among them, GLM-5.1 is Zhipu’s latest large language model release, with significant improvements over GLM-5.0.

Hagicode defines 11 CLI provider types through the AIProviderType enum, covering mainstream AI programming CLI tools:

```csharp
public enum AIProviderType
{
    ClaudeCodeCli = 0,  // Claude Code CLI
    CodexCli = 1,       // Codex CLI (OpenAI Codex)
    GitHubCopilot = 2,  // GitHub Copilot
    CodebuddyCli = 3,   // Codebuddy CLI
    OpenCodeCli = 4,    // OpenCode CLI
    IFlowCli = 5,       // IFlow CLI
    HermesCli = 6,      // Hermes CLI
    QoderCli = 7,       // Qoder CLI
    KiroCli = 8,        // Kiro CLI
    KimiCli = 9,        // Kimi CLI
    GeminiCli = 10,     // Gemini CLI
}
```

Each CLI has corresponding model parameter configuration and supports the model and reasoning parameters:

```csharp
private static readonly IReadOnlyDictionary<AIProviderType, IReadOnlyList<string>> ManagedModelParameterKeysByProvider =
    new Dictionary<AIProviderType, IReadOnlyList<string>>
    {
        [AIProviderType.ClaudeCodeCli] = ["model", "reasoning"],
        [AIProviderType.CodexCli] = ["model", "reasoning"],
        [AIProviderType.OpenCodeCli] = ["model", "reasoning"],
        [AIProviderType.HermesCli] = ["model", "reasoning"],
        [AIProviderType.CodebuddyCli] = ["model", "reasoning"],
        [AIProviderType.QoderCli] = ["model", "reasoning"],
        [AIProviderType.KiroCli] = ["model", "reasoning"],
        [AIProviderType.GeminiCli] = ["model"], // Gemini does not support the reasoning parameter
        // ...
    };
```

Hagicode’s Secondary Professions Catalog defines complete support for the GLM model series:

| Model ID | Name | Default Reasoning | Compatible CLI Families |
| --- | --- | --- | --- |
| glm-4.7 | GLM 4.7 | high | claude, codebuddy, hermes, qoder, kiro |
| glm-5 | GLM 5 | high | claude, codebuddy, hermes, qoder, kiro |
| glm-5-turbo | GLM 5 Turbo | high | claude, codebuddy, hermes, qoder, kiro |
| glm-5.0 | GLM 5.0 (Legacy) | high | claude, codebuddy, hermes, qoder, kiro |
| glm-5.1 | GLM 5.1 | high | claude, codebuddy, hermes, qoder, kiro |

Key differences between GLM-5.1 and GLM-5.0

From the implementation in AcpSessionModelBootstrapper.cs, we can clearly see the differences between GLM-5.1 and GLM-5.0:

GLM-5.1 is a standalone new model identifier with no legacy handling logic:

```csharp
private const string Glm51ModelValue = "glm-5.1";
```

Definition in the Secondary Professions Catalog:

```json
{
  "id": "secondary-glm-5-1",
  "name": "GLM 5.1",
  "family": "anthropic",
  "summary": "hero.professionCopy.secondary.glm51.summary",
  "sourceLabel": "hero.professionCopy.sources.aiSharedAnthropicModel",
  "sortOrder": 64,
  "supportsImage": true,
  "compatiblePrimaryFamilies": [
    "claude",
    "codebuddy",
    "hermes",
    "qoder",
    "kiro"
  ],
  "defaultParameters": {
    "model": "glm-5.1",
    "reasoning": "high"
  }
}
```

Zhipu AI provides the most complete GLM model support:

```json
{
  "providerId": "zai",
  "name": "智谱 AI",
  "description": "智谱 AI 提供的 Claude API 兼容服务",
  "category": "china-providers",
  "apiUrl": {
    "codingPlanForAnthropic": "https://open.bigmodel.cn/api/anthropic"
  },
  "recommended": true,
  "region": "cn",
  "defaultModels": {
    "sonnet": "glm-4.7",
    "opus": "glm-5",
    "haiku": "glm-4.5-air"
  },
  "supportedModels": [
    "glm-4.7",
    "glm-5",
    "glm-4.5-air",
    "qwen3-coder-next",
    "qwen3-coder-plus"
  ],
  "features": ["experimental-agent-teams"],
  "authTokenEnv": "ANTHROPIC_AUTH_TOKEN",
  "referralUrl": "https://www.bigmodel.cn/claude-code?ic=14BY54APZA",
  "documentationUrl": "https://open.bigmodel.cn/dev/api"
}
```

Features:

  • Supports the widest variety of GLM model variants
  • Provides default mapping across the Sonnet/Opus/Haiku tiers
  • Supports the experimental-agent-teams feature

Claude Code CLI is one of Hagicode’s core CLIs and is configured through the Hero configuration system:

```json
{
  "primaryProfessionId": "profession-claude-code",
  "secondaryProfessionId": "secondary-glm-5-1",
  "model": "glm-5.1",
  "reasoning": "high"
}
```

Corresponding HeroEquipmentCatalogItem configuration:

```typescript
{
  id: 'secondary-glm-5-1',
  name: 'GLM 5.1',
  family: 'anthropic',
  kind: 'model',
  primaryFamily: 'claude',
  compatiblePrimaryFamilies: ['claude', 'codebuddy', 'hermes', 'qoder', 'kiro'],
  defaultParameters: {
    model: 'glm-5.1',
    reasoning: 'high'
  }
}
```

OpenCode CLI is the most flexible CLI and supports specifying any model in the provider/model format:

Method 1: Use the ZAI provider prefix

```json
{
  "primaryProfessionId": "profession-opencode",
  "model": "zai/glm-5.1",
  "reasoning": "high"
}
```

Method 2: Use the model ID directly

```json
{
  "model": "glm-5.1"
}
```

Method 3: Frontend configuration UI

In HeroModelEquipmentForm.tsx, OpenCode CLI has a dedicated placeholder hint:

```typescript
const OPEN_CODE_MODEL_PLACEHOLDER = 'myprovider/glm-4.7';

const modelPlaceholder = primaryProviderType === PCode_Models_AIProviderType.OPEN_CODE_CLI
  ? OPEN_CODE_MODEL_PLACEHOLDER
  : 'gpt-5.4';
```

Users can enter either form:

  • zai/glm-5.1
  • glm-5.1

OpenCode CLI model parsing logic:

```csharp
internal OpenCodeModelSelection? ResolveModelSelection(string? rawModel)
{
    var normalized = NormalizeOptionalValue(rawModel);
    if (normalized == null)
    {
        return null;
    }

    var slashIndex = normalized.IndexOf('/');
    if (slashIndex < 0)
    {
        // No slash: use the model ID directly
        return new OpenCodeModelSelection
        {
            ProviderId = string.Empty,
            ModelId = normalized,
        };
    }

    // Slash present: parse the provider/model format
    var providerId = normalized[..slashIndex].Trim();
    var modelId = normalized[(slashIndex + 1)..].Trim();
    return new OpenCodeModelSelection
    {
        ProviderId = providerId,
        ModelId = modelId,
    };
}
```
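For quick experimentation outside the C# codebase, the same split-on-first-slash rule can be sketched in shell. The `resolve_model` helper below is hypothetical and not part of Hagicode; it also skips the whitespace trimming the C# version performs:

```shell
# resolve_model: split "provider/model" on the first slash, mirroring the
# C# ResolveModelSelection logic above (illustrative sketch only).
resolve_model() {
  raw="$1"
  case "$raw" in
    */*)
      # Slash present: text before the first slash is the provider ID.
      echo "provider=${raw%%/*} model=${raw#*/}"
      ;;
    *)
      # No slash: the whole string is the model ID.
      echo "provider= model=${raw}"
      ;;
  esac
}

resolve_model "zai/glm-5.1"   # provider=zai model=glm-5.1
resolve_model "glm-5.1"       # provider= model=glm-5.1
```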

Codebuddy CLI has dedicated legacy handling logic:

```json
{
  "primaryProfessionId": "profession-codebuddy",
  "model": "glm-5.1",
  "reasoning": "high"
}
```

Note: Codebuddy retains special handling for GLM-5.0 and does not use legacy normalization:

```csharp
// For CodebuddyCli, glm-5.0 is not normalized to glm-5-turbo
return !string.Equals(providerName, "CodebuddyCli", StringComparison.OrdinalIgnoreCase)
       && string.Equals(normalizedModel, LegacyGlm5TurboModelValue, StringComparison.OrdinalIgnoreCase)
    ? Glm5TurboModelValue
    : normalizedModel;
```
To connect through Zhipu AI (ZAI), set the following environment variables:

```shell
# Set the API key
export ANTHROPIC_AUTH_TOKEN="***"
# Optional: specify the API endpoint (ZAI uses this endpoint by default)
export ANTHROPIC_BASE_URL="https://open.bigmodel.cn/api/anthropic"
```

For Alibaba Cloud DashScope, the setup is analogous:

```shell
# Set the API key
export ANTHROPIC_AUTH_TOKEN="your-a...-key"
# Specify the Alibaba Cloud endpoint
export ANTHROPIC_BASE_URL="https://coding.dashscope.aliyuncs.com/apps/anthropic"
```

According to Zhipu's official release information, GLM-5.1 brings the following significant improvements over GLM-5.0:

  • Stronger code understanding: More accurate analysis of complex code structures
  • Longer context comprehension: Supports longer conversational context
  • Enhanced tool calling: Higher success rate for MCP tool calls
  • Output stability: Reduces randomness and hallucinations

GLM-5.1 covers all mainstream CLIs supported by Hagicode:

```typescript
compatiblePrimaryFamilies: [
  "claude",    // Claude Code CLI
  "codebuddy", // Codebuddy CLI
  "hermes",    // Hermes CLI
  "qoder",     // Qoder CLI
  "kiro"       // Kiro CLI
]
```

Make sure the ANTHROPIC_AUTH_TOKEN environment variable is set correctly. It is the required credential for every CLI to connect to the model.

GLM-5.1 needs to be enabled by the corresponding model provider:

  • The Zhipu AI ZAI platform supports it by default
  • Alibaba Cloud DashScope may require a separate application

When using the provider/model format, make sure the provider ID is correct:

  • Zhipu AI: zai or zhipuai
  • Alibaba Cloud: aliyun or dashscope

For the reasoning parameter:

  • high is recommended for the best code generation results
  • Gemini CLI does not support the reasoning parameter and will ignore this configuration automatically

Through a unified abstraction layer, Hagicode enables flexible integration between GLM-5.1 and multiple CLIs. Developers can choose the CLI tool that best fits their preferences and usage scenarios, then use the latest GLM-5.1 model through simple configuration.

As Zhipu’s latest model version, GLM-5.1 offers clear improvements over GLM-5.0:

  • An independent version identifier with no legacy burden
  • Stronger reasoning and code understanding
  • Broad multi-CLI compatibility
  • Flexible reasoning level configuration

With the correct environment variables and Hero equipment configured, users can fully unlock the power of GLM-5.1 across different CLI environments.

Thank you for reading. If you found this article useful, feel free to like, bookmark, and share it to show your support. This content was created with AI-assisted collaboration, and the final content was reviewed and confirmed by the author.

Running AI CLI Tools in Docker Containers: A Practical Guide to User Isolation and Persistent Volumes

Integrating AI coding tools like Claude Code, Codex, and OpenCode into containerized environments sounds simple, but there are hidden complexities everywhere. This article takes a deep dive into how the HagiCode project solves core challenges in Docker deployments, including user permissions, configuration persistence, and version management, so you can avoid the common pitfalls.

When we decided to run AI coding CLI tools inside Docker containers, the most intuitive thought was probably: “Aren’t containers just root? Why not install everything directly and call it done?” In reality, that seemingly simple idea hides several core problems that must be solved.

First, security restrictions are the first hurdle. Take Claude CLI as an example: it explicitly forbids running as the root user. This is a mandatory security check, and if root is detected, it refuses to start. You might think, can’t I just switch users with the USER directive? It is not that simple. There is still a mapping problem between the non-root user inside the container and the user permissions on the host machine.

Second, state persistence is the second trap. Claude Code requires login, Codex has its own configuration, and OpenCode also has a cache directory. If you have to reconfigure everything every time the container restarts, the whole idea of “automation” loses its meaning. We need these configurations to persist beyond the lifecycle of the container.

The third problem is permission consistency. Can processes inside the container access configuration files created by the host user? UID/GID mismatches often cause file permission errors, and this is extremely common in real deployments.

These problems may look independent, but in practice they are tightly connected. During HagiCode’s development, we gradually worked out a practical solution. Next, I will share the technical details and the lessons learned from those pitfalls.

The solution shared in this article comes from our practical experience in the HagiCode project. HagiCode is an open-source AI-assisted programming platform that integrates multiple mainstream AI coding assistants, including Claude Code, Codex, and OpenCode. As a project that needs cross-platform and highly available deployment, HagiCode has to solve the full range of challenges involved in containerized deployment.

If you find the technical solution in this article valuable, that is a sign HagiCode has something real to offer in engineering practice. In that case, the HagiCode official website and GitHub repository are both worth following.

There is a common misunderstanding here: Docker containers run as root by default, so why not just install the tools as root? If you think that way, Claude CLI will quickly teach you otherwise.

```shell
# Run Claude CLI directly as root? No.
docker run --rm -it --user root myimage claude
# Output: Error: This command cannot be run as root user
```

This is a hard security restriction in Claude CLI. The reason is simple: these CLI tools read and write sensitive user configuration, including API tokens, local caches, and even scripts written by the user. Running them with root privileges introduces too much risk.
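The guard behind that error message is simple to sketch. This is illustrative shell, not Claude CLI's actual implementation:

```shell
# Illustrative root check, similar in spirit to what such CLIs enforce:
# refuse to proceed when the effective user is root (UID 0).
require_non_root() {
  if [ "$(id -u)" -eq 0 ]; then
    echo "Error: this command cannot be run as the root user" >&2
    return 1
  fi
}
```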

So the question becomes: how can we satisfy the CLI’s security requirements while keeping container management flexible? We need to change the way we think about it: instead of switching users at runtime, create a dedicated user during the image build stage.

Creating a dedicated user: more than just changing a name

You might think that adding a single USER line to the Dockerfile is enough. That is indeed the simplest approach, but it is not robust enough.

HagiCode’s approach is to create a hagicode user with UID 1000, which usually matches the default user on most host machines:

```dockerfile
RUN groupadd -o -g 1000 hagicode && \
    useradd -o -u 1000 -g 1000 -s /bin/bash -m hagicode && \
    mkdir -p /home/hagicode/.claude && \
    chown -R hagicode:hagicode /home/hagicode
```

But this only solves the built-in user inside the image. What if the host user is UID 1001? You still need to support dynamic mapping when the container starts.

docker-entrypoint.sh contains the key logic:

```shell
if [ -n "$PUID" ] && [ -n "$PGID" ]; then
  if ! id hagicode >/dev/null 2>&1; then
    groupadd -g "$PGID" hagicode
    useradd -u "$PUID" -g "$PGID" -s /bin/bash -m hagicode
  fi
fi
```

The advantage of this design is clear: use the default UID 1000 at image build time, then adjust dynamically at runtime through the PUID and PGID environment variables. No matter what UID the host user has, ownership of configuration files remains correct.

The design philosophy of persistent volumes

Each AI CLI tool has its own preferred configuration directory, so they need to be mapped one by one:

| CLI Tool | Path in Container | Named Volume |
| --- | --- | --- |
| Claude | /home/hagicode/.claude | claude-data |
| Codex | /home/hagicode/.codex | codex-data |
| OpenCode | /home/hagicode/.config/opencode | opencode-config-data |

Why use named volumes instead of bind mounts? Three reasons:

  1. Simpler management: Named volumes are managed automatically by Docker, so you do not need to create host directories manually.
  2. Permission isolation: The initial contents of the volumes are created by the user inside the container, avoiding permission conflicts with the host.
  3. Independent migration: Volumes can exist independently of containers, so data is not lost when images are upgraded.

docker-compose-builder-web automatically generates the corresponding volume configuration:

```yaml
volumes:
  claude-data:
  codex-data:
  opencode-config-data:

services:
  hagicode:
    volumes:
      - claude-data:/home/hagicode/.claude
      - codex-data:/home/hagicode/.codex
      - opencode-config-data:/home/hagicode/.config/opencode
    user: "${PUID:-1000}:${PGID:-1000}"
```

Pay attention to the user field here: PUID and PGID are injected through environment variables to ensure that processes inside the container run with an identity that matches the host user. This detail matters because permission issues are painful to debug once they appear.
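One convenient way to supply PUID and PGID is to generate a `.env` file from the current host user, which docker compose reads automatically. A minimal sketch, assuming the compose file above sits in the current directory:

```shell
# Write the current user's IDs into .env so the compose template above
# substitutes them into user: "${PUID:-1000}:${PGID:-1000}".
printf 'PUID=%s\nPGID=%s\n' "$(id -u)" "$(id -g)" > .env
cat .env
```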

Version management: baked-in versions with runtime overrides

Pinning Docker image versions is essential for reproducibility. But in real development, we often need to test a newer version or urgently fix a bug. If we had to rebuild the image every time, the workflow would be far too inefficient.

HagiCode’s strategy is fixed versions as the default, with runtime overrides as an extension mechanism. It is a pragmatic engineering compromise between stability and flexibility.

Dockerfile.template pins versions here:

```dockerfile
USER hagicode
WORKDIR /home/hagicode

# Configure the global npm install path
RUN mkdir -p /home/hagicode/.npm-global && \
    npm config set prefix '/home/hagicode/.npm-global'

# Install CLI tools using pinned versions
RUN npm install -g @anthropic-ai/claude-code@2.1.71 && \
    npm install -g @openai/codex@0.112.0 && \
    npm install -g opencode-ai@1.2.25 && \
    npm cache clean --force
```

docker-entrypoint.sh supports runtime overrides:

```shell
install_cli_override_if_needed() {
  local package_name="$2"
  local override_version="$5"
  if [ -n "$override_version" ]; then
    gosu hagicode npm install -g "${package_name}@${override_version}"
  fi
}

# Example usage
install_cli_override_if_needed "" "@anthropic-ai/claude-code" "" "" "${CLAUDE_CODE_CLI_VERSION}"
```

This lets you test a new version through an environment variable without rebuilding the image:

```shell
docker run -e CLAUDE_CODE_CLI_VERSION=2.2.0 myimage
```

This design is practical because nobody wants to rebuild an image every time they test a new feature.
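A dry-run version of the override helper makes the pattern easy to see. The sketch below simplifies the argument list to two and replaces the npm call with `echo`, so it is illustrative only:

```shell
# Dry-run sketch of the override mechanism: act only when a version is set.
install_cli_override_if_needed() {
  package_name="$1"
  override_version="$2"
  if [ -n "$override_version" ]; then
    echo "would run: npm install -g ${package_name}@${override_version}"
  else
    echo "no override for ${package_name}; keeping baked-in version"
  fi
}

install_cli_override_if_needed "@anthropic-ai/claude-code" "2.2.0"
# → would run: npm install -g @anthropic-ai/claude-code@2.2.0
install_cli_override_if_needed "@openai/codex" ""
# → no override for @openai/codex; keeping baked-in version
```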

In addition to configuring CLI tools manually, some scenarios require automatic configuration injection. The most typical example is an API token.

```shell
if [ -n "$ANTHROPIC_AUTH_TOKEN" ]; then
  mkdir -p /home/hagicode/.claude
  cat > /home/hagicode/.claude/settings.json <<EOF
{
  "env": {
    "ANTHROPIC_AUTH_TOKEN": "${ANTHROPIC_AUTH_TOKEN}"
  }
}
EOF
  chown -R hagicode:hagicode /home/hagicode/.claude
fi
```

Two things matter here: pass sensitive information through environment variables instead of hard-coding it into the image, and make sure the ownership of configuration files is set correctly, otherwise the CLI tools will not be able to read them.
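Combining the two points, here is a self-contained sketch that also applies the owner-only permission mentioned later in the security checklist. `CLAUDE_HOME` is a hypothetical override so the snippet can be run outside a container without touching the real home directory:

```shell
# Inject a token into the Claude settings file with owner-only permissions.
CLAUDE_HOME="${CLAUDE_HOME:-$HOME/.claude}"                  # hypothetical override for testing
ANTHROPIC_AUTH_TOKEN="${ANTHROPIC_AUTH_TOKEN:-dummy-token}"  # placeholder default

mkdir -p "$CLAUDE_HOME"
cat > "$CLAUDE_HOME/settings.json" <<EOF
{
  "env": {
    "ANTHROPIC_AUTH_TOKEN": "${ANTHROPIC_AUTH_TOKEN}"
  }
}
EOF
chmod 600 "$CLAUDE_HOME/settings.json"   # token readable only by its owner
```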

UID/GID mismatch between host and container

This is the easiest trap to fall into: the host user has, say, UID 1001 while the container uses 1000, so files created on one side cannot be accessed on the other.

```shell
# Correct approach: make the container match the host user
docker run \
  -e PUID=$(id -u) \
  -e PGID=$(id -g) \
  myimage
```

This issue is very common, and it can be frustrating the first time you run into it.

Configuration disappears after container restart

If you find yourself logging in again after every restart, check whether you forgot to mount a persistent volume:

```yaml
volumes:
  - claude-data:/home/hagicode/.claude
```

Nothing is more frustrating than carefully setting up a configuration only to see it disappear.

Do not run npm install -g directly inside a running container. The correct approaches are:

  1. Set an environment variable to trigger override installation.
  2. Or rebuild the image.
```shell
# Option 1: runtime override
docker run -e CLAUDE_CODE_CLI_VERSION=2.2.0 myimage

# Option 2: rebuild the image
docker build -t myimage:v2 .
```

There is more than one road to Rome, but some roads are smoother than others.

  • Pass API tokens through environment variables instead of writing them into the image.
  • Set configuration file permissions to 600.
  • Always run the application as a non-root user.
  • Update CLI versions regularly to fix security vulnerabilities.

Security is always important, but the real challenge is consistently enforcing it in practice.

If you want to support a new CLI tool in the future, there are only three steps:

  1. Dockerfile.template: add the installation step.
  2. docker-entrypoint.sh: add the version override logic.
  3. docker-compose-builder-web: add the persistent volume mapping.

This template-based design makes extension simple without changing the core logic.

Running AI CLI tools in Docker containers involves three core challenges: user permissions, configuration persistence, and version management. By combining dedicated users, named-volume isolation, and environment-variable-based overrides, the HagiCode project built a deployment architecture that is both secure and flexible.

Key design points:

  • User isolation: Create a dedicated user during the image build stage, with runtime support for dynamic PUID/PGID mapping.
  • Persistence strategy: Each CLI tool gets its own named volume, so restarts do not affect configuration.
  • Version flexibility: Fixed defaults ensure reproducibility, while runtime overrides provide room for testing.
  • Automated configuration: Sensitive configuration can be injected automatically through environment variables.

This solution has been running stably in the HagiCode project for some time, and I hope it offers useful reference points for developers with similar needs.

Thank you for reading. If you found this article useful, you are welcome to like, bookmark, and share it. This content was created with AI-assisted collaboration, and the final content was reviewed and confirmed by the author.