

The Hallucination Problem in AI Coding Assistants: How to Achieve Specification-driven Development with OpenSpec


AI coding assistants are powerful, but they often generate code that does not match real requirements or violates project conventions. This article shares how the HagiCode project uses the OpenSpec workflow to implement specification-driven development and significantly reduce the risk of AI hallucinations through a structured proposal mechanism.

Anyone who has used GitHub Copilot or ChatGPT to write code has probably had this experience: the code generated by AI looks polished, but once you actually use it, problems show up everywhere. Maybe it uses the wrong component from the project, maybe it ignores the team’s coding standards, or maybe it writes a large chunk of logic based on assumptions that do not even exist.

This is the so-called “AI hallucination” problem. In programming, it appears as code that seems reasonable on the surface but does not actually fit the real state of the project.

There is also something a bit frustrating about this. As AI coding assistants become more widespread, the problem becomes more serious. After all, AI lacks an understanding of project history, architectural decisions, and coding conventions, and when given too much freedom it can “creatively” generate code that does not match reality. It is a bit like writing an article: without structure, it is easy to wander off into imagination, even though the real situation is far more grounded.

To solve these pain points, we made a bold decision: instead of trying to make AI smarter, we put it inside a “specification” cage. The change this decision brought was probably bigger than you might expect, and I will explain that shortly.

The approach shared in this article comes from our practical experience in the HagiCode project. HagiCode is an open-source AI coding assistant project dedicated to solving real problems in AI programming through structured engineering practices.

Before diving into the solution, let us first look at where the problem actually comes from. After all, if you understand both yourself and your opponent, you can fight a hundred battles without defeat. Applied to AI, that saying is still surprisingly fitting.

AI models are trained on public code repositories, but your project has its own history, conventions, and architectural decisions. AI cannot directly access this kind of “implicit knowledge,” so the code it generates is often disconnected from the actual project.

This is not entirely the AI’s fault. It has never lived inside your project, so how could it know all of your unwritten rules? Like a brand-new intern, not understanding the local customs is normal. The only issue is that the cost can be rather high.

When you ask AI, “Help me implement a user authentication feature,” it may generate code in almost any form. Without clear constraints, AI will implement things in the way it “thinks” is reasonable instead of following your project’s requirements.

That is like asking someone who has never learned your project standards to improvise freely. How could that not cause trouble? It is not even that the AI is being irresponsible; it just has no idea what responsibility means in this context.

After AI generates code, if there is no structured review process, code based on false assumptions can go directly into the repository. By the time the problem is discovered in testing or even in production, the cost is already far too high.

That is like mending the fold after the sheep are lost. The principle is obvious, but in practice people still find the extra work bothersome: before anything goes wrong, who wants to spend more time up front?

OpenSpec: The Answer to Specification-driven Development


HagiCode chose OpenSpec as the solution. The core idea is simple: all code changes must go through a structured proposal workflow, turning abstract ideas into executable implementation plans.

That may sound grand, but in plain terms it just means making AI write the requirements document before writing the code. As the old saying goes, preparation leads to success, and lack of preparation leads to failure.

OpenSpec is an npm-based command-line tool (@fission-ai/openspec) that defines a standard proposal file structure and validation mechanism. Put simply, it makes AI “write the requirements document” before it writes code.

A three-step workflow to prevent hallucinations


OpenSpec ensures proposal quality through a three-step workflow:

Step 1: Initialize the proposal - set the session state to Openspecing
Step 2: Intermediate processing - keep the Openspecing state while gradually refining the artifacts
Step 3: Complete the proposal - transition to the Reviewing state

There is a clever detail in this design: the first step uses the ProposalGenerationStart type, and completing it does not trigger a state transition. This ensures that the review stage is not entered too early before the entire multi-step workflow is finished.

This detail is actually quite interesting. It is like cooking: if you lift the lid before the heat is right, nothing will turn out well. Only by moving step by step with a bit of patience can you end up with a good dish.

// Implementation in the HagiCode project
public enum MessageAssociationType
{
    ProposalGeneration = 2,
    ProposalExecution = 3,
    /// <summary>
    /// Marks the start of the three-step proposal generation workflow.
    /// Does not transition to the Reviewing state when completed.
    /// </summary>
    ProposalGenerationStart = 5
}
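To make the guard concrete, here is a minimal TypeScript sketch of the same idea (the names are illustrative; HagiCode's actual implementation is the C# enum above): completing a ProposalGenerationStart message leaves the session state untouched, while completing a full ProposalGeneration moves the session to review.

```typescript
type SessionState = 'Openspecing' | 'Reviewing';

type AssociationType =
  | 'ProposalGeneration'
  | 'ProposalExecution'
  | 'ProposalGenerationStart';

function nextStateOnComplete(
  current: SessionState,
  type: AssociationType
): SessionState {
  // Only a completed ProposalGeneration triggers the transition to review.
  if (type === 'ProposalGeneration' && current === 'Openspecing') {
    return 'Reviewing';
  }
  // ProposalGenerationStart (and everything else) keeps the current state,
  // so review cannot begin before the multi-step workflow is finished.
  return current;
}
```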

Every OpenSpec proposal follows the same directory structure:

openspec/
├── changes/ # Active and archived changes
│ ├── {change-name}/
│ │ ├── proposal.md # Proposal description
│ │ ├── design.md # Design document
│ │ ├── specs/ # Technical specifications
│ │ └── tasks.md # Executable task list
│ └── archive/ # Archived changes
└── specs/ # Standalone specification library

According to statistics from the HagiCode project, there are already more than 4,000 archived changes and over 150,000 lines of specification files. This historical accumulation not only gives AI clear guidance to follow, but also provides the team with a valuable knowledge base.

It is a bit like the classics left behind by earlier generations. Read enough of them and patterns begin to emerge. The only difference is that these classics are stored in files instead of written on bamboo slips.

The system implements multiple layers of validation to ensure proposal quality:

// Validate that required files exist
ValidateProposalFiles()
// Validate prerequisites before execution
ValidateExecuteAsync()
// Validate start conditions
ValidateStartAsync()
// Validate archive conditions
ValidateArchiveAsync()
// Validate proposal name format (kebab-case)
ValidateNameFormat()

These validations are like gatekeepers at multiple checkpoints. Only truly qualified proposals can pass through. It may look tedious, but it is still much better than letting poor code enter the repository.

When AI runs inside HagiCode, it uses predefined Handlebars templates. These templates contain explicit step-by-step instructions and protective guardrails. For example:

  • Do not continue before understanding the user’s intent
  • Do not generate unvalidated code
  • Require the user to provide the name again if it is invalid
  • If the change already exists, suggest using the continue command instead of recreating it

This way of “dancing in shackles” actually helps AI focus more on understanding requirements and generating code that follows standards. Constraints are not always a bad thing. Sometimes too much freedom is exactly what creates chaos.

Practice: How to Use OpenSpec in a Project

npm install -g @fission-ai/openspec@1
openspec --version # Verify the installation

The openspec/ folder structure will be created automatically in the project root.

There is not much mystery in this step. It is just tool installation, which everyone understands. Just remember to use @fission-ai/openspec@1; newer versions may have pitfalls, and stability matters most.

In the HagiCode conversation interface, use the shortcut command:

/opsx:new

Or specify a change name and target repository:

/opsx:new "add-user-auth" --repos "repos/web"

Creating a proposal is like outlining an article before writing it. Once you have an outline, the rest becomes much easier. Many people prefer to jump straight into writing, only to realize halfway through that the idea does not hold together. That is when the real headache begins.

Use /opsx:continue to generate the required artifacts step by step:

proposal.md - Describes the purpose and scope of the change

# Proposal: Add User Authentication
## Why
The current system lacks user authentication and cannot protect sensitive APIs.
## What Changes
- Add JWT authentication middleware
- Implement login/registration APIs
- Update frontend integration

design.md - Detailed technical design

# Design: Add User Authentication
## Context
The system currently uses public APIs, so anyone can access them...
## Decisions
1. Choose JWT instead of Session...
2. Use the HS256 algorithm...
## Risks
- Risk of token leakage...
- Mitigation measures...

specs/ - Technical specifications and test scenarios

# user-auth Specification
## Requirements
### Requirement: JWT Token Generation
The system SHALL use the HS256 algorithm to generate JWT tokens.
#### Scenario: Valid login
- WHEN the user provides valid credentials
- THEN the system SHALL return a valid JWT token

tasks.md - Executable task list

# Tasks: Add User Authentication
## 1. Backend Changes
- [ ] 1.1 Create AuthController
- [ ] 1.2 Implement JWT middleware
- [ ] 1.3 Add unit tests

These artifacts are a lot like drafts for an article. Once the draft is complete, the main text flows naturally. Many people dislike writing drafts because they think it wastes time, but in reality that is often where the clearest thinking happens.

After all artifacts are complete:

/opsx:apply

AI will read all context files and execute tasks step by step according to the checklist in tasks.md. At this point, because the specification is already clear, the quality of the generated code is much higher.

By this stage, half the work is already done. Once there is a clear task list, the rest is simply executing it step by step. The problem is that many people skip the earlier steps and jump straight here, and then quality naturally becomes hard to guarantee.
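As a side note, the checklist format in tasks.md is simple enough to parse mechanically. Here is an illustrative sketch (not OpenSpec's actual parser; the format is inferred from the example above):

```typescript
// Parse tasks.md checklist lines: "- [ ]" is pending, "- [x]" is done.
type Task = { id: string; title: string; done: boolean };

function parseTasks(markdown: string): Task[] {
  const tasks: Task[] = [];
  for (const line of markdown.split('\n')) {
    // e.g. "- [ ] 1.1 Create AuthController"
    const m = line.match(/^- \[( |x)\] (\S+) (.+)$/);
    if (m) tasks.push({ id: m[2], title: m[3], done: m[1] === 'x' });
  }
  return tasks;
}
```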

After the change is completed:

/opsx:archive

Move the completed change into the archive/ directory so it can be reviewed and reused later.

Archiving matters. It is like carefully storing away a finished article. When a similar problem appears in the future, looking back through old records may provide the answer. Many people find it troublesome, but these accumulated materials are often the most valuable assets.

Use kebab-case: start with a letter, and include only lowercase letters, numbers, and single hyphens:

  • add-user-auth (valid)
  • AddUserAuth (invalid: uppercase letters)
  • add--user-auth (invalid: consecutive hyphens)

Naming rules may seem minor, but consistency is always worth something. In software, consistency matters even when people do not always pay attention to it.
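The rule above is easy to check with a regular expression. This is an illustrative validator, not OpenSpec's actual ValidateNameFormat:

```typescript
// Kebab-case: starts with a lowercase letter; segments of lowercase letters
// and digits separated by single hyphens.
const isValidChangeName = (name: string): boolean =>
  /^[a-z][a-z0-9]*(-[a-z0-9]+)*$/.test(name);

isValidChangeName('add-user-auth');  // true
isValidChangeName('AddUserAuth');    // false (uppercase)
isValidChangeName('add--user-auth'); // false (consecutive hyphens)
```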

  1. Using the wrong type in step 1 of the three-step workflow - This causes the state to transition too early
  2. Forgetting to trigger the state transition in the final step - This leaves the workflow stuck in the Openspecing state
  3. Skipping review and executing directly - You should validate that all artifacts are complete first

These mistakes are all common for beginners. Experienced people naturally know how to avoid them. Still, everyone becomes experienced eventually, and taking a few detours is part of the process. The only hope is to avoid taking too many.

OpenSpec supports managing multiple proposals at the same time, which is especially useful for large features:

# View all active changes
openspec list
# Switch to a specific change
openspec apply "add-user-auth"
# View change status
openspec status --change "add-user-auth"

Managing multiple changes is like writing several articles at once. It takes some technique and patience, but once you get used to it, it becomes natural enough.

Understanding state transitions helps with troubleshooting:

Init → Drafting → Openspecing → Reviewing → Executing → ExecutionCompleted → Completed → Archived
  • Openspecing: Generating the plan
  • Reviewing: Under review (artifacts can be revised repeatedly)
  • Executing: In execution (applying tasks.md)

A state machine is, in the end, just a set of rules. Rules can feel annoying at times, but more often they are useful. As the saying goes, without rules, nothing can be accomplished properly.
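The chain above can be sketched as a simple transition map. The state names follow the documented flow; the code itself is illustrative, not HagiCode's internal implementation:

```typescript
// Linear proposal lifecycle: each state has exactly one successor.
const nextState: Record<string, string | undefined> = {
  Init: 'Drafting',
  Drafting: 'Openspecing',
  Openspecing: 'Reviewing',
  Reviewing: 'Executing',
  Executing: 'ExecutionCompleted',
  ExecutionCompleted: 'Completed',
  Completed: 'Archived',
};

// Advance one step along the chain; Archived is terminal.
function advance(state: string): string {
  const next = nextState[state];
  if (next === undefined) throw new Error(`${state} is terminal or unknown`);
  return next;
}
```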

Through the OpenSpec workflow, the HagiCode project has achieved significant results in addressing the AI hallucination problem:

  1. Fewer hallucinations - AI must follow a structured specification instead of generating code arbitrarily
  2. Higher quality - Multi-layer validation ensures changes comply with project standards
  3. Faster collaboration - Archived changes provide references for future development
  4. Traceability - Every change has a complete record of proposal, design, specification, and tasks

This approach does not make AI smarter. It puts AI inside a “specification” cage. Practice has shown that dancing in shackles can actually lead to a better performance.

The principle is simple. Constraints are not necessarily bad. Like writing, having a format to follow often makes it easier to produce good work. Many people dislike constraints because they think constraints limit creativity, but creativity also needs the right soil to grow.

If you are also using AI coding assistants and have run into similar problems, give OpenSpec a try. Specification-driven development may seem to add extra steps, but that early investment pays back many times over in code quality and maintenance efficiency.

Sometimes slowing down a little is actually the fastest way forward. Many people just do not realize it yet.


If this article helped you, feel free to give us a Star on GitHub. The HagiCode public beta has already started, and you can join the experience by installing it now.


That is about enough for this article. There is nothing especially profound here, just a summary of a few practical lessons. I hope it is useful to everyone. Sharing is a good thing: you learn something yourself, and others learn something too.

Still, an article is only an article. Practice is what really matters. Knowledge from the page always feels shallow until you apply it yourself.

Thank you for reading. If you found this article useful, feel free to like, bookmark, and share it. This content was created with AI-assisted collaboration, and the final content was reviewed and approved by the author.

How Gamification Design Makes AI Coding More Fun


Traditional AI coding tools are actually quite powerful; they just lack a bit of warmth. When we were building HagiCode, we thought: if we are going to write code anyway, why not turn it into a game?

Anyone who has used an AI coding assistant has probably had this experience: at first it feels fresh and exciting, but after a while something seems to be missing. The tool itself is powerful, capable of code generation, autocompletion, and bug fixes, but it does not feel very warm, and over time it can become monotonous and dull.

That alone is enough to make you wonder who wants to stare at a cold, impersonal tool every day.

It is a bit like playing a game. If all you do is finish a task list, with no character growth, no achievement unlocks, and no team coordination, it quickly stops being fun. Beautiful things and people do not need to be possessed to be appreciated; their beauty is enough on its own. Programming tools do not even offer that kind of beauty, so it is easy to lose heart.

We ran into exactly this problem while developing HagiCode. As a multi-AI assistant collaboration platform, HagiCode needs to keep users engaged over the long term. But in reality, even a great tool is hard to stick with if it lacks any emotional connection.

To solve this pain point, we made a bold decision: turn programming into a game. Not the superficial kind with a simple points leaderboard, but a true role-playing gamified experience. The impact of that decision may be even bigger than you imagine.

After all, people need a bit of ritual in their lives.

The ideas shared in this article come from our practical experience on the HagiCode project. HagiCode is a multi-AI assistant collaboration platform that supports Claude Code, Codex, Copilot, OpenCode, and other AI assistants working together. If you are interested in multi-AI collaboration or gamified programming, visit github.com/HagiCode-org/site to learn more.

There is nothing especially mysterious about it. We simply turned programming into an adventure.

The essence of gamification is not just “adding a leaderboard.” It is about building a complete incentive system so users can feel growth, achievement, and social recognition while doing tasks.

HagiCode’s gamification design revolves around one core idea: every AI assistant is a “Hero,” and the user is the captain of this Hero team. You lead these Heroes to conquer various “Dungeons” (programming tasks). Along the way, Heroes gain experience, level up, unlock abilities, and your team earns achievements as well.

This is not a gimmick. It is a design grounded in human behavioral psychology. When tasks are given meaning and progress feedback, people’s engagement and persistence increase significantly.

As the old saying goes, “This feeling can become a memory, though at the time it left us bewildered.” We bring that emotional experience into the tool, so programming is no longer just typing code, but a journey worth remembering.

Hero is the core concept in HagiCode’s gamification system. Each Hero represents one AI assistant. For example, Claude Code is a Hero, and Codex is also a Hero.

A Hero has three equipment slots, and the design is surprisingly elegant:

  1. CLI slot (main class): Determines the Hero’s base ability, such as whether it is Claude Code or Codex
  2. Model slot (secondary class): Determines which model is used, such as Claude 4.5 or Claude 4.6
  3. Style slot (style): Determines the Hero’s behavior style, such as “Fengluo Strategist” or another style

The combination of these three slots creates unique Hero configurations. Much like equipment builds in games, you choose the right setup based on the task. After all, what suits you best is what matters most. Life is similar: many roads lead to Rome, but some are smoother than others.

Each Hero has its own XP and level:

type HeroProgressionSnapshot = {
  currentLevel: number;                    // Current level
  totalExperience: number;                 // Total experience
  currentLevelStartExperience: number;     // Experience at the start of the current level
  nextLevelExperience: number;             // Experience required for the next level
  experienceProgressPercent: number;       // Progress percentage
  remainingExperienceToNextLevel: number;  // Experience still needed for the next level
  lastExperienceGain: number;              // Most recent experience gained
  lastExperienceGainAtUtc?: string | null; // Time when experience was gained
};
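The derived fields relate to each other in a straightforward way: progress within a level is the XP earned since the level started, divided by that level's XP span. This helper is a hypothetical sketch based on the field descriptions above, not HagiCode's actual code:

```typescript
// Compute experienceProgressPercent from the snapshot's raw XP fields.
function experienceProgressPercent(
  totalExperience: number,
  currentLevelStartExperience: number,
  nextLevelExperience: number
): number {
  const span = nextLevelExperience - currentLevelStartExperience;
  if (span <= 0) return 100; // degenerate level boundary
  const earned = totalExperience - currentLevelStartExperience;
  return Math.min(100, Math.max(0, (earned / span) * 100));
}

experienceProgressPercent(150, 100, 200); // 50: halfway through the level
```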

Levels are divided into four stages, and each stage has an immersive name:

export const resolveHeroProgressionStage = (level?: number | null): HeroProgressionStage => {
  const normalizedLevel = Math.max(1, level ?? 1);
  if (normalizedLevel <= 100) return 'rookieSprint';  // Rookie sprint
  if (normalizedLevel <= 300) return 'growthRun';     // Growth run
  if (normalizedLevel <= 700) return 'veteranClimb';  // Veteran climb
  return 'legendMarathon';                            // Legend marathon
};

From “rookie” to “legend,” this growth path gives users a clear sense of direction and achievement. It mirrors personal growth in life, from confusion to maturity, only made more tangible here.

To create a Hero, you need to configure three slots:

const heroDraft: HeroDraft = {
  name: 'Athena',
  icon: 'hero-avatar:storm-03',
  description: 'A brilliant strategist',
  executorType: AIProviderType.CLAUDE_CODE_CLI,
  slots: {
    cli: {
      id: 'profession-claude-code',
      parameters: { /* CLI-related parameters */ }
    },
    model: {
      id: 'secondary-claude-4-sonnet',
      parameters: { /* Model-related parameters */ }
    },
    style: {
      id: 'fengluo-strategist',
      parameters: { /* Style-related parameters */ }
    }
  }
};

Every Hero has a unique avatar, description, and professional identity, which gives what would otherwise be a cold AI assistant more personality and warmth. After all, who wants to work with a tool that has no character?

A “Dungeon” is a classic game concept representing a challenge that requires a team to clear. In HagiCode, each workflow is a Dungeon.

HagiCode organizes workflows into different Dungeons:

  • Proposal generation dungeon: Responsible for generating technical proposals
  • Proposal execution dungeon: Responsible for executing tasks in proposals
  • Proposal archive dungeon: Responsible for organizing and archiving completed proposals

Each dungeon has its own Captain Hero; the first enabled Hero is automatically chosen as captain.

This is really just division of labor, like in everyday life, except turned into a game mechanic.

You can configure different Hero squads for different dungeons:

const dungeonRoster: HeroDungeonRoster = {
  scriptKey: 'proposal.generate',
  displayName: 'Proposal Generation',
  members: [
    { heroId: 'hero-1', name: 'Athena', executorType: 'ClaudeCode' },
    { heroId: 'hero-2', name: 'Apollo', executorType: 'Codex' }
  ]
};

For example, you can use Athena for generating proposals because it is good at strategy, and Apollo for implementing code because it is good at execution. That way, every Hero can play to its strengths. It is like forming a band: each person has an instrument, and together they create something beautiful.

Dungeon uses fixed scriptKey values to identify different workflows:

// Script keys map to different workflows
const dungeonScripts = {
  'proposal.generate': 'Proposal Generation',
  'proposal.execute': 'Proposal Execution',
  'proposal.archive': 'Proposal Archive'
};

The task state flow is: queued (waiting) -> dispatching (being assigned) -> dispatched (assigned). The whole process is automated and requires no manual intervention. That is also part of our lazy side, because who wants to manage this stuff by hand?
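The dispatch flow above can be expressed as a one-way chain. This sketch is an assumption based on the described states, not HagiCode's internal model:

```typescript
type TaskState = 'queued' | 'dispatching' | 'dispatched';

// The documented order: queued -> dispatching -> dispatched.
const dispatchOrder: TaskState[] = ['queued', 'dispatching', 'dispatched'];

// A transition is legal only if it advances exactly one step in the chain.
function isLegalTransition(from: TaskState, to: TaskState): boolean {
  return dispatchOrder.indexOf(to) === dispatchOrder.indexOf(from) + 1;
}
```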

XP is the core feedback mechanism in the gamification system. Users gain XP by completing tasks, XP levels up Heroes, and leveling up unlocks new abilities, forming a positive feedback loop.

In HagiCode, XP can be earned through the following activities:

  • Completing code execution
  • Successfully calling tools
  • Generating proposals
  • Session management operations
  • Project operations

Every time a valid action is completed, the corresponding Hero gains XP. Just like growth in life, every step counts, only here that growth is quantified.

XP and level progress are visualized in real time:

type HeroDungeonMember = {
  heroId: string;
  name: string;
  icon?: string | null;
  executorType: PCode_Models_AIProviderType;
  currentLevel?: number;              // Current level
  totalExperience?: number;           // Total experience
  experienceProgressPercent?: number; // Progress percentage
};

Users can always see each Hero’s level and progress, and that immediate feedback is the key to gamification design. People need feedback, otherwise how would they know they are improving?

Achievements are another important element in gamification. They provide long-term goals and milestone-driven satisfaction.

HagiCode supports multiple types of achievements:

  • Code generation achievements: Generate X lines of code, generate Y files
  • Session management achievements: Complete Z conversations
  • Project operation achievements: Work across W projects

These achievements are really like milestones in life, except we have turned them into a game mechanic.

Achievements have three states:

type AchievementStatus = 'unlocked' | 'in-progress' | 'locked';

The three states have clear visual distinctions:

  • Unlocked: Gold gradient with a halo effect
  • In progress: Blue pulse animation
  • Locked: Gray, with unlock conditions shown

Each achievement clearly displays its trigger condition, so users know what to do next. When people feel lost, a little guidance always helps.
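A hypothetical mapping from achievement progress to the three display states above (the progress/target fields are illustrative, not HagiCode's actual schema):

```typescript
type AchievementStatus = 'unlocked' | 'in-progress' | 'locked';

function resolveAchievementStatus(progress: number, target: number): AchievementStatus {
  if (progress >= target) return 'unlocked'; // gold gradient with halo
  if (progress > 0) return 'in-progress';    // blue pulse animation
  return 'locked';                           // gray, unlock conditions shown
}
```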

When an achievement is unlocked, a celebration animation is triggered. That kind of positive reinforcement gives users the satisfying feeling of “I did it” and motivates them to keep going. Small rewards in life work the same way: they may be small, but the happiness can last a long time.

Battle Report is one of HagiCode’s signature features. At the end of each day, it generates a full-screen battle-style report.

Battle Report displays the following information:

type HeroBattleReport = {
  reportDate: string;
  summary: {
    totalHeroCount: number;   // Total number of Heroes
    activeHeroCount: number;  // Number of active Heroes
    totalBattleScore: number; // Total battle score
    mvp: HeroBattleHero;      // Most valuable Hero
  };
  heroes: HeroBattleHero[];   // Detailed data for all Heroes
};
  • Total team score
  • Number of active Heroes
  • Number of tool calls
  • Total working time
  • MVP (Most Valuable Hero)
  • Detailed card for each Hero

The MVP is the best-performing Hero of the day and is highlighted in the report. This is not just data statistics, but a form of honor and recognition. After all, who does not want to be recognized?
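MVP selection can be as simple as picking the hero with the highest battle score for the day. This is an illustrative sketch: the battleScore field is an assumption, since the full HeroBattleHero shape is not shown above.

```typescript
interface ScoredHero {
  heroId: string;
  name: string;
  battleScore: number;
}

// Return the hero with the highest battle score, or null for an empty day.
function pickMvp(heroes: ScoredHero[]): ScoredHero | null {
  if (heroes.length === 0) return null;
  return heroes.reduce((best, hero) =>
    hero.battleScore > best.battleScore ? hero : best
  );
}
```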

Each Hero card includes:

  • Level progress
  • XP gained
  • Number of executions
  • Usage time

These metrics help users clearly understand how the team is performing. Seeing the results of your own effort is satisfying in itself.

HagiCode’s gamification system uses a modern technology stack and design patterns. There is nothing especially magical about it; we just chose tools that fit the job.

// React + TypeScript for the frontend
import React from 'react';
// Framer Motion for animations
import { AnimatePresence, motion } from 'framer-motion';
// Redux Toolkit for state management
import { useAppDispatch, useAppSelector } from '@/store';
// shadcn/ui for UI components
import { Dialog, DialogContent } from '@/components/ui/dialog';

Framer Motion handles all animation effects, shadcn/ui provides the foundational UI components, and Redux Toolkit manages the complex gamification state. Good tools make good work.

HagiCode uses a Glassmorphism + Tech Dark design style:

/* Primary gradient */
background: linear-gradient(135deg, #22C55E 0%, #25c2a0 50%, #06b6d4 100%);
/* Glass effect */
backdrop-filter: blur(12px);
/* Glow effect */
background: radial-gradient(circle at center, rgba(34, 197, 94, 0.15) 0%, transparent 70%);

The green gradient combined with glassmorphism creates a technical, futuristic atmosphere. Visual beauty is part of the user experience too.

Framer Motion is used to create smooth entrance animations:

<motion.div
  initial={{ opacity: 0, y: 18 }}
  animate={{ opacity: 1, y: 0 }}
  transition={{ duration: 0.35, ease: 'easeOut', delay: index * 0.08 }}
  className="card"
>
  {/* Card content */}
</motion.div>

Each card enters one after another with a delay of 0.08 seconds, creating a fluid visual effect. Smooth animation improves the experience. That part is hard to argue with.

Gamification data is stored using the Grain storage system to ensure state consistency. Even fine-grained data like accumulated Hero XP can be persisted accurately. No one wants to lose the experience they worked hard to earn.

Creating your first Hero is actually quite simple:

  1. Go to the Hero management page
  2. Click the “Create Hero” button
  3. Configure the three slots (CLI, Model, Style)
  4. Give the Hero a name and description
  5. Save it, and your first Hero is born

It is like meeting a new friend: you give them a name, learn what makes them special, and then head off on an adventure together.

Building a team is also simple:

  1. Go to the Dungeon management page
  2. Choose the dungeon you want to configure, such as “Proposal Generation”
  3. Select members from your Hero list
  4. The system automatically selects the first enabled Hero as Captain
  5. Save the configuration

This is simply the process of forming a team, much like building a team in real life where everyone has their own role.

At the end of each day, you can view the day’s Battle Report:

  1. Click the “Battle Report” button
  2. View the day’s work results in a full-screen display
  3. Check the MVP and the detailed data for each Hero
  4. Share it with team members if you want

This is also a kind of ritual, a way to see how much effort you put in today and how far you still are from your goal.

Use React.memo to avoid unnecessary re-renders:

const HeroCard = React.memo(({ hero }: { hero: HeroDungeonMember }) => {
  // Component implementation
});

Performance matters too. No one wants to use a laggy tool.

Detect the user’s motion preference settings and provide a simplified experience for motion-sensitive users:

const prefersReducedMotion = useReducedMotion();
const duration = prefersReducedMotion ? 0 : 0.35;

Not everyone likes animation, and respecting user preferences is part of good design.

Keep legacyIds to support migration from older versions:

type HeroDungeonMember = {
  heroId: string;
  legacyIds?: string[]; // Supports legacy ID mapping
  // ...
};

No one wants to lose data just because of a version upgrade.

Use i18n translation keys for all text to make multi-language support easy:

const displayName = t(`dungeon.${scriptKey}`, { defaultValue: scriptKey });

Language should never be a barrier to using the product.

Gamification is not just a simple points leaderboard, but a complete incentive system. Through the Hero system, Dungeon system, XP and level system, achievement system, and Battle Report, HagiCode transforms programming work into a heroic journey full of adventure.

The core value of this system lies in:

  • Emotional connection: Giving cold AI assistants personality
  • Positive feedback: Every action produces immediate feedback
  • Long-term goals: Levels and achievements provide a growth path
  • Team identity: A sense of collaboration within Dungeon teams
  • Honor and recognition: Battle Report and MVP showcases

Gamification design makes programming no longer dull, but an interesting adventure. While completing coding tasks, users also experience the fun of character growth, team collaboration, and achievement unlocking, which improves retention and activity.

At its core, programming is already an act of creation. We just made the creative process a little more fun.

Thank you for reading. If you found this article useful, please click the like button below so more people can discover it.

This content was created with AI-assisted collaboration, reviewed by me, and reflects my own views and position.