

.NET Core Dual-Database in Practice: Let PostgreSQL and SQLite Coexist Peacefully


When building modern applications, we often face this trade-off: development environments want something lightweight and convenient, while production environments demand high concurrency and high availability. This article shares how to elegantly support both PostgreSQL and SQLite in a .NET Core project and implement the best practice of “SQLite for development, PostgreSQL for production.”

In software development, differences between environments have always been one of the hardest problems for engineering teams. Take the HagiCode platform we are building as an example: it is an AI-assisted development system based on ASP.NET Core 10 and React, with Orleans integrated internally for distributed state management. The stack is modern and fairly sophisticated.

Early in the project, we ran into a classic engineering pain point: developers wanted the local environment to work out of the box, without having to install and configure a heavy PostgreSQL database. But in production, we needed to handle high-concurrency writes and complex JSON queries, and that is exactly where lightweight SQLite starts to show its limits.

How can we keep a single codebase while allowing the application to benefit from SQLite’s portability like a desktop app, and also leverage PostgreSQL’s powerful performance like an enterprise-grade service? That is the core question this article explores.

The dual-database adaptation approach shared in this article comes directly from our hands-on experience in the HagiCode project. HagiCode is a next-generation development platform that integrates AI prompt management and the OpenSpec workflow. It was precisely to balance developer experience with production stability that we arrived at this proven architectural pattern.

Feel free to visit our GitHub repository to see the full project: HagiCode-org/site.

Core Topic 1: Architecture Design and Unified Abstraction


To support two databases in .NET Core, the key idea is to depend on abstractions rather than concrete implementations. We need to separate database selection from business code and let the configuration layer decide.

  1. Unified interface: All business logic should depend on the DbContext base class or custom interfaces, rather than a specific PostgreSqlDbContext.
  2. Configuration-driven: Use configuration items in appsettings.json to dynamically decide which database provider to load at application startup.
  3. Feature isolation: Add adaptation logic for PostgreSQL-specific capabilities, such as JSONB, so the application can still degrade gracefully on SQLite.
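Concretely, each environment then differs only by configuration. The `Database` section name and keys below mirror the DatabaseSettings class used later in this article; the connection-string values are illustrative:

```json
// appsettings.Development.json — SQLite for a zero-setup local environment
{
  "Database": {
    "DbType": "SQLite",
    "ConnectionString": "Data Source=hagicode-dev.db"
  }
}
```

```json
// appsettings.Production.json — PostgreSQL for high-concurrency workloads
{
  "Database": {
    "DbType": "PostgreSQL",
    "ConnectionString": "Host=db;Database=hagicode;Username=app;Password=<secret>"
  }
}
```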

Code Implementation: Dynamic Context Configuration


In ASP.NET Core’s Program.cs, we should not hard-code UseNpgsql or UseSqlite. Instead, we should read configuration and decide dynamically.

First, define the configuration class:

public class DatabaseSettings
{
    public const string SectionName = "Database";

    // Database type: PostgreSQL or SQLite
    public string DbType { get; set; } = "PostgreSQL";

    // Connection string
    public string ConnectionString { get; set; } = string.Empty;
}

Then register the service in Program.cs based on configuration:

// Read configuration
var databaseSettings = builder.Configuration
    .GetSection(DatabaseSettings.SectionName)
    .Get<DatabaseSettings>() ?? new DatabaseSettings();

// Register DbContext
builder.Services.AddDbContext<ApplicationDbContext>(options =>
{
    if (string.Equals(databaseSettings.DbType, "sqlite", StringComparison.OrdinalIgnoreCase))
    {
        // SQLite configuration
        options.UseSqlite(databaseSettings.ConnectionString);
        // SQLite serializes writes; in production-like scenarios, enabling WAL mode
        // is recommended to improve concurrency
    }
    else
    {
        // PostgreSQL configuration (default)
        options.UseNpgsql(databaseSettings.ConnectionString, npgsqlOptions =>
        {
            // The transient-failure retry policy lives on the provider-specific options
            npgsqlOptions.EnableRetryOnFailure(3);
        });
        // JSONB mapping (very useful for AI conversation records) is configured
        // per entity in OnModelCreating rather than here
    }
});
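If you do enable WAL for SQLite, note that `journal_mode=WAL` is a persistent, per-database-file setting, so it only needs to be issued once against the database file:

```sql
-- Switch the database file to write-ahead logging; readers no longer block on a writer
PRAGMA journal_mode=WAL;
```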

Core Topic 2: Handling Differences and Migration Strategy


Although PostgreSQL and SQLite both support the SQL standard, they differ significantly in specific capabilities and behavior. If these differences are not handled carefully, you can easily end up with the awkward situation where everything works locally but fails after deployment.

In HagiCode, we need to store a large amount of prompts and AI metadata, which usually involves JSON columns.

  • PostgreSQL: Has a native JSONB type with excellent query performance.
  • SQLite: Does not have a native JSON type (newer versions include the JSON1 extension, but object mapping still differs), so data is usually stored as TEXT.

Solution: In EF Core entity mapping, we configure it as a convertible type.

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    base.OnModelCreating(modelBuilder);

    // Configure entity
    modelBuilder.Entity<PromptTemplate>(entity =>
    {
        entity.Property(e => e.Metadata)
            .HasColumnType("jsonb") // PG uses jsonb
            .HasConversion(
                v => JsonSerializer.Serialize(v, (JsonSerializerOptions?)null),
                v => JsonSerializer.Deserialize<Dictionary<string, object>>(v, (JsonSerializerOptions?)null)!);
    });
}

When SQLite is used, HasColumnType("jsonb") may be ignored or trigger a warning. However, because HasConversion is configured, the data is still serialized and deserialized correctly as strings stored in a TEXT field, ensuring compatibility.

Never try to make one set of Migration scripts work for both PostgreSQL and SQLite at the same time. Differences in primary key generation strategies, index syntax, and other database details will inevitably cause failures.

Recommended practice: Maintain two migration branches or projects. In the HagiCode development workflow, this is how we handle it:

  1. Development stage: Work mainly with SQLite. Use Add-Migration Init_Sqlite -OutputDir Migrations/Sqlite.
  2. Adaptation stage: After developing a feature, switch the connection string to PostgreSQL and run Add-Migration Init_Postgres -OutputDir Migrations/Postgres.
  3. Automation scripts: Write a simple PowerShell or Bash script to automatically apply the correct migration based on the current environment variable.
# Pseudocode for simple deployment logic
if [ "$DATABASE_PROVIDER" = "PostgreSQL" ]; then
    dotnet ef database update --project Migrations.Postgres
else
    dotnet ef database update --project Migrations.Sqlite
fi

Core Topic 3: Lessons Learned from HagiCode in Production


While refactoring HagiCode from a single-database model to dual-database support, we hit a few pitfalls and gathered some important lessons that may help you avoid the same mistakes.

1. Differences in Concurrency and Transactions


PostgreSQL uses a server-client architecture and supports high-concurrency writes with powerful transaction isolation levels. SQLite uses file locking, so write operations lock the entire database file unless WAL mode is enabled.

Recommendation: When writing business logic that involves frequent writes, such as real-time saving of a user’s editing state, you must take SQLite’s locking model into account. When designing the OpenSpec collaboration module in HagiCode, we introduced a “merge before write” mechanism to reduce the frequency of direct database writes, allowing us to maintain good performance on both databases.

2. Lifecycle Management of Connection Strings


Establishing a PostgreSQL connection is relatively expensive and depends on connection pooling. SQLite connections are very lightweight, but if they are not released promptly, file locks may cause later operations to time out.

In Program.cs, we can fine-tune behavior for each database:

if (databaseSettings?.DbType?.ToLower() == "sqlite")
{
    // SQLite: keeping connections open can improve performance, but watch out for file locks
    options.UseSqlite(connectionString, sqliteOptions =>
    {
        // Set command timeout
        sqliteOptions.CommandTimeout(30);
    });
}
else
{
    // PG: make full use of connection pooling
    options.UseNpgsql(connectionString, npgsqlOptions =>
    {
        npgsqlOptions.MaxBatchSize(100);
        npgsqlOptions.CommandTimeout(30);
    });
}

3. Testing Against the Real Production Database

Many developers, including some early members of our team, tend to make one common mistake: running unit tests only in the development environment, which is usually SQLite.

In HagiCode’s CI/CD pipeline, we enforced a GitHub Actions step to make sure every pull request runs PostgreSQL integration tests.

# Example snippet from .github/workflows/test.yml
- name: Run Integration Tests (PostgreSQL)
  run: |
    docker-compose up -d db_postgres
    dotnet test --filter "Category=Integration"

This helped us catch countless bugs related to SQL syntax differences and case sensitivity.

By introducing an abstraction layer and configuration-driven dependency injection, we successfully implemented a dual-track PostgreSQL and SQLite setup in the HagiCode project. This not only greatly lowered the onboarding barrier for new developers by removing the need to install PostgreSQL, but also provided strong performance guarantees for production.

To recap the key points:

  1. Abstraction first: Business code should not depend on concrete database implementations.
  2. Separate configuration: Use different appsettings.json files for development and production.
  3. Separate migrations: Do not try to make one Migration set work everywhere.
  4. Feature degradation: Prioritize compatibility in SQLite and performance in PostgreSQL.

This architectural pattern is not only suitable for HagiCode, but for any .NET project that needs to strike a balance between lightweight development and heavyweight production.


If this article helped you, feel free to give us a Star on GitHub, or experience the efficient development workflow brought by HagiCode directly:

The public beta has started. Welcome to install it and give it a try!


Thank you for reading. If you found this article useful, please click the like button below 👍 so more people can discover it.

This content was created with AI-assisted collaboration, reviewed by me, and reflects my own views and position.


From Docusaurus 3.x to Astro 5.x: A Retrospective on the HagiCode Site Migration


This article looks back on our full migration of the HagiCode official website from Docusaurus 3.x to Astro 5.x. We will take a deep dive into how Astro’s Islands Architecture helped us solve performance bottlenecks while preserving our existing React component assets, delivering improvements in both build speed and loading performance.

In January 2026, we performed a “heart transplant” on the HagiCode official site by fully migrating its core framework from Docusaurus 3.x to Astro 5.x. This was not an impulsive rewrite, but a carefully considered technical decision.

Before the migration, our site was functionally complete, but it had begun to show some classic “luxury problems”: bloated build artifacts, excessive JavaScript payloads, and less-than-ideal page load speed on complex documentation pages. As an AI coding assistant project, HagiCode needs frequent documentation and feature updates, so build efficiency directly affects release speed. At the same time, we wanted the site to be more search-engine-friendly (SEO) so more developers could discover the project.

To solve these pain points, we made a bold decision: rebuild the entire system on Astro. The impact of that decision may be even bigger than you expect. I will get into the details shortly.

The site migration approach shared in this article comes from our hands-on experience in the HagiCode project.

HagiCode is an AI coding assistant focused on improving development efficiency. We care not only about iterating on core features, but also about the developer experience. This site refactor was also meant to give users the fastest possible experience when browsing our docs and official website.

Why Leave the Mature Docusaurus Ecosystem?


Within the React ecosystem, Docusaurus has long been the “standard answer” for documentation sites. It works out of the box, offers a rich plugin ecosystem, and has an active community. But as HagiCode gained more features, we also felt its limitations:

  1. Performance bottlenecks: Docusaurus is fundamentally a React SPA (single-page application). Even if you only write static pages, the client still needs to load the React runtime and hydrate the page, which is unnecessarily heavy for simple docs pages.
  2. Large asset size: Even when a page contains very little content, the bundled JS size stays relatively fixed. That is not ideal for mobile users or poor network conditions.
  3. Limited flexibility: Although it is extensible, we wanted more low-level control over the build pipeline.

Astro arrived at exactly the right time to solve these problems. It introduced a new “Islands Architecture”: by default, Astro generates static HTML with zero JavaScript, and only components that require interactivity are “activated” and load JS. That means most of our site becomes pure HTML and loads extremely fast.

Core Migration Strategy: A Smooth Architectural Transition


Migration was not just copy and paste. It required a shift in mindset. We moved from Docusaurus’s “all React” model to Astro’s “Core + Islands” model.

1. Configuration Migration: From docusaurus.config.ts to astro.config.mjs

First, we had to move from docusaurus.config.ts to astro.config.mjs. This was not just a file rename, but a rewrite of routing and build logic.

In Docusaurus, everything is a plugin. In Astro, everything is an integration. We needed to redefine the site’s base path, build output mode (static vs SSR), and asset optimization strategy.

Before migration:

docusaurus.config.ts
export default {
  title: 'HagiCode',
  url: 'https://hagicode.com',
  baseUrl: '/',
  // ... more configuration
};

After migration:

astro.config.mjs
import { defineConfig } from 'astro/config';
import react from '@astrojs/react';

export default defineConfig({
  integrations: [react()],
  site: 'https://hagicode.com',
  base: '/',
  // Optimization settings for static assets
  build: {
    inlineStylesheets: 'auto',
  },
});

2. What to Keep and What to Refactor in React Components

Section titled “2. What to Keep and What to Refactor in React Components”

This was the most painful part of the migration. Our existing site had many React components, such as Tabs, code highlighting, feedback buttons, and more. Throwing them away would be wasteful, but keeping everything would make the JavaScript payload too heavy.

HagiCode adopted a progressive hydration strategy:

  • Pure static components: For presentational content such as headers, footers, and plain text documentation, we rewrote them as Astro components (.astro files) and rendered them directly to HTML at build time.
  • Interactive islands: For components that must remain interactive, such as theme switchers, tab switching, and code block copy buttons, we kept the React implementation and added client:load or client:visible directives.

For example, our commonly used Tabs component in the documentation:

src/components/Tabs.jsx
import { useState } from 'react';
import './Tabs.css'; // Import styles

export default function Tabs({ items }) {
  const [activeIndex, setActiveIndex] = useState(0);
  // ... state logic
  return (
    <div className="tabs-wrapper">
      {/* Rendering logic */}
    </div>
  );
}

When used in Markdown, we explicitly tell Astro: “This component needs JS.”

src/content/docs/example.mdx
import Tabs from '../../components/Tabs.jsx';

{/* Load JS only when the component enters the viewport */}
<Tabs client:visible items={...} />

This way, interactive components outside the viewport do not compete for bandwidth, which greatly improves first-screen loading speed.

3. Adapting the Styling System: From CSS Modules to Scoped CSS


Docusaurus supports CSS Modules by default, while Astro encourages Scoped CSS through the <style> tag. The core idea behind both is style isolation, but the syntax is different.

During the HagiCode migration, we converted most complex CSS Modules into Astro’s scoped styles. This actually turned out to be a good thing, because in .astro files the styles and templates live in the same file, which makes maintenance more intuitive.

Before refactoring:

Tabs.module.css
.wrapper { background: var(--ifm-background-color); }

After refactoring (Astro Scoped):

Tabs.astro
<div class="tabs-wrapper">
  <slot />
</div>

<style>
  .tabs-wrapper {
    /* Use CSS variables directly to adapt to the theme */
    background: var(--bg-color);
    padding: 1rem;
  }
</style>

At the same time, we unified the global CSS variable system and used Astro’s environment-aware capabilities to ensure dark mode switches smoothly across pages.

Pitfalls We Hit in Practice and How We Solved Them


During the actual HagiCode migration, we ran into quite a few issues. Here are several of the most typical ones.

1. Path and Environment Variable Pain Points


HagiCode supports subpath deployment, such as deployment under a GitHub Pages subdirectory. In Docusaurus, baseUrl is handled automatically. In Astro, however, we need to be more careful when handling image links and API requests.

We introduced an environment variable mechanism to manage this consistently:

// Handle paths in the build script
const getBasePath = () => import.meta.env.VITE_SITE_BASE || '/';

Be sure not to hardcode paths beginning with / in your code. In development versus production, or after configuring a base path, doing so can cause 404s for assets.
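A tiny helper makes this hard to get wrong. The `withBase` function below is a hypothetical sketch (the hardcoded `/docs/` base stands in for `import.meta.env.VITE_SITE_BASE`, which is not available outside the Vite build):

```typescript
// Hypothetical base path; in the real build this would come from
// import.meta.env.VITE_SITE_BASE || '/'
const getBasePath = (): string => '/docs/'

// Join the configured base with a site-relative path, avoiding double slashes
export function withBase(path: string): string {
  const base = getBasePath()
  return `${base.replace(/\/$/, '')}/${path.replace(/^\//, '')}`
}
```

Internal links and asset references then go through `withBase('images/logo.png')` instead of a hardcoded `/images/logo.png`, so changing the deployment subpath touches exactly one place.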

2. Refactoring Node.js Scripts from CommonJS to ES Modules

Our old site had some Node.js scripts used for tasks such as automatically fetching Metrics data and updating the sitemap, and they were written in CommonJS (require). Astro and modern build tools have fully embraced ES Modules (import/export).

If you also have similar scripts, remember to refactor them all to ES Modules. That is the direction the ecosystem is moving, and the sooner you make the change, the less trouble you will have later.

// Old way
const fs = require('fs');
// New way
import fs from 'fs';

3. SEO: Redirecting Old URLs

Search engines have already indexed HagiCode’s old Docusaurus pages. If you switch directly to Astro and the URL structure changes, you may end up with a large number of 404s and a major drop in search ranking.

We configured redirect rules in Astro:

astro.config.mjs
export default defineConfig({
  redirects: {
    '/docs/old-path': '/docs/new-path',
    // Map old links to new links in bulk
  },
});

Or you can handle this at the server configuration layer. Make sure old links can be 301 redirected to the new addresses, because this is critical for SEO.

For HagiCode, migrating from Docusaurus to Astro was not just a framework upgrade. It was also a practical implementation of a “performance first” philosophy.

What we gained:

  • Outstanding Lighthouse scores: After the migration, the HagiCode site’s performance score easily approached a perfect score.
  • Faster build speed: Astro’s incremental build capabilities cut the release time for documentation updates in half.
  • Preserved flexibility: With Islands Architecture, we did not sacrifice any interactive features and could still use React where needed.

If you are also maintaining a documentation-oriented site and are struggling with bundle size or load speed, Astro is well worth trying. Although the migration process does require some surgery, such as renaming PCode to HagiCode and moving components over one by one, the silky-smooth user experience you get in return makes it absolutely worthwhile.

The build system shared in this article is the exact approach we developed through real trial and error while building HagiCode. If you find this approach valuable, that says something about our engineering strength, and HagiCode itself is probably worth a closer look too.

If this article helped you, feel free to give us a Star on GitHub. Public beta has already begun!


Thank you for reading. If you found this article useful, click the like button below 👍 so more people can discover it.

This content was created with AI-assisted collaboration, reviewed by me, and reflects my own views and position.


From 120 Seconds to 45 Seconds: A Practical Guide to Optimizing Vite Build Performance with Worker Threads


When working with large frontend projects, production builds can feel painfully slow. This article shares how we used Node.js Worker Threads to reduce the obfuscation stage in a Vite build from 120 seconds to 45 seconds, along with the implementation details and lessons learned in the HagiCode project.

In our frontend engineering practice, build efficiency issues became increasingly prominent as the project grew. In particular, during the production build process, we usually introduce JavaScript obfuscation tools such as javascript-obfuscator to protect the source code logic. This step is necessary, but it is also computationally expensive and heavily CPU-bound.

During the early development stage of HagiCode, we ran into a very tricky performance bottleneck: production build times deteriorated rapidly as the codebase grew.

The specific pain points were:

  • Obfuscation tasks ran serially on a single thread, maxing out one CPU core while the others sat idle
  • Build time surged from the original 30 seconds to 110-120 seconds
  • The post-change build verification loop became extremely long, seriously slowing development iteration
  • In the CI/CD pipeline, the build stage became the most time-consuming part

Why did HagiCode need this? HagiCode is an AI-driven code assistant whose frontend architecture includes complex business logic and AI interaction modules. To ensure the security of our core code, we enforced high-intensity obfuscation in production releases. Faced with build waits approaching two minutes, we decided to carry out a deep performance optimization of the build system.

Since we have mentioned the project, let me say a bit more about it.

If you have run into frustrations like these during development:

  • Multiple projects and multiple tech stacks, with high maintenance costs for build scripts
  • Complicated CI/CD pipeline configuration, forcing you to check the docs every time you make a change
  • Endless cross-platform compatibility issues
  • Wanting AI to help write code, but finding existing tools not smart enough

Then HagiCode, which we are building, may interest you.

What is HagiCode?

  • An AI-driven code assistant
  • Supports multi-language, cross-platform code generation and optimization
  • Comes with built-in gamification so coding feels less tedious

Why mention it here? The parallel JavaScript obfuscation solution shared in this article is exactly what we refined while building HagiCode. If you find this engineering approach valuable, that suggests our technical taste is probably pretty good, and HagiCode itself may also be worth a look.

Want to learn more?


Analysis: Finding the Breakthrough Point in the Performance Bottleneck


Before solving the performance issue, we first needed to clarify our thinking and identify the best technical solution.

There are three main ways to achieve parallel computation in Node.js:

  1. child_process: create independent child processes
  2. Web Workers: mainly used on the browser side
  3. worker_threads: native multithreading support in Node.js

After comparing the options, HagiCode ultimately chose Worker Threads for the following reasons:

  • Zero serialization overhead: Worker Threads run in the same process and can share memory through SharedArrayBuffer or transfer ownership, avoiding the heavy serialization cost of inter-process communication.
  • Native support: built into Node.js 12+ with no need for extra heavyweight dependencies.
  • Unified context: debugging and logging are more convenient than with child processes.

Task Granularity: How Should Obfuscation Tasks Be Split?

Section titled “Task Granularity: How Should Obfuscation Tasks Be Split?”

It is hard to parallelize the obfuscation of one huge JS bundle file because the code has dependencies, but Vite build output is composed of multiple chunks. That gives us a natural parallel boundary:

  • Independence: after Vite packaging, dependencies between different chunks are already decoupled, so they can be processed safely in parallel.
  • Appropriate granularity: projects usually have 10-30 chunks, which is an excellent scale for parallel scheduling.
  • Easy integration: the generateBundle hook in Vite plugins lets us intercept and process these chunks before the files are emitted.
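To make that last point concrete, here is a minimal sketch of what hooking `generateBundle` looks like. The plugin name and the synchronous `transform` callback are illustrative; the real plugin hands each chunk to the Worker pool instead of transforming inline:

```typescript
// Shape of the data we care about from Rollup's bundle (simplified)
interface ChunkLike {
  type: 'chunk' | 'asset'
  fileName: string
  code?: string
}

// Sketch of the interception point: a plugin whose generateBundle hook
// rewrites the code of every emitted JS chunk before files are written to disk
export function obfuscationPluginSketch(transform: (code: string) => string) {
  return {
    name: 'parallel-obfuscator-sketch',
    generateBundle(_options: unknown, bundle: Record<string, ChunkLike>) {
      for (const file of Object.values(bundle)) {
        // Only JS chunks carry code; assets (CSS, images) are skipped
        if (file.type === 'chunk' && typeof file.code === 'string') {
          file.code = transform(file.code)
        }
      }
    },
  }
}
```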

We designed a parallel processing system with four core components:

  1. Task Splitter: iterates over Vite’s bundle object, filters out files that do not need obfuscation such as vendor chunks, and generates a task queue.
  2. Worker Pool Manager: manages the Worker lifecycle and handles task distribution, recycling, and retry on failure.
  3. Progress Reporter: outputs build progress in real time to reduce waiting anxiety.
  4. ObfuscationWorker: the Worker thread that actually performs the obfuscation logic.

Based on the analysis above, we started implementing this parallel obfuscation system.

First, we integrated the parallel obfuscation plugin in vite.config.ts. The configuration is straightforward. You only need to specify the number of Workers and the obfuscation rules.

import { defineConfig } from 'vite'
import { parallelJavascriptObfuscator } from './buildTools/plugin'

export default defineConfig(({ mode }) => {
  const isProduction = mode === 'production'
  return {
    build: {
      rollupOptions: {
        ...(isProduction
          ? {
              plugins: [
                parallelJavascriptObfuscator({
                  enabled: true,
                  // Fixed here for illustration; in practice derive this from the
                  // CPU core count and leave one core for the main thread
                  workerCount: 4,
                  retryAttempts: 3,
                  fallbackToMainThread: true, // Automatically degrade to single-thread mode on failure
                  // Filter out vendor chunks; third-party libraries usually do not need obfuscation
                  isVendorChunk: (fileName: string) => fileName.includes('vendor-'),
                  obfuscationConfig: {
                    compact: true,
                    controlFlowFlattening: true,
                    deadCodeInjection: true,
                    disableConsoleOutput: true,
                    // ... more obfuscation options
                  },
                }),
              ],
            }
          : {}),
      },
    },
  }
})

A Worker is the unit that executes tasks. We need to define the input and output data structures clearly.

Note: although the code here is simple, there are several pitfalls to watch out for, such as checking whether parentPort is null and handling errors properly. In HagiCode’s implementation, we found that certain special ES6 syntax patterns could cause the obfuscator to crash, so we added try-catch protection.

import { parentPort } from 'worker_threads'
import javascriptObfuscator from 'javascript-obfuscator'

export interface ObfuscationTask {
  chunkId: string
  code: string
  config: any
}

export interface ObfuscationResult {
  chunkId: string
  obfuscatedCode: string
  error?: string
}

// Listen for tasks sent from the main thread
if (parentPort) {
  parentPort.on('message', (task: ObfuscationTask) => {
    try {
      // Perform obfuscation
      const obfuscated = javascriptObfuscator.obfuscate(task.code, task.config)
      const result: ObfuscationResult = {
        chunkId: task.chunkId,
        obfuscatedCode: obfuscated.getObfuscatedCode(),
      }
      // Send the result back to the main thread
      parentPort?.postMessage(result)
    } catch (error) {
      // Handle exceptions so one Worker crash does not block the whole build
      const result: ObfuscationResult = {
        chunkId: task.chunkId,
        obfuscatedCode: '',
        error: error instanceof Error ? error.message : 'Unknown error',
      }
      parentPort?.postMessage(result)
    }
  })
}

This is the core of the whole solution. We need to maintain a fixed-size Worker pool and schedule tasks using a FIFO (first in, first out) strategy.

import { Worker } from 'worker_threads'
import os from 'os'

interface Job {
  task: ObfuscationTask
  resolve: (result: ObfuscationResult) => void
  reject: (error: Error) => void
}

export interface WorkerPoolOptions {
  workerCount?: number
}

export class WorkerPool {
  private workers: Worker[] = []
  private taskQueue: Job[] = []
  // Tracks the job each busy Worker is currently running
  private activeJobs = new Map<Worker, Job>()

  constructor(options: WorkerPoolOptions = {}) {
    // Default to core count - 1 so the main thread still has some breathing room
    const workerCount = options.workerCount ?? Math.max(1, (os.cpus().length || 4) - 1)
    for (let i = 0; i < workerCount; i++) {
      this.createWorker()
    }
  }

  private createWorker() {
    // Point at the compiled Worker script (or load the .ts source through a TS-aware loader)
    const worker = new Worker(new URL('./worker.js', import.meta.url))
    worker.on('message', (result: ObfuscationResult) => {
      // Resolve the promise of the job this Worker just finished
      this.activeJobs.get(worker)?.resolve(result)
      this.activeJobs.delete(worker)
      // Then take the next task from the queue, if any
      const nextJob = this.taskQueue.shift()
      if (nextJob) {
        this.dispatchTask(worker, nextJob)
      }
    })
    this.workers.push(worker)
  }

  // Submit a task to the pool
  public runTask(task: ObfuscationTask): Promise<ObfuscationResult> {
    return new Promise((resolve, reject) => {
      const job: Job = { task, resolve, reject }
      const idleWorker = this.workers.find(w => !this.activeJobs.has(w))
      if (idleWorker) {
        this.dispatchTask(idleWorker, job)
      } else {
        this.taskQueue.push(job)
      }
    })
  }

  private dispatchTask(worker: Worker, job: Job) {
    this.activeJobs.set(worker, job)
    worker.postMessage(job.task)
  }
}

Waiting is painful, especially when you have no idea how much longer it will take. So we added a simple progress reporter to provide real-time feedback on the current status.

export class ProgressReporter {
  private completed = 0
  private readonly total: number
  private readonly startTime: number

  constructor(total: number) {
    this.total = total
    this.startTime = Date.now()
  }

  increment(): void {
    this.completed++
    this.report()
  }

  private report(): void {
    const elapsed = Date.now() - this.startTime
    const percentage = (this.completed / this.total) * 100
    // Simple ETA estimate
    const avgTimePerChunk = elapsed / this.completed
    const remaining = (this.total - this.completed) * avgTimePerChunk
    console.log(
      `[Parallel Obfuscation] ${this.completed}/${this.total} chunks completed (${percentage.toFixed(1)}%) | ETA: ${(remaining / 1000).toFixed(1)}s`
    )
  }
}

After deploying this solution, the build performance of the HagiCode project improved immediately.

We tested in the following environment:

  • CPU: Intel Core i7-12700K (12 cores / 20 threads)
  • RAM: 32GB DDR4
  • Node.js: v18.17.0
  • OS: Ubuntu 22.04

Results comparison:

  • Single-threaded (before optimization): 118 seconds
  • 4 Workers: 55 seconds (53% improvement)
  • 8 Workers: 48 seconds (60% improvement)
  • 12 Workers: 45 seconds (62% improvement)

As you can see, the gains were not linear. Once the Worker count exceeded 8, the improvement became smaller. This was mainly limited by the evenness of task distribution and memory bandwidth bottlenecks.
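This flattening curve is what Amdahl's law predicts: if a fraction p of the work is parallelizable, n workers can never exceed a speedup of 1 / ((1 − p) + p / n). A quick sketch (the 90% parallelizable figure is an assumption for illustration, not a measured value):

```typescript
// Amdahl's law: maximum speedup with n workers when a fraction p of the work is parallelizable
export function amdahlSpeedup(p: number, n: number): number {
  return 1 / ((1 - p) + p / n)
}

// Assuming ~90% of the obfuscation stage parallelizes:
// 4 workers -> ~3.1x, 8 workers -> ~4.7x, 12 workers -> ~5.7x — the curve flattens quickly
```

The serial remainder (chunk I/O, scheduling, the single largest chunk) caps the ceiling no matter how many Workers you add.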

In HagiCode’s real-world use, we also ran into several pitfalls, so here they are for reference:

Q1: Build time did not decrease much and even became slower?

  • Reason: creating Workers has its own overhead, or too many Workers were configured, causing frequent context switching.
  • Solution: we recommend setting the Worker count to CPU core count - 1. Also check whether any single chunk is especially large, for example > 5MB. That kind of “monster” file will become the bottleneck, so you may need to optimize your code splitting strategy.
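That rule of thumb can be applied directly in code; a minimal sketch:

```typescript
import * as os from 'node:os'

// Rule of thumb from above: leave one core for the main thread and the OS.
// (On Node >= 18.14 you could use os.availableParallelism() instead of
// os.cpus().length; it respects container CPU limits better.)
export function defaultWorkerCount(): number {
  return Math.max(1, os.cpus().length - 1)
}
```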

Q2: Workers occasionally crash and cause build failures?

  • Reason: some special code syntax patterns may cause internal errors inside the obfuscator.
  • Solution: we implemented an automatic degradation mechanism. When a Worker reaches the failure threshold, the plugin automatically falls back to single-thread mode to ensure the build does not stop. At the same time, it records the filename that caused the error so it can be fixed later.

Q3: Memory usage is too high (OOM)?

  • Reason: each Worker needs its own memory space to load the obfuscator and parse the AST.
  • Solution:
    • Reduce the number of Workers.
    • Increase the Node.js memory limit: NODE_OPTIONS="--max-old-space-size=4096" npm run build.
    • Make sure Workers do not keep unnecessary references to large objects.

By introducing Node.js Worker Threads, we successfully reduced the production build time of the HagiCode project from 120 seconds to around 45 seconds, greatly improving the development experience and CI/CD efficiency.

The core of this solution is:

  1. Split tasks properly: use Vite chunks as the parallel unit.
  2. Control resources: use a Worker pool to avoid resource exhaustion.
  3. Design for fault tolerance: an automatic degradation mechanism ensures build stability.

If you are also struggling with frontend build efficiency, or your project also does heavy code processing, this solution is worth trying. Of course, we would recommend taking a direct look at HagiCode, where these engineering details are already integrated.

If this article helped you, feel free to give us a Star on GitHub or join the public beta and try it out.


Thank you for reading. If you found this article useful, please click the like button below so more people can discover it.

This content was created with AI-assisted collaboration, reviewed by me, and reflects my own views and position.

Deep Integration and Practical Use of StreamJsonRpc in HagiCode

Deep Integration and Practical Use of StreamJsonRpc in HagiCode

Section titled “Deep Integration and Practical Use of StreamJsonRpc in HagiCode”

This article explains in detail how the HagiCode (formerly PCode) project successfully integrated Microsoft’s StreamJsonRpc communication library to replace its original custom JSON-RPC implementation, while also resolving the technical pain points and architectural challenges encountered during the integration process.

StreamJsonRpc is Microsoft’s officially maintained JSON-RPC communication library for .NET and TypeScript, known for its strong type safety, automatic proxy generation, and mature exception handling mechanisms. In the HagiCode project, the team decided to integrate StreamJsonRpc in order to communicate with external AI tools such as iflow CLI and OpenCode CLI through ACP (Agent Communication Protocol), while also eliminating the maintenance cost and potential bugs introduced by the earlier custom JSON-RPC implementation. However, the integration process ran into challenges specific to streaming JSON-RPC, especially when handling proxy target binding and generic parameter recognition.

To address these pain points, we made a bold decision: rebuild the entire communication layer from the ground up. The impact of that decision may be bigger than you expect, and I will explain it in detail shortly.

Let me first introduce the “main project” featured in this article.

If you have run into these frustrations during development:

  • Multiple projects and multiple tech stacks make build scripts expensive to maintain
  • CI/CD pipeline configuration is cumbersome, and every change sends you back to the docs
  • Cross-platform compatibility issues keep surfacing
  • You want AI to help you write code, but existing tools are not intelligent enough

Then HagiCode, which we are building, may interest you.

What is HagiCode?

  • An AI-driven intelligent coding assistant
  • Supports multilingual, cross-platform code generation and optimization
  • Includes built-in gamification mechanisms so coding feels less dull

Why mention it here? The StreamJsonRpc integration approach shared in this article is distilled from our practical experience while developing HagiCode. If you find this engineering solution valuable, it probably means our technical taste is pretty solid, and HagiCode itself is worth checking out as well.


The current project is in a critical stage of ACP protocol integration and is facing the following technical pain points and architectural challenges:

1. Limitations of the Custom Implementation

Section titled “1. Limitations of the Custom Implementation”

The original JSON-RPC implementation is located in src/HagiCode.ClaudeHelper/AcpImp/ and includes components such as JsonRpcEndpoint and ClientSideConnection. Maintaining this custom codebase is costly, and it lacks the advanced capabilities of a mature library, such as progress reporting and cancellation support.

When attempting to migrate the existing CallbackProxyTarget pattern to StreamJsonRpc, we found that the _rpc.AddLocalRpcTarget(target) method could not recognize targets created through the proxy pattern. Specifically, StreamJsonRpc could not automatically split properties of the generic type T into RPC method parameters, causing the server side to fail when processing method calls initiated by the client.

The existing ClientSideConnection mixes the transport layer (WebSocket/Stdio), protocol layer (JSON-RPC), and business layer (ACP Agent interface), leading to unclear responsibilities. It also suffers from missing method bindings in AcpAgentCallbackRpcAdapter.

The WebSocket transport layer lacks raw JSON content logging, making it difficult to determine during RPC debugging whether a problem originates from serialization or from the network.

To address the problems above, we adopted the following systematic solution, optimizing from three dimensions: architectural refactoring, library integration, and enhanced debugging.

Delete JsonRpcEndpoint.cs, AgentSideConnection.cs, and related custom serialization converters such as JsonRpcMessageJsonConverter.

Introduce the StreamJsonRpc NuGet package and use its JsonRpc class to handle the core communication logic.

Define the IAcpTransport interface to handle both WebSocket and Stdio transport modes in a unified way, ensuring the protocol layer is decoupled from the transport layer.

// Definition of the IAcpTransport interface
public interface IAcpTransport
{
    Task SendAsync(string message, CancellationToken cancellationToken = default);
    Task<string> ReceiveAsync(CancellationToken cancellationToken = default);
    Task CloseAsync(CancellationToken cancellationToken = default);
}

// WebSocket transport implementation
public class WebSocketTransport : IAcpTransport
{
    private readonly WebSocket _webSocket;

    public WebSocketTransport(WebSocket webSocket)
    {
        _webSocket = webSocket;
    }

    // Implement send and receive methods
    // ...
}

// Stdio transport implementation
public class StdioTransport : IAcpTransport
{
    private readonly StreamReader _reader;
    private readonly StreamWriter _writer;

    public StdioTransport(StreamReader reader, StreamWriter writer)
    {
        _reader = reader;
        _writer = writer;
    }

    // Implement send and receive methods
    // ...
}

Inspect the existing dynamic proxy generation logic and identify the root cause of why StreamJsonRpc cannot recognize it. In most cases, this happens because the proxy object does not publicly expose the actual method signatures, or it uses parameter types unsupported by StreamJsonRpc.

Split generic properties into explicit RPC method parameters. Instead of relying on dynamic properties, define concrete Request/Response DTOs (data transfer objects) so StreamJsonRpc can correctly recognize method signatures through reflection.

// Original generic property approach
public class CallbackProxyTarget<T>
{
    public Func<T, Task> Callback { get; set; }
}

// Refactored concrete method approach
public class ReadTextFileRequest
{
    public string FilePath { get; set; }
}

public class ReadTextFileResponse
{
    public string Content { get; set; }
}

public interface IAcpAgentCallback
{
    Task<ReadTextFileResponse> ReadTextFileAsync(ReadTextFileRequest request);
    // Other methods...
}

In some complex scenarios, manually proxying the JsonRpc object and handling RpcConnection can be more flexible than directly adding a local target.

3. Implement Method Binding and Stronger Logging

Section titled “3. Implement Method Binding and Stronger Logging”

Ensure that this component explicitly implements the StreamJsonRpc proxy interface and maps methods defined by the ACP protocol, such as ReadTextFileAsync, to StreamJsonRpc callback handlers.

Intercept and record the raw text of JSON-RPC requests and responses in the WebSocket or Stdio message processing pipeline. Use ILogger to output the raw payload before parsing and after serialization so formatting issues can be diagnosed more easily.

// Transport wrapper with enhanced logging
public class LoggingAcpTransport : IAcpTransport
{
    private readonly IAcpTransport _innerTransport;
    private readonly ILogger<LoggingAcpTransport> _logger;

    public LoggingAcpTransport(IAcpTransport innerTransport, ILogger<LoggingAcpTransport> logger)
    {
        _innerTransport = innerTransport;
        _logger = logger;
    }

    public async Task SendAsync(string message, CancellationToken cancellationToken = default)
    {
        _logger.LogTrace("Sending message: {Message}", message);
        await _innerTransport.SendAsync(message, cancellationToken);
    }

    public async Task<string> ReceiveAsync(CancellationToken cancellationToken = default)
    {
        var message = await _innerTransport.ReceiveAsync(cancellationToken);
        _logger.LogTrace("Received message: {Message}", message);
        return message;
    }

    public async Task CloseAsync(CancellationToken cancellationToken = default)
    {
        _logger.LogDebug("Closing connection");
        await _innerTransport.CloseAsync(cancellationToken);
    }
}

Wrap the StreamJsonRpc connection and make it responsible for InvokeAsync calls and connection lifecycle management.

public class AcpRpcClient : IDisposable
{
    private readonly JsonRpc _rpc;
    private readonly IAcpTransport _transport;

    public AcpRpcClient(IAcpTransport transport)
    {
        _transport = transport;
        _rpc = new JsonRpc(new StreamRpcTransport(transport));
        _rpc.StartListening();
    }

    public async Task<TResponse> InvokeAsync<TResponse>(string methodName, object parameters)
    {
        return await _rpc.InvokeAsync<TResponse>(methodName, parameters);
    }

    public void Dispose()
    {
        _rpc.Dispose();
        // IAcpTransport does not declare Dispose; dispose the concrete
        // transport only if it implements IDisposable
        (_transport as IDisposable)?.Dispose();
    }

    // StreamRpcTransport is the StreamJsonRpc adapter for IAcpTransport
    private class StreamRpcTransport : IDuplexPipe
    {
        // Implement the IDuplexPipe interface
        // ...
    }
}

Protocol Layer (IAcpAgentClient / IAcpAgentCallback)

Section titled “Protocol Layer (IAcpAgentClient / IAcpAgentCallback)”

Define clear client-to-agent and agent-to-client interfaces. Remove the cyclic factory pattern Func<IAcpAgent, IAcpClient> and replace it with dependency injection or direct callback registration.
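The shape of that change, sketched in TypeScript for brevity (the class and method names here are illustrative, not the actual HagiCode types): construct the two sides independently, then wire the callback explicitly instead of making each side's factory require the other.

```typescript
// Illustrative sketch: breaking a cyclic factory (client needs agent,
// agent needs client) by constructing both independently and
// registering the callback afterwards.
interface AgentCallback {
  onNotify(message: string): void
}

class AcpClient {
  private callback?: AgentCallback

  // Explicit registration replaces the Func<IAcpAgent, IAcpClient>-style cycle.
  registerCallback(cb: AgentCallback): void {
    this.callback = cb
  }

  notify(message: string): void {
    this.callback?.onNotify(message)
  }
}

class AcpAgent implements AgentCallback {
  readonly received: string[] = []
  onNotify(message: string): void {
    this.received.push(message)
  }
}

// Wiring: no cycle at construction time.
const client = new AcpClient()
const agent = new AcpAgent()
client.registerCallback(agent)
client.notify('session started')
```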

Based on StreamJsonRpc best practices and project experience, the following are the key recommendations for implementation:

1. Strongly Typed DTOs Are Better Than Dynamic Objects

Section titled “1. Strongly Typed DTOs Are Better Than Dynamic Objects”

The core advantage of StreamJsonRpc lies in strong typing. Do not use dynamic or JObject to pass parameters. Instead, define explicit C# POCO classes as parameters for each RPC method. This not only solves the proxy target recognition issue, but also catches type errors at compile time.

Example: replace the generic properties in CallbackProxyTarget with concrete classes such as ReadTextFileRequest and WriteTextFileRequest.

2. Explicitly Specify RPC Method Names

Section titled “2. Explicitly Specify RPC Method Names”

Use the [JsonRpcMethod] attribute to explicitly specify RPC method names instead of relying on default method-name mapping. This prevents invocation failures caused by naming-style differences such as PascalCase versus camelCase.

public interface IAcpAgentCallback
{
    [JsonRpcMethod("readTextFile")]
    Task<ReadTextFileResponse> ReadTextFileAsync(ReadTextFileRequest request);

    [JsonRpcMethod("writeTextFile")]
    Task WriteTextFileAsync(WriteTextFileRequest request);
}

3. Take Advantage of Connection State Callbacks

Section titled “3. Take Advantage of Connection State Callbacks”

StreamJsonRpc raises the JsonRpc.Disconnected event when the connection ends. Listen for this event to handle unexpected process exits or network disconnections; it is more timely than relying only on Orleans Grain failure detection.

_rpc.Disconnected += (sender, e) =>
{
    _logger.LogError("RPC connection lost: {Reason}", e.Reason);
    // Handle reconnection logic or notify the user
};

4. Use Layered Log Levels

Section titled “4. Use Layered Log Levels”
  • Trace level: Record the full raw JSON request/response payload.
  • Debug level: Record method call stacks and parameter summaries.
  • Note: Make sure the logs do not include sensitive Authorization tokens or Base64-encoded large-file content.
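One way to enforce that last point is to scrub payloads before they reach the trace log. A hypothetical helper sketched below; the field-name patterns and the 1024-character threshold are assumptions to adapt to your protocol:

```typescript
// Hypothetical log-scrubbing helper: redacts token-like fields and truncates
// very large string values (e.g. Base64 file content) before tracing.
function redactPayload(rawJson: string): string {
  const MAX_VALUE_LENGTH = 1024
  const scrub = (value: unknown): unknown => {
    if (value === null || typeof value !== 'object') return value
    const obj = value as Record<string, unknown>
    for (const key of Object.keys(obj)) {
      if (/authorization|token|apikey|password/i.test(key)) {
        obj[key] = '[REDACTED]'
      } else if (typeof obj[key] === 'string' && (obj[key] as string).length > MAX_VALUE_LENGTH) {
        obj[key] = `[${(obj[key] as string).length} chars omitted]`
      } else {
        scrub(obj[key])
      }
    }
    return obj
  }
  try {
    return JSON.stringify(scrub(JSON.parse(rawJson)))
  } catch {
    return rawJson // not valid JSON; log as-is
  }
}
```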

5. Handle the Special Nature of Streaming Transport

Section titled “5. Handle the Special Nature of Streaming Transport”

StreamJsonRpc natively supports IAsyncEnumerable. When implementing streaming prompt responses for ACP, use IAsyncEnumerable directly instead of custom pagination logic. This can greatly simplify the amount of code needed for streaming processing.

public interface IAcpAgentCallback
{
    [JsonRpcMethod("streamText")]
    IAsyncEnumerable<string> StreamTextAsync(StreamTextRequest request);
}

Keep ACPSession and ClientSideConnection separate. ACPSession should focus on Orleans state management and business logic, such as message enqueueing, and should use the StreamJsonRpc connection object through composition rather than inheritance.
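Sketched generically (in TypeScript, with illustrative names rather than the real HagiCode types), composition here means the session holds the connection as a field and delegates to it:

```typescript
// Illustrative sketch: the session *has a* connection (composition)
// rather than *is a* connection (inheritance).
class RpcConnection {
  invoke(method: string, params: unknown): string {
    // stand-in for the real StreamJsonRpc InvokeAsync call
    return `invoked ${method}`
  }
}

class AcpSession {
  // transport/protocol concerns stay inside RpcConnection
  constructor(private readonly rpc: RpcConnection) {}

  // business logic (e.g. message enqueueing) lives in the session
  enqueueMessage(text: string): string {
    return this.rpc.invoke('sendMessage', { text })
  }
}

const session = new AcpSession(new RpcConnection())
const result = session.enqueueMessage('hello')
```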

By comprehensively integrating StreamJsonRpc, the HagiCode project successfully addressed the high maintenance cost, functional limitations, and architectural layering confusion of the original custom implementation. The key improvements include:

  1. Replacing dynamic properties with strongly typed DTOs, improving maintainability and reliability
  2. Implementing transport-layer abstraction and protocol-layer separation, improving architectural clarity
  3. Strengthening logging capabilities to make communication problems easier to diagnose
  4. Introducing streaming support to simplify streaming implementation

These improvements provide HagiCode with a more stable and more efficient communication foundation, allowing it to interact better with external AI tools and laying a solid foundation for future feature expansion.


Thank you for reading. If you found this article useful, please click the like button below so more people can discover it.

This content was created with AI-assisted collaboration and reviewed by me, and it reflects my own views and position.

Best Practices for Building a Modern Build System with C# and Nuke

Say Goodbye to Script Hell: Why We Chose C# for a Modern Build System

Section titled “Say Goodbye to Script Hell: Why We Chose C# for a Modern Build System”

A look at how the HagiCode project uses Nuke to build a type-safe, cross-platform, and highly extensible automated build workflow, fully addressing the maintenance pain points of traditional build scripts.

Throughout the long journey of software development, the word “build” tends to inspire both love and frustration. We love it because with a single click, code becomes a product, which is one of the most rewarding moments in programming. We hate it because maintaining that pile of messy build scripts can feel like a nightmare.

In many projects, we are used to writing scripts in Python or using XML configuration files (just imagine the fear of being ruled by <property> tags). But as project complexity grows, especially in projects like HagiCode that involve frontend and backend work, multiple platforms, and multiple languages, traditional build approaches start to show their limits. Scattered script logic, no type checking, weak IDE support… these issues become small traps that repeatedly trip up the development team.

To solve these pain points, in the HagiCode project we decided to introduce Nuke - a modern build system based on C#. It is more than just a tool; it is a new way of thinking about build workflows. Today, let us talk about why we chose it and how it has made our development experience take off.

Hey, let us introduce what we are building

We are developing HagiCode - an AI-powered intelligent coding assistant that makes development smarter, more convenient, and more enjoyable.

Smarter - AI assistance throughout the entire process, from idea to code, multiplying development efficiency. Convenient - Multi-threaded concurrent operations make full use of resources and keep the development workflow smooth. Enjoyable - Gamification mechanisms and an achievement system make coding less tedious and far more rewarding.

The project is evolving quickly. If you are interested in technical writing, knowledge management, or AI-assisted development, feel free to check us out on GitHub~

You might be wondering: “There are so many build systems, like Make, Gradle, or even plain Shell scripts. Why go out of the way to use one built on C#?”

That is actually a great question. Nuke’s core appeal is that it brings the programming language features we know best into the world of build scripts.

1. Modularizing the Build Workflow: The Art of Targets

Section titled “1. Modularizing the Build Workflow: The Art of Targets”

Nuke has a very clear design philosophy: everything is a target.

In traditional scripts, we may end up with hundreds of lines of sequential code and tangled logic. In Nuke, we break the build workflow into independent Targets. Each target is responsible for just one thing, for example:

  • Clean: clean the output directory
  • Restore: restore dependency packages
  • Compile: compile the code
  • Test: run unit tests

This design aligns well with the single responsibility principle. Like building with blocks, we can combine these targets however we want. More importantly, Nuke lets us define dependencies between targets. For example, if you want Test, the system will automatically check whether Compile has already run; if you want Compile, then Restore naturally has to come first.

This dependency graph not only makes the logic clearer, but also greatly improves execution efficiency, because Nuke automatically analyzes the optimal execution path.
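The mechanics of that resolution are easy to picture: a depth-first walk of the target graph that runs each prerequisite exactly once, before anything that depends on it. A language-agnostic sketch (in TypeScript here; Nuke does this for you in C#):

```typescript
// Minimal sketch of target-dependency resolution: for a requested target,
// visit dependencies depth-first so each prerequisite appears exactly once,
// before its dependents.
const dependsOn: Record<string, string[]> = {
  Clean: [],
  Restore: [],
  Compile: ['Restore'],
  Test: ['Compile'],
  Pack: ['Test'],
}

function executionOrder(target: string, done = new Set<string>(), order: string[] = []): string[] {
  if (done.has(target)) return order
  done.add(target)
  for (const dep of dependsOn[target] ?? []) {
    executionOrder(dep, done, order)
  }
  order.push(target)
  return order
}
```

Asking for `Test` yields `Restore → Compile → Test`, exactly the behavior described above.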

2. Type Safety: Saying Goodbye to the Nightmare of Typos

Section titled “2. Type Safety: Saying Goodbye to the Nightmare of Typos”

Anyone who has written build scripts in Python has probably experienced this embarrassment: the script runs for five minutes and then fails because Confi.guration was misspelled, or because a string was passed to a parameter that was supposed to be a number.

The biggest advantage of writing build scripts in C# is type safety. That means:

  • Compile-time checks: while you are typing, the IDE tells you what is wrong instead of waiting until runtime to reveal the issue.
  • Safe refactoring: if you want to rename a variable or method, the IDE can handle it with one refactor action instead of a nervous global search-and-replace.
  • Intelligent completion: powerful IntelliSense completes the code for you, so you do not need to dig through documentation to remember obscure APIs.

3. Cross-Platform: A Unified Build Experience

Section titled “3. Cross-Platform: A Unified Build Experience”

In the past, you might write .bat files on Windows and .sh files on Linux, then add a Python script just to bridge the two. Now, wherever .NET Core (now .NET 5+) can run, Nuke can run too.

This means that whether team members use Windows, Linux, or macOS, and whether they prefer Visual Studio, VS Code, or Rider, everyone executes the same logic. That greatly reduces the environment-specific problems behind the classic “it works on my machine” scenario.

4. Elegant Parameter Parsing

Section titled “4. Elegant Parameter Parsing”

Nuke provides a very elegant parameter parsing mechanism. You do not need to manually parse string[] args. You only need to define a property and add the [Parameter] attribute, and Nuke will automatically map command-line arguments and configuration files for you.

For example, we can easily define the build configuration:

[Parameter("Configuration to build - Default is 'Debug'")]
readonly Configuration BuildConfiguration = IsLocalBuild ? Configuration.Debug : Configuration.Release;

Target Compile => _ => _
    .DependsOn(Restore)
    .Executes(() =>
    {
        // Use BuildConfiguration here; it is type-safe
        DotNetBuild(s => s
            .SetConfiguration(BuildConfiguration)
            .SetProjectFile(SolutionFile));
    });

This style is both intuitive and less error-prone.

Practical Guide: How to Apply It in a Project

Section titled “Practical Guide: How to Apply It in a Project”

Talking is easy; implementation is what matters. Let us take a look at how we put this approach into practice in the HagiCode project.

We did not want build scripts cluttering the project root, and we also did not want a directory structure so deep that it feels like certain Java projects. So we placed all Nuke-related build files in the nukeBuild/ directory.

The benefits are straightforward:

  • the project root stays clean;
  • the build logic remains cohesive and easy to manage;
  • when new team members join, they can immediately see, “oh, this is where the build logic lives.”

When designing targets, we followed one principle: atomic tasks + dependency flow.

Each target should be small enough to do exactly one thing. For example, Clean should only delete files; do not sneak packaging into it.

A recommended dependency flow looks roughly like this:

Clean -> Restore -> Compile -> Test -> Pack

Of course, this is not absolute. For example, if you only want to run tests and do not want to package, Nuke allows you to run nuke Test directly, and it will automatically take care of the required Restore and Compile steps.

What is the most frustrating thing about build scripts? Unclear error messages. For example, if a build fails and the log only says “Error: 1”, that is enough to drive anyone crazy.

In Nuke, because we can directly use C# exception handling, we can capture and report errors with much greater precision.

Target Publish => _ => _
    .DependsOn(Test)
    .Executes(() =>
    {
        try
        {
            // Try publishing to NuGet
            DotNetNuGetPush(s => s
                .SetTargetPath(ArtifactPath)
                .SetSource("https://api.nuget.org/v3/index.json")
                .SetApiKey(ApiKey));
        }
        catch (Exception ex)
        {
            Log.Error($"Publishing failed. Team, please check whether the key is correct: {ex.Message}");
            throw; // Ensure the build process exits with a non-zero code
        }
    });

A build script is still code, and code should be tested. Nuke allows us to write tests for the build workflow, ensuring that when we modify the build logic, we do not break the existing release process. This is especially important in continuous integration (CI) pipelines.

By introducing Nuke, HagiCode’s build process has become smoother than ever before. This is not just a tool replacement; it is an upgrade in engineering thinking.

What did we gain?

  • Maintainability: code as configuration, clear logic, and a faster onboarding path for new team members.
  • Stability: strong typing catches the vast majority of low-level mistakes at compile time.
  • Consistency: a unified cross-platform experience removes environment differences.

If writing build scripts used to feel like “feeling your way through the dark,” then using Nuke feels like “walking at night with the lights on.” If you are tired of maintaining hard-to-debug scripting languages, try bringing your build logic into the world of C# as well. You may discover that build systems can actually be this elegant.


Thank you for reading. If you found this article useful, please click the like button below 👍 so more people can discover it.

This content was created with AI-assisted collaboration, reviewed by me, and reflects my own views and position.

HagiCode in Practice: How to Use GitHub Actions for Docusaurus Automated Deployment

Adding GitHub Pages Automated Deployment Support to HagiCode

Section titled “Adding GitHub Pages Automated Deployment Support to HagiCode”

The project’s early codename was PCode, and it has now officially been renamed HagiCode. This article records how we introduced automated static site deployment for the project, making publishing content effortless.

During HagiCode development, we ran into a very practical problem: as the amount of documentation and proposals kept growing, efficiently managing and presenting that content became increasingly urgent. We decided to use GitHub Pages to host our static site, but building and deploying manually was simply too much trouble. Every change required a local build, packaging, and then a manual push to the gh-pages branch. That was not only inefficient, but also error-prone.

To solve this problem (mostly because we wanted to be lazy), we needed an automated deployment workflow. This article documents in detail how we added GitHub Actions-based automated deployment support to the HagiCode project, so we can focus on creating content and leave the rest to automation.

Hey, let us introduce what we are building

We are developing HagiCode - an AI-powered coding assistant that makes development smarter, easier, and more enjoyable.

Smarter - AI assistance throughout the whole process, from ideas to code, multiplying coding efficiency. More convenient - multi-threaded concurrent operations that make full use of resources and keep the development workflow smooth. More enjoyable - gamification mechanisms and an achievement system that make coding less dull and far more rewarding.

The project is evolving quickly. If you are interested in technical writing, knowledge management, or AI-assisted development, feel free to check it out on GitHub~

Before getting started, we first need to clarify what exactly this task is supposed to accomplish. After all, sharpening the axe does not delay the work.

  1. Automated build: Automatically trigger the build process when code is pushed to the main branch.
  2. Automated deployment: After a successful build, automatically deploy the generated static files to GitHub Pages.
  3. Environment consistency: Ensure the CI environment matches the local build environment to avoid the awkward “it works locally but fails in production” situation.

Since HagiCode is built on Docusaurus (a very popular React static site generator), we can use GitHub Actions to achieve this goal.

GitHub Actions is the CI/CD service provided by GitHub. By defining workflow files in YAML format inside the repository, we can customize a variety of automation tasks.

We need to create a new configuration file in the .github/workflows folder under the project root, for example deploy.yml. If the folder does not exist, remember to create it manually first.

The core logic of this configuration file is as follows:

  1. Trigger condition: Listen for push events on the main branch.
  2. Runtime environment: The latest Ubuntu.
  3. Build steps:
    • Check out the code
    • Install Node.js
    • Install dependencies (npm install)
    • Build the static files (npm run build)
  4. Deployment step: Use the official GitHub Pages actions (upload-pages-artifact and deploy-pages) to publish the build artifacts to GitHub Pages.

Below is the configuration template we ultimately adopted:

name: Deploy to GitHub Pages

# Trigger condition: when pushing to the main branch
on:
  push:
    branches:
      - main
    # You can add path filters as needed, for example only build when docs change
    # paths:
    #   - 'docs/**'
    #   - 'package.json'

# Set permissions, which are important for deploying to GitHub Pages
permissions:
  contents: read
  pages: write
  id-token: write

# Concurrency control: queue runs for the Pages deployment,
# without cancelling an in-progress production deployment
concurrency:
  group: "pages"
  cancel-in-progress: false

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          # Note: set fetch-depth: 0, otherwise the build version may be inaccurate
          fetch-depth: 0
      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: 20 # Recommended to match your local development environment
          cache: 'npm' # Enabling cache can speed up the build process
      - name: Install dependencies
        # Use npm ci instead of npm install because it is faster, stricter, and better suited for CI
        run: npm ci
      - name: Build website
        run: npm run build
        # If your site build requires environment variables, configure them here
        # env:
        #   NODE_ENV: production
        #   PUBLIC_URL: /your-repo-name
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v3
        with:
          path: ./build # Default Docusaurus output directory

  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4

Pitfalls Encountered During Implementation

Section titled “Pitfalls Encountered During Implementation”

In practice, we ran into a few issues. I am sharing them here in the hope that everyone can avoid them, or at least prepare solutions in advance.

When we first set things up, deployment kept failing with a 403 (Forbidden) error. After a long investigation, we discovered that GitHub’s default GITHUB_TOKEN did not have permission to write to Pages.

Solution: In the repository Settings -> Actions -> General -> Workflow permissions, make sure to choose “Read and write permissions”.

By default, Docusaurus puts the built static files in the build directory. However, some projects may use different configurations. For example, Create React App defaults to build, while Vite defaults to dist. If Actions reports that it cannot find files, remember to check the output path configuration in docusaurus.config.js.

If your repository is not a user homepage (that is, not username.github.io) but instead a project page (such as username.github.io/project-name), you need to configure baseUrl.

In docusaurus.config.js:

module.exports = {
  // ...
  url: 'https://hagicode.com', // your site URL
  baseUrl: '/', // deployed at the root path; use '/project-name/' for a project page
  // ...
};

This detail is easy to overlook. If it is configured incorrectly, the page may load as a blank screen because the resource paths cannot be resolved.

After configuring everything and pushing the code, we can head to the Actions tab in the GitHub repository and enjoy the show.

You will see a yellow circle while the workflow is running. When it turns green, it means success. If it turns red, click into the logs to inspect the issue. Usually, you can track it down there, and most of the time it is a typo or an incorrect path configuration.

Once the build succeeds, visit https://<your-username>.github.io/<repo-name>/ and you should see your brand-new site.

By introducing GitHub Actions, we successfully implemented automated deployment for the HagiCode documentation site. This not only saves the time previously spent on manual operations, but more importantly standardizes the release process. Now, no matter which teammate updates the documentation, as long as the changes are merged into the main branch, the latest content will appear online a few minutes later.

Key benefits:

  • Higher efficiency: from “manual packaging and manual upload” to “code is the release”.
  • Fewer errors: removes the possibility of human operational mistakes.
  • Better experience: lets developers focus more on content quality instead of being distracted by tedious deployment steps.

Although setting up CI/CD can be a bit troublesome at first, especially with all the permissions and path issues, it is a one-time investment with huge long-term returns. I strongly recommend that every static site project adopt a similar automated workflow.


Thank you for reading. If you found this article useful, click the like button below 👍 so more people can discover it.

This content was created with AI-assisted collaboration, reviewed by me, and reflects my own views and position.

How to Use GitHub Actions + image-syncer for Automated Image Sync from Docker Hub to Azure ACR

Automating Image Sync from Docker Hub to Azure ACR

Section titled “Automating Image Sync from Docker Hub to Azure ACR”

This article explains how to use GitHub Actions and the image-syncer tool to automate image synchronization from Docker Hub to Azure Container Registry, solving the problem of slow Docker Hub access in mainland China and some Azure regions, while improving image availability and deployment efficiency in Azure environments.

The HagiCode project uses Docker images as its core runtime components, with the main images hosted on Docker Hub. As the project has evolved and Azure deployment needs have grown, we encountered the following pain points:

  • Slow image pulls, because access to Docker Hub is limited in mainland China and some Azure regions
  • Relying on a single image source creates a single point of failure risk
  • Using Azure Container Registry in Azure environments provides better network performance and integration experience

To solve these problems, we need to establish an automated image synchronization mechanism that regularly syncs images from Docker Hub to Azure ACR, ensuring users get faster image pull speeds and higher availability in Azure environments.

We are building HagiCode, an AI-driven coding assistant that makes development smarter, more convenient, and more enjoyable.

Smart: AI assistance throughout the entire process, from idea to code, boosting coding efficiency several times over.
Convenient: Multi-threaded concurrent operations make full use of resources and keep the development workflow smooth.
Fun: Gamification and an achievement system make coding less dull and more rewarding.

The project is evolving rapidly. If you are interested in technical writing, knowledge management, or AI-assisted development, welcome to check it out on GitHub.

When defining the solution, we compared multiple technical approaches:

Option 1: image-syncer

  • Incremental sync: only synchronizes changed image layers, significantly reducing network transfer
  • Resume support: synchronization can resume after network interruptions
  • Concurrency control: supports configurable concurrency to improve large image sync efficiency
  • Robust error handling: built-in retry mechanism for failures (3 times by default)
  • Lightweight deployment: single binary with no dependencies
  • Multi-registry support: compatible with Docker Hub, Azure ACR, Harbor, and more

Option 2: docker pull / docker push scripts

  • Simple and easy to use: relies on familiar docker pull / docker push commands
  • No incremental sync support: each run requires pulling the full image content
  • Lower efficiency: large network transfer volume and longer execution time

Option 3: az acr import

  • Native integration: integrates well with Azure services
  • Higher complexity: requires Azure CLI authentication setup
  • Functional limitations: az acr import is relatively limited

Decision 1: Set the sync frequency to daily at 00:00 UTC

Section titled “Decision 1: Set the sync frequency to daily at 00:00 UTC”
  • Balances image freshness with resource consumption
  • Avoids peak business hours and reduces impact on other operations
  • Docker Hub images are usually updated after daily builds

Decision 2: Synchronize all tags

Section titled “Decision 2: Synchronize all tags”
  • Maintains full consistency with Docker Hub
  • Provides flexible version choices for users
  • Simplifies sync logic by avoiding complex tag filtering rules

Decision 3: Store credentials in GitHub Secrets

Section titled “Decision 3: Store credentials in GitHub Secrets”
  • Natively supported by GitHub Actions with strong security
  • Simple to configure and easy to manage and maintain
  • Supports repository-level access control

Security measures:

  • Use GitHub Secrets for encrypted storage
  • Rotate ACR passwords regularly
  • Limit ACR account permissions to push-only
  • Monitor ACR access logs

Risk 2: Sync failures causing image inconsistency

Section titled “Risk 2: Sync failures causing image inconsistency”
  • image-syncer includes a built-in incremental sync mechanism
  • Automatic retry on failure (3 times by default)
  • Detailed error logs and failure notifications
  • Resume support
Performance considerations:

  • Incremental sync reduces network transfer
  • Configurable concurrency (10 in the current setup)
  • Monitor the number and size of synchronized images
  • Run synchronization during off-peak hours

We use an automated GitHub Actions + image-syncer solution to synchronize images from Docker Hub to Azure ACR.

  • Create or confirm an Azure Container Registry in Azure Portal
  • Create ACR access credentials (username and password)
  • Confirm access permissions for the Docker Hub image repository

Add the following secrets in the GitHub repository settings:

  • AZURE_ACR_USERNAME: Azure ACR username
  • AZURE_ACR_PASSWORD: Azure ACR password

Configure the workflow in .github/workflows/sync-docker-acr.yml:

  • Scheduled trigger: every day at 00:00 UTC
  • Manual trigger: supports workflow_dispatch
  • Extra trigger: run when the publish branch receives a push (for fast synchronization)
| Sequence | Participant | Action | Description |
| --- | --- | --- | --- |
| 1 | GitHub Actions | Trigger workflow | Triggered by schedule, manual run, or a push to the publish branch |
| 2 | GitHub Actions → image-syncer | Download and run the sync tool | Enter the actual sync phase |
| 3 | image-syncer → Docker Hub | Fetch image manifests and tag list | Read source repository metadata |
| 4 | image-syncer → Azure ACR | Fetch existing image information from the target repository | Determine the current target-side state |
| 5 | image-syncer | Compare source and target differences | Identify image layers that need to be synchronized |
| 6 | image-syncer → Docker Hub | Pull changed image layers | Transfer only the content that needs updating |
| 7 | image-syncer → Azure ACR | Push changed image layers | Complete incremental synchronization |
| 8 | image-syncer → GitHub Actions | Return synchronization statistics | Includes results, differences, and error information |
| 9 | GitHub Actions | Record logs and upload artifacts | Useful for follow-up auditing and troubleshooting |

Here is the actual workflow configuration in use (.github/workflows/sync-docker-acr.yml):

name: Sync Docker Image to Azure ACR

on:
  schedule:
    - cron: "0 0 * * *" # Every day at 00:00 UTC
  workflow_dispatch: # Manual trigger
  push:
    branches: [publish]

permissions:
  contents: read

jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Download image-syncer
        run: |
          # Download the image-syncer binary
          wget https://github.com/AliyunContainerService/image-syncer/releases/download/v1.5.5/image-syncer-v1.5.5-linux-amd64.tar.gz
          tar -zxvf image-syncer-v1.5.5-linux-amd64.tar.gz
          chmod +x image-syncer

      - name: Create auth config
        run: |
          # Generate the authentication configuration file (YAML format)
          cat > auth.yaml <<EOF
          hagicode.azurecr.io:
            username: "${{ secrets.AZURE_ACR_USERNAME }}"
            password: "${{ secrets.AZURE_ACR_PASSWORD }}"
          EOF

      - name: Create images config
        run: |
          # Generate the image synchronization configuration file (YAML format)
          cat > images.yaml <<EOF
          docker.io/newbe36524/hagicode: hagicode.azurecr.io/hagicode
          EOF

      - name: Run image-syncer
        run: |
          # Run synchronization (using the --auth and --images parameters)
          ./image-syncer --auth=./auth.yaml --images=./images.yaml --proc=10 --retries=3

      - name: Upload logs
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: sync-logs
          path: image-syncer-*.log
          retention-days: 7
1. Trigger configuration

Section titled “1. Trigger configuration”
  • Scheduled trigger: cron: "0 0 * * *" - runs every day at 00:00 UTC
  • Manual trigger: workflow_dispatch - allows users to run it manually in the GitHub UI
  • Push trigger: push: branches: [publish] - triggered when the publish branch receives a push (for fast synchronization)

2. Authentication configuration (auth.yaml)

Section titled “2. Authentication configuration (auth.yaml)”
hagicode.azurecr.io:
  username: "${{ secrets.AZURE_ACR_USERNAME }}"
  password: "${{ secrets.AZURE_ACR_PASSWORD }}"

3. Image synchronization configuration (images.yaml)

Section titled “3. Image synchronization configuration (images.yaml)”

docker.io/newbe36524/hagicode: hagicode.azurecr.io/hagicode

This configuration means synchronizing all tags from docker.io/newbe36524/hagicode to hagicode.azurecr.io/hagicode.

  • --auth=./auth.yaml: path to the authentication configuration file
  • --images=./images.yaml: path to the image synchronization configuration file
  • --proc=10: set concurrency to 10
  • --retries=3: retry failures 3 times

Configure the following in Settings → Secrets and variables → Actions in the GitHub repository:

| Secret Name | Description | Example Value | How to Get It |
| --- | --- | --- | --- |
| AZURE_ACR_USERNAME | Azure ACR username | hagicode | Azure Portal → ACR → Access keys |
| AZURE_ACR_PASSWORD | Azure ACR password | xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx | Azure Portal → ACR → Access keys → Password |
To trigger synchronization manually:

  1. Open the Actions tab of the GitHub repository
  2. Select the Sync Docker Image to Azure ACR workflow
  3. Click the Run workflow button
  4. Choose the branch and click Run workflow to confirm

To view synchronization logs:

  1. Click a specific workflow run record on the Actions page
  2. View the execution logs for each step
  3. Download the sync-logs file from the Artifacts section at the bottom of the page
To verify the synced images from the command line:

# Log in to Azure ACR
az acr login --name hagicode
# List images and their tags
az acr repository show-tags --name hagicode --repository hagicode --output table
Security best practices:

  • Rotate Azure ACR passwords regularly (recommended every 90 days)
  • Use a dedicated ACR service account with push-only permissions
  • Monitor ACR access logs to detect abnormal access in time
  • Do not output credentials in logs
  • Do not commit credentials to the code repository

Performance optimization:

  • Adjust the --proc parameter: tune concurrency based on network bandwidth (recommended 5-20)
  • Monitor synchronization time: if it takes too long, consider reducing concurrency
  • Clean up logs regularly: set a reasonable retention-days value (7 days in the current setup)
Error: failed to authenticate to hagicode.azurecr.io

Solution:

  1. Check whether GitHub Secrets are configured correctly
  2. Verify whether the Azure ACR password has expired
  3. Confirm whether the ACR service account permissions are correct
Error: timeout waiting for response

Solution:

  1. Check network connectivity
  2. Reduce concurrency (--proc parameter)
  3. Wait for the network to recover and trigger the workflow again
Warning: some tags failed to sync

Solution:

  1. Check the synchronization logs to identify failed tags
  2. Manually trigger the workflow to synchronize again
  3. Verify that the source image on Docker Hub is working properly
  • Regularly check the Actions page to confirm workflow run status
  • Configure GitHub notifications to receive workflow failure alerts promptly
  • Monitor Azure ACR storage usage
  • Regularly verify tag consistency
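The last bullet, verifying tag consistency, can be scripted. Below is a hypothetical sketch (not part of the HagiCode setup) that compares the tag lists of the two registries. It assumes the public Docker Hub v2 tags endpoint and the OCI distribution tags/list endpoint that ACR exposes; registry names, repository paths, and credentials are placeholders.

```typescript
import { Buffer } from 'node:buffer';

// List tags of a public repository on Docker Hub (first page only, for brevity)
async function listDockerHubTags(namespace: string, repo: string): Promise<string[]> {
  const res = await fetch(
    `https://hub.docker.com/v2/repositories/${namespace}/${repo}/tags?page_size=100`,
  );
  const data = (await res.json()) as { results: { name: string }[] };
  return data.results.map((t) => t.name);
}

// List tags in ACR via the OCI distribution API, using basic auth
async function listAcrTags(registry: string, repo: string, user: string, pass: string): Promise<string[]> {
  const auth = Buffer.from(`${user}:${pass}`).toString('base64');
  const res = await fetch(`https://${registry}/v2/${repo}/tags/list`, {
    headers: { Authorization: `Basic ${auth}` },
  });
  const data = (await res.json()) as { tags: string[] };
  return data.tags;
}

// Tags present in the source registry but missing in the target
export function missingTags(source: string[], target: string[]): string[] {
  const have = new Set(target);
  return source.filter((t) => !have.has(t));
}
```

If missingTags returns a non-empty list, trigger the workflow manually to re-sync.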

Q1: How do I sync specific tags instead of all tags?

Section titled “Q1: How do I sync specific tags instead of all tags?”

Modify the images.yaml configuration file:

# Sync only the latest and v1.0 tags
docker.io/newbe36524/hagicode:latest: hagicode.azurecr.io/hagicode:latest
docker.io/newbe36524/hagicode:v1.0: hagicode.azurecr.io/hagicode:v1.0

Q2: How do I sync multiple image repositories?

Section titled “Q2: How do I sync multiple image repositories?”

Add multiple lines in images.yaml:

docker.io/newbe36524/hagicode: hagicode.azurecr.io/hagicode
docker.io/newbe36524/another-image: hagicode.azurecr.io/another-image

Q3: How do I retry after a synchronization failure?

Section titled “Q3: How do I retry after a synchronization failure?”
  • Automatic retry: image-syncer includes a built-in retry mechanism (3 times by default)
  • Manual retry: click Re-run all jobs on the GitHub Actions page

Q4: How do I view detailed synchronization progress?

Section titled “Q4: How do I view detailed synchronization progress?”
  • View real-time logs on the Actions page
  • Download the sync-logs artifact to see the full log file
  • The log file includes the synchronization status and transfer speed for each tag
Q5: How long does synchronization take?

Section titled “Q5: How long does synchronization take?”
  • Initial full synchronization: typically takes 10-30 minutes depending on image size
  • Incremental synchronization: usually 2-5 minutes if image changes are small
  • Time depends on network bandwidth, image size, and concurrency settings

1. Add a synchronization result notification

Section titled “1. Add a synchronization result notification”

Add a notification step to the workflow:

- name: Notify on success
  if: success()
  run: |
    echo "Docker images synced successfully to Azure ACR"

2. Add tag filtering

Section titled “2. Add tag filtering”

Add tag filtering logic to the workflow:

- name: Filter tags
  run: |
    # Sync only tags that start with v
    echo "docker.io/newbe36524/hagicode:v* : hagicode.azurecr.io/hagicode:v*" > images.yaml

3. Add a synchronization statistics report

Section titled “3. Add a synchronization statistics report”
- name: Generate report
  if: always()
  run: |
    echo "## Sync Report" >> $GITHUB_STEP_SUMMARY
    echo "- Total tags: $(grep -c 'synced' image-syncer-*.log)" >> $GITHUB_STEP_SUMMARY
    echo "- Sync time: ${{ steps.sync.outputs.duration }}" >> $GITHUB_STEP_SUMMARY

With the method introduced in this article, we successfully implemented automated image synchronization from Docker Hub to Azure ACR. This solution uses the scheduled and manual trigger capabilities of GitHub Actions together with the incremental synchronization and error-handling features of image-syncer to ensure timely and consistent image synchronization.

We also discussed security best practices, performance optimization, troubleshooting, and other related topics to help users better manage and maintain this synchronization mechanism. We hope this article provides valuable reference material for developers who need to deploy Docker images in Azure environments.


Thank you for reading. If you found this article useful, please click the like button below 👍 so more people can discover it.

This content was created with AI-assisted collaboration, reviewed by me, and reflects my own views and position.

GitHub Issues Integration

Building a GitHub Issues Integration from Scratch: HagiCode’s Frontend Direct Connection Practice

Section titled “Building a GitHub Issues Integration from Scratch: HagiCode’s Frontend Direct Connection Practice”

This article documents the full process of integrating GitHub Issues into the HagiCode platform. We will explore how to use a “frontend direct connection + minimal backend” architecture to achieve secure OAuth authentication and efficient issue synchronization while keeping the backend lightweight.

As an AI-assisted development platform, HagiCode’s core value lies in connecting ideas with implementation. But in actual use, we found that after users complete a Proposal in HagiCode, they often need to manually copy the content into GitHub Issues for project tracking.

This creates several obvious pain points:

  1. Fragmented workflow: Users need to switch back and forth between two systems. The experience is not smooth, and key information can easily be lost during copy and paste.
  2. Inconvenient collaboration: Other team members are used to checking tasks on GitHub and cannot directly see proposal progress inside HagiCode.
  3. Repeated manual work: Every time a proposal is updated, someone has to manually update the corresponding issue on GitHub, adding unnecessary maintenance cost.

To solve this problem, we decided to introduce the GitHub Issues Integration feature, connecting HagiCode sessions with GitHub repositories to enable “one-click sync.”

Hey, let us introduce what we are building

We are building HagiCode — an AI-powered coding assistant that makes development smarter, easier, and more enjoyable.

Smarter — AI assists throughout the entire journey, from idea to code, multiplying development efficiency.
Easier — Multi-threaded concurrent operations make full use of resources and keep the development workflow smooth.
More enjoyable — Gamification and an achievement system make coding less tedious and more rewarding.

The project is iterating quickly. If you are interested in technical writing, knowledge management, or AI-assisted development, welcome to check us out on GitHub.


Technical Choice: Frontend Direct Connection vs Backend Proxy

Section titled “Technical Choice: Frontend Direct Connection vs Backend Proxy”

When designing the integration approach, we had two options in front of us: the traditional “backend proxy model” and the more aggressive “frontend direct connection model.”

In the traditional backend proxy model, every request from the frontend must first go through our backend, which then calls the GitHub API. This centralizes the logic, but it also puts a significant burden on the backend:

  1. Bloated backend: We would need to write a dedicated GitHub API client wrapper and also handle the complex OAuth state machine.
  2. Token risk: The user’s GitHub token would have to be stored in the backend database. Even with encryption, this still increases the security surface.
  3. Development cost: We would need database migrations to store tokens and an additional synchronization service to maintain.

The frontend direct connection model is much lighter. In this approach, we use the backend only for the most sensitive “secret exchange” step (the OAuth callback). After obtaining the token, we store it directly in the browser’s localStorage. Later operations such as creating issues and updating comments are sent directly from the frontend to GitHub over HTTP.

| Comparison Dimension | Backend Proxy Model | Frontend Direct Connection Model |
| --- | --- | --- |
| Backend complexity | Requires a full OAuth service and GitHub API client | Only needs an OAuth callback endpoint |
| Token management | Must be encrypted and stored in the database, with leakage risk | Stored in the browser and visible only to the user |
| Implementation cost | Requires database migrations and multi-service development | Primarily frontend work |
| User experience | Centralized logic, but server latency may be slightly higher | Extremely fast response with direct GitHub interaction |

Because we wanted rapid integration and minimal backend changes, we ultimately chose the “frontend direct connection model”. It is like giving the browser a “temporary pass.” Once it gets the pass, the browser can go handle things on GitHub by itself without asking the backend administrator for approval every time.


After settling on the architecture, we needed to design the specific data flow. The core of the synchronization process is how to obtain the token securely and use it efficiently.

The whole system can be abstracted into three roles: the browser (frontend), the HagiCode backend, and GitHub.

+--------------+      +--------------+      +--------------+
|   Frontend   |      |   Backend    |      |    GitHub    |
|    React     |      |   ASP.NET    |      |   REST API   |
|              |      |              |      |              |
|  +--------+  |      |              |      |              |
|  | OAuth  |--+----->|  /callback   |      |              |
|  | Flow   |  |      |              |      |              |
|  +--------+  |      |              |      |              |
|              |      |              |      |              |
|  +--------+  |      |  +--------+  |      |  +--------+  |
|  | GitHub |  +----->|  | Session|  +----->|  | Issues |  |
|  | API    |  |      |  |Metadata|  |      |  |        |  |
|  | Direct |  |      |  +--------+  |      |  +--------+  |
|  +--------+  |      |              |      |              |
+--------------+      +--------------+      +--------------+

The key point is: only one small step in OAuth (exchanging the code for a token) needs to go through the backend. After that, the heavy lifting (creating issues) is handled directly between the frontend and GitHub.

When the user clicks the “Sync to GitHub” button in the HagiCode UI, a series of complex actions takes place:

User clicks "Sync to GitHub"
1. Frontend checks localStorage for the GitHub token
2. Format issue content (convert the Proposal into Markdown)
3. Frontend directly calls the GitHub API to create/update the issue
4. Call the HagiCode backend API to update Session.metadata (store issue URL and other info)
5. Backend broadcasts the SessionUpdated event via SignalR
6. Frontend receives the event and updates the UI to show the "Synced" state

Security is always the top priority when integrating third-party services. We made the following considerations:

  1. Defend against CSRF attacks: Generate a random state parameter during the OAuth redirect and store it in sessionStorage. Strictly validate the state in the callback to prevent forged requests.
  2. Isolated token storage: The token is stored only in the browser’s localStorage. Using the Same-Origin Policy, only HagiCode scripts can read it, avoiding the risk of a server-side database leak affecting users.
  3. Error boundaries: We designed dedicated handling for common GitHub API errors (such as 401 expired token, 422 validation failure, and 429 rate limiting), so users receive friendly feedback.

In Practice: Implementation Details in Code

Section titled “In Practice: Implementation Details in Code”

Theory only goes so far. Let us look at how the code actually works.

The backend only needs to do two things: store synchronization information and handle the OAuth callback.

Database changes: We only need to add a Metadata column to the Sessions table to store extension data in JSON format.

-- Add metadata column to Sessions table
ALTER TABLE "Sessions" ADD COLUMN "Metadata" text NULL;

Entity and DTO definitions

src/HagiCode.DomainServices.Contracts/Entities/Session.cs
public class Session : AuditedAggregateRoot<SessionId>
{
    // ... other properties ...

    /// <summary>
    /// JSON metadata for storing extension data like GitHub integration
    /// </summary>
    public string? Metadata { get; set; }
}

// DTO definition for easier frontend serialization
public class GitHubIssueMetadata
{
    public required string Owner { get; set; }
    public required string Repo { get; set; }
    public int IssueNumber { get; set; }
    public required string IssueUrl { get; set; }
    public DateTime SyncedAt { get; set; }
    public string LastSyncStatus { get; set; } = "success";
}

public class SessionMetadata
{
    public GitHubIssueMetadata? GitHubIssue { get; set; }
}
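For reference, here is a hypothetical TypeScript mirror of these DTOs as the frontend would consume them; the JSON field names follow the keys used in the sync handler later in the article, and the interface names are my own.

```typescript
// Hypothetical frontend-side mirror of the backend DTOs; field names match the
// JSON keys the sync handler sends (e.g. metadata.githubIssue).
export interface GitHubIssueMetadata {
  owner: string;
  repo: string;
  issueNumber: number;
  issueUrl: string;
  syncedAt: string; // ISO 8601 timestamp
  lastSyncStatus: string;
}

export interface SessionMetadata {
  githubIssue?: GitHubIssueMetadata;
}
```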

This is the entry point of the connection. We use the standard Authorization Code Flow.

src/HagiCode.Client/src/services/githubOAuth.ts
// Generate the authorization URL and redirect
export async function generateAuthUrl(): Promise<string> {
  const state = generateRandomString(); // Generate a random string for CSRF protection
  sessionStorage.setItem('hagicode_github_state', state);

  const params = new URLSearchParams({
    client_id: clientId,
    redirect_uri: window.location.origin + '/settings?tab=github&oauth=callback',
    scope: ['repo', 'public_repo'].join(' '),
    state: state,
  });
  return `https://github.com/login/oauth/authorize?${params.toString()}`;
}

// Handle the code-to-token exchange on the callback page
export async function exchangeCodeForToken(code: string, state: string): Promise<GitHubToken> {
  // 1. Validate state to prevent CSRF
  const savedState = sessionStorage.getItem('hagicode_github_state');
  if (state !== savedState) throw new Error('Invalid state parameter');

  // 2. Call the backend API to exchange the token
  // Note: this must go through the backend because the ClientSecret cannot be exposed to the frontend
  const response = await fetch('/api/GitHubOAuth/callback', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ code, state, redirectUri: window.location.origin + '/settings?tab=github&oauth=callback' }),
  });
  if (!response.ok) throw new Error('Failed to exchange token');

  const token = await response.json();
  // 3. Save into localStorage
  saveToken(token);
  return token;
}

Once we have the token, we need a solid tool for calling the GitHub API.

src/HagiCode.Client/src/services/githubApiClient.ts
const GITHUB_API_BASE = 'https://api.github.com';

// Core request wrapper
async function githubApi<T>(endpoint: string, options: RequestInit = {}): Promise<T> {
  const token = localStorage.getItem('gh_token');
  if (!token) throw new Error('Not connected to GitHub');

  const response = await fetch(`${GITHUB_API_BASE}${endpoint}`, {
    ...options,
    headers: {
      ...options.headers,
      Authorization: `Bearer ${token}`,
      Accept: 'application/vnd.github.v3+json', // Specify the API version
    },
  });

  // Error handling logic
  if (!response.ok) {
    if (response.status === 401) throw new Error('GitHub token expired, please reconnect');
    if (response.status === 403) throw new Error('No permission to access this repository or rate limit exceeded');
    if (response.status === 422) throw new Error('Issue validation failed, the title may be duplicated');
    throw new Error(`GitHub API Error: ${response.statusText}`);
  }
  return response.json();
}

// Create issue
export async function createIssue(owner: string, repo: string, data: { title: string; body: string; labels: string[] }) {
  return githubApi(`/repos/${owner}/${repo}/issues`, {
    method: 'POST',
    body: JSON.stringify(data),
  });
}
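The article also mentions updating existing issues, but only creation is shown above. A possible companion function, sketched against GitHub's standard REST endpoint (PATCH /repos/{owner}/{repo}/issues/{issue_number}); passing the token explicitly instead of reading localStorage is my own simplification to keep the sketch self-contained.

```typescript
const GITHUB_API = 'https://api.github.com';

// Build the endpoint for a single issue
export function issueEndpoint(owner: string, repo: string, issueNumber: number): string {
  return `/repos/${owner}/${repo}/issues/${issueNumber}`;
}

// Update an existing issue via the GitHub REST API (hypothetical companion to createIssue)
export async function updateIssue(
  token: string,
  owner: string,
  repo: string,
  issueNumber: number,
  data: { title?: string; body?: string; state?: 'open' | 'closed' },
): Promise<unknown> {
  const response = await fetch(`${GITHUB_API}${issueEndpoint(owner, repo, issueNumber)}`, {
    method: 'PATCH',
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: 'application/vnd.github.v3+json',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(data),
  });
  if (!response.ok) throw new Error(`GitHub API Error: ${response.statusText}`);
  return response.json();
}
```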

The final step is to convert HagiCode session data into the format of a GitHub issue. It is a bit like translation work.

// Convert a Session object into a Markdown string
function formatIssueForSession(session: Session): string {
  let content = `# ${session.title}\n\n`;
  content += `> **HagiCode Session:** #${session.code}\n`;
  content += `> **Status:** ${session.status}\n\n`;
  content += `## Description\n\n${session.description || 'No description provided.'}\n\n`;

  // If this is a Proposal session, add extra fields
  if (session.type === 'proposal') {
    content += `## Chief Complaint\n\n${session.chiefComplaint || ''}\n\n`;
    // Add a deep link so users can jump back from GitHub to HagiCode
    content += `---\n\n**[View in HagiCode](hagicode://sessions/${session.id})**\n`;
  }
  return content;
}

// Main logic when clicking the sync button
const handleSync = async (session: Session) => {
  try {
    const repoInfo = parseRepositoryFromUrl(session.repoUrl); // Parse the repository URL
    if (!repoInfo) throw new Error('Invalid repository URL');

    toast.loading('Syncing to GitHub...');

    // 1. Format content
    const issueBody = formatIssueForSession(session);

    // 2. Call API
    const issue = await githubApiClient.createIssue(repoInfo.owner, repoInfo.repo, {
      title: `[HagiCode] ${session.title}`,
      body: issueBody,
      labels: ['hagicode', 'proposal', `status:${session.status}`],
    });

    // 3. Update Session Metadata (save the issue link)
    await SessionsService.patchApiSessionsSessionId(session.id, {
      metadata: {
        githubIssue: {
          owner: repoInfo.owner,
          repo: repoInfo.repo,
          issueNumber: issue.number,
          issueUrl: issue.html_url,
          syncedAt: new Date().toISOString(),
        },
      },
    });

    toast.success('Synced successfully!');
  } catch (err) {
    console.error(err);
    toast.error('Sync failed, please check your token or network');
  }
};

With this “frontend direct connection” approach, we achieved seamless GitHub Issues integration with the least possible backend code.

  1. High development efficiency: Backend changes are minimal, mainly one extra database field and a simple OAuth callback endpoint. Most logic is completed on the frontend.
  2. Strong security: The token does not pass through the server database, reducing leakage risk.
  3. Great user experience: Requests are initiated directly from the frontend, so response speed is fast and there is no need for backend forwarding.

There are a few pitfalls to keep in mind during real deployment:

  • OAuth App settings: Remember to enter the correct Authorization callback URL in your GitHub OAuth App settings (usually http://localhost:3000/settings?tab=github&oauth=callback).
  • Rate limits: GitHub API limits unauthenticated requests quite strictly, but with a token the quota is usually sufficient (5000 requests/hour).
  • URL parsing: Users enter all kinds of repository URLs, so make sure your regex can match .git suffixes, SSH formats, and similar cases.
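To illustrate that last point, here is a hypothetical sketch of the parseRepositoryFromUrl helper used in the sync handler (the actual HagiCode implementation may differ). It handles https URLs, .git suffixes, trailing slashes, and SSH-style addresses, for repository-root URLs only.

```typescript
// Hypothetical sketch; returns null for anything that is not a repository root URL.
export interface RepoInfo {
  owner: string;
  repo: string;
}

export function parseRepositoryFromUrl(url: string): RepoInfo | null {
  // Matches https://github.com/owner/repo(.git)(/) and git@github.com:owner/repo(.git)
  const match = url.match(/github\.com[\/:]([^\/]+)\/([^\/\s]+?)(?:\.git)?\/?$/);
  if (!match) return null;
  return { owner: match[1], repo: match[2] };
}
```

Deep links such as .../tree/main deliberately fail to parse here; if those matter to your users, strip the extra path segments before matching.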

The current feature is still one-way synchronization (HagiCode -> GitHub). In the future, we plan to implement two-way synchronization through GitHub Webhooks. For example, if an issue is closed on GitHub, the session state on the HagiCode side could also update automatically. That will require us to expose a webhook endpoint on the backend, which will be an interesting next step.

We hope this article gives you a bit of inspiration for your own third-party integration development. If you have questions, feel free to open an issue on the HagiCode GitHub repository for discussion.