
Frontend Engineering

2 posts with the tag “Frontend Engineering”

Technical Analysis of the HagiCode Soul Platform: The Evolution from Emerging Needs to an Independent Platform

Writing technical articles is not really such a grand thing. It is mostly just a matter of organizing the pitfalls you have run into and the detours you have taken. We have all been inexperienced before, after all. This article takes an in-depth look at the design philosophy, architectural evolution, and core technical implementation of Soul in the HagiCode project, and explores how an independent platform can provide a more focused experience for creating and sharing Agent personas.

In the practice of building AI Agents, we often run into a question that looks simple but is actually crucial: how do we give different Agents stable and distinctive language styles and personality traits?

It is a slightly frustrating question, honestly. In the early Hero system of HagiCode, different Heroes (Agent instances) were mainly distinguished through profession settings and generic prompts. That approach came with some fairly obvious pain points, and anyone who has tried something similar has probably felt the same.

First, language style was difficult to keep consistent. The same “developer engineer” role might sound professional and rigorous one day, then casual and loose the next. This was not a model problem so much as the absence of an independent personality configuration layer to constrain and guide the output style.

Second, the sense of character was generally weak. When we described an Agent’s traits, we often had to rely on vague adjectives like “friendly,” “professional,” or “humorous,” without concrete language rules to support those abstract descriptions. Put plainly, it sounded nice in theory, but there was little to hold onto in practice.

Third, persona configurations were almost impossible to reuse. Suppose we carefully designed the speaking style of a “catgirl waitress” and wanted to reuse that expression style in another business scenario. In practice, we would almost have to configure it again from scratch. Sometimes you do not want to possess something beautiful, only reuse it a little… and even that turns out to be hard.

To solve those real problems, we introduced the Soul mechanism: an independent language style configuration layer separate from equipment and descriptions. Soul can define an Agent’s speaking habits, tone preferences, and wording boundaries, can be shared and reused across multiple Heroes, and can also be injected into the system prompt automatically on the first Session call.

Some people might say that this is just configuring a few prompts. But sometimes the real question is not whether something can be done; it is how to do it more elegantly. As Soul matured, we realized it had enough depth to develop independently. A dedicated Soul platform could let users focus on creating, sharing, and browsing interesting persona configurations without being distracted by the rest of the Hero system. That is how the standalone platform at soul.hagicode.com came into being.

HagiCode is an open-source AI coding assistant project built with a modern technology stack and aimed at giving developers a smooth intelligent programming experience. The Soul platform approach shared in this article comes from our own hands-on exploration while building HagiCode to solve the practical problem of Agent persona management. If you find the approach valuable, then it probably means we have accumulated a certain amount of engineering judgment in practice, and the HagiCode project itself may also be worth a closer look.

The Technical Architecture Evolution of the Soul Platform

The Soul platform did not appear all at once. It went through three clear stages. The story began abruptly and concluded naturally.

Phase 1: Soul Configuration Embedded in Hero

The earliest Soul implementation existed as a functional module inside the Hero workspace. We added an independent SOUL editing area to the Hero UI, supporting both preset application and text fine-tuning.

Preset application let users choose from classic persona templates such as “professional developer engineer” and “catgirl waitress.” Text fine-tuning let users personalize those presets further. On the backend, the Hero entity gained a Soul field, with SoulCatalogId used to identify its source.

This stage answered the basic question of whether the capability existed at all, and like anything young, it grew a little awkwardly. But as Soul content became richer, the limitations of an architecture tightly coupled to the Hero system started to show.

Phase 2: The In-Site SOUL Marketplace

To provide a better Soul discovery and reuse experience, we built a SOUL Marketplace catalog page with support for browsing, searching, viewing details, and favoriting.

At this stage, we introduced a combinatorial design built from 50 main Catalogs (base roles) and 10 orthogonal rules (expression styles). The main Catalogs defined the Agent’s core persona, with abstract character settings such as “Mistport Traveler” and “Night Hunter.” The orthogonal rules defined how the Agent expressed itself, with language style traits such as “Concise & Professional” and “Verbose & Friendly.”

50 x 10 = 500 possible combinations gave users a wide configuration space for personas. It is not an overwhelming number, but it is not small either. There are many roads to Rome, after all; some are simply easier to walk than others. On the backend, the full SOUL catalog was generated through catalog-sources.json, while the frontend presented those catalog entries as an interactive card list.

The in-site Marketplace was a good transitional solution, but only that: transitional. It was still attached to the main system, and for users who only wanted Soul functionality, the access path remained too deep. Not everyone wants to take the scenic route just to do something simple.

Phase 3: Splitting into an Independent Platform

In the end, we decided to move Soul into an independent repository (repos/soul). The Marketplace in the original main system was changed into an external jump guide, while the new platform adopted a Builder-first design philosophy: the homepage is the creation workspace by default, so users can start building their own persona configuration the moment they open the site.

The technology stack was also comprehensively upgraded in this stage: Vite 8 + React 19 + TypeScript 5.9, a unified design language through the shadcn/ui component system, and Tailwind CSS 4 theme variables. The improvement in frontend engineering laid a solid foundation for future feature iteration.

Everything faded away… no, actually, everything was only just beginning.

One core design principle of the Soul platform is local-first. That means the homepage must remain fully functional without a backend, and failure to load remote materials must never block page entry.

There is nothing especially miraculous about that. It simply means thinking one step further when designing the system. Using a local snapshot as the baseline and remote data as enhancement lets the product remain basically usable under any network condition. Concretely, we implemented a two-layer material architecture:

export async function loadBuilderMaterials(): Promise<BuilderMaterials> {
  const localMaterials = createLocalMaterials(snapshot) // local baseline
  try {
    const inspirationFragments = await fetchMarketplaceItems() // remote enhancement
    return { ...localMaterials, inspirationFragments, remoteState: "ready" }
  } catch (error) {
    return { ...localMaterials, remoteState: "fallback" } // graceful degradation
  }
}

Local materials come from build-time snapshots of the main system documentation and include the complete data for 50 base roles and 10 expression rules. Remote materials come from Souls published by users and fetched through the Marketplace API. Together, they give users a full spectrum of materials, from official templates to community creativity. If that sounds dramatic, it really is just local plus remote.

The core data abstraction of Soul is the SoulFragment:

export type SoulFragment = {
  fragmentId: string
  group: "main-catalog" | "expression-rule" | "published-soul"
  title: string
  summary: string
  content: string
  keywords: string[]
  localized?: Partial<Record<AppLocale, LocalizedFragmentContent>>
  sourceRef: SoulFragmentSourceRef
  meta: SoulFragmentMeta
}

The group field distinguishes fragment types: the main catalog defines the character core, orthogonal rules define expression style, and user-published Souls are marked as published-soul. The localized field supports multilingual presentation, allowing the same fragment to display different titles and descriptions in different language environments. Internationalization is something you really want to think about early, and in this case we actually did.
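
The article does not show how the localized field is consumed, but a locale resolution helper with a fallback to the base fields might look roughly like the following sketch (resolveFragmentTitle and SoulFragmentLite are hypothetical names, and the locale list is an assumption):

```typescript
type AppLocale = "zh-CN" | "en"

export interface LocalizedFragmentContent {
  title?: string
  summary?: string
}

export interface SoulFragmentLite {
  title: string
  summary: string
  localized?: Partial<Record<AppLocale, LocalizedFragmentContent>>
}

// Prefer the locale-specific title; fall back to the base title when no
// translation exists, so a missing locale entry never breaks rendering.
export function resolveFragmentTitle(fragment: SoulFragmentLite, locale: AppLocale): string {
  return fragment.localized?.[locale]?.title ?? fragment.title
}
```

The same fallback pattern would apply to summaries and any other localized field.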

The Builder draft state encapsulates the user’s current editing state:

export type SoulBuilderDraft = {
  draftId: string
  name: string
  selectedMainFragmentId: string | null
  selectedRuleFragmentId: string | null
  inspirationSoulId: string | null
  mainSlotText: string
  ruleSlotText: string
  customPrompt: string
  previewText: string
  updatedAt: string
}

Each fragment selected in the editor has its content concatenated into the corresponding slot, forming the final preview text. mainSlotText corresponds to the main role content, ruleSlotText corresponds to the expression rule content, and customPrompt is the user’s additional instruction text.

Preview compilation is the core capability of Soul Builder. It assembles user-selected fragments and custom text into a system prompt that can be copied directly:

export function compilePreview(
  draft: Pick<SoulBuilderDraft, "mainSlotText" | "ruleSlotText" | "customPrompt">,
  fragments: {
    mainFragment: SoulFragment | null
    ruleFragment: SoulFragment | null
    inspirationFragment: SoulFragment | null
  }
): PreviewCompilation {
  // Assembly logic: main role + expression rule + inspiration reference + custom content
}

The compilation result is shown in the central preview panel, where users can see the final effect in real time and copy it to the clipboard with one click. It sounds simple, and it is. But simple things are often the most useful.
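
The assembly step itself is elided above. Under the assumption that it simply joins the non-empty sections in a fixed order, a minimal sketch (compilePreviewSketch is a hypothetical name, not the real implementation) could be:

```typescript
export interface PreviewDraft {
  mainSlotText: string
  ruleSlotText: string
  customPrompt: string
}

// Join the filled slots in a fixed order, skipping empty ones, so the
// preview always reads as one coherent system prompt with no stray gaps.
export function compilePreviewSketch(draft: PreviewDraft): string {
  return [draft.mainSlotText, draft.ruleSlotText, draft.customPrompt]
    .map((section) => section.trim())
    .filter((section) => section.length > 0)
    .join("\n\n")
}
```

The real compilePreview also weaves in the inspiration fragment, but the "ordered, non-empty join" idea is the core of it.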

Frontend state management in Soul Builder follows one important principle: clear separation of state boundaries. More specifically, drawer state is not persisted and does not write directly into the draft. Only explicit Builder actions trigger meaningful state changes.

// Domain state (useSoulBuilder)
export function useSoulBuilder() {
  // Material loading and caching
  // Slot aggregation and preview compilation
  // Copy actions and feedback messages
  // Locale-safe descriptors
}

// Presentation state (useHomeEditorState)
export function useHomeEditorState() {
  // activeSlot, drawerSide, drawerOpen
  // default focus behavior
}

That separation ensures both edit-state safety and responsive UI behavior. Opening and closing the drawer is purely a UI interaction and should not trigger complicated persistence logic. It may sound obvious, but it matters: UI state and business state should be separated clearly so interface interactions do not pollute the core data model.

Soul Builder uses a single-drawer mode: only one slot drawer may be open at a time. Clicking the mask, pressing the ESC key, or switching slots automatically closes the current drawer. This simplifies state management and also matches common drawer interaction patterns on mobile.

Closing the drawer does not clear the current editing content, so when users come back, their context is preserved. This kind of “lightweight” drawer design avoids interrupting the user’s flow. Nobody wants carefully written content to disappear because of one accidental click.
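
The single-drawer rule can be captured in a couple of small state transitions. This is a hedged sketch of the idea, not the actual useHomeEditorState code (SlotId, toggleSlot, and closeDrawer are illustrative names):

```typescript
type SlotId = "main" | "rule"

export interface DrawerState {
  activeSlot: SlotId | null
  drawerOpen: boolean
}

// Clicking a slot re-targets the single drawer; clicking the already-active
// slot closes it. Editing content lives in the draft, not here, so closing
// the drawer never loses user input.
export function toggleSlot(state: DrawerState, slot: SlotId): DrawerState {
  if (state.drawerOpen && state.activeSlot === slot) {
    return { activeSlot: null, drawerOpen: false }
  }
  return { activeSlot: slot, drawerOpen: true }
}

// Mask click and ESC both collapse to the same transition.
export function closeDrawer(_state: DrawerState): DrawerState {
  return { activeSlot: null, drawerOpen: false }
}
```

Because both transitions return plain values, the invariant "at most one drawer open" holds by construction rather than by runtime checks.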

Internationalization is an important capability of the Soul platform. System copy fully supports bilingual switching, while user draft text is never rewritten when the language changes, because draft text is user-authored free input rather than system-translated content.

Official inspiration cards (Marketplace Souls) keep the upstream display name while also providing a best-effort English summary. For Souls with Chinese names, we generate English versions through predefined mapping rules:

// English name mapping for main roles
const mainNameEnglishMap = {
"雾港旅人": "Mistport Traveler",
"夜航猎手": "Night Hunter",
// ...
}
// English name mapping for orthogonal rules
const ruleNameEnglishMap = {
"简洁干练": "Concise & Professional",
"啰嗦亲切": "Verbose & Friendly",
// ...
}

The mapping table itself looks simple enough, but keeping it in good shape still takes care. There are 50 main roles and 10 orthogonal rules, which means 500 combinations in total. That is not huge, but it is enough to deserve respect.

Bulk generation of the Soul Catalog happens on the backend, where C# is used to automate the creation of 50 x 10 = 500 combinations:

foreach (var main in source.MainCatalogs)
{
    foreach (var orthogonal in source.OrthogonalCatalogs)
    {
        var catalogId = $"soul-{main.Index:00}-{orthogonal.Index:00}";
        var displayName = BuildNickname(main, orthogonal);
        var soulSnapshot = BuildSoulSnapshot(main, orthogonal);
        // Write to the database...
    }
}

The nickname generation algorithm combines the main role name with the expression rule name to create imaginative Agent codenames:

private static readonly string[] MainHandleRoots = [
    "雾港", "夜航", "零帧", "星渊", "霓虹", "断云", ...
];
private static readonly string[] OrthogonalHandleSuffixes = [
    "旅人", "猎手", "术师", "行者", "星使", ...
];
// Combination examples: 雾港旅人 (Mistport Traveler), 夜航猎手 (Night Hunter), 零帧术师...

Soul snapshot assembly follows a fixed template format that combines the main role core, signature traits, expression rule core, and output constraints together:

private static string BuildSoulSnapshot(main, orthogonal) => string.Join('\n', [
    // "Your persona core comes from 「{main.Name}」: {main.Core}"
    $"你的人设内核来自「{main.Name}」:{main.Core}",
    // "Keep the following signature language traits: {main.Signature}"
    $"保持以下标志性语言特征:{main.Signature}",
    // "Your expression rules come from 「{orthogonal.Name}」: {orthogonal.Core}"
    $"你的表达规则来自「{orthogonal.Name}」:{orthogonal.Core}",
    // "You must follow these output constraints: {orthogonal.Signature}"
    $"必须遵循这些输出约束:{orthogonal.Signature}"
]);

Template assembly may sound terribly dull, but without that sort of dull work, interesting products rarely appear.

After splitting Soul from the main system into an independent platform, one important challenge was handling existing user data. It is a familiar problem: splitting things apart is easy, migration is not. We adopted three safeguards:

Backward compatibility protection. Previously saved Hero SOUL snapshots remain visible, and historical snapshots can still be previewed even if they no longer have a Marketplace source ID. In other words, none of the user’s prior configurations are lost; only where they appear has changed.

Main system API deprecation. The in-site Marketplace API returns HTTP status 410 Gone together with a migration notice that guides users to soul.hagicode.com.
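
On the client side, interpreting that deprecated endpoint's reply might look like the following sketch (ResponseLike and readLegacyMarketplace are hypothetical names; only the 410 status and the soul.hagicode.com destination come from the text above):

```typescript
// Minimal structural subset of fetch's Response, enough for this sketch.
export interface ResponseLike {
  status: number
  json(): Promise<unknown>
}

export type MarketplaceResult =
  | { status: "ok"; items: unknown }
  | { status: "migrated"; migrateTo: string }

// A 410 Gone means the in-site Marketplace has moved: instead of treating
// it as an error, surface a migration notice pointing at the new platform.
export async function readLegacyMarketplace(response: ResponseLike): Promise<MarketplaceResult> {
  if (response.status === 410) {
    return { status: "migrated", migrateTo: "https://soul.hagicode.com" }
  }
  return { status: "ok", items: await response.json() }
}
```

Treating 410 as a first-class state (rather than a generic failure) is what lets the UI show a helpful notice instead of an error toast.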

Hero SOUL form refactoring. A migration notice block was added to the Hero Soul editing area to clearly tell users that the Soul platform is now independent and to provide a one-click jump button:

HeroSoulForm.tsx
<div className="rounded-2xl border border-orange-200/70 bg-orange-50/80 p-4">
  <div>{t('hero.soul.migrationTitle')}</div>
  <p>{t('hero.soul.migrationDescription')}</p>
  <Button onClick={onOpenSoulPlatform}>
    {t('hero.soul.openSoulPlatformAction')}
  </Button>
</div>

Looking back at the development of the Soul platform as a whole, there are a few practical lessons worth sharing. They are not grand principles, just things learned from real mistakes.

Local-first runtime assumptions. When designing features that depend on remote data, always assume the network may be unavailable. Using local snapshots as the baseline and remote data as enhancement ensures the product remains basically usable under any network condition.

Clear separation of state boundaries. UI state and business state should be distinguished clearly so interface interactions do not pollute the core data model. Drawer toggles are purely UI state and should not be mixed with draft persistence.

Design for internationalization early. If your product has multilingual requirements, it is best to think about them during the data model design phase. The localized field adds some structural complexity, but it greatly reduces the long-term maintenance cost of multilingual content.

Automate the material synchronization workflow. Local materials for the Soul platform come from the main system documentation. When upstream documentation changes, there needs to be a mechanism to sync it into frontend snapshots. We designed the npm run materials:sync script to automate that process and keep materials aligned with upstream.
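
The article names the script but not its contents, so here is a hedged sketch of what a materials:sync step could do: read the upstream catalog source and emit a frontend snapshot module. The file paths, syncMaterials name, and snapshot shape are all assumptions, not HagiCode's real layout:

```typescript
import { readFileSync, writeFileSync } from "node:fs"

// Re-generate the local snapshot from the upstream catalog source so the
// Builder's local baseline stays aligned with the main system documentation.
export function syncMaterials(sourcePath: string, snapshotPath: string): void {
  const catalog = JSON.parse(readFileSync(sourcePath, "utf8"))
  const banner = "// Auto-generated by materials:sync — do not edit by hand.\n"
  writeFileSync(
    snapshotPath,
    banner + `export const snapshot = ${JSON.stringify(catalog, null, 2)}\n`
  )
}
```

Running this at build time (or in CI) keeps the local-first baseline from silently drifting behind the upstream docs.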

Based on the current architecture, the Soul platform could move in several directions in the future. These are only tentative ideas, but perhaps they can be useful as a starting point.

Community sharing ecosystem. Support user uploads and sharing of custom Souls, with rating, commenting, and recommendation mechanisms so excellent Soul configurations can be discovered and reused by more people.

Multimodal expansion. Beyond text style, the platform could also support dimensions such as voice style configuration, emoji usage preferences, and code style and formatting rules. It sounds attractive in theory; implementation may tell a more complicated story.

Intelligent assistance. Automatically recommend Souls based on usage scenarios, support style transfer and fusion, and even run A/B tests on the real-world effectiveness of different Souls. There is no better way to know than to try.

Cross-platform synchronization. Support importing persona configurations from other AI platforms, provide a standardized Soul export format, and integrate with mainstream Agent frameworks.

This article shares the full evolution of the HagiCode Soul platform from its earliest emerging need to an independent platform. We discussed why a Soul mechanism is needed to solve Agent persona consistency, analyzed the three stages of architectural evolution (embedded configuration, in-site Marketplace, and independent platform), examined the core data model, state management, preview compilation, and internationalization design in depth, and summarized practical migration lessons.

The essence of Soul is an independent persona configuration layer separated from business logic. It makes the language style of AI Agents definable, reusable, and shareable. From a technical perspective, the design itself is not especially complicated, but the problem it solves is real and broadly relevant.

If you are also building AI Agent products, it may be worth asking whether your persona configuration solution is flexible enough. The Soul platform’s practical experience may offer a few useful ideas.

Perhaps one day you will run into a similar problem as well. If this article can help a little when that happens, that is probably enough.


If you found this article helpful, feel free to give the project a Star on GitHub. The public beta has already started, and you are welcome to install it and try it out.

Thank you for reading. If you found this article useful, likes, bookmarks, and shares are all appreciated. This content was created with AI-assisted collaboration, and the final content was reviewed and confirmed by the author.

From 120 Seconds to 45 Seconds: A Practical Guide to Optimizing Vite Build Performance with Worker Threads

When working with large frontend projects, production builds can feel painfully slow. This article shares how we used Node.js Worker Threads to reduce the obfuscation stage in a Vite build from 120 seconds to 45 seconds, along with the implementation details and lessons learned in the HagiCode project.

In our frontend engineering practice, build efficiency issues became increasingly prominent as the project grew. In particular, during the production build process, we usually introduce JavaScript obfuscation tools such as javascript-obfuscator to protect the source code logic. This step is necessary, but it is also computationally expensive and heavily CPU-bound.

During the early development stage of HagiCode, we ran into a very tricky performance bottleneck: production build times deteriorated rapidly as the codebase grew.

The specific pain points were:

  • Obfuscation tasks ran serially on a single thread, maxing out one CPU core while the others sat idle
  • Build time surged from the original 30 seconds to 110-120 seconds
  • The post-change build verification loop became extremely long, seriously slowing development iteration
  • In the CI/CD pipeline, the build stage became the most time-consuming part

Why did HagiCode need this? HagiCode is an AI-driven code assistant whose frontend architecture includes complex business logic and AI interaction modules. To ensure the security of our core code, we enforced high-intensity obfuscation in production releases. Faced with build waits approaching two minutes, we decided to carry out a deep performance optimization of the build system.

Since we have mentioned the project, let me say a bit more about it.

If you have run into frustrations like these during development:

  • Multiple projects and multiple tech stacks, with high maintenance costs for build scripts
  • Complicated CI/CD pipeline configuration, forcing you to check the docs every time you make a change
  • Endless cross-platform compatibility issues
  • Wanting AI to help write code, but finding existing tools not smart enough

Then HagiCode, which we are building, may interest you.

What is HagiCode?

  • An AI-driven code assistant
  • Supports multi-language, cross-platform code generation and optimization
  • Comes with built-in gamification so coding feels less tedious

Why mention it here? The parallel JavaScript obfuscation solution shared in this article is exactly what we refined while building HagiCode. If you find this engineering approach valuable, that suggests our technical taste is probably pretty good, and HagiCode itself may also be worth a look.


Analysis: Finding the Breakthrough Point in the Performance Bottleneck

Before solving the performance issue, we first needed to clarify our thinking and identify the best technical solution.

There are three main ways to achieve parallel computation in Node.js:

  1. child_process: create independent child processes
  2. Web Workers: mainly used on the browser side
  3. worker_threads: native multithreading support in Node.js

After comparing the options, HagiCode ultimately chose Worker Threads for the following reasons:

  • Zero serialization overhead: Worker Threads run in the same process and can share memory through SharedArrayBuffer or transfer ownership, avoiding the heavy serialization cost of inter-process communication.
  • Native support: built into Node.js 12+ with no need for extra heavyweight dependencies.
  • Unified context: debugging and logging are more convenient than with child processes.

Task Granularity: How Should Obfuscation Tasks Be Split?

It is hard to parallelize the obfuscation of one huge JS bundle file because the code has dependencies, but Vite build output is composed of multiple chunks. That gives us a natural parallel boundary:

  • Independence: after Vite packaging, dependencies between different chunks are already decoupled, so they can be processed safely in parallel.
  • Appropriate granularity: projects usually have 10-30 chunks, which is an excellent scale for parallel scheduling.
  • Easy integration: the generateBundle hook in Vite plugins lets us intercept and process these chunks before the files are emitted.

We designed a parallel processing system with four core components:

  1. Task Splitter: iterates over Vite’s bundle object, filters out files that do not need obfuscation such as vendor chunks, and generates a task queue.
  2. Worker Pool Manager: manages the Worker lifecycle and handles task distribution, recycling, and retry on failure.
  3. Progress Reporter: outputs build progress in real time to reduce waiting anxiety.
  4. ObfuscationWorker: the Worker thread that actually performs the obfuscation logic.

Based on the analysis above, we started implementing this parallel obfuscation system.

First, we integrated the parallel obfuscation plugin in vite.config.ts. The configuration is straightforward. You only need to specify the number of Workers and the obfuscation rules.

import { defineConfig } from 'vite'
import { parallelJavascriptObfuscator } from './buildTools/plugin'

export default defineConfig(({ mode }) => {
  const isProduction = mode === 'production'
  return {
    build: {
      rollupOptions: {
        ...(isProduction
          ? {
              plugins: [
                parallelJavascriptObfuscator({
                  enabled: true,
                  // Automatically adjust based on CPU core count; leave one core for the main thread
                  workerCount: 4,
                  retryAttempts: 3,
                  fallbackToMainThread: true, // Automatically degrade to single-thread mode on failure
                  // Filter out vendor chunks; third-party libraries usually do not need obfuscation
                  isVendorChunk: (fileName: string) => fileName.includes('vendor-'),
                  obfuscationConfig: {
                    compact: true,
                    controlFlowFlattening: true,
                    deadCodeInjection: true,
                    disableConsoleOutput: true,
                    // ... more obfuscation options
                  },
                }),
              ],
            }
          : {}),
      },
    },
  }
})
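
The article shows the config and the Worker, but not the plugin hook that connects them. The Task Splitter and dispatch described earlier might be sketched as follows; BundleItem is a structural stand-in for Rollup's OutputBundle entries, and collectTasks / obfuscateBundle are hypothetical names, not HagiCode's real plugin code:

```typescript
// Structural subset of Rollup's OutputBundle, enough for this sketch.
export type BundleItem =
  | { type: "chunk"; fileName: string; code: string }
  | { type: "asset"; fileName: string }

export interface ObfuscationTask {
  chunkId: string
  code: string
  config: object
}

export interface ObfuscationResult {
  chunkId: string
  obfuscatedCode: string
  error?: string
}

// Task Splitter: walk the bundle, skip assets and vendor chunks, and
// build one obfuscation task per remaining chunk.
export function collectTasks(
  bundle: Record<string, BundleItem>,
  isVendorChunk: (fileName: string) => boolean,
  config: object
): ObfuscationTask[] {
  const tasks: ObfuscationTask[] = []
  for (const item of Object.values(bundle)) {
    if (item.type !== "chunk" || isVendorChunk(item.fileName)) continue
    tasks.push({ chunkId: item.fileName, code: item.code, config })
  }
  return tasks
}

// Inside generateBundle, dispatch every task in parallel and write the
// obfuscated code back into the bundle before files are emitted.
export async function obfuscateBundle(
  bundle: Record<string, BundleItem>,
  runTask: (task: ObfuscationTask) => Promise<ObfuscationResult>
): Promise<void> {
  const tasks = collectTasks(bundle, (f) => f.includes("vendor-"), { compact: true })
  const results = await Promise.all(tasks.map(runTask))
  for (const result of results) {
    const item = bundle[result.chunkId]
    if (!result.error && item && item.type === "chunk") {
      item.code = result.obfuscatedCode
    }
  }
}
```

In the real plugin, runTask would be the Worker pool's runTask method, so the Promise.all here is exactly where the parallelism pays off.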

A Worker is the unit that executes tasks. We need to define the input and output data structures clearly.

Note: although the code here is simple, there are several pitfalls to watch out for, such as checking whether parentPort is null and handling errors properly. In HagiCode’s implementation, we found that certain special ES6 syntax patterns could cause the obfuscator to crash, so we added try-catch protection.

import { parentPort } from 'worker_threads'
import javascriptObfuscator from 'javascript-obfuscator'

export interface ObfuscationTask {
  chunkId: string
  code: string
  config: any
}

export interface ObfuscationResult {
  chunkId: string
  obfuscatedCode: string
  error?: string
}

// Listen for tasks sent from the main thread
if (parentPort) {
  parentPort.on('message', (task: ObfuscationTask) => {
    try {
      // Perform obfuscation
      const obfuscated = javascriptObfuscator.obfuscate(task.code, task.config)
      const result: ObfuscationResult = {
        chunkId: task.chunkId,
        obfuscatedCode: obfuscated.getObfuscatedCode(),
      }
      // Send the result back to the main thread
      parentPort?.postMessage(result)
    } catch (error) {
      // Handle exceptions so one Worker crash does not block the whole build
      const result: ObfuscationResult = {
        chunkId: task.chunkId,
        obfuscatedCode: '',
        error: error instanceof Error ? error.message : 'Unknown error',
      }
      parentPort?.postMessage(result)
    }
  })
}

This is the core of the whole solution. We need to maintain a fixed-size Worker pool and schedule tasks using a FIFO (first in, first out) strategy.

import { Worker } from 'worker_threads'
import os from 'os'
import type { ObfuscationTask, ObfuscationResult } from './worker'

export interface WorkerPoolOptions {
  workerCount?: number
}

interface PendingJob {
  task: ObfuscationTask
  resolve: (result: ObfuscationResult) => void
  reject: (error: Error) => void
}

export class WorkerPool {
  private workers: Worker[] = []
  // Tracks which Worker is busy, and with which job
  private activeWorkers = new Map<Worker, PendingJob>()
  private taskQueue: PendingJob[] = []

  constructor(options: WorkerPoolOptions = {}) {
    // Default to core count - 1 so the main thread still has some breathing room
    const workerCount = options.workerCount ?? Math.max(1, (os.cpus().length || 4) - 1)
    for (let i = 0; i < workerCount; i++) {
      this.createWorker()
    }
  }

  private createWorker() {
    const worker = new Worker('./worker.ts')
    worker.on('message', (result: ObfuscationResult) => {
      // Resolve the promise of the job this Worker just finished
      this.activeWorkers.get(worker)?.resolve(result)
      // After one task finishes, take the next task from the queue
      const nextTask = this.taskQueue.shift()
      if (nextTask) {
        this.dispatchTask(worker, nextTask)
      } else {
        // If there are no pending tasks, mark the Worker as idle
        this.activeWorkers.delete(worker)
      }
    })
    this.workers.push(worker)
  }

  // Submit a task to the pool
  public runTask(task: ObfuscationTask): Promise<ObfuscationResult> {
    return new Promise((resolve, reject) => {
      const job: PendingJob = { task, resolve, reject }
      const idleWorker = this.workers.find(w => !this.activeWorkers.has(w))
      if (idleWorker) {
        this.dispatchTask(idleWorker, job)
      } else {
        this.taskQueue.push(job)
      }
    })
  }

  private dispatchTask(worker: Worker, job: PendingJob) {
    this.activeWorkers.set(worker, job)
    worker.postMessage(job.task)
  }
}

Waiting is painful, especially when you have no idea how much longer it will take. So we added a simple progress reporter to provide real-time feedback on the current status.

export class ProgressReporter {
  private completed = 0
  private readonly total: number
  private readonly startTime: number

  constructor(total: number) {
    this.total = total
    this.startTime = Date.now()
  }

  increment(): void {
    this.completed++
    this.report()
  }

  private report(): void {
    const now = Date.now()
    const elapsed = now - this.startTime
    const percentage = (this.completed / this.total) * 100
    // Simple ETA estimate
    const avgTimePerChunk = elapsed / this.completed
    const remaining = (this.total - this.completed) * avgTimePerChunk
    console.log(
      `[Parallel Obfuscation] ${this.completed}/${this.total} chunks completed (${percentage.toFixed(1)}%) | ETA: ${(remaining / 1000).toFixed(1)}s`
    )
  }
}

After deploying this solution, the build performance of the HagiCode project improved immediately.

We tested in the following environment:

  • CPU: Intel Core i7-12700K (12 cores / 20 threads)
  • RAM: 32GB DDR4
  • Node.js: v18.17.0
  • OS: Ubuntu 22.04

Results comparison:

  • Single-threaded (before optimization): 118 seconds
  • 4 Workers: 55 seconds (53% improvement)
  • 8 Workers: 48 seconds (60% improvement)
  • 12 Workers: 45 seconds (62% improvement)

As you can see, the gains were not linear. Once the Worker count exceeded 8, the improvement became smaller. This was mainly limited by the evenness of task distribution and memory bandwidth bottlenecks.

In HagiCode’s real-world use, we also ran into several pitfalls, so here they are for reference:

Q1: Build time barely decreased, or even got slower?

  • Reason: creating Workers has its own overhead, or too many Workers were configured, causing frequent context switching.
  • Solution: we recommend setting the Worker count to CPU core count - 1. Also check whether any single chunk is especially large, for example > 5MB. That kind of “monster” file will become the bottleneck, so you may need to optimize your code splitting strategy.

Q2: Workers occasionally crash and cause build failures?

  • Reason: some special code syntax patterns may cause internal errors inside the obfuscator.
  • Solution: we implemented an automatic degradation mechanism. When a Worker reaches the failure threshold, the plugin automatically falls back to single-thread mode to ensure the build does not stop. At the same time, it records the filename that caused the error so it can be fixed later.

Q3: Memory usage is too high (OOM)?

  • Reason: each Worker needs its own memory space to load the obfuscator and parse the AST.
  • Solution:
    • Reduce the number of Workers.
    • Increase the Node.js memory limit: NODE_OPTIONS="--max-old-space-size=4096" npm run build.
    • Make sure Workers do not keep unnecessary references to large objects.

By introducing Node.js Worker Threads, we successfully reduced the production build time of the HagiCode project from 120 seconds to around 45 seconds, greatly improving the development experience and CI/CD efficiency.

The core of this solution is:

  1. Split tasks properly: use Vite chunks as the parallel unit.
  2. Control resources: use a Worker pool to avoid resource exhaustion.
  3. Design for fault tolerance: an automatic degradation mechanism ensures build stability.

If you are also struggling with frontend build efficiency, or your project also does heavy code processing, this solution is worth trying. Of course, we would recommend taking a direct look at HagiCode, where these engineering details are already integrated.

If this article helped you, feel free to give us a Star on GitHub or join the public beta and try it out.


Thank you for reading. If you found this article useful, please click the like button below so more people can discover it.

This content was created with AI-assisted collaboration, reviewed by me, and reflects my own views and position.