
Vite

2 posts with the tag “Vite”

HagiCode Splash Screen Design: The Ultimate Way to Fill the Hydration Gap in React 19 Apps

Designing 12 Exceptional Startup Experiences for HagiCode: From Minimalism to Cyberpunk


The brief gap between the moment a React 19 app starts downloading and the moment hydration completes is a golden opportunity to let users feel your brand's personality. In this article, we share a complete startup style system we built for the HagiCode project using plain HTML/CSS/JS.

As a modern application built with ASP.NET Core 10 and React 19 (Vite), HagiCode uses a frontend-backend separated deployment architecture. The frontend build output is packaged into the backend wwwroot/ directory and hosted by ASP.NET Core.

However, this architecture introduces a classic UX pain point: when users visit the page, the browser first loads the HTML, then downloads the large JS bundle, and finally waits for React to perform hydration. During this "vacuum period", which lasts from a few hundred milliseconds to several seconds, users see either a blank screen or a lifeless static page.

To fill that gap and inject HagiCode’s brand personality, we needed to design a startup style system implemented entirely with inline code inside index.html.

The splash screen design approach shared in this article comes from our practical experience in the HagiCode project. As an AI coding assistant, HagiCode cares not only about code generation efficiency, but also about the developer’s visual experience. This startup system is one of the outcomes of our pursuit of ultimate frontend performance.

Before we started designing, we first had to clarify the technical constraints. Since everything had to be implemented inline in index.html, we could not load any external CSS or JS files other than React’s own bundle.

  1. Zero-dependency principle: All styles must live inside a <style> tag, and all logic must live inside a <script> tag.
  2. Defensive CSS: To prevent global styles from polluting the splash screen after the React app mounts, we decided to wrap all startup styles with a high-priority ID prefix such as #boot-screen.
  3. Performance first: Animations should use CSS transform and opacity wherever possible to avoid reflow and ensure the main thread stays unblocked.
  4. Visual consistency: Colors and fonts must stay aligned with HagiCode’s Tailwind configuration.
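Constraint 2 can be sketched in a few lines of CSS. The selectors and property values here are illustrative, not the project's actual styles:

```css
/* Every splash rule hangs off the #boot-screen ID, so global styles that
   arrive after React mounts cannot leak in, and splash styles cannot leak out. */
#boot-screen { position: fixed; inset: 0; z-index: 9999; background: #0f172a; }
#boot-screen .label { font-family: 'JetBrains Mono', monospace; color: #3b82f6; }
```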

We adopted a variant pattern. The core logic is encapsulated inside an immediately invoked function expression (IIFE), while the specific rendering logic is injected through configuration. This lets us switch between different styles through simple configuration instead of rewriting DOM manipulation logic repeatedly.

Here is the core architecture code:

<!-- Inline in index.html -->
<div id="boot-root"></div>
<script>
  (function () {
    const BootSequence = {
      config: {
        theme: 'terminal', // Configurable as 'minimal', 'skeleton', 'code-rain', etc.
        color: '#3b82f6'   // Brand color
      },
      // Core lifecycle
      init() {
        this.render();
        this.listenForMount();
      },
      // Render the currently selected style
      render() {
        const root = document.getElementById('boot-root');
        if (this.variants[this.config.theme]) {
          root.innerHTML = this.variants[this.config.theme].render();
        }
      },
      // Listen for successful React mount and exit gracefully
      listenForMount() {
        window.addEventListener('hagicode:ready', () => {
          const screen = document.getElementById('boot-root');
          // Fade out first, then remove the DOM to avoid flicker
          screen.style.transition = 'opacity 0.3s ease';
          screen.style.opacity = '0';
          setTimeout(() => screen.remove(), 300);
        });
      },
      // The implementation logic for all 12 styles lives here
      variants: {
        // ...see details below
      }
    };
    BootSequence.init();
  })();
</script>

We grouped these 12 styles into six major categories to satisfy different scenarios and aesthetic preferences.

“Less is more.” For scenarios that pursue ultimate loading speed, we provide the lightest possible options.

A simple dot sits at the center of the screen, paired with a breathing animation.

  • Implementation: CSS @keyframes controls scale and opacity.
  • Best for: Any case where the page must remain absolutely clean.
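A hedged sketch of that breathing animation follows; the timing and scale values are our own illustration. Only transform and opacity change, so the browser can composite the animation without triggering layout:

```css
#boot-screen .dot {
  width: 12px;
  height: 12px;
  border-radius: 50%;
  background: #3b82f6;
  animation: breathe 1.6s ease-in-out infinite;
}

/* Only compositor-friendly properties change, so no reflow is triggered */
@keyframes breathe {
  0%, 100% { transform: scale(1);   opacity: 0.4; }
  50%      { transform: scale(1.4); opacity: 1; }
}
```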

Using SVG stroke-dasharray animation, it simulates a hand-drawn reveal of the HagiCode logo lines, followed by a text fade-in.

  • Technique: SVG path animation with a highly polished feel.

“The art of deceiving the eye.” By simulating a real UI layout, we make users feel the page is already half loaded.

This may be the most practical option. We manually built a layout in HTML that mirrors the React Sidebar and ChatInput components exactly, then overlaid it with a gray shimmer animation.

  • Value: When React hydration completes, the skeleton instantly becomes the real component, and users can barely perceive the switch.

It simulates the stacked motion of proposal cards while loading, using 3D transforms to make the cards float subtly.

Show off HagiCode’s geek DNA.

A geometric shape (a square) is rendered at the center of the screen, then smoothly transforms over time into a circle, a triangle, and finally the logo.

  • Technology: Smooth transitions with CSS border-radius.

A tribute to The Matrix. Using the JetBrains Mono font, faint streams of characters fall in the background.

  • Note: For performance, the character streams must stay within a smaller area or use a lower refresh rate.

A cyberpunk-style glowing ring that uses multiple box-shadow layers to create a powerful neon glow.

Make the system feel alive.

This is a dynamic loader. It checks the current date for holidays such as Lunar New Year or Christmas and loads the corresponding SVG animation.

  • Example: During Lunar New Year, red lanterns gently sway at the bottom of the screen.

The background uses a fluid gradient based on HagiCode brand colors. Combined with animated background-size and background-position, it creates an aurora-like sense of motion.

A salute to developers.

It simulates console output. Lines of code scroll by rapidly:

> Initializing HagiCode Core...
> Loading models...
> Connecting to neural network...

That instantly feels familiar to every developer.

A thin progress bar appears at the top of the screen, with a percentage shown on the right. While we cannot access the real download progress, we can use a timer to simulate a “believable” loading process: fast for the first 80%, then gradually slower for the last 20%.
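The curve described above can be sketched as a small pure function. The phase split and time constants below are illustrative, not the project's actual values:

```javascript
// "Believable" fake progress: sprint linearly to 80%, then approach 99%
// asymptotically so the bar keeps creeping even on a slow network.
function simulatedProgress(elapsedMs, fastPhaseMs = 1000) {
  if (elapsedMs <= fastPhaseMs) {
    // First phase: fast, linear climb to 80%
    return 80 * (elapsedMs / fastPhaseMs);
  }
  // Second phase: exponential approach toward 99%, never quite arriving
  const extra = elapsedMs - fastPhaseMs;
  return 80 + 19 * (1 - Math.exp(-extra / 2000));
}
```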

This is a very interesting idea. Small squares are scattered across the screen, then converge toward the center and gradually assemble into the HagiCode logo icon. It symbolizes the process of building code.

In HagiCode’s real development work, we summarized several critically important implementation details.

Never get lazy and skip the prefix. Once, we forgot to scope the splash screen styles with an ID, and global div styles after React mounted unexpectedly affected the splash screen, breaking the layout. Lesson learned: Put every CSS selector under #boot-screen, and use !important to raise priority when necessary, but only inside the splash screen CSS.

After React mounts successfully, do not directly remove() the splash screen DOM. Correct approach:

  1. React triggers window.dispatchEvent(new Event('hagicode:ready')).
  2. The splash screen listens for the event and first sets opacity: 0.
  3. Wait 300ms, which matches the CSS transition duration, and call .remove() only after the screen is fully invisible.
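The three steps can be sketched as follows. A bare EventTarget stands in for `window` and a stub object for the #boot-root element, so the sketch also runs outside the browser; in the real page you would pass `window` and the element itself:

```javascript
// Splash-screen side of the hand-off: fade first, remove later.
function installBootExit(target, screen, transitionMs = 300) {
  target.addEventListener('hagicode:ready', () => {
    screen.style.transition = `opacity ${transitionMs / 1000}s ease`;
    screen.style.opacity = '0';                      // step 2: fade out first
    setTimeout(() => screen.remove(), transitionMs); // step 3: remove once invisible
  }, { once: true }); // the ready event should only ever fire once
}

// Step 1, fired from React after hydration completes:
// window.dispatchEvent(new Event('hagicode:ready'));
```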

The splash screen color values are hard-coded in index.html. If we change Tailwind’s primary color, we must update the splash screen too. Optimization: Write a simple plugin in the Vite build script to read tailwind.config.js and inject color variables into the index.html template variables, creating a single source of truth.
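A minimal sketch of that plugin idea: `transformIndexHtml` is a real Vite plugin hook that receives the HTML string, while the `%BOOT_PRIMARY%` placeholder and the hard-coded color source are our own illustration (actually reading the value out of tailwind.config.js is omitted here):

```javascript
// Hypothetical Vite plugin: replace a placeholder in index.html with the
// brand color so the splash screen and Tailwind share one source of truth.
function injectBootColors(colors) {
  return {
    name: 'inject-boot-colors',
    transformIndexHtml(html) {
      return html.replace(/%BOOT_PRIMARY%/g, colors.primary);
    },
  };
}
```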

Splash screens often need to use a brand font, but if the font loads slowly, FOUT (Flash of Unstyled Text) can appear. Solution: Add <link rel="preload" href="/fonts/JetBrainsMono.woff2" as="font" type="font/woff2" crossorigin> inside <head>. This is a low-cost, high-return way to improve the experience.

We injected performance.mark('boot-start') at the bottom of index.html, and marked boot-end when React mounted successfully. Why it matters: By collecting this data through Application Insights, we can directly measure how much the splash screen shortens perceived waiting time. The data shows that an excellent skeleton screen can improve users’ tolerance for a “slow network” by more than 50%.
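The same marks API exists on Node's global `performance` object, so the measurement shape can be sketched outside the browser:

```javascript
// Mark the two ends of the boot window, then measure the span between them.
performance.mark('boot-start');
// ...splash screen visible, JS bundle downloads, React hydrates...
performance.mark('boot-end');
const bootMeasure = performance.measure('boot', 'boot-start', 'boot-end');
console.log(`boot took ${bootMeasure.duration.toFixed(1)}ms`);
```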

A good splash screen is more than “decoration while waiting”. It is the handshake signal in the very first interaction between the product and the user. In the HagiCode project, this startup system based on the Variants pattern lets us flexibly switch styles across holidays and releases, greatly enhancing the product’s sense of fun and professionalism.

The solution shared in this article is built entirely on native web standards without introducing any heavy dependencies, which reflects HagiCode’s pursuit of being “lightweight yet powerful.” If you find this approach valuable, feel free to check out the source code in the HagiCode repository and even contribute your own creative designs.

If this article helped you, feel free to give the project a Star on GitHub. The public beta has already started, and we look forward to your feedback!


Thank you for reading. If you found this article useful, please click the like button below 👍 so more people can discover it.

This content was created with AI-assisted collaboration, reviewed by me, and reflects my own views and position.

A Practical Guide to Optimizing Vite Build Performance with Worker Threads

From 120 Seconds to 45 Seconds: A Practical Guide to Optimizing Vite Build Performance with Worker Threads


When working with large frontend projects, production builds can feel painfully slow. This article shares how we used Node.js Worker Threads to reduce the obfuscation stage in a Vite build from 120 seconds to 45 seconds, along with the implementation details and lessons learned in the HagiCode project.

In our frontend engineering practice, build efficiency issues became increasingly prominent as the project grew. In particular, during the production build process, we usually introduce JavaScript obfuscation tools such as javascript-obfuscator to protect the source code logic. This step is necessary, but it is also computationally expensive and heavily CPU-bound.

During the early development stage of HagiCode, we ran into a very tricky performance bottleneck: production build times deteriorated rapidly as the codebase grew.

The specific pain points were:

  • Obfuscation tasks ran serially on a single thread, maxing out one CPU core while the others sat idle
  • Build time surged from the original 30 seconds to 110-120 seconds
  • The post-change build verification loop became extremely long, seriously slowing development iteration
  • In the CI/CD pipeline, the build stage became the most time-consuming part

Why did HagiCode need this? HagiCode is an AI-driven code assistant whose frontend architecture includes complex business logic and AI interaction modules. To ensure the security of our core code, we enforced high-intensity obfuscation in production releases. Faced with build waits approaching two minutes, we decided to carry out a deep performance optimization of the build system.

Since we have mentioned the project, let me say a bit more about it.

If you have run into frustrations like these during development:

  • Multiple projects and multiple tech stacks, with high maintenance costs for build scripts
  • Complicated CI/CD pipeline configuration, forcing you to check the docs every time you make a change
  • Endless cross-platform compatibility issues
  • Wanting AI to help write code, but finding existing tools not smart enough

Then HagiCode, which we are building, may interest you.

What is HagiCode?

  • An AI-driven code assistant
  • Supports multi-language, cross-platform code generation and optimization
  • Comes with built-in gamification so coding feels less tedious

Why mention it here? The parallel JavaScript obfuscation solution shared in this article is exactly what we refined while building HagiCode. If you find this engineering approach valuable, that suggests our technical taste is probably pretty good, and HagiCode itself may also be worth a look.



Analysis: Finding the Breakthrough Point in the Performance Bottleneck


Before solving the performance issue, we first needed to clarify our thinking and identify the best technical solution.

There are three main ways to achieve parallel computation in Node.js:

  1. child_process: create independent child processes
  2. Web Workers: mainly used on the browser side
  3. worker_threads: native multithreading support in Node.js

After comparing the options, HagiCode ultimately chose Worker Threads for the following reasons:

  • Low communication overhead: Worker Threads run in the same process and can share memory through SharedArrayBuffer or transfer ArrayBuffer ownership, avoiding the process-boundary serialization cost that child_process IPC pays for every message.
  • Native support: built into Node.js 12+ with no need for extra heavyweight dependencies.
  • Unified context: debugging and logging are more convenient than with child processes.

Task Granularity: How Should Obfuscation Tasks Be Split?


It is hard to parallelize the obfuscation of one huge JS bundle file because the code has dependencies, but Vite build output is composed of multiple chunks. That gives us a natural parallel boundary:

  • Independence: after Vite packaging, dependencies between different chunks are already decoupled, so they can be processed safely in parallel.
  • Appropriate granularity: projects usually have 10-30 chunks, which is an excellent scale for parallel scheduling.
  • Easy integration: the generateBundle hook in Vite plugins lets us intercept and process these chunks before the files are emitted.

We designed a parallel processing system with four core components:

  1. Task Splitter: iterates over Vite’s bundle object, filters out files that do not need obfuscation such as vendor chunks, and generates a task queue.
  2. Worker Pool Manager: manages the Worker lifecycle and handles task distribution, recycling, and retry on failure.
  3. Progress Reporter: outputs build progress in real time to reduce waiting anxiety.
  4. ObfuscationWorker: the Worker thread that actually performs the obfuscation logic.
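The Task Splitter (component 1) can be sketched as a plain function. The bundle shape below assumes Rollup's output format, where chunk entries carry `type: 'chunk'` and a `code` string; the vendor test mirrors the `isVendorChunk` option shown later:

```typescript
interface SplitterTask {
  chunkId: string
  code: string
}

// Iterate the bundle, skip assets and vendor chunks, and emit a task queue.
function splitTasks(
  bundle: Record<string, { type: string; code?: string }>,
  isVendorChunk: (fileName: string) => boolean,
): SplitterTask[] {
  const tasks: SplitterTask[] = []
  for (const [fileName, output] of Object.entries(bundle)) {
    // Assets (CSS, images) have no JS code and stay untouched
    if (output.type !== 'chunk' || !output.code) continue
    // Third-party vendor chunks are excluded from obfuscation
    if (isVendorChunk(fileName)) continue
    tasks.push({ chunkId: fileName, code: output.code })
  }
  return tasks
}
```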

Based on the analysis above, we started implementing this parallel obfuscation system.

First, we integrated the parallel obfuscation plugin in vite.config.ts. The configuration is straightforward. You only need to specify the number of Workers and the obfuscation rules.

import { defineConfig } from 'vite'
import { parallelJavascriptObfuscator } from './buildTools/plugin'

export default defineConfig(({ mode }) => {
  const isProduction = mode === 'production'
  return {
    build: {
      rollupOptions: {
        ...(isProduction
          ? {
              plugins: [
                parallelJavascriptObfuscator({
                  enabled: true,
                  // Fixed at 4 here; when omitted, the pool defaults to
                  // CPU core count - 1 so the main thread keeps one core
                  workerCount: 4,
                  retryAttempts: 3,
                  fallbackToMainThread: true, // Degrade to single-thread mode on repeated failure
                  // Vendor chunks are skipped; third-party libraries usually do not need obfuscation
                  isVendorChunk: (fileName: string) => fileName.includes('vendor-'),
                  obfuscationConfig: {
                    compact: true,
                    controlFlowFlattening: true,
                    deadCodeInjection: true,
                    disableConsoleOutput: true,
                    // ... more obfuscation options
                  },
                }),
              ],
            }
          : {}),
      },
    },
  }
})

A Worker is the unit that executes tasks. We need to define the input and output data structures clearly.

Note: although the code here is simple, there are several pitfalls to watch out for, such as checking whether parentPort is null and handling errors properly. In HagiCode’s implementation, we found that certain special ES6 syntax patterns could cause the obfuscator to crash, so we added try-catch protection.

import { parentPort } from 'worker_threads'
import javascriptObfuscator from 'javascript-obfuscator'

export interface ObfuscationTask {
  chunkId: string
  code: string
  config: any
}

export interface ObfuscationResult {
  chunkId: string
  obfuscatedCode: string
  error?: string
}

// Listen for tasks sent from the main thread
if (parentPort) {
  parentPort.on('message', (task: ObfuscationTask) => {
    try {
      // Perform obfuscation (a synchronous, CPU-bound call)
      const obfuscated = javascriptObfuscator.obfuscate(task.code, task.config)
      const result: ObfuscationResult = {
        chunkId: task.chunkId,
        obfuscatedCode: obfuscated.getObfuscatedCode(),
      }
      // Send the result back to the main thread
      parentPort?.postMessage(result)
    } catch (error) {
      // Report the failure so one Worker crash does not block the whole build
      const result: ObfuscationResult = {
        chunkId: task.chunkId,
        obfuscatedCode: '',
        error: error instanceof Error ? error.message : 'Unknown error',
      }
      parentPort?.postMessage(result)
    }
  })
}

This is the core of the whole solution. We need to maintain a fixed-size Worker pool and schedule tasks using a FIFO (first in, first out) strategy.

import { Worker } from 'worker_threads'
import os from 'os'

interface WorkerPoolOptions {
  workerCount?: number
}

type Job = {
  task: ObfuscationTask
  resolve: (result: ObfuscationResult) => void
  reject: (error: Error) => void
}

export class WorkerPool {
  private workers: Worker[] = []
  // Maps each busy Worker to the job it is running, so the right promise
  // can be resolved when its result comes back
  private activeJobs = new Map<Worker, Job>()
  private taskQueue: Job[] = []

  constructor(options: WorkerPoolOptions = {}) {
    // Default to core count - 1 so the main thread still has some breathing room
    const workerCount = options.workerCount ?? Math.max(1, (os.cpus().length || 4) - 1)
    for (let i = 0; i < workerCount; i++) {
      this.createWorker()
    }
  }

  private createWorker() {
    const worker = new Worker('./worker.js') // the compiled output of worker.ts
    worker.on('message', (result: ObfuscationResult) => {
      // Resolve the promise of the job this Worker just finished
      const job = this.activeJobs.get(worker)
      this.activeJobs.delete(worker)
      job?.resolve(result)
      // Then take the next task from the FIFO queue, if any
      const nextJob = this.taskQueue.shift()
      if (nextJob) {
        this.dispatchTask(worker, nextJob)
      }
    })
    this.workers.push(worker)
  }

  // Submit a task to the pool
  public runTask(task: ObfuscationTask): Promise<ObfuscationResult> {
    return new Promise((resolve, reject) => {
      const job: Job = { task, resolve, reject }
      const idleWorker = this.workers.find(w => !this.activeJobs.has(w))
      if (idleWorker) {
        this.dispatchTask(idleWorker, job)
      } else {
        this.taskQueue.push(job)
      }
    })
  }

  private dispatchTask(worker: Worker, job: Job) {
    this.activeJobs.set(worker, job)
    worker.postMessage(job.task)
  }
}

Waiting is painful, especially when you have no idea how much longer it will take. So we added a simple progress reporter to provide real-time feedback on the current status.

export class ProgressReporter {
  private completed = 0
  private readonly total: number
  private readonly startTime: number

  constructor(total: number) {
    this.total = total
    this.startTime = Date.now()
  }

  increment(): void {
    this.completed++
    this.report()
  }

  private report(): void {
    const now = Date.now()
    const elapsed = now - this.startTime
    const percentage = (this.completed / this.total) * 100
    // Simple ETA estimate based on the average time per completed chunk
    const avgTimePerChunk = elapsed / this.completed
    const remaining = (this.total - this.completed) * avgTimePerChunk
    console.log(
      `[Parallel Obfuscation] ${this.completed}/${this.total} chunks completed (${percentage.toFixed(1)}%) | ETA: ${(remaining / 1000).toFixed(1)}s`
    )
  }
}

After deploying this solution, the build performance of the HagiCode project improved immediately.

We tested in the following environment:

  • CPU: Intel Core i7-12700K (12 cores / 20 threads)
  • RAM: 32GB DDR4
  • Node.js: v18.17.0
  • OS: Ubuntu 22.04

Results comparison:

  • Single-threaded (before optimization): 118 seconds
  • 4 Workers: 55 seconds (53% improvement)
  • 8 Workers: 48 seconds (60% improvement)
  • 12 Workers: 45 seconds (62% improvement)

As you can see, the gains were not linear. Once the Worker count exceeded 8, the improvement became smaller. This was mainly limited by the evenness of task distribution and memory bandwidth bottlenecks.
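The diminishing returns match what Amdahl's law predicts. As a rough sanity check (the parallel fraction p ≈ 0.7 is our own fit to these numbers, not a figure measured in the project):

```typescript
// Amdahl's law: if a fraction p of the work parallelizes perfectly across
// n workers, expected time = serial part + parallel part / n.
function amdahlTime(totalS: number, p: number, workers: number): number {
  return totalS * (1 - p) + (totalS * p) / workers
}

// With totalS = 118 and p = 0.7, 4 workers predict about 56 s (measured: 55 s),
// and even infinite workers cannot beat the ~35 s serial floor, which is why
// the curve flattens past 8 workers.
```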

In HagiCode’s real-world use, we also ran into several pitfalls, so here they are for reference:

Q1: Build time did not decrease much and even became slower?

  • Reason: creating Workers has its own overhead, or too many Workers were configured, causing frequent context switching.
  • Solution: we recommend setting the Worker count to CPU core count - 1. Also check whether any single chunk is especially large, for example > 5MB. That kind of “monster” file will become the bottleneck, so you may need to optimize your code splitting strategy.
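One hedged way to break up such a "monster" chunk is Rollup's `manualChunks` option, configured through Vite's `build.rollupOptions.output`. The per-package grouping below is only an illustration, not HagiCode's actual configuration:

```typescript
import { defineConfig } from 'vite'

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks(id) {
          // Give each large node_modules package its own vendor-* chunk,
          // so no single chunk serializes the whole obfuscation stage
          if (id.includes('node_modules')) {
            const pkg = id.split('node_modules/')[1].split('/')[0]
            return `vendor-${pkg}`
          }
        },
      },
    },
  },
})
```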

Q2: Workers occasionally crash and cause build failures?

  • Reason: some special code syntax patterns may cause internal errors inside the obfuscator.
  • Solution: we implemented an automatic degradation mechanism. When a Worker reaches the failure threshold, the plugin automatically falls back to single-thread mode to ensure the build does not stop. At the same time, it records the filename that caused the error so it can be fixed later.

Q3: Memory usage is too high (OOM)?

  • Reason: each Worker needs its own memory space to load the obfuscator and parse the AST.
  • Solution:
    • Reduce the number of Workers.
    • Increase the Node.js memory limit: NODE_OPTIONS="--max-old-space-size=4096" npm run build.
    • Make sure Workers do not keep unnecessary references to large objects.

By introducing Node.js Worker Threads, we successfully reduced the production build time of the HagiCode project from 120 seconds to around 45 seconds, greatly improving the development experience and CI/CD efficiency.

The core of this solution is:

  1. Split tasks properly: use Vite chunks as the parallel unit.
  2. Control resources: use a Worker pool to avoid resource exhaustion.
  3. Design for fault tolerance: an automatic degradation mechanism ensures build stability.

If you are also struggling with frontend build efficiency, or your project also does heavy code processing, this solution is worth trying. Of course, we would recommend taking a direct look at HagiCode, where these engineering details are already integrated.

If this article helped you, feel free to give us a Star on GitHub or join the public beta and try it out.


Thank you for reading. If you found this article useful, please click the like button below so more people can discover it.

This content was created with AI-assisted collaboration, reviewed by me, and reflects my own views and position.