How to Build a Fast Download Distribution Station with Low-Cost Cloud Servers
Cloud storage bandwidth is absurdly expensive, cross-border access is painfully slow, and CDN pricing is enough to scare anyone away… If you handle file distribution, you probably know these problems well. In this post, I want to share a low-cost approach we worked out while building HagiCode: a cloud server plus an Nginx caching layer. The cost dropped by about half, while download speed improved quite a bit, which was at least a little comforting.
Background
When it comes to the internet, download speed and stability are really part of the user experience. Whether you are running an open-source project or a commercial product, you still need to provide users with a reliable way to download files.
Downloading files directly from cloud storage, such as Azure Blob Storage or AWS S3, looks simple, but it comes with quite a few practical issues:
Network latency: Cross-border and cross-region access can be slow enough to make you want to smash your keyboard. If users have to wait forever, the experience is obviously not going to be great.
Bandwidth cost: Cloud storage egress traffic is painfully expensive. Accessing Azure Blob Storage from mainland China costs about CNY 0.5 per GB, which means 1 TB per month adds up to roughly CNY 500. For a small team, that is not an insignificant amount. After all, nobody’s money comes from the wind.
Access restrictions: In some regions, access to overseas cloud services is unstable, and sometimes it is simply unavailable. Users want to download the files but cannot, which is frustrating for everyone.
CDN cost: Commercial CDNs can solve these problems, but the price is just as real. Most small teams simply cannot justify it.
So is there a solution that is both affordable and practical? Yes. Use a cloud server, a reverse proxy, and a caching layer. It is a straightforward approach, but it works. The cost drops by about half, and the speed improves as well, which is a decent trade-off.
About HagiCode
We did not come up with this architecture out of thin air. It came from our real-world experience working on HagiCode.
HagiCode is an AI coding assistant, and we need to provide downloads for both server-side and desktop-side distributions. Since it is a tool for developers, it is important that users around the world can download it quickly and reliably. That is exactly why we had to figure out a low-cost distribution strategy in the first place.
If you think this solution looks useful, then maybe our engineering is at least decent enough… and if that is the case, HagiCode itself might also be worth checking out.
Architecture Design
Overall Architecture
Let us start with the full architecture:
```
User request
      ↓
DNS resolution
      ↓
┌──────────────────────────────────────────────────┐
│ Reverse proxy layer (Traefik / Bunker Web)       │ ← SSL termination, routing, security protection
├──────────────────────────────────────────────────┤
│ Ports: 80 / 443                                  │
│ Features: automatic Let's Encrypt certificates   │
│           host routing                           │
└──────────────────────────────────────────────────┘
      ↓
┌──────────────────────────────────────────────────┐
│ Cache layer (Nginx)                              │ ← File caching, Gzip compression
├──────────────────────────────────────────────────┤
│ Ports: 8080 (server) / 8081 (desktop)            │
│ Cache strategy:                                  │
│   - index.json: 1 hour                           │
│   - other files: 7 days                          │
│ Cache size: 1 GB                                 │
└──────────────────────────────────────────────────┘
      ↓
┌──────────────────────────────────────────────────┐
│ Origin (Azure Blob Storage)                      │ ← File storage
└──────────────────────────────────────────────────┘
```

The core idea of this architecture is simple: put a cache between users and cloud storage.
User requests first arrive at the reverse proxy layer on the cloud server, and then the Nginx cache layer takes over. If the requested file is already cached, it is returned immediately. If not, Nginx fetches it from cloud storage and stores a local copy at the same time. The next time someone requests the same file, cloud storage does not need to be involved again.
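To make that read-through behavior concrete, here is a toy shell sketch of the same idea using local directories. The paths and file name are made up for this demo, and nothing here touches Azure; in production, Nginx's `proxy_cache` does all of this for you.

```bash
#!/bin/sh
# Toy read-through cache, illustrating what the Nginx layer does for us.
ORIGIN=/tmp/demo-origin    # stands in for Azure Blob Storage
CACHE=/tmp/demo-cache      # stands in for the Nginx cache volume
rm -rf "$CACHE"
mkdir -p "$ORIGIN" "$CACHE"
echo '{"version":"1.0.0"}' > "$ORIGIN/index.json"

fetch() {
  name=$1
  if [ -f "$CACHE/$name" ]; then
    echo "HIT"                          # served from cache, origin untouched
  else
    cp "$ORIGIN/$name" "$CACHE/$name"   # miss: fetch from origin, keep a copy
    echo "MISS"
  fi
}

fetch index.json   # prints MISS (first request goes to the origin)
fetch index.json   # prints HIT  (second request is served from cache)
```

Every `HIT` is a request that cloud storage never sees, which is where both the cost and speed savings come from.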
Why Choose This Architecture?
Advantages of cloud servers:
- Predictable cost: providers like Alibaba Cloud offer low-cost cloud servers, with 1-2 vCPU and 2 GB RAM instances priced around CNY 50-100 per month
- Flexible deployment: you can configure reverse proxy rules and caching policies freely
- Flexible geography: you can choose server regions closer to your users
- Good scalability: you can upgrade the server specification as traffic grows
Reverse proxy + cache architecture:
- Reduce origin pressure: cache hot files to reduce direct access to cloud storage
- Lower cost: cloud server traffic is much cheaper than cloud storage egress
- Improve speed: nearby access and server bandwidth are usually better than direct cloud storage delivery
Why choose Nginx as the cache layer?
This was not a random choice. Nginx has several real advantages here:
- High performance: Nginx is widely recognized for excellent reverse proxy performance
- Mature caching: the built-in `proxy_cache` feature is stable and reliable
- Low resource usage: it can run with as little as 256 MB of memory
- Flexible configuration: you can apply different cache policies to different file types
Reverse Proxy Layer: Traefik vs Bunker Web
HagiCode’s deployment solution supports two reverse proxy options, and each one has its own strengths:
| Option | Characteristics | Suitable Scenarios |
|---|---|---|
| Traefik | Lightweight, automatic SSL, simple configuration | Basic deployment, low-traffic scenarios |
| Bunker Web | Built-in WAF, DDoS protection, anti-bot protection | High-security, high-traffic scenarios |
Traefik: The Lightweight First Choice
Traefik is a modern HTTP reverse proxy and load balancer. Its biggest advantage is that configuration is simple, and it can obtain Let’s Encrypt certificates automatically.
For initial deployments or low-traffic scenarios, Traefik is often a very good choice:
- It uses relatively few resources; a limit of 1.5 CPUs and 512 MB of memory is enough
- SSL certificates are configured automatically, so you do not need to manage them yourself
- Routing is configured through Docker labels, which is convenient enough
Bunker Web: For High-Security Scenarios
Bunker Web is an Nginx-based web application firewall with more comprehensive security protection.
When should you consider switching to Bunker Web? Usually in cases like these:
- You are under DDoS attack
- You need ModSecurity protection
- You want anti-bot protection
- You have stricter security requirements
HagiCode provides the switch-deployment.sh script so you can switch quickly between the two options:
```bash
# Switch to Bunker Web
./switch-deployment.sh bunkerweb

# Switch back to Traefik
./switch-deployment.sh traefik

# Check current status
./switch-deployment.sh status
```

The script performs pre-checks, health checks, and automatic rollback, so the switch process is fairly safe and reliable.
Nginx Cache Layer Configuration
The cache layer is the core of the whole architecture, so Nginx configuration makes a huge difference in cache performance.
Cache Path Configuration
```nginx
# Cache path configuration
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=azure_cache:10m max_size=1g inactive=7d use_temp_path=off;
```

Parameter details:

- `levels=1:2`: two-level cache directory hierarchy to improve file access efficiency
- `keys_zone=azure_cache:10m`: shared memory zone for cache keys; 10 MB is enough for a large number of keys
- `max_size=1g`: maximum cache size is 1 GB
- `inactive=7d`: delete cached files if they have not been accessed for 7 days
- `use_temp_path=off`: write directly into the cache directory for better performance
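As a quick sanity check on the `keys_zone` sizing: the nginx documentation notes that one megabyte of shared memory holds roughly 8,000 keys, so the 10 MB zone leaves plenty of headroom for a 1 GB cache.

```bash
#!/bin/sh
# Rough capacity estimate for keys_zone=azure_cache:10m, using the nginx
# docs' rule of thumb of about 8,000 keys per megabyte of shared memory.
ZONE_MB=10
KEYS_PER_MB=8000
MAX_KEYS=$(( ZONE_MB * KEYS_PER_MB ))
echo "$MAX_KEYS"   # prints 80000 -- far more entries than a 1 GB cache of installers will ever hold
```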
Tiered Cache Strategy
Different file types need different cache strategies:
```nginx
# Server download service
server {
    listen 8080;

    # Short-term cache for index.json (to allow timely updates)
    location /index.json {
        proxy_cache azure_cache;
        proxy_cache_valid 200 1h;
        proxy_cache_key "$scheme$server_port$request_uri";
        add_header X-Cache-Status $upstream_cache_status;
        add_header Cache-Control "public, max-age=3600";

        # Reverse proxy to Azure OSS
        proxy_pass https://${SERVER_DL_HOST}/${SERVER_DL_CONTAINER}$uri?${SERVER_DL_SAS_TOKEN};
        proxy_ssl_server_name on;
        proxy_ssl_protocols TLSv1.2 TLSv1.3;
    }

    # Long-term cache for static files such as installation packages
    location / {
        proxy_cache azure_cache;
        proxy_cache_valid 200 7d;
        proxy_cache_key "$scheme$server_port$request_uri";
        add_header X-Cache-Status $upstream_cache_status;
        add_header Cache-Control "public, max-age=604800";

        proxy_pass https://${SERVER_DL_HOST}/${SERVER_DL_CONTAINER}$uri?${SERVER_DL_SAS_TOKEN};
        proxy_ssl_server_name on;
        proxy_ssl_protocols TLSv1.2 TLSv1.3;
    }
}
```

Why is it designed this way?
index.json is the version check file, so it needs to update promptly. With a 1-hour cache window, users can detect a new release within at most one hour after publication.
Static files such as installation packages change infrequently, so caching them for 7 days greatly reduces origin access. When an update is needed, you can just clear the cache manually.
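The tiered policy boils down to one decision per request path. Expressed as a tiny shell helper for clarity (purely illustrative; in production nginx itself applies these TTLs via the two `location` blocks):

```bash
#!/bin/sh
# Illustrative version of the tiered cache policy: version metadata gets a
# short TTL so new releases are detected quickly; installers get a long one.
cache_ttl() {
  case "$1" in
    /index.json) echo "1h" ;;  # version check file must stay fresh
    *)           echo "7d" ;;  # installers rarely change, cache them long
  esac
}

cache_ttl /index.json      # prints 1h
cache_ttl /app-1.2.3.zip   # prints 7d
```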
X-Cache-Status response header:
This header helps you inspect cache hit behavior:
- `HIT`: cache hit
- `MISS`: cache miss, fetched from origin
- `EXPIRED`: cache expired, fetched from origin again
- `BYPASS`: cache bypassed
How to check it:
```bash
curl -I https://server.dl.hagicode.com/app.zip
```

Cost Analysis
Assume 1 TB of download traffic per month. Let us do the math:
| Option | Traffic Cost | Server Cost | Total |
|---|---|---|---|
| Direct Azure OSS | About CNY 500 | CNY 0 | CNY 500 |
| Cloud server + OSS (80% cache hit ratio) | CNY 100 + CNY 80 | CNY 60 | CNY 240 |
| Commercial CDN | CNY 300-500 | CNY 0 | CNY 300-500 |
Conclusion: adding a cache layer can reduce distribution cost by roughly 50%.
This estimate assumes an 80% cache hit ratio. In practice, if files do not change often, the hit ratio may be even higher.
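The table's figures follow from a simple model. In this sketch, the CNY 0.5/GB storage egress rate comes from the Background section, while the CNY 0.08/GB server traffic rate and CNY 60/month server cost are assumptions inferred to match the table:

```bash
#!/bin/sh
# Back-of-the-envelope cost model behind the table (all amounts in CNY/month).
# Assumed rates: 0.5/GB cloud storage egress, 0.08/GB cloud server traffic,
# 60/month for the server itself.
TRAFFIC_GB=1000   # ~1 TB per month
HIT_PCT=80        # assumed cache hit ratio

ORIGIN_COST=$(( TRAFFIC_GB * (100 - HIT_PCT) / 100 / 2 ))   # 0.5/GB, misses only
SERVER_TRAFFIC_COST=$(( TRAFFIC_GB * 8 / 100 ))             # 0.08/GB, all traffic
SERVER_COST=60

TOTAL=$(( ORIGIN_COST + SERVER_TRAFFIC_COST + SERVER_COST ))
echo "$TOTAL"   # prints 240, versus 500 for direct cloud-storage delivery
```

Raising the hit ratio shrinks only the origin term, which is why cache tuning pays off directly.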
Deployment Practice
Environment Preparation
First, configure the environment variables:
```bash
cd /path/to/hagicode_aliyun_deployment/docker
cp .env.example .env
vi .env  # Fill in the Azure OSS SAS URL and Lark Webhook URL
```

Important: the `.env` file contains sensitive information such as the SAS Token and Webhook URL. Never commit it to version control.
DNS Configuration
Add the following DNS A records:
- `server.dl.hagicode.com` → server IP
- `desktop.dl.hagicode.com` → server IP
Initialize the Server
Use Ansible to initialize the server automatically:
```bash
cd /path/to/hagicode_aliyun_deployment
ansible-playbook -i ./ansible/inventory/hosts.yml ./ansible/playbooks/init.yml
```

This playbook handles the following tasks automatically:
- Create the deployment user
- Install Docker and Docker Compose
- Configure SSH keys
- Set firewall rules
That is the main setup work, and automation saves a lot of time.
Deploy the Services
```bash
./deploy.sh
```

The deployment script helps you do the following:
- Check environment configuration
- Pull the latest code
- Start Docker containers
- Run health checks
- Send deployment notifications (Lark)
One command is enough, which keeps the process convenient.
Verify the Deployment
```bash
# Check container status
docker ps
```
```bash
# Test the download domains
curl -I https://server.dl.hagicode.com/index.json
curl -I https://desktop.dl.hagicode.com/index.json
```

Operations Tips
Cache Management
Caches also need maintenance from time to time:
Check cache disk usage:
```bash
docker volume inspect docker_nginx-cache
du -sh /var/lib/docker/volumes/docker_nginx-cache/_data
```

Clear the cache manually:

```bash
./clear-cache.sh
```

Or run the manual commands directly if needed:

```bash
docker exec nginx sh -c "rm -rf /var/cache/nginx/*"
docker restart nginx
```

Resource Limits
On a 1-core, 2 GB server, the resource limit configuration looks like this:
```yaml
services:
  traefik:
    deploy:
      resources:
        limits:
          cpus: '1.50'
          memory: 512M

  nginx:
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 256M
```

To monitor resource usage, you can occasionally run:

```bash
docker stats
```
The SAS Token is the credential used to access Azure Blob Storage, so leaking it would be serious:
- Do not commit the `.env` file to version control; it is already in `.gitignore`
- Set an appropriate SAS Token expiration time, with 1 year recommended
- Limit SAS Token permissions to read-only
- Rotate SAS Tokens regularly
Monitoring and Alerts
HagiCode integrates Lark/Feishu Webhook notifications, which can send alerts for the following events:
- Deployment success or failure
- Cache clearing status
- Service exceptions
Notifications include server information, timestamps, and error details, making troubleshooting much faster.
High Availability Extensions
When one server is no longer enough, you can consider the following:
- Horizontal scaling: deploy multiple nodes and distribute traffic with DNS round-robin or a load balancer
- CDN in front: put a CDN in front of the cloud servers for even faster access
- Cache warming: use scripts to preload hot files into the cache
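The cache-warming idea from the list above can be sketched as a small helper that expands a list of hot file paths into the URLs a cron job would fetch once, so the first real user request is already a cache HIT. The domain and file names here are placeholders, not part of HagiCode's shipped scripts.

```bash
#!/bin/sh
# Hypothetical cache-warming helper: print one full URL per hot file path.
warm_urls() {
  base=$1
  shift
  for path in "$@"; do
    echo "$base$path"
  done
}

# Placeholder domain and file names for illustration.
warm_urls https://server.dl.hagicode.com /index.json /app-latest.zip
```

A cron job could then pipe this list through `xargs -n1 curl -s -o /dev/null` to pull each file through the cache layer.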
There are a few things worth keeping in mind:
- SSL certificates: Let’s Encrypt has rate limits, so do not switch deployments too frequently or certificate issuance may fail
- Cache clearing: after updating important files, remember to clear the cache or users may still download the old version
- Log management: clean up Docker logs regularly, or the disk may fill up
- Backup strategy: back up files such as Traefik’s `acme.json` and the Bunker Web configuration
- Monitoring and alerts: configure Feishu notifications so you can track deployment status and respond quickly to issues
Conclusion
A cloud server plus an Nginx caching layer is all it takes. HagiCode uses this solution with a fairly low monthly cost, around CNY 60-100 for the server, and the results have been very solid. The main advantages are:
- Predictable cost: roughly 50% cheaper than using cloud storage directly or paying for a commercial CDN
- Flexible deployment: choose Traefik or Bunker Web depending on your needs
- Strong scalability: you can scale horizontally or add a CDN later if needed
- Simple operations: Shell scripts plus Ansible make automated deployment straightforward
For small teams and independent developers who need file distribution, this is definitely a practical option worth trying.
HagiCode has been running this architecture stably in production for a while, and global user downloads have remained reliable. If you are looking for a similar solution, it is well worth a try.
Technology Stack Recap
To wrap up, here is a summary of the technologies involved:
| Component | Choice | Purpose |
|---|---|---|
| Cloud server | Alibaba Cloud ECS | Base runtime environment |
| Reverse proxy | Traefik / Bunker Web | SSL termination, routing, security protection |
| Cache layer | Nginx | Reverse proxy caching, Gzip compression |
| File storage | Azure Blob Storage | File origin |
| Containerization | Docker Compose | Service orchestration |
| Automation | Ansible | Server configuration management |
| Notifications | Lark/Feishu Webhook | Deployment status notifications |
References
Here are the reference materials mentioned in this post:
- HagiCode project: github.com/HagiCode-org/site
- HagiCode website: hagicode.com
- 30-minute hands-on demo: www.bilibili.com/video/BV1pirZBuEzq/
- Docker Compose one-click installation: docs.hagicode.com/installation/docker-compose
- Desktop quick installation: hagicode.com/desktop/
If this post helped you, that already makes it worthwhile:
- Give it a like so more people can find it
- Star the project on GitHub: github.com/HagiCode-org/site
- Visit the official website for more information: hagicode.com
- Watch the 30-minute hands-on demo: www.bilibili.com/video/BV1pirZBuEzq/
- Try the one-click installation: docs.hagicode.com/installation/docker-compose
- Install the desktop app quickly: hagicode.com/desktop/
- Public beta has started, and you are welcome to try it out
That is about it for this post. I hope this solution helps you. If you have better ideas, feel free to share them. Technology is always easier to improve when people learn from each other.
Copyright Notice
Thank you for reading. If you found this article useful, you are welcome to like, bookmark, and share it. This content was created with AI-assisted collaboration, and the final version was reviewed and approved by the author.
- Author: newbe36524
- Original link: https://docs.hagicode.com/blog/2026-04-15-low-cost-cloud-server-download-distribution-station/
- Copyright notice: Unless otherwise stated, all articles on this blog are licensed under BY-NC-SA. Please include the source when reposting.