It is kind of funny when you think about it: the era of AI programming has arrived, and the Agent Skills we keep on hand just keep multiplying. So does the hassle of managing them. This article is about how we used skillsbase to solve that problem.
In the age of AI programming, developers need to maintain an increasing number of Agent Skills - reusable instruction sets that extend the capabilities of coding assistants such as Claude Code, OpenCode, and Cursor. However, as the number of skills grows, a practical problem gradually emerges:
It is not exactly a major problem, but once you have too many things, managing them becomes troublesome.
Skills are scattered across different locations, making management costly
Local skills are scattered in multiple places: ~/.agents/skills/, ~/.claude/skills/, ~/.codex/skills/.system/, and so on
Different locations may have naming conflicts, for example skill-creator existing in both the user directory and the system directory
There is no unified management entry point, which makes backup and migration difficult
This part is genuinely annoying. Sometimes you do not even know where a certain skill actually is. It feels like losing something and then struggling to find it.
When switching development machines, all skills need to be configured again
In CI/CD environments, the skill repository cannot be validated and synchronized automatically
Changing to a different computer means doing everything all over again. It feels a bit like moving house: troublesome every single time, because you have to adapt to the new environment and reconfigure everything from scratch.
To address these pain points, we tried many different approaches: from manual copying to scripted automation, from directly managing directories to globally installing and then recovering files. Each approach had its own flaws. Some could not guarantee consistency, some polluted the environment, and some were hard to use in CI.
We definitely took quite a few detours.
In the end, we found a more elegant solution: skillsbase. The core idea behind this approach is to install and validate locally first, then convert the structure and write it into the repository, and finally uninstall the temporary files. This ensures that the repository contents match the actual installation result while avoiding pollution of the global environment.
It sounds simple when you put it that way, but we only figured it out after stepping into quite a few pitfalls.
The solution shared in this article comes from our hands-on experience in the HagiCode project.
HagiCode is an AI coding assistant project. During development, we need to maintain a large number of Agent Skills to extend various coding capabilities. These real-world needs are exactly what pushed us to build the skillsbase toolset for standardized management of skill repositories.
This was not invented out of thin air. We were pushed into it by real needs. Once the number of skills grows, management naturally becomes necessary. When problems appear during management, solutions become necessary too. Step by step, that is how we got here.
If you are interested in HagiCode, you can visit the official website to learn more or check the source code on GitHub.
Option 1: Copy skill files directly
Pros: simple to implement
Cons: cannot guarantee consistency with the actual installation result of the skills CLI
We did think about this approach. Later, however, we realized that the CLI may apply some preprocessing logic during installation. Direct copying skips that step. As a result, what you copy is not the same as what is actually installed, and that becomes a problem.
Option 2: Install globally and then recover
Pros: the installation process can be validated
Cons: pollutes the execution environment, and it is hard to keep CI and local results consistent
This approach is even worse. A global installation pollutes the environment. More importantly, it is difficult to keep the CI environment consistent with the local environment, which leads to the classic “works on my machine, fails in CI” problem. Anyone who has dealt with that knows how painful it is.
Option 3: Local install -> convert -> uninstall (final solution)
This is the approach adopted by skillsbase:
First install skills into a temporary location with npx skills
Convert the directory structure and add source metadata
Write the result into the target repository
Finally uninstall the temporary files
This approach ensures that repository contents are consistent with the actual installation results seen by consumers, avoids polluting the global environment, standardizes the conversion process, and supports idempotent operations.
This solution was not obvious from the beginning either. We simply learned through enough trial and error what works and what does not.
Adding a skill is very simple. One command is enough. Sometimes, though, you may hit unexpected issues such as poor network conditions or permission problems. Those are manageable - just take them one at a time.
During synchronization, the system checks every source defined in sources.yaml and reconciles them with the contents under the skills/ directory. If differences exist, it updates them; if there are no differences, it skips them. This prevents the “configuration changed but files did not” problem.
The CI configuration is generated automatically as well. You still need to adjust some details yourself, such as trigger conditions and runtime environments, but that is not difficult.
This configuration file is the core of the entire system. All sources are defined here. Change this file, and the next synchronization will apply the new state. In that sense, it is truly a “single source of truth.”
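For orientation, a sources.yaml might look roughly like the sketch below. Treat it as purely illustrative: the field names here are assumptions, and the real schema is whatever skillsbase documents.

# illustrative sketch only - check the skillsbase documentation for the actual schema
sources:
  - name: skill-creator
    source: your-org/skills/skill-creator
  - name: code-review
    source: another-org/skills/code-review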
Every skill directory contains this file, recording its source information. That way, when something goes wrong later, you can quickly locate where it came from and when it was synchronized.
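The file itself is tiny, something along these lines (again, the exact field names are an assumption, not the guaranteed format):

{
  "source": "your-org/skills/skill-creator",
  "syncedAt": "2026-03-11T04:01:19Z"
}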
Validation is one of those things that can feel both important and optional. Still, for the sake of safety, it never hurts to run it from time to time. After all, you never know when something unexpected might happen.
Once CI integration is in place, every change to sources.yaml or the skills/ directory automatically triggers validation. That prevents the situation where changes were made locally but synchronization was forgotten.
Handle naming conflicts: add the system- prefix to system skills consistently. This keeps every skill available while avoiding naming conflicts.
Idempotent operations: all commands support repeated execution, and running sync multiple times does not produce side effects. This is especially important in CI.
Managed files: generated files include the # Managed by skillsbase CLI comment, making them easy to identify and manage. These files can be safely overwritten, and manual modifications are not preserved.
Non-interactive mode: CI environments use deterministic behavior by default, so interactive prompts do not interrupt execution. All configuration is declared through sources.yaml.
Source traceability: every skill has a .skill-source.json file recording its source information, making troubleshooting much faster.
# Team members install the shared skills repository
npx skills add your-org/myskills -g --all

# Clone locally and validate
git clone https://github.com/your-org/myskills.git
cd myskills
npx skills add . --list
By managing the skills repository with Git, team members can easily synchronize their skill collection and ensure that everyone uses the same versions of tools and configuration.
This is especially useful in team collaboration. You no longer run into situations where “it works for me but not for you.” Once the environment is unified, half the problems disappear.
The core value of using skillsbase to maintain a skills collection repository lies in the following:
Security: source validation, conflict detection, and managed file protection
Maintainability: a unified entry point, idempotent operations, and configuration-as-documentation
Standardization: a unified directory structure, naming conventions, and metadata format
Automation: CI/CD integration, automatic synchronization, and automatic validation
With this approach, developers can manage their own Agent Skills the same way they manage npm packages, building a reproducible, shareable, and maintainable skills repository system.
The tools and workflow shared in this article are exactly what we refined through real mistakes and real optimization while building HagiCode. If you find this approach valuable, that is a good sign that our engineering direction is the right one - and that HagiCode itself is worth your attention as well.
After all, good tools deserve to be used by more people.
Thank you for reading. If you found this article useful, you are welcome to like it, save it, and share it in support.
This content was created with AI-assisted collaboration, and the final version was reviewed and confirmed by the author.
In the HagiCode project, users can choose from multiple CLI tools to drive AI programming assistants, including Claude Code CLI, GitHub Copilot, OpenCode CLI, CodeBuddy CLI, Hermes CLI, and more. These CLI tools are general-purpose AI programming tools on their own, but through HagiCode's abstraction layer, they can flexibly connect to different AI model providers.
Zhipu AI (ZAI) provides an interface compatible with the Anthropic Claude API, allowing these CLI tools to directly use domestic GLM series models. Among them, GLM-5.1 is Zhipu’s latest large language model release, with significant improvements over GLM-5.0.
Through a unified abstraction layer, HagiCode enables flexible integration between GLM-5.1 and multiple CLIs. Developers can choose the CLI tool that best fits their preferences and usage scenarios, then use the latest GLM-5.1 model through simple configuration.
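For a rough idea of what that configuration involves with an Anthropic-compatible endpoint, the environment variables below are the kind a CLI like Claude Code reads. The base URL and model identifier are assumptions for illustration; check Zhipu's documentation and your CLI's docs for the authoritative values.

# illustrative only - the endpoint URL and model name are assumptions, not official values
export ANTHROPIC_BASE_URL="https://open.bigmodel.cn/api/anthropic"
export ANTHROPIC_AUTH_TOKEN="<your ZAI API key>"
export ANTHROPIC_MODEL="glm-5.1"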
As Zhipu’s latest model version, GLM-5.1 offers clear improvements over GLM-5.0:
An independent version identifier with no legacy burden
Stronger reasoning and code understanding
Broad multi-CLI compatibility
Flexible reasoning level configuration
With the correct environment variables and Hero equipment configured, users can fully unlock the power of GLM-5.1 across different CLI environments.
Thank you for reading. If you found this article useful, feel free to like, bookmark, and share it to show your support.
This content was created with AI-assisted collaboration, and the final content was reviewed and confirmed by the author.
During the development of the HagiCode project, we needed to integrate multiple AI coding assistant CLIs at the same time, including Claude Code, Codex, and CodeBuddy. Each CLI has different interfaces, parameters, and output formats, and the repeated integration code made the project harder and harder to maintain. In this article, we share how we built a unified abstraction layer with HagiCode.Libs to solve this engineering pain point. You could also say it is simply some hard-earned experience gathered from the pitfalls we have already hit.
The market for AI coding assistants is quite lively now. Besides Claude Code, there are also OpenAI's Codex, Tencent's CodeBuddy, and more. As an AI coding assistant project, HagiCode needs to integrate these different CLI tools across multiple subprojects, including desktop, backend, and web.
At first, the problem was manageable. Integrating one CLI was only a few hundred lines of code. But as the number of CLIs we needed to support kept growing, things started to get messy.
Each CLI has its own command-line argument format, different environment variable requirements, and a wide variety of output formats. Some output JSON, some output streaming JSON, and some output plain text. On top of that, there are cross-platform compatibility issues. Executable discovery and process management work very differently between Windows and Unix systems, so code duplication kept increasing. In truth, it was just a bit more Ctrl+C and Ctrl+V, but maintenance quickly became painful.
The most frustrating part was that every time we wanted to add support for a new CLI capability, we had to change the same code in several projects. That approach was clearly not sustainable in the long run. Code has a temper too; duplicate it too many times and it starts causing trouble.
The approach shared in this article comes from our practical experience in the HagiCode project. HagiCode is an open-source AI coding assistant project that needs to maintain multiple subprojects at the same time, including a frontend VSCode extension, backend AI services, and a cross-platform desktop client. In a way, it was exactly this complex, multi-language, multi-platform environment that led to the birth of HagiCode.Libs. You could say we were forced into it, and so be it.
Although these AI coding assistant CLIs each have their own characteristics, from a technical perspective they share several obvious traits:
Similar interaction patterns: they all start a CLI process, send a prompt, receive streaming responses, parse messages, and then either end or continue the session. At the end of the day, the whole flow follows the same basic mold.
Similar configuration needs: they all need API key authentication, working directory setup, model selection, tool permission control, and session management. After all, everyone is making a living from APIs; the differences are mostly a matter of flavor.
The same cross-platform challenges: they all need to solve executable path resolution (claude vs claude.exe vs /usr/local/bin/claude), process startup and environment variable handling, shell command escaping, and argument construction. Cross-platform work is painful no matter how you describe it. Only people who have stepped into the traps really understand the difference between Windows and Unix.
Based on this analysis, we needed a unified abstraction layer that could provide a consistent interface, encapsulate cross-platform CLI discovery logic, handle streaming output parsing, and support both dependency injection and non-DI scenarios. It is the kind of problem that makes your head hurt just thinking about it, but you still have to face it. After all, it is our own project, so we have to finish it even if we have to cry our way through it.
We created HagiCode.Libs, a lightweight .NET 10 library workspace released under the MIT license and now published on GitHub. It may not be some world-shaking masterpiece, but it is genuinely useful for solving real problems.
When designing HagiCode.Libs, we followed a few principles. They all came from lessons learned the hard way:
Zero heavy framework dependencies: it does not depend on ABP or any other large framework, which keeps it lightweight. These days, the fewer dependencies you have, the fewer headaches you get. Most people have already been beaten up by dependency hell at least once.
Cross-platform support: native support for Windows, macOS, and Linux, without writing separate code for different platforms. One codebase that runs everywhere is a pretty good thing.
Streaming processing: CLI output is handled with asynchronous streams, which fits modern .NET programming patterns much better. Times change, and async is king.
Flexible integration: it supports dependency injection scenarios while also allowing direct instantiation. Different people have different preferences, so we wanted it to be convenient either way.
If your project already uses dependency injection, such as ASP.NET Core or the generic host, you can integrate it directly. It is a small thing, but a well-behaved one:
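A minimal sketch, assuming the provider can simply be registered with Microsoft.Extensions.DependencyInjection (HagiCode.Libs may also ship its own registration extensions, so check its README for the exact API):

using Microsoft.Extensions.DependencyInjection;

// Hypothetical registration sketch; the type names follow the direct-instantiation
// example further below, not necessarily the library's dedicated registration helpers.
var services = new ServiceCollection();
services.AddSingleton<ClaudeCodeProvider>();

using var serviceProvider = services.BuildServiceProvider();
var claude = serviceProvider.GetRequiredService<ClaudeCodeProvider>();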
If you are writing a simple script or working in a non-DI scenario, creating an instance directly also works. Put simply, it depends on your personal preference:
var claude = new ClaudeCodeProvider();
var options = new ClaudeCodeOptions
{
    ApiKey = "sk-ant-xxx",
    Model = "claude-sonnet-4-20250514"
};

await foreach (var message in claude.ExecuteAsync(options, "Help me write a quicksort"))
{
    // Handle messages
}
Both approaches use the same underlying implementation, so you can choose the integration style that best fits your project. There is no universal right answer in this world. What suits you is the best option. It may sound cliché, but it is true.
Each provider has its own dedicated testing console project, making it easier to validate the integration independently. Testing is one of those things where if you are going to do it, you should do it properly:
Ping: health check to confirm the CLI is available
Simple Prompt: basic prompt test
Complex Prompt: multi-turn conversation test
Session Restore/Resume: session recovery test
Repository Analysis: repository analysis test
This standalone testing console design is especially useful during debugging because it lets us quickly identify whether the issue is in the HagiCode.Libs layer or in the CLI itself. Debugging is really just about finding where the problem is. Once the direction is right, you are already halfway there.
Cross-platform compatibility is one of the core goals of HagiCode.Libs. We configured the GitHub Actions workflow .github/workflows/cli-discovery-cross-platform.yml to run real CLI discovery validation across ubuntu-latest, macos-latest, and windows-latest.
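The heart of that workflow is an OS matrix. A condensed sketch of the idea, not the actual file contents, looks like this:

jobs:
  cli-discovery:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '10.0.x'
      - run: dotnet test   # the real workflow runs the CLI discovery validation here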
This ensures that every code change does not break cross-platform compatibility. During local development, you can also reproduce it with the following commands. After all, you cannot ask CI to take the blame for everything. Your local environment should be able to run it too:
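Locally that usually just means the standard .NET commands; the test filter below is illustrative, so use whatever category or project names the repository actually defines:

dotnet build
dotnet test --filter "Category=CliDiscovery"   # hypothetical filter name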
HagiCode.Libs uses asynchronous streams to process CLI output. Compared with traditional callback or event-based approaches, this fits the asynchronous programming style of modern .NET much better. In the end, this is simply how technology moves forward, whether anyone likes it or not:
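In practice this means the provider exposes an IAsyncEnumerable of messages. A simplified sketch of that shape, with illustrative type names rather than the library's exact signatures:

using System.Collections.Generic;
using System.Threading;

public sealed record CliOptions(string ApiKey, string Model);
public sealed record CliMessage(string Role, string Content);

public interface ICliProvider
{
    // Each message is yielded as soon as it is parsed from the CLI process output.
    IAsyncEnumerable<CliMessage> ExecuteAsync(
        CliOptions options,
        string prompt,
        CancellationToken cancellationToken = default);
}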
This design lets callers handle streaming output flexibly, whether for real-time display, buffered post-processing, or forwarding to other services. Why worry whether the sky is sunny or cloudy? What matters is that once the idea opens up, you can use it however you like.
The HagiCode.Libs.Exploration module provides Git repository discovery and status checking, which is especially useful in repository analysis scenarios. This feature was also born out of necessity, because HagiCode needs to analyze repositories:
// Discover Git repositories
var repositories = await GitRepositoryDiscovery.DiscoverAsync("/path/to/search");

// Get repository information
var info = await GitRepository.GetInfoAsync(repoPath);
HagiCode’s code analysis capabilities use this module to identify project structure and Git status. It is a good example of making full use of what we built.
Based on our practice in the HagiCode project, there are several points that deserve special attention. They are all real issues that need to be handled carefully:
API key security: do not hardcode API keys in your code. Use environment variables or configuration management instead. HagiCode.Libs supports passing configuration through Options objects, making it easier to integrate with different configuration sources. When it comes to security, there is no such thing as being too careful.
CLI version pinning: in CI/CD, we pin specific versions, such as @anthropic-ai/claude-code@2.1.79, to reduce uncertainty caused by version drift. It is also a good idea to use fixed versions in local development. Versioning can be painful. If you do not pin versions, the problem will teach you a lesson very quickly.
Test categorization: default tests use fake providers to keep them deterministic and fast, while real CLI tests must be enabled explicitly. This gives CI fast feedback while still allowing real-environment validation when needed. Striking that balance is never easy. Speed and stability always require trade-offs.
Session management: different CLIs have different session recovery mechanisms. Claude Code uses the .claude/ directory to store sessions, while Codex and CodeBuddy each have their own approaches. When using them, be sure to check their respective documentation and understand the details of their session persistence mechanisms. There is no harm in understanding it clearly.
HagiCode.Libs is the unified abstraction layer we built during the development of HagiCode to solve the repeated engineering work involved in multi-CLI integration. By providing a consistent interface, encapsulating cross-platform details, and supporting flexible integration patterns, it greatly reduces the engineering complexity of integrating multiple AI coding assistants. Much may fade away, but the experience remains.
If you also need to integrate multiple AI CLI tools in your project, or if you are interested in cross-platform process management and streaming message handling, feel free to check it out on GitHub. The project is released under the MIT license, and contributions and feedback are welcome. In the end, it is a happy coincidence that we met here, so since you are already here, we might as well become friends.
The approach shared in this article was shaped by real pitfalls and real optimization work inside HagiCode. What else could we do? Running into pitfalls is normal. If you think this solution is valuable, then perhaps our engineering work is doing all right. And HagiCode itself may also be worth your attention. You might even find a pleasant surprise.
Thank you for reading. If you found this article useful, you are welcome to like, bookmark, and share it.
This content was created with AI-assisted collaboration, and the final content was reviewed and confirmed by the author.
This article explains how to build an automatable image asset pipeline from scratch, covering CLI tool design, a Provider Adapter architecture, and metadata management strategies.
Honestly, I did not expect image asset management to keep us tangled up for this long.
During HagiCode development, we ran into a problem that looked simple on the surface but was surprisingly thorny in practice: generating and managing image assets. In a way, it was like the dramas of adolescence - calm on the outside, turbulent underneath.
As the project accumulated more documentation and marketing materials, we needed a large number of supporting images. Some had to be AI-generated, some had to be selected from an existing asset library, and others needed AI recognition plus automatic labeling. The problem was that all of this had long been handled through scattered scripts and manual steps. Every time we generated an image, we had to run a script by hand, organize metadata by hand, and create thumbnails by hand. That alone was annoying enough, but the bigger issue was that everything was scattered everywhere. When we wanted to find something, we could not. When we needed to reuse something, we could not.
The pain points were concrete:
No unified entry point: the logic for image generation was spread across different scripts, so batch execution was basically impossible.
Missing metadata: generated images had no unified metadata.json, which meant no reliable searchability or traceability.
High manual organization cost: titles and tags had to be sorted out one by one by hand, which was inefficient.
No automation: automatically generating visual assets in a CI/CD pipeline? Not a chance.
We did think about just leaving it alone. But projects still need to move forward. Since we could not avoid the problem, we figured we might as well solve it. So we decided to upgrade ImgBin from a set of scattered scripts into an image asset pipeline that can be executed automatically. Some problems, after all, do not disappear just because you look away.
The approach shared in this article comes from our hands-on experience in the HagiCode project. HagiCode is an AI coding assistant project that simultaneously maintains multiple components, including a VSCode extension, backend AI services, and a cross-platform desktop client. In a complex, multilingual, cross-platform environment like this, standardized image asset management becomes a key part of improving development efficiency.
You could say this was one of those small growing pains in HagiCode’s journey. Every project has moments like that: a minor issue that looks insignificant, yet somehow manages to take up half the day.
HagiCode’s build system is based on the TypeScript + Node.js ecosystem, so ImgBin naturally adopted the same tech stack to keep the project technically consistent. Once you are used to one stack, switching to something else just feels like unnecessary trouble.
The benefit of this layered design is clear responsibility boundaries. It also makes testing easier because external dependencies can be mocked cleanly. In practice, it just means each layer does its own job without getting in the way of the others, so when something breaks, it is easier to figure out why.
ImgBin uses a model of “one asset, one directory.” Every time an image is generated, it creates a structure like this:
library/
└── 2026-03/
└── orange-dashboard/
├── original.png # Original image
├── thumbnail.webp # 512x512 thumbnail
└── metadata.json # Structured metadata
The advantages of this model are:
Self-contained: all files for a single asset live in the same directory, making migration and backup convenient.
Traceable: metadata.json makes it possible to trace generation time, prompt, model, and other details.
Extensible: if more variants are needed later, such as thumbnails in multiple sizes, we can simply add new files in the same directory.
Beautiful things do not always need to be possessed. Sometimes it is enough that they remain beautiful, and that you can quietly appreciate them. That may sound a little far afield, but the logic still holds here: once images are kept together, they are more pleasant to look at and much easier to find.
metadata.json is the core of the entire system. It uses a layered storage strategy that separates fields into three categories:
{
  "schemaVersion": 2,
  "assetId": "orange-dashboard",
  "slug": "orange-dashboard",
  "title": "Orange Dashboard",
  "tags": ["dashboard", "hero", "orange"],
  "source": { "type": "generated" },
  "paths": {
    "assetDir": "library/2026-03/orange-dashboard",
    "original": "original.png",
    "thumbnail": "thumbnail.webp"
  },
  "generated": {
    "prompt": "orange dashboard for docs hero",
    "provider": "azure-openai-image-api",
    "model": "gpt-image-1.5"
  },
  "recognized": {
    "title": "Orange Dashboard",
    "tags": ["dashboard", "ui", "orange"],
    "description": "A modern orange dashboard with charts and metrics"
  },
  "status": {
    "generation": "succeeded",
    "recognition": "succeeded",
    "thumbnail": "succeeded"
  },
  "timestamps": {
    "createdAt": "2026-03-11T04:01:19.570Z",
    "updatedAt": "2026-03-11T04:02:09.132Z"
  }
}
generated: records the original information from image generation, such as the prompt, provider, and model.
recognized: stores AI recognition results, such as auto-generated titles, tags, and descriptions.
manual: stores manually curated results. Data in this area has the highest priority and will not be overwritten by AI recognition.
This layered strategy resolves one of our earlier core conflicts: when AI recognition and manual curation disagree, which one should win? The answer is manual input. AI recognition is there to assist, not to decide. That question also became clearer over time - machines are still machines, and in the end, people still need to make the call.
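A minimal sketch of that priority rule, with field names following the metadata.json example above (the real ImgBin code may structure this differently):

interface Metadata {
  title: string;
  recognized?: { title?: string; tags?: string[] };
  manual?: { title?: string; tags?: string[] };
}

// Manual curation wins over AI recognition, which wins over the generated default.
function effectiveTitle(meta: Metadata): string {
  return meta.manual?.title ?? meta.recognized?.title ?? meta.title;
}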
Another core part of ImgBin is the Provider Adapter pattern. We abstract external APIs behind a unified interface so that even if we switch AI service providers, we do not need to change the business logic.
In a way, it is a bit like relationships - outward appearances can change, but what matters is that the inner structure stays the same. Once the interface is fixed, the internal implementation can vary freely.
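In code, the adapter boundary can be as small as a single interface. A rough TypeScript sketch with illustrative names, not ImgBin's actual exports:

export interface ImageProvider {
  readonly name: string;
  generate(request: { prompt: string; model?: string }): Promise<{ png: Buffer }>;
}

// A mock provider for unit tests; the Azure OpenAI adapter implements the same interface.
export const mockProvider: ImageProvider = {
  name: "mock",
  async generate() {
    return { png: Buffer.alloc(0) }; // empty placeholder image, no network call
  },
};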
Testable: in unit tests, we can pass in mock providers instead of making real external API calls.
Extensible: adding a new provider only requires implementing the interface; caller code does not need to change.
Replaceable: production can use Azure OpenAI while testing can use a local model, with configuration being the only thing that changes.
Sometimes project work feels like that too. On the surface it looks like we just swapped an API, but the internal logic remains exactly the same, and that makes the whole thing a lot less scary.
Batch jobs are defined through YAML or JSON manifest files, which makes them suitable for CI/CD workflows:
assets/jobs/launch.yaml
defaults:
  annotate: true
  thumbnail: true
  libraryRoot: ./library

jobs:
  - prompt: "orange dashboard hero"
    slug: orange-dashboard
    tags: [dashboard, hero, orange]
  - prompt: "pricing grid for docs"
    slug: pricing-grid
    tags: [pricing, grid, docs]
Run the command:
imgbin batch assets/jobs/launch.yaml
The batch job design supports failure isolation: items in the manifest are processed one by one, and a failure in one item does not affect the others. You can also preview the job with --dry-run without actually executing it.
And the best part is that it tells you exactly what succeeded and what failed. Unlike some things in life, where failure happens and you are left not even knowing how it happened.
The manifest format for batch jobs supports flexible configuration. Defaults can be set globally, and individual jobs can override them:
# Global defaults
defaults:
  annotate: true        # Enable AI annotation by default
  thumbnail: true       # Generate thumbnails by default
  libraryRoot: ./library
  model: gpt-image-1.5

jobs:
  # Minimal configuration: only provide a prompt
  - prompt: "first image"

  # Full configuration
  - prompt: "second image"
    slug: custom-slug
    tags: [tag1, tag2]
    annotate: false     # Do not run AI annotation for this job
    model: dall-e-3     # Use a different model for this job
When executed, ImgBin processes jobs one by one. The result of each job is written to its corresponding metadata.json. Even if one job fails, the others are unaffected. After all jobs complete, the CLI outputs a summary report:
✓ orange-dashboard (succeeded)
✓ pricing-grid (succeeded)
✗ hero-banner (failed: API rate limit exceeded)
2/3 succeeded, 1 failed
Some things cannot be rushed. Taking them one at a time is often the steadier path. Maybe that is the philosophy behind batch jobs.
Configuration is one of those things that can feel both important and not that important at the same time. In the end, whatever feels comfortable and fits your workflow best is usually the right choice.
Interface definitions should be clear and complete, including input parameters, return values, and error handling. It is also a good idea to provide both synchronous and asynchronous invocation styles for different scenarios.
That is one small piece of hard-earned experience. Once an interface is set, nobody wants to keep changing it later.
When one item fails in a batch job, the CLI should (a rough code sketch follows this list):
Write detailed error information to a separate log file.
Continue executing other jobs instead of interrupting the whole process.
Return a non-zero exit code at the end to indicate that some jobs failed.
Clearly display the execution result of every job in the summary report.
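Sketched in TypeScript, the failure-isolation loop looks roughly like this (names are hypothetical, not ImgBin's actual implementation):

interface Job { slug: string; prompt: string }
declare function runJob(job: Job): Promise<void>; // generate, annotate, thumbnail...

async function runBatch(jobs: Job[]): Promise<number> {
  const failures: string[] = [];
  for (const job of jobs) {
    try {
      await runJob(job);
      console.log(`✓ ${job.slug} (succeeded)`);
    } catch (err) {
      failures.push(job.slug);
      console.error(`✗ ${job.slug} (failed: ${String(err)})`);
      // keep going: one failed job must not abort the rest
    }
  }
  console.log(`${jobs.length - failures.length}/${jobs.length} succeeded, ${failures.length} failed`);
  return failures.length > 0 ? 1 : 0; // non-zero exit code signals partial failure
}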
Some failures are just failures. There is no point pretending otherwise. It is better to acknowledge them openly and then figure out how to solve them. The same logic applies to projects and to life.
Recognition results are written to the recognized section by default, while manually edited fields are marked in manual. Metadata updates follow an append-only strategy: unless --force is explicitly passed, existing manually curated results are not overwritten.
That point became clear too - some things, once overwritten, are just gone. It is often better to preserve them, because the record itself has value.
As the core tool for image asset management in the HagiCode project, ImgBin solves our problems through the following design choices:
Unified entry point: the CLI covers generation, annotation, thumbnails, and all other core operations.
Metadata-driven: every asset has a complete metadata.json, enabling search and traceability.
Provider Adapter: flexible abstraction for external APIs, making testing and extension easier.
Batch job support: batch image generation can be automated within CI/CD workflows.
Everything else may have faded, but this approach really did end up proving useful.
This solution not only improves HagiCode’s own development efficiency, but also forms a reusable framework for image asset management. If you are building a similarly multi-component project, I believe ImgBin’s design ideas may give you some inspiration.
Youth is all about trying things and making a bit of a mess. If you never put yourself through that, how would you know what you are really capable of?