How to Write a Custom OpenClaw Skill: The SKILL.md Format Explained

OpenClaw is a powerful open-source AI agent framework, but out of the box it is a generalist. What turns OpenClaw from a chatbot into a domain specialist is the skill system. Skills are self-contained instruction modules that teach the agent how to perform specific tasks—from fetching weather forecasts to running database queries to editing PDFs. If you want OpenClaw to do something reliably without reinventing the approach every time, you write a skill.

This guide covers everything you need to know to write a custom OpenClaw skill: the SKILL.md format, the directory structure, how skill selection works, best practices, testing, and distribution. By the end, you will be able to create production-grade skills that slot cleanly into OpenClaw’s agent runtime.

What OpenClaw Skills Are and How They Work

A skill is a folder placed inside the OpenClaw skills/ directory. Every skill contains one required file—SKILL.md—and optional supporting files in references/, scripts/, and assets/ subdirectories. The agent reads SKILL.md to understand how to execute a specific task.

The critical design insight is how skills are selected. Before every response, the OpenClaw agent scans the name and description fields of every installed skill (these are kept in the agent’s context at all times). If exactly one skill clearly matches the user’s request, the agent loads that skill’s full body and follows its instructions. If multiple skills could apply, the agent picks the most specific one. If none match, the agent proceeds without a skill.

This selection mechanism means the description field is the most important part of any skill. It is the trigger condition. A well-written description makes the skill discoverable and prevents conflicts with other skills.

The SKILL.md Format: Every Field Explained

A SKILL.md file has two sections: YAML frontmatter at the top (between --- delimiters) and a Markdown body below. The frontmatter is always in context. The body is loaded only after the skill triggers.

YAML Frontmatter (Always in Context)

The frontmatter contains exactly two required fields:

  • name — The skill name. Use lowercase letters, digits, and hyphens. Keep it under 64 characters. Prefer short, task-focused names like pdf-editor or slack-address-comments.
  • description — The trigger condition. This field tells the agent when to use the skill. It must include both what the skill does and specific trigger contexts. A good description is explicit about trigger phrases and includes negative conditions (what NOT to use it for).

You must not add any other YAML fields. The description is the sole mechanism for skill selection.
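Putting the two fields together, a minimal but complete SKILL.md might look like this (the skill name and wording are illustrative, not a shipped skill):

```markdown
---
name: csv-reporter
description: >
  Summarize CSV files into Markdown tables. Use when the user asks to
  summarize, profile, or tabulate a .csv file. NOT for: Excel (.xlsx)
  workbooks or JSON data.
---

# CSV Reporter

Summarize a CSV file into a Markdown table.

## Steps

1. Read the header row to identify columns.
2. Compute the row count and per-column types.
3. Emit a Markdown table of the findings.
```

The frontmatter carries the trigger condition; everything below the second `---` is loaded only after the skill fires.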

Body (Loaded After Trigger)

The body contains the instructions the agent follows while the skill is active. The skill-creator built-in skill provides these guidelines for writing the body:

  • Keep it under 500 lines and under roughly 2,000 tokens to minimize context bloat.
  • Include a “When to Use” and “When NOT to Use” section for edge-case clarity.
  • Include specific commands, API calls, or workflows the agent should follow.
  • Reference supporting files in references/ and scripts/ using paths relative to the skill directory.
  • Use imperative, infinitive form for instructions.

The body lives in SKILL.md itself, not in a separate reference file, unless it exceeds the token budget. If it does, move variant-specific details into references/ files and link to them explicitly.
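When details do move out of the body, link them explicitly with paths relative to the skill directory, for example (section and file names illustrative):

```markdown
## Tracked changes

For the full tracked-changes workflow, read
`references/tracked-changes.md` before modifying the document.
```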

Writing a Great Skill Description: The Trigger Condition

The description field is the single most important part of your skill. It is the only thing the agent reads to decide whether to load your skill. A bad description means your skill never triggers for the cases it should, or worse, triggers when it should not.

What Makes a Description Good

A good description is specific, distinctive, and includes negative conditions:

description: >
  Comprehensive document creation, editing, and analysis with support for
  tracked changes, comments, formatting preservation, and text extraction.
  Use when the agent needs to work with professional documents (.docx files) for:
  (1) Creating new documents, (2) Modifying or editing content,
  (3) Working with tracked changes, (4) Adding comments, or any other
  document tasks. NOT for: PDF manipulation or plain text file editing.

Notice the specificity: it names the file format (.docx), lists concrete use cases, and explicitly excludes PDF and plain text work. This prevents overlap with a hypothetical pdf-editor skill and a text-editor skill.

What Makes a Description Bad

A bad description is vague and does not distinguish the skill from others:

description: "Use this skill for AI tasks."

This tells the agent almost nothing. Every skill is for “AI tasks.” The description does not specify file formats, trigger scenarios, or exclusions. A skill with this description would either fire constantly (conflicting with everything) or never fire (because it offers no distinct signal).

Practical Rules for Descriptions

  • Include all “when to use” information in the description, not in the body. The body is only loaded after triggering, so a “When to Use This Skill” section in the body will never help with selection.
  • Include a “NOT for” clause to prevent false triggers.
  • Keep the description to roughly 50–100 words. The frontmatter is always in context; brevity matters.
  • Mention specific file extensions, domain terms, or verb phrases the agent might see in user requests.

Skill Directory Structure: What Goes Where

Every skill lives in its own directory under the OpenClaw skills/ folder. The directory is named exactly after the skill name. The structure follows a three-level progressive disclosure pattern:

skill-name/
├── SKILL.md              (required)
├── references/           (optional, loaded on demand)
│   ├── api-docs.md
│   └── schema.md
├── scripts/              (optional, executable code)
│   └── rotate-pdf.py
└── assets/               (optional, used in output, not loaded)
    ├── logo.png
    └── template.docx

SKILL.md (Required)

The single required file. Contains YAML frontmatter with name and description, plus the Markdown body with instructions.

references/ (Optional)

Documentation and reference material the agent loads on demand. Use this to keep SKILL.md lean while providing deeper context when needed. Examples: API documentation, database schemas, company policies, domain knowledge files. Best practice: if a reference file exceeds roughly 100 lines, include a table of contents at the top so the agent can see its scope at a glance.
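A table of contents at the top of a longer reference file can be as simple as this sketch (contents illustrative):

```markdown
# API Reference

Contents:
1. Authentication — tokens and scopes
2. Endpoints — request and response shapes
3. Rate limits — quotas and retry guidance
4. Error codes — meanings and recovery steps
```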

scripts/ (Optional)

Executable code (Python, Bash, etc.) for tasks that need deterministic reliability or are repeatedly rewritten. The agent can execute these scripts directly without loading their full contents into the context window, which saves tokens. Include scripts when the same code would otherwise be rewritten every time.
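As a sketch of the kind of helper that belongs here, a hypothetical scripts/build_query.py for the weather skill could make wttr.in URL construction deterministic, so the agent never re-derives the quoting rules (this script is an illustration, not part of the shipped skill):

```python
# Hypothetical scripts/build_query.py: deterministic wttr.in URL
# construction the agent can execute instead of rewriting the logic
# in every conversation.
from urllib.parse import quote


def build_wttr_url(location: str, fmt: str = "") -> str:
    """Return a wttr.in URL for `location`, optionally with a format code."""
    # wttr.in expects spaces in place names encoded as '+'
    path = quote(location.replace(" ", "+"), safe="+")
    url = f"https://wttr.in/{path}"
    if fmt:
        url += f"?format={fmt}"
    return url


if __name__ == "__main__":
    print(build_wttr_url("New York", "3"))  # https://wttr.in/New+York?format=3
```

The agent runs the script with a single call and reads only its output, which is exactly the token-saving behavior this directory exists for.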

assets/ (Optional)

Files the agent uses in its output but does not load into context. Examples: templates (HTML boilerplate, Word documents), brand assets (logos, fonts), sample documents. These get copied or modified, not read.

What NOT to Include

Do not create extraneous documentation files. The skill should contain only what the agent needs to do the job. Do not include README.md, INSTALLATION_GUIDE.md, QUICK_REFERENCE.md, CHANGELOG.md, or similar auxiliary files. They add clutter and confuse the agent about what is instructional versus what is metadata.

A Complete Skill Example: Weather Skill Walkthrough

Here is a real skill shipped with OpenClaw. The weather skill demonstrates every part of the format in practice.

Directory Structure

weather/
└── SKILL.md

A simple skill with no references, scripts, or assets. Everything fits in one file.

Frontmatter

---
name: weather
description: >
  Get current weather and forecasts via wttr.in or Open-Meteo. Use when:
  user asks about weather, temperature, or forecasts for any location.
  NOT for: historical weather data, severe weather alerts, or detailed
  meteorological analysis. No API key needed.
---

The description names the data sources (wttr.in, Open-Meteo), lists trigger scenarios (weather, temperature, forecasts), and excludes three edge cases (historical data, alerts, detailed analysis). This prevents the weather skill from triggering on climate research queries or severe weather event requests that should go to official NWS sources.

Body (Abbreviated)

# Weather Skill

Get current weather conditions and forecasts.

## When to Use

✔ USE this skill when:
- "What's the weather?"
- "Will it rain today/tomorrow?"
- "Temperature in [city]"
- "Weather forecast for the week"

## When NOT to Use

✘ DON'T use this skill when:
- Historical weather data
- Climate analysis or trends
- Severe weather alerts
- Aviation/marine weather

## Commands

### Current Weather

```bash
curl "wttr.in/London?format=3"
curl "wttr.in/New+York?0"
```

### Forecasts

```bash
curl "wttr.in/London"
curl "wttr.in/London?format=v2"
```

### Format Options

```bash
curl "wttr.in/London?format=j1"
curl "wttr.in/London.png"
```

Notice the structure: concrete commands the agent can execute and format codes for building custom queries. The skill assumes the agent is competent (it does not explain what curl is) and focuses on providing the specific syntax the agent might not know.

Best Practices: What Makes a Skill Good

Keep Skills Focused

One skill equals one specific task or domain. Do not create a monolithic “all-document-work” skill. Instead, create separate skills for PDF editing, DOCX editing, and plain-text editing. Each skill has a distinctive description that prevents overlap.

Keep Description Distinctive and Non-Overlapping

Since the agent picks the most specific matching skill when multiple apply, your descriptions must carve out clear territory. If two skills both mention “weather,” the agent may load the wrong one or fail to select entirely. Use explicit exclusions (“NOT for”) to prevent ambiguity.

Keep SKILL.md Under 2,000 Tokens

The agent loads the full body of a triggered skill into context. A bloated skill wastes tokens that could be used for conversation history or other context. Use references/ files for deep documentation that is not needed every time. Use scripts/ for code that the agent can execute without reading into context.

Use Progressive Disclosure

The three-level loading system is designed for efficiency:

  1. Level 1 (always in context): The YAML frontmatter—just name and description, about 50–100 words.
  2. Level 2 (on trigger): The SKILL.md body and any reference files the agent chooses to read.
  3. Level 3 (as needed): Scripts that execute without loading into context, and assets used in output.

Respect this layering. Do not dump everything into the body. Keep only essential procedural instructions there.

Include a “What NOT to Do” Section

Edge cases are where skills fail. A section explaining what the skill does not handle (and what to do instead) saves the agent from following the wrong path. The weather skill explicitly excludes historical data, severe alerts, and aviation weather.

Prefer Scripts Over Rewritten Code

If the agent rewrites the same algorithm in every conversation, move it to a scripts/ file. The script is deterministic, token-efficient, and testable. The agent can execute it with a single call rather than re-deriving the logic.

Testing Your Skill: How to Verify It Loads Correctly

Before distributing a skill, verify that the agent loads it under the right conditions.

Step 1: Install the Skill

Place the skill directory inside OpenClaw’s skills/ folder:

cp -r my-skill /path/to/openclaw/skills/

Restart the agent or reload skills so the new skill is registered.

Step 2: Trigger the Skill

Send a query that matches the description. For a PDF editing skill, try:

Can you rotate page 3 of this PDF 90 degrees?

Step 3: Verify Loading

Check which skill the agent loaded by asking directly:

What skill did you just load?

The agent should respond with the skill name and confirm it activated.

Step 4: Test Negative Cases

Send a query that should NOT trigger the skill. For the weather skill, try:

What was the average temperature in London in 1980?

The agent should not load the weather skill (since historical data is excluded in the description) and should respond without it.

Step 5: Test Edge Cases

Send borderline queries to confirm the description is precise enough. If a query about “temperature” triggers the wrong skill, the descriptions overlap and need adjustment.

Step 6: Verify the Body Instructions Work

Execute each command or workflow the SKILL.md body describes. For the weather skill, run the curl commands and confirm the output format matches what the agent expects.

Installing Skills from ClawHub

ClawHub is the OpenClaw skills marketplace where the community shares skills. Installing a skill from ClawHub is straightforward:

  1. Browse ClawHub for the skill you need (weather, PDF editing, database queries, Slack integration, etc.).
  2. Download the skill directory. Skills are distributed as .skill files (a zipped archive) or as plain directories.
  3. Place the skill directory into your OpenClaw skills/ folder.
  4. Restart the OpenClaw agent so it registers the new skill.
  5. Test the skill with a matching query as described in the testing section above.

The skill ecosystem is community-driven. If you build a useful skill, consider publishing it on ClawHub so others can benefit.

The skill-creator Meta-Skill

OpenClaw ships a built-in skill called skill-creator. It is a meta-skill: a skill that helps you write better skills. It triggers when you use phrases like “create a skill,” “author a skill,” “improve this skill,” “review the skill,” or “audit the skill.”

The skill-creator follows a six-step process:

  1. Understand — Clarify what the skill should do with concrete usage examples.
  2. Plan — Determine what reusable resources (scripts, references, assets) the skill needs.
  3. Initialize — Run init_skill.py to generate a template skill directory with the correct structure and frontmatter.
  4. Edit — Customize the SKILL.md body and add the planned resources.
  5. Package — Run package_skill.py to validate the skill and produce a distributable .skill file.
  6. Iterate — Test on real tasks and refine based on what works.

The skill-creator includes helper scripts (init_skill.py and package_skill.py) in its scripts/ directory and references for multi-step workflows and output patterns. It also provides built-in validation that checks YAML frontmatter format, naming conventions, description quality, and file organization before packaging.
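As a sketch of what that validation might check (the exact rules package_skill.py enforces are an assumption; only the constraints stated in this guide are used here):

```python
# Illustrative frontmatter checks, modeled on the rules in this guide:
# lowercase hyphenated name under 64 characters, and a description
# substantial enough to act as a trigger condition.
import re


def validate_frontmatter(name: str, description: str) -> list[str]:
    """Return a list of problems; an empty list means the frontmatter passes."""
    problems = []
    if not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", name):
        problems.append("name must use lowercase letters, digits, and hyphens")
    if len(name) >= 64:
        problems.append("name must be under 64 characters")
    if len(description.split()) < 10:
        problems.append("description is too short to act as a trigger condition")
    if "NOT" not in description:
        problems.append("consider a 'NOT for' clause to prevent false triggers")
    return problems
```

A passing skill like the weather example returns an empty list; a name such as `Bad_Name` or a one-word description produces actionable problem strings instead.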

If you are new to writing skills, start by asking OpenClaw to “create a skill” and let the skill-creator guide you through the process. It will produce a valid, well-structured skill every time.

Sources

This article is based on analysis of OpenClaw’s skill system as implemented in the skill-creator and weather skills shipped with the framework. Key reference files examined include:

  • OpenClaw skill-creator SKILL.md — the meta-skill that defines the skill creation process and all format requirements
  • OpenClaw weather SKILL.md — a production skill demonstrating the format in practice
  • The init_skill.py and package_skill.py scripts from the skill-creator skill
