In case you missed the launch announcement: skills are structured markdown files, with optional bundled resources, that augment LLMs with additional capabilities (e.g. create a PowerPoint deck, run a financial analysis).
A skill is defined by a SKILL.md file with specific frontmatter. A fully functioning example:
```markdown
---
name: hello
description: A greetings skill. Invoke when the user gives salutations.
---

Enthusiastically say hi to the users. Use lots of emojis!
```

This skill can be further extended by bundling additional files with the main skill file:
```
hello/
├── SKILL.md       (main instructions)
├── EMOJIS.md      (catalogue of emojis to use)
└── scripts/
    └── animate.py (utility script that can animate emojis)
```

Skills have completely taken over the majority of my LLM-based workflows, and it's hard now to imagine working without them. They represent the most exciting step forward for augmenting LLM capability (in user space) since the introduction of MCPs (but with real use cases).
Prior to skills, every new conversation with an LLM felt like a scene from *50 First Dates*.
Yes, we had standards like AGENTS.md and MCPs to help customize LLMs. The former helped steer LLMs using a global README file, the latter helped augment LLM capability through the use of arbitrary tools. But neither approach was scalable as the number of workflows and tools kept increasing.
Stuffing everything into a giant AGENTS.md file eventually leads to context issues and impacts model performance on unrelated tasks. MCPs helped encapsulate steps in a workflow but introduced the additional overhead of requiring code execution. They also run into context issues as the number of MCPs increases, because the LLM loads the entire function signature of every MCP tool (some of which consume thousands of tokens) into context at the start of each conversation.
Enter skills. Because a skill is just a markdown file, it inherits the simplicity of the AGENTS.md approach. Because the LLM reads only the name and description into context when starting a conversation, rather than the entire skill, it is extremely token efficient. And because skills can bundle code and additional resources, you get the full power of MCPs plus anything else you care to include.
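The token efficiency comes from that metadata-only discovery step. The exact loader is internal to Claude, but the idea can be sketched as a function that reads just the YAML frontmatter of a SKILL.md and ignores the body until the skill is actually invoked (the function name and naive key/value parsing here are my own assumptions, not a real API):

```python
import re
from pathlib import Path

def skill_metadata(skill_dir):
    """Hypothetical sketch: parse only the YAML frontmatter of SKILL.md
    so the skill body stays out of context until it is actually needed."""
    text = (Path(skill_dir) / "SKILL.md").read_text()
    match = re.match(r"---\n(.*?)\n---", text, re.DOTALL)
    if not match:
        return {}
    meta = {}
    for line in match.group(1).splitlines():
        # naive key: value parsing; a real loader would use a YAML parser
        key, _, value = line.partition(":")
        if value:
            meta[key.strip()] = value.strip()
    return meta
```

At conversation start, only these few dozen tokens per skill enter the context window; the full instructions are read on demand.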
If we think of prompts as procedural code (execute said logic from top to bottom) and MCPs as function calls (encapsulation of related logic), then skills represent object-oriented programming.
Skills give prompts the necessary structure to let us compose existing LLM primitives into larger and more ambitious constructs. This means end users no longer need to wait for model providers to train smarter models to unlock additional capability - they can build skills instead.
Skills are great, but they do have some limitations. Most notably, they are only available for Claude and have no built-in sharing mechanism. I ended up building a CLI (skillz) to solve the availability issue for my own workflows; it works by injecting skill instructions into Codex and other LLM-powered coding tools.
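skillz's actual mechanics aren't reproduced here, but the injection idea can be sketched as follows: scan a skills directory, pull out each skill's name and description, and append a compact index to the tool's instructions file so the agent can open the full SKILL.md only when relevant. The directory layout, function name, and section heading are all assumptions for illustration:

```python
import re
from pathlib import Path

def inject_skills(skills_dir: Path, agents_md: Path) -> None:
    """Hypothetical sketch: append a one-line summary per skill to an
    AGENTS.md-style file, preserving the metadata-only loading pattern."""
    entries = ["\n## Available skills\n"]
    for skill_md in sorted(skills_dir.glob("*/SKILL.md")):
        match = re.match(r"---\n(.*?)\n---", skill_md.read_text(), re.DOTALL)
        meta = {}
        if match:
            for line in match.group(1).splitlines():
                key, _, value = line.partition(":")
                if value:
                    meta[key.strip()] = value.strip()
        # one line per skill: cheap in tokens, full instructions on demand
        entries.append(
            f"- **{meta.get('name', skill_md.parent.name)}**: "
            f"{meta.get('description', '')} (see {skill_md})\n"
        )
    with agents_md.open("a") as f:
        f.writelines(entries)
```

Any tool that already respects AGENTS.md then gets skill discovery for free, without the tool itself knowing what a skill is.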
Just as object-oriented programming transformed software development by making code reusable and composable, skills represent a similar inflection point for LLMs. We're moving from a world where new capabilities required model retraining to one where users can extend and specialize their AI tools through composition. The next few years won't just be about bigger models; they'll be about richer ecosystems, better sharing mechanisms, and new abstractions we haven't even imagined yet.
If you’re still copy-pasting prompts or struggling with context limits, consider trying out skills.