Building custom AI workflows for strategy work: What I've learned from Skills

If you're doing strategy work with AI, you've probably had this experience: you craft the perfect prompt for a quarterly review facilitation, it works brilliantly, and then three months later you're scrambling to find that exact wording in your chat history. Or worse, you're rewriting it from scratch because you can't remember what made it work.

This is one of the gaps that Claude Skills fill, and I've been experimenting with them for strategy delivery work recently. Skills let you codify your repeated processes as reusable AI workflows. But what I've learned is that "Skills" actually describes two very different things, and understanding which type you need makes the difference between a useful tool and wasted effort.

What Skills really are

At their core, Skills are just your strategy process written down for AI to follow consistently. If you run quarterly strategic reviews the same way every time - asking the same questions, looking for the same patterns, guiding conversations through the same steps - then you can capture that as a Skill. Instead of explaining your approach every time, you invoke the Skill and the AI already knows what to do.

The problem they solve is straightforward: repeatability without copy-pasting prompts or maintaining a personal library of "that one really good query I wrote six months ago." MCP (Model Context Protocol) servers can do something similar, but they require much more technical expertise to build.

Type 1: Pure markdown Skills (standardized prompting)

Here's what surprised me: the simplest Skills are literally just markdown files with instructions. No coding required. Anyone who can write clear documentation can build these.

I created a Skill for adapting strategic communication to different teams. It's a .md file that describes:

  • How to analyze strategic priorities for their core intent

  • Questions to ask about a team's daily work, responsibilities, and constraints

  • Patterns for translating abstract goals into concrete, role-specific actions

  • Common communication pitfalls when bridging strategy and execution
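As a concrete sketch, a Skill like this is just a structured markdown file. The layout and frontmatter fields below illustrate the general shape rather than Anthropic's exact required format - check the official documentation for the specifics:

```markdown
---
name: strategy-translation
description: Adapt strategic priorities into role-specific guidance for teams
---

# Strategy translation

## Process
1. Analyze each strategic priority for its core intent.
2. Ask about the team's daily work, responsibilities, and constraints.
3. Translate abstract goals into concrete, role-specific actions.

## Pitfalls to avoid
- Jargon that means nothing to execution teams
- Actions with no clear owner or timeframe
```

If you can write documentation like this for a colleague, you can write it for a Skill.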

That's it. Now when I'm preparing for a client session, I don't spend 10 minutes crafting the perfect prompt. I invoke the Skill and it knows the process.

The limitation is obvious: these Skills can only work with information you provide in the conversation. They can't pull data from your systems, check your calendar, or query your database. They're sophisticated prompt templates, which is useful but bounded.

The threshold test: If you're writing the same prompt more than twice, turn it into a Skill. If you find yourself explaining your approach to the AI repeatedly, that explanation belongs in a Skill.

Type 2: Skills with external connections (real power, real complexity)

This is where Skills get genuinely powerful - and where you hit a technical cliff.

Skills can include code that queries databases, calls APIs, fetches live data, and integrates with your systems. Imagine a Skill that:

  • Pulls your team's current OKRs from your project management system

  • Analyzes progress data automatically

  • Identifies blocking dependencies

  • Generates a structured review agenda
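To make that concrete, here's a minimal sketch of the kind of script such a Skill might bundle. The project-management API is stubbed out with sample data, and the OKR structure and field names are assumptions for illustration, not any real system's schema:

```python
# Sketch of a script a complex Skill might bundle. fetch_okrs() stubs the
# project-management API; a real version would authenticate and call it.
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    name: str
    progress: float                         # 0.0 to 1.0
    blocked_by: list = field(default_factory=list)

def fetch_okrs():
    """Stand-in for a real API call to your project management system."""
    return {
        "Improve onboarding": [
            KeyResult("Cut time-to-first-value to 7 days", 0.4,
                      blocked_by=["Data migration"]),
            KeyResult("Ship self-serve setup flow", 0.8),
        ],
    }

def build_review_agenda(okrs):
    """Turn raw OKR data into a structured review agenda, flagging blockers."""
    lines = ["# Quarterly review agenda"]
    for objective, krs in okrs.items():
        avg = sum(kr.progress for kr in krs) / len(krs)
        lines.append(f"## {objective} ({avg:.0%} average progress)")
        for kr in krs:
            lines.append(f"- {kr.name}: {kr.progress:.0%}")
            for blocker in kr.blocked_by:
                lines.append(f"  - BLOCKED BY: {blocker}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_review_agenda(fetch_okrs()))
```

Even this toy version shows where the effort goes: the agenda logic is trivial, but swapping the stub for real authentication, pagination, and error handling is the part that needs a developer.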

This isn't hypothetical; these capabilities exist right now. But building them requires significant development skills: Python, API authentication, error handling, the works. This is not "write some markdown and you're done" territory.

I've watched several mid-sized organizations get excited about the possibility of these Skills, start building, and then realize they need to hire a developer or invest serious time learning to code. That's fine if you understand the commitment going in. It's frustrating if you thought you were signing up for simple prompt engineering.

The decision point: If your repeated process needs live data or system integrations, you're in complex Skill territory. You'll need technical expertise, either in-house or contracted. Don't start building unless you can commit to seeing it through.

The portability question

Skills are Claude-native. They're defined for Anthropic's platform and optimised for Claude's capabilities (just as MCP originally was). But here's what matters: the techniques behind Skills aren't platform-specific.

The practice of codifying your repeated processes, documenting your approach, and creating reusable workflows applies to any AI tool. You can take the same strategy facilitation process I captured as a Claude Skill and adapt it for ChatGPT's custom instructions, or Gemini's system prompts, or whatever comes next.

One of the great advantages of Skills is that they consume very little of the "context" (thinking space) the LLM has until you specifically invoke them. When invoked, a Skill can run in a separate agent LLM call, so only its result takes up space in the original conversation. This helps keep speed, cost and reliability on track, and this exact approach can be implemented on any of the major platforms and even with open-source alternatives.
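That lazy-loading pattern can be sketched generically. This is an illustration of the idea, not any provider's actual implementation: `call_llm` stands in for whatever completion API you use, and only short descriptions sit in the main context until a Skill is invoked:

```python
# Generic sketch of the lazy-loading pattern: the main conversation only ever
# sees skill names and one-line descriptions. Full instructions are loaded
# into a separate sub-call, and only the result comes back.
SKILLS = {
    "quarterly-review": {
        "description": "Facilitate a quarterly strategic review",
        # Stand-in for a long, detailed process document:
        "instructions": "Step-by-step facilitation process...\n" * 50,
    },
}

def call_llm(prompt):
    """Stand-in for any provider's completion API; returns a short result."""
    return f"[result for prompt of {len(prompt)} chars]"

def main_context_index():
    """What the main conversation carries: names and descriptions only."""
    return "\n".join(f"{name}: {s['description']}" for name, s in SKILLS.items())

def invoke_skill(name, task):
    """Run the skill's full instructions in a separate call; only the
    result re-enters the main conversation."""
    sub_prompt = SKILLS[name]["instructions"] + "\n\nTask: " + task
    return call_llm(sub_prompt)
```

The index is a few dozen characters; the full instructions might be thousands. That asymmetry is where the speed and cost savings come from.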

The markdown format translates easily. The approach - "write down how you work so the AI can follow it" - is universal. Yes, you'll need to adjust for each platform's quirks and capabilities. But you're learning a pattern, not locking into a vendor.

I've started with Skills because they're well-designed and Anthropic makes the workflow clear. But I'm not treating them as permanent infrastructure. I'm treating them as a way to practice something more fundamental: making my strategy processes explicit and reusable.

The practical take: Even if you use multiple AI platforms, building Skills teaches you how to think about codifying your work, and that thinking applies across platforms.

How to decide what to build

You probably have 3-5 strategy processes you repeat regularly. Client onboarding, quarterly reviews, OKR planning sessions, post-mortem analyses, strategic briefings - whatever your work involves doing more than once.

Here's how I think about which ones to turn into Skills:

Start with something clearly defined. If you can write down the steps another person could follow, you can make a simple Skill. If your process is still intuitive and experience-based, it's not ready to codify yet.

Ask if it needs live data. If the process works with information provided in conversation, it's a simple Skill. If it needs to query systems or fetch external data, it's complex and you need to decide if the automation value justifies the build effort.

Consider frequency and variance. If you do something weekly and it's always the same, a Skill saves significant time. If you do it quarterly and it's different every time, maybe just keep good notes.

My suggestion: Map your three most repeated strategy processes. Pick the clearest one and build a simple markdown Skill this week. You'll learn whether this approach fits your work before investing in anything complex.

What I'm watching

The skill-building capability is still early. The documentation is solid, and the ecosystem of shared Skills is small but rapidly growing. Patterns will emerge, templates will proliferate, people will share what works.

The bigger question is how quickly other AI platforms adopt similar frameworks. If this becomes a standard pattern across tools, the investment in learning to build Skills gets more valuable.

For now, I'm building simple Skills for my most repeated work and staying away from complex integrations unless there's a compelling ROI. That feels like the right balance: get value from standardization without betting too heavily on any single platform.

If you're doing repeated strategy work with AI - and you probably should be - start simple. Capture one process. See if it saves you time. Expand from there.
