Why AI Didn't Work for You (And What Actually Fixes That)
- stephen03058
- Dec 9, 2025
- 5 min read
I hear the same objection constantly: "I tried ChatGPT / Claude / whatever, and it just gave me generic rubbish. It doesn't understand my business."
They're not wrong. Out of the box, AI knows nothing about your industry, your clients, your preferred communication style, or the 47 things you've learned the hard way over the past decade. It makes the same mistakes your newest employee would make, except it makes them confidently and quickly.
So people try it once, get disappointed, and stick with doing that one task manually.
Here's what they're missing: AI tools have moved well beyond "type question, get generic answer." The people getting genuine productivity gains aren't using AI differently—they're configuring it to work with their specific context.
The Real Problem Isn't Intelligence
When someone tells me AI "doesn't get it right," I ask them to describe what went wrong. Usually it's one of three things:
Missing context. AI doesn't know that your client Sarah prefers bullet points, that "ASAP" means Thursday in your organisation, or that your Dataverse schema uses the "spmo_" prefix. It fills gaps with assumptions—and assumptions are where errors live.
Inconsistent results. The same request produces wildly different outputs depending on how you phrase it. Tuesday's report looks nothing like Monday's. You spend as much time editing AI output as you saved generating it.
Repeated explanations. Every conversation starts from zero. You explain your preferences, watch AI ignore half of them, correct it, then do the same thing again tomorrow.
These are real problems. But they're not problems with AI capability—they're problems with how much context the AI is given.
Two Features That Actually Change Things
Most AI platforms now offer ways to give the tool persistent context. In Claude, two features make the biggest difference:
1. Skills (Reusable Instruction Sets)
A skill is essentially a briefing document. You write down how you want certain types of work handled—communication tone, report structure, formatting preferences, terminology—and Claude reads that before responding.
Instead of explaining your meeting summary format every time, you document it once:
"All meeting summaries follow this structure: decisions made, action items with owners and dates, key discussion points. Use bullet points. Keep it under one page. No filler phrases like 'productive discussion was had.'"
Now every meeting summary matches your format without you re-explaining it.
I wrote about this process in detail: [How to Build Claude Skills: Teach AI Your Way of Working]. The short version is that you identify where you're constantly correcting AI output, document your preferences in a simple markdown file, and add it to your Claude project.
The first skill takes 20 minutes. After that, AI actually works the way you work.
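As a concrete sketch, here's what a skill file for the meeting-summary format above might look like. The filename, headings, and exact wording are illustrative, not a required format:

```markdown
# Skill: Meeting Summaries

When asked to summarise a meeting, follow this structure:

1. **Decisions made** – one bullet per decision.
2. **Action items** – each with an owner and a due date.
3. **Key discussion points** – only points that affect a decision or action.

## Style
- Bullet points throughout; no long paragraphs.
- Keep the whole summary under one page.
- No filler phrases ("a productive discussion was had", "great meeting").
```

The point isn't the specific format. It's that the preferences live in a file Claude reads every time, instead of in your head.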
2. Reference Documents (Living Knowledge Bases)
Skills handle how you want things done. Reference documents handle what AI needs to know.
Using Claude's Filesystem connector, you can create markdown files on your computer that Claude reads and updates. Client preferences, project history, lessons learned, technical specifications—anything Claude should know but doesn't.
The powerful part is that these documents evolve. After a client call, you tell Claude:
"Sarah from Acme mentioned they've delayed the Queensland expansion to Q3 and they're now evaluating CRM systems. Update the client context file."
Claude finds the relevant section and adds the new information. Next conversation, that context is already there. You're not maintaining documentation—you're just telling Claude what happened, and the knowledge base updates itself.
I covered this approach in [Building a Second Brain with Claude: How to Create Living Reference Documents]. It sounds like overhead until you realise Claude is doing the documentation work, not you.
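To make this concrete, here's what a fragment of a client context file might look like after the update above. The file name, headings, and details are illustrative:

```markdown
# Client Context: Acme

## Key contacts
- Sarah – primary contact, prefers bullet points over long emails.

## Current status
- Queensland expansion delayed to Q3.
- Now evaluating CRM systems (mentioned by Sarah on a recent call).
```

You never edit this file by hand. You tell Claude what changed, and it finds the right section and updates it.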
What This Looks Like in Practice
Take a common complaint: "AI gives me technically correct information that doesn't fit my situation."
Without configuration, that's exactly what happens. AI knows general principles but not your specifics.
With a skill and reference documents? Different story.
I work with Power Platform—Microsoft's low-code development tools. There are dozens of quirks, limitations, and workarounds that aren't in the official documentation. Things like "modern text input controls don't support the Format property even though the interface suggests they do" or "this specific OData syntax fails silently when you use the wrong case for navigation properties."
Rather than re-explaining these every conversation, I keep a limitations document that Claude reads automatically. When I encounter a new error, I tell Claude to add it to the file. Next time I (or Claude) hit that issue, the workaround is already documented.
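For illustration, entries in that limitations file might look like this. The quirks are the ones mentioned above; the headings and workaround lines are my own sketch of how such an entry could be written:

```markdown
## Power Apps: modern text input
- The modern text input control does not support the Format property,
  even though the interface suggests it does.
- Workaround: fall back to the classic text input control when Format
  behaviour is required.

## Dataverse: OData navigation properties
- Queries fail silently when a navigation property name uses the
  wrong case.
- Workaround: copy navigation property names exactly from the table
  metadata, including casing.
```

Each entry takes seconds to add and saves a repeat of the same debugging session later.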
This isn't theoretical. It's how I actually work. And it's why AI saves me time instead of creating rework.
The Objections, Addressed
"I don't have time to set all that up."
The first skill takes 20 minutes. A basic reference document takes 10. You'll make that back the second time AI produces work you don't have to heavily edit.
More importantly: you can ask Claude to create the skill for you. Describe how you like things done, and Claude will draft the document. You review and adjust. The AI does most of the configuration work.
"It still makes mistakes."
Yes. AI isn't infallible, and pretending otherwise would be dishonest. But the nature of mistakes changes. Instead of fundamental misunderstandings about your context, you get occasional errors in execution. Those are faster to spot and fix.
You're not eliminating review—you're reducing the type of review from "rewrite this completely" to "fix this one thing."
"My work is too complex / specialised / unique."
That's exactly why generic AI disappoints. And exactly why configuring it with your specific context makes the difference.
The more specialised your work, the more value you get from skills and reference documents. Because you're the only one who can provide that context. Once you do, AI stops being a generic tool and starts being useful.
"By the time I explain what I want, I could have done it myself."
True—for a single task. The investment pays off on the second occurrence, and every one after that.
This is the same logic as any process documentation. Writing it down takes time. But writing it down once beats explaining it verbally forever.
Getting Started Without Overwhelm
Don't try to configure everything at once. Start with one friction point:
What do you explain to AI repeatedly?
Where does AI consistently miss the mark?
What context would change a generic response into a useful one?
Pick one. Create a simple skill or reference document. Test it on real work. Refine based on what happens.
The goal isn't perfect AI output on day one. It's output that improves over time because you're building persistent context rather than starting fresh every conversation.
If you're getting stuck with any of this, check out our one-day workshop that helps business leaders get started with solid AI foundations.
This post references two earlier articles: [How to Build Claude Skills] covers creating instruction sets for consistent AI output, and [Building a Second Brain with Claude] covers using the Filesystem connector for evolving reference documents. Both are written for non-technical readers.