How I Taught Myself to Work with AI in Domains I Had Zero Expertise In
Most people ask AI for help in fields they don’t understand and get confident-sounding nonsense. I found a way around it: borrowing expert frameworks.
There’s a particular discomfort that comes with tackling something you’ve never done before. It isn’t fear, but the helplessness of not knowing what you don’t know.
You stare at the blank page where your finished work should live, acutely aware that somewhere out there, people have frameworks for this. Proven methodologies. Battle-tested approaches that could guide your decisions and save you from obvious mistakes. But those frameworks live in other people's minds, earned through years of trial and error you haven't experienced.
So you do what most of us do: you ask AI to fill the gap. "Help me create a social media strategy for my B2B startup." "Write the key clauses I should include in this partnership agreement." "What financial metrics should I track for my new e-commerce business?"
And AI responds with something that sounds authoritative, comprehensive, professionally confident. The kind of answer that would cost thousands if it came from a consultant. You feel a moment of relief, but then doubt creeps in:
Is this advice actually good? Is it grounded in world-class practice, or recycled from content farms written by people who’ve never done the work?
This is the challenge of working with AI when you lack domain expertise. It's not just that you don't know the answers; it's that you don't know how to evaluate the answers you receive. You're flying blind in a world where confidence and competence can sound incredibly similar.
How to ground AI in real expertise
What I’ve discovered is that there’s a way to solve this: systematically borrowing the expertise of people who’ve already solved the problems you’re facing. When you do this, you’re not just getting answers; you’re drawing on the judgment of those who’ve already walked the path.
That’s what we’ll cover in this article. I’ll share the exact process I use to avoid costly errors, get higher-quality output, and evaluate AI’s work even in fields where I have no prior experience. By the end, you’ll know how to:
Direct AI to pull from trusted sources instead of vague or unreliable content
Judge the quality of AI’s output, even when you’re not an expert yourself
Build work that reflects proven frameworks and expert practices
Recognize when to trust AI’s reasoning and when to anchor it in outside expertise
Apply practical prompts and workflows to make better decisions faster
The expertise–AI playbook
This article is part three of my series on expertise and AI collaboration:
Part 1: Recursive prompting - I wrote about how to build expertise with AI by starting with dialogue instead of delegating tasks you don’t understand. Each interaction becomes a learning opportunity that improves both your output and your skills.
Part 2: Scaling your expertise with AI - I explored how to build systematic workflows that encode your existing expertise into reusable AI prompts. It’s a way to transform years of hard-won knowledge into scalable systems that still maintain your standards.
Part 3: This piece - Here we’re looking at what to do when you need professional-quality results in domains where you lack expertise. As in recursive prompting, the goal is to bridge the gap, but this time not by building knowledge with AI. Instead, you borrow it from real practitioners and apply proven frameworks through AI collaboration.
Sponsor Spotlight: Guidde ✨
Guidde is an AI-powered platform that turns any workflow into a step-by-step video guide (captions, highlights, voice-overs included). It’s a simple way to document processes without spending hours writing SOPs or editing tutorials.
Great for onboarding, support, or just cutting down those “can you show me how?” pings.
What Harvard's study reveals about expertise and AI
In my previous article on systematizing expertise, I explored how deep domain knowledge can make AI collaboration more demanding. Experts notice every flaw, every oversimplification, and every missing nuance. That vigilance leads to higher-quality outputs, but also requires more effort.
But new research from Harvard Business School with 776 Procter & Gamble professionals adds another dimension. It found that individuals using AI produced solutions at a quality level comparable to two-person teams. AI wasn’t just helping people move faster; it was stepping in for some of the collaboration and domain knowledge they didn’t have on their own.
The data highlights three patterns worth noting:
Time saved doesn’t explain quality gains.
Individuals with AI saved over 8 minutes per task, while teams saved slightly less. But the quality boost happened regardless of time saved, meaning AI improved capability, not just efficiency.
The benefits are uneven across domains.
When people worked on tasks in their own area of expertise, AI added only modest improvements. But when they stepped into less familiar territory, the performance lift was dramatic.
AI reduces the cost of inexperience.
In domains where participants normally struggled or performed near zero, AI helped them produce professional-level work. The less knowledge someone had to start with, the more transformative AI became.
The opportunity to work outside your expertise
What this Harvard field experiment shows is that AI can help you operate outside your area of expertise. Not exactly breaking news, but now we’ve got the data to back it up.
In other words, you don’t need years of experience in every field you touch. With the right approach, AI lets you access proven frameworks and apply them to problems you’d normally feel underqualified to handle, or ones you would usually rely on others to solve.
The credibility problem: when AI guidance isn't enough
While AI can democratize expertise within controlled experiments, real-world application presents a different challenge: How do you know whether the expertise you're borrowing is actually worth borrowing?
When you ask AI for guidance in an unfamiliar domain, you're essentially asking it to be your expert consultant. But unlike a human consultant, you can't check its credentials, verify its track record, or understand the specific experiences that shaped its recommendations. You don't know if its advice comes from world-class practitioners or from content farm articles written by people who've never actually done the work.
This is where the recursive prompting approach, while valuable, hits its limits. Building understanding through dialogue with AI is powerful, but it's still constrained by the quality and reliability of the knowledge that AI has access to. If you're building expertise on shaky foundations, you're just becoming confidently wrong.
Sometimes you need something more robust: access to verified expert knowledge that you can trust as the foundation for your AI collaboration. Not AI pretending to be an expert, but AI helping you access and apply real expertise from people who've actually solved the problems you're facing.
The four-phase system for strategic expertise borrowing
Over the past few months, I've developed a systematic approach for acquiring enough understanding in any domain to make good decisions and avoid obvious mistakes.
It's built on a simple insight: instead of asking AI to replace expertise, ask it to help you access and synthesize real expertise from people who've actually solved the problems you're facing.
You can go through all four phases, or, if you already know the credible experts in your field and have their sources gathered (a course, a book, or something else), you can skip Phase 1 (and even Phase 2).
Phase 1: Mapping and validating expertise
The internet is full of people who talk about expertise and people who actually have it. Your first job is learning to tell the difference.
Quick advice: For this phase, use the web search feature in your preferred LLM or in Perplexity. That way, you’ll see the sources behind the output and can judge their credibility.
This phase has three steps:
Step 1: Map the landscape – finding the real experts
Start by asking AI to map the territory.
The expert discovery prompt:
I need to understand [specific domain] well enough to make good decisions about [specific challenge]. I'm not looking for answers yet—I'm looking for the right people to learn from.
Help me identify:
- The 5-7 most respected practitioners who've actually solved problems like mine
- Companies or organizations known for excellence in this specific area
- Researchers or institutions that study this domain seriously
- Publications, newsletters, or resources that consistently cover this topic with depth
- Specific case studies, methodologies, or frameworks I should know about
For each source, explain why they're considered authoritative and what specific insights they're known for.
Step 2: Validate the sources – separating signal from noise
Not all expertise is created equal. Once you have a list of potential sources, evaluate their credibility and relevance systematically.
The credibility assessment prompt:
I'm researching [domain] and found these potential sources: [list sources].
Help me evaluate each one:
- What are their actual credentials and track record in this specific area?
- What biases or limitations might shape their perspective?
- How recent is their most relevant work, and does timing matter for this topic?
- What criticism or alternative viewpoints exist about their approaches?
- Which sources complement each other vs. contradict each other, and why?
- Based on my specific context [describe your situation], which sources are most relevant?
Step 3: Gather entry points – where to find the knowledge
Even after you’ve identified and validated the right experts, you still need to know where to actually learn from them. Ask AI to point you toward concrete resources you can use so you’re not starting from scratch.
The resource discovery prompt:
Now, based on the sources you identified, point me to publicly available resources I can use to learn how to design [specific type of plan / solution] for [specific challenge].
Please include resources such as books, articles, case studies, courses, or professional guidelines. Focus on evidence-based, practitioner-recommended materials that I can later use to build my own approach.
Phase 2: Knowledge integration - gathering and analyzing expert resources
Once you know which sources to trust, you can start collecting them.
Step 1. Gather resources. Collect articles, case studies, or courses into a single document. Save it as a PDF.
Step 2. Analyze (optional but recommended). If you haven’t fully digested the sources, have AI summarize and extract the key insights first so you learn alongside it. If a source is too long (like a book), use the extraction prompt below to pull out the main takeaways and then add those to your master PDF. This helps you avoid hitting context window limits. (If you’d rather script this step, see the sketch after the prompt.)
Guided extraction prompt:
I’ve gathered expert knowledge on [topic] from [sources]. Please read and analyze this material thoroughly before producing any output. If the file is long, process it in sections and summarize each section in 3–5 bullets before synthesizing.
The goal here is only to extract, structure, and synthesize expert knowledge into a standalone reference I can reuse. Your main task is to build that reusable knowledge resource.
Follow these steps in order:
Extract and return the following:
1. Core Principles – 5–8 principles that appear consistently across sources. Cite section, page, or source for each.
2. Key Methodologies & Frameworks – Summarize the main methods experts use. Note where multiple sources describe the same approach in different ways.
3. Failure Modes – Common mistakes, blind spots, or pitfalls that experts warn against.
4. Success Patterns – Specific conditions, habits, or strategies that repeatedly lead to success.
5. Contextual Adaptations – Situations where experts disagree or advise different approaches depending on context.
6. Cross-Source Synthesis – A short summary (3–5 bullets) of the themes that show up across multiple sources and what they collectively suggest.
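If a source is too long to paste into a chat at all, you can also run this extraction step as a small script and feed the results into your master PDF. Below is a minimal sketch, assuming the OpenAI Python SDK and pypdf are installed and an API key is configured; the model name, chunk size, and file name are placeholders rather than recommendations.

Chunked extraction sketch (Python):

# Summarize a long source section by section so each call stays within
# context limits, then collect the bullets into one reusable reference.
# Assumptions: pypdf and the OpenAI SDK are installed, OPENAI_API_KEY is set.
from pypdf import PdfReader
from openai import OpenAI

client = OpenAI()

def load_pdf_text(path: str) -> str:
    """Pull raw text out of every page of the PDF."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def chunk(text: str, size: int = 8000) -> list[str]:
    """Split the text into roughly equal character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def summarize_chunk(section: str) -> str:
    """Ask the model for 3-5 bullet takeaways from one section."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You extract expert knowledge into concise, source-grounded bullets."},
            {"role": "user", "content": "Summarize the key principles, frameworks, and pitfalls in 3-5 bullets:\n\n" + section},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    text = load_pdf_text("expert_source.pdf")  # hypothetical file name
    bullets = [summarize_chunk(c) for c in chunk(text)]
    print("\n\n".join(bullets))  # paste the result into your master document

The per-section bullets become the material you add to your master PDF; the same structure works for transcripts or long article collections.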
Phase 3: Application - putting AI to work on your task
Upload the PDF into your AI conversation (a regular chat, a project, or a CustomGPT if you’ll use it repeatedly). You can paste a summary, but for best results, use the full document.
⚠️ Context is king. In your prompt, explain your situation in detail. The more context you give, the less generic the output will be.
Task prompt (example, adapt it):
Goal: You will use the attached document to help me complete: [specific task] for [audience] with [success metric or constraint].
Scope: Rely on the attached document as the primary source. If you need outside knowledge, say so and wait for confirmation.
Method: Follow these steps in order:
1. Confirm file access – Confirm you can read the file. If it is long, process by sections. Summarize each section in 3–5 bullets before using it.
2. Extract principles – List the 5-8 principles from the document that are most relevant to my task. Cite section or page numbers.
3. Surface assumptions – State any assumptions or missing information that would change the approach. Ask up to 3 concise questions if needed.
4. Application plan – Outline the plan to apply the principles to my task. Keep it step by step.
5. Pause for confirmation – Present the application plan and wait for my approval. Do not move forward until I confirm.
6. Deliverable – Once confirmed, produce the requested work, grounded in the approved application plan.
7. Evidence map – For each major choice in the deliverable, cite the section or page that supports it.
Context:
- My situation: [insert details, constraints, timeline, risks, stakeholders].
- Things to avoid: [what not to do].
- Examples I like: [optional].
Output format (in order):
1. Document understanding – Extract the key principles, methods, and insights from the attached resource(s) that are relevant to my task. Include citations or page/section notes.
2. Task analysis – Restate my task in your own words and explain which parts of the resource connect most directly. Highlight gaps, assumptions, or open questions.
3. Application plan – Show step by step how the resource’s insights will be applied to my situation. Ask me to confirm this plan before proceeding.
4. Deliverable (after confirmation) – Produce the requested work, grounded in the application plan.
5. Evidence map – Link each major choice in the deliverable back to specific parts of the resource.
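If you’d rather run this phase outside a chat UI, the same grounding works as a single API call that puts your knowledge document in front of the task. Here’s a minimal sketch, assuming the OpenAI Python SDK; the file name, model, and example task are placeholders for your own setup.

Grounded task sketch (Python):

# Run a task with the curated knowledge document as the primary source.
# Assumptions: the OpenAI SDK is installed, OPENAI_API_KEY is set, and the
# knowledge document has been exported to plain text.
from openai import OpenAI

client = OpenAI()

def grounded_task(knowledge_path: str, task: str, context: str) -> str:
    """Answer a task while staying anchored to the reference document."""
    with open(knowledge_path, encoding="utf-8") as f:
        knowledge = f.read()
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": (
                "Rely on the reference document below as the primary source. "
                "Cite the section behind each major choice. If you need outside "
                "knowledge, say so and stop.\n\nReference document:\n" + knowledge
            )},
            {"role": "user", "content": "Task: " + task + "\n\nMy situation: " + context},
        ],
    )
    return response.choices[0].message.content

# Hypothetical usage; file name and task are placeholders.
print(grounded_task(
    "expert_knowledge.txt",
    "Draft an on-page SEO checklist for my next newsletter issue",
    "Solo writer, weekly cadence, no dedicated SEO tooling",
))

The chat version above gives you the pause-and-confirm steps; the scripted version trades that interactivity for repeatability once your knowledge document is stable.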
Phase 4: Evaluation - building in quality control
This is the part most people skip, but it makes a huge difference in quality. Once AI gives you an output, don’t just accept it. Add one more pass: ask AI to step back, review its own work, and check it against expert standards. That way you catch mistakes, fill in gaps, and keep your results closer to professional quality.
Direct re-check prompt:
Evaluate [your specific work/output] directly against the principles in the PDF attached earlier, titled [add document title].
Output format:
1. Principles summary (with citations)
2. Alignment: Where the work matches expert practices
3. Divergence: Where it deviates or is missing key elements
4. Improvements: Specific, actionable changes with reference to sources
5. Evidence map: Each suggestion tied back to page/section citations
6. Self-check: Compare evaluation to principles and note any overlooked areas
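The re-check works the same way if you’re scripting the workflow: a second pass that feeds both the knowledge document and the draft back to the model and asks it to judge one against the other. A minimal sketch with placeholder file and model names, assuming the OpenAI Python SDK:

Evaluation pass sketch (Python):

# Second-pass self-review: critique a draft strictly against the curated
# knowledge document. Assumptions: OpenAI SDK installed, OPENAI_API_KEY set,
# draft and knowledge document saved as plain text files.
from openai import OpenAI

client = OpenAI()

def evaluate_draft(knowledge_path: str, draft_path: str) -> str:
    """Return alignment, divergences, and cited improvement suggestions."""
    with open(knowledge_path, encoding="utf-8") as f:
        knowledge = f.read()
    with open(draft_path, encoding="utf-8") as f:
        draft = f.read()
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a strict reviewer. Judge the draft only against the reference document."},
            {"role": "user", "content": (
                "Reference document:\n" + knowledge +
                "\n\nDraft to evaluate:\n" + draft +
                "\n\nReturn: where the draft aligns with the reference, where it "
                "diverges or misses key elements, specific improvements, and the "
                "section of the reference that supports each suggestion."
            )},
        ],
    )
    return response.choices[0].message.content

# Hypothetical usage with placeholder file names.
print(evaluate_draft("expert_knowledge.txt", "draft_deliverable.txt"))

Running this as a separate pass, rather than asking for perfection in one shot, is what surfaces the gaps the first draft glossed over.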
Putting my system to work to make sense of SEO
To show you the system in action, I made a quick one-minute demo video with Guidde that walks through each phase step by step.
I wanted to see how well my system could apply expert SEO principles to evaluate and optimize one of my own Substack articles. The idea was simple: take advice collected from SEO experts, run my post through the framework, and see what blind spots or missed opportunities it surfaced.
The output was already great, but when I reached the evaluation step it surfaced a lot of things the model had missed the first time, even though I had given it my expert-curated source. That’s where the big improvement happened.
Here’s a snippet of the evaluation prompt output so you can see how critically it looked at its own work and pointed out what was missing:
In my recursive prompting post, I used the same example, but the output here was on a completely different level. The added value and flow of results felt like another class of work.
This time I also used more structured prompting, not just dialogue, which contributed to the difference.
If you find this useful, consider making a pledge. It helps me keep guides like this free and create more of them for you.
Why shortcuts fail (and why difficulty is the point)
The borrowed expertise approach takes more time upfront than just asking AI for quick answers. It requires research, evaluation, and synthesis before you even get to the "real work". This feels inefficient, especially when AI can give you instant responses to almost any question.
But here's what I've learned: the efficiency of getting fast answers is an illusion if those answers can't be trusted or implemented effectively.
When you build expertise first, several things happen:
Quality markers get defined. By grounding AI in expert frameworks instead of whatever its generic training data surfaces, you give it clear standards for what professional-quality work looks like. The result is output you can put real confidence in.
Constraints become clear. With both expert warnings and your own limits in play, AI can avoid common pitfalls instead of giving you overly broad or risky suggestions.
You become a sharper evaluator. Because you’ve mapped the domain first, you can judge AI’s work with more nuance, spotting where it’s aligned with best practice, where it falls short, and where follow-ups are needed.
The compound effect: building your personal expert system
Each time you go through this process, you're not just solving one problem, you're building a personal knowledge system that compounds over time.
After implementing this approach across multiple domains, I've noticed several meta-skills developing:
Source evaluation intuition: I can quickly distinguish practical, earned expertise from purely theoretical knowledge by looking for specific experience markers, track records, and evidence of real-world application.
Pattern recognition across domains: Similar principles appear in different fields, making it easier to adapt knowledge from one area to another and spot universally applicable frameworks.
Context adaptation skills: I've gotten better at taking general frameworks and making them work for specific situations by identifying key variables and constraints.
Critical synthesis ability: I can identify where experts disagree, understand why those disagreements exist, and choose approaches based on contextual fit rather than authority alone.
More importantly, I've built a library of frameworks, templates, and processes that I can adapt for new challenges. Each borrowed expertise session adds to this knowledge base, creating a compound effect where future learning becomes faster and more efficient.
The competitive advantage hiding in plain sight
Most people use AI as an answer machine. You’ll be using it as a knowledge engine. That shift changes everything. Instead of blindly trusting outputs, you’ll avoid obvious mistakes and make sharper decisions. While others settle for generic advice, you’ll be working with distilled insights from the best practitioners.
This distinction matters more than it seems. As AI makes basic information universally available, what sets you apart is the ability to recognize high-quality knowledge and adapt it to your own situation.
Experts will always have deeper mastery. But your edge is different: you can move across fields, pick up the core principles fast, and apply expert frameworks directly to problems you’ve never faced before.
In a world where new domains appear constantly and yesterday’s expertise fades quickly, the real differentiator is knowing how to learn systematically. Instead of waiting years to gain credibility in one field, you can step into new ones and produce solid work much sooner.
Your next move
Pick one challenge you're facing where you lack sufficient expertise to confidently guide AI. Maybe it's developing a content strategy, optimizing your sales process, designing a customer research study, or planning a product launch.
Instead of asking AI to "just figure it out", invest 30 minutes in strategic expertise borrowing:
Map and validate expertise. Find the real practitioners. Vet their credibility. Gather entry-point resources you can study.
Integrate knowledge. Collect the best sources into one PDF. Extract principles, frameworks, pitfalls, and disagreements into a reusable reference.
Apply to your task. Upload the PDF. Give full context. Then have AI produce the deliverable with an evidence map.
Evaluate before you ship. Use a reusable checklist or a one-time recheck against the sources. Fix gaps, then finalize.
The knowledge is out there. What matters is whether you make AI work with it, or without it.
How I can help you beyond this newsletter
Alongside writing here, I also work with a small number of people one-on-one to help them integrate AI into their work. That can mean building custom prompts or GPTs tailored to their workflow, setting up automations and systems, designing a tool stack that actually fits their needs, or even prototyping apps and products. The aim is always the same: to reduce friction, free up time, and help them scale the work that matters most. If that’s something you’d like to explore, just hit reply and tell me what you need help with.
Till next time,
Daria