ChatGPT Not Working? Here’s the Prompting Fix I Use
Most people use ChatGPT like a search bar. Learn the framework that turns it into a thinking partner and helps you get better results.
Whether you’re using AI to write, brainstorm, plan, or build, this post breaks down how to actually get the results you want. I’ll walk you through simple but powerful prompting techniques, plus real examples you can copy and adapt.
If you’re a curious, non-technical professional trying to work smarter and stay ahead, this is for you.
Why your AI output might be failing
If you’ve ever opened ChatGPT, typed in a question, and ended up with something completely off, you’re not alone. That’s how most of us start.
It’s not that the tool doesn’t work. It’s that most people haven’t been taught how to talk to it.
We assume that because it uses “natural language” - aka just words - it should just work.
Thing is, natural language is just the surface. Prompting well is what actually gets you the output you want.
So… if you’re:
Writing blog posts, essays, or newsletters with AI
Using ChatGPT to brainstorm, summarize, plan, or analyze
Or just trying to get more consistent and reliable answers…
Then learning how to prompt properly will completely change your game.
Right now, most people are leaving value on the table, not because AI isn’t powerful, but because they’re asking it the wrong way. And in many cases, they don’t even realize it.
Talking to AI like a person feels natural, until you realize it doesn’t think like one
The popular advice to just talk to AI like you’d talk to a person feels intuitive, empowering, and accessible. No code. No interface. Just words.
But language seems simple, until it's not.
We've all had moments where we explained something to someone and still got blank stares. Or sent a message that got totally misunderstood.
Now imagine trying to do that with an AI that has:
No emotional intelligence
No shared history
No context unless you build it in
Natural language may be the interface, but it’s not plug-and-play.
It requires structure, clarity, and intentionality.
Garbage Prompt = Garbage Output
If you:
Ask vague questions and get vague responses
Don’t set a tone, and it sounds robotic
Forget the context, and the AI “hallucinates”
Ask for help but get a generic answer…
Then chances are, your prompt is the problem.
Here’s what to do if you don’t get the output you want
Start by giving the AI a role.
This simple step changes everything. When you ask the model to act like a consultant, coach, marketer, or expert in a specific field, it frames the entire response with more relevance and clarity. You’re setting context, and that context shapes quality.
Then, use the SALT framework to move from vague replies to specific, useful results.
SALT stands for:
Style - Define the structure you want (list, essay, email, bullet points)
Audience - Who is this content for?
Length - Do you want brevity or depth?
Tone - Formal? Playful? Neutral?
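If you like keeping things systematic (or you ever script your prompts through an API), SALT boils down to a fill-in-the-blanks template. Here’s a minimal Python sketch; the function name and fields are my own illustration, not an official API:

```python
def salt_prompt(task, style, audience, length, tone):
    """Assemble a prompt from the task plus the four SALT ingredients."""
    return (
        f"{task}\n\n"
        f"Style: {style}\n"
        f"Audience: {audience}\n"
        f"Length: {length}\n"
        f"Tone: {tone}"
    )

prompt = salt_prompt(
    task="You are a sales professional replying to a hesitant prospect.",
    style="A friendly, structured email with short paragraphs and bullets.",
    audience="A mid-level operations manager who is not deeply technical.",
    length="Around 200-300 words.",
    tone="Professional but approachable and helpful.",
)
print(prompt)
```

The point isn’t the code, it’s the habit: every prompt answers the same four questions before it ever reaches the model.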
Now, check out these SALT-powered prompts. They’re not just examples; they’re ready-made templates you can steal, remix, or run with today.
Example 1
Let’s say you’re in sales. You’ve emailed a prospect about your product, a workflow automation tool, but they replied saying:
“Hey, I’m not sure how this would actually benefit us, we already have a few tools in place that kind of do the same thing.”
You want to respond in a clear, compelling way that shows the value without sounding pushy.
Let’s structure your prompt using SALT:
You are a sales professional with over 10 years of experience replying to a hesitant prospect who doesn’t clearly see the value in your product.
Style: Write this as a friendly, structured email with short paragraphs and a bullet point list for clarity.
Audience: The recipient is a mid-level operations manager who is juggling multiple software tools but is not deeply technical.
Length: Medium-length, around 200–300 words.
Tone: Professional but approachable and helpful, like you’re offering a hand, not closing a deal.
Include a brief recap of what your tool does, highlight 2-3 specific benefits that are unique compared to “tools they kind of already use”, and invite them to a 15-minute call to discuss their current setup.
Here’s the context about the product between ### context ###.
### context ###
Example 2
Let’s say you’re prepping your next Substack post about your favorite productivity tools.
You want to sound casual but smart, and you don’t want a generic list. You want to share how you actually use these tools, with real examples, and make it engaging for busy readers who care about practicality.
Let’s structure your prompt using SALT:
You’re a writer crafting a blog-style Substack post for an audience of productivity lovers, indie creators, and tool-fatigued readers, people who’ve tried every shiny app and are craving something that actually works.
The tone should be friendly, authentic, and practical, like you’re chatting with a smart friend over coffee. Keep it around 700 words: not too long, but enough to go into depth.
Structure the post with clear sections, headers, and bullet points where needed, so it’s easy to skim.
Title: “The 3 Productivity Tools That Actually Stuck With Me (And Why)”
Here’s the context about the 3 tools between ### context ###.
### context ###
Remember, these models aren’t mind readers. The more specific you are, the better the output.
If the response is too basic, ask for expert-level depth. If it’s too wordy, ask for something concise. If the format feels off, show the structure you’re aiming for.
That’s exactly what the SALT framework helps you do: give clear, targeted direction so the model doesn’t have to guess. And the less it guesses, the better your results.
Guide the AI’s thinking, not just its format
Sometimes when you ask ChatGPT to help you create something - say, a marketing strategy - it gives you a decent answer, but it’s not quite what you had in mind.
You type something like “Write a marketing plan for my product launch”, and in seconds, it gives you a list of steps: identify your audience, post on social media, create some content.
Technically correct? Sure.
Useful? Not really.
When you don’t guide the AI’s thinking, it fills in the blanks with its best guess, often pulling from high-level ideas or templates that don’t actually match what you need.
And even with a solid prompt using the SALT framework, sometimes the output still feels… off. You’ve set the tone, audience, length, and style, but what comes back sounds generic, surface-level, or like the model is just going through the motions.
That’s where Chain-of-Thought prompting comes in.
Instead of just telling the AI what you want, you walk it through how to think, before it generates the final answer.
Take the marketing plan again. Instead of just saying:
Write a marketing plan
try saying:
Think step by step.
First, define the target audience.
Then, outline the core messaging and positioning.
After that, list three distribution channels that fit this type of product.
Finally, propose a specific campaign idea for each.
Now the model isn’t guessing. It’s reasoning by following your thinking, and you get a response that’s structured, layered, and actually useful. It slows the process down, encouraging the model to reflect and build, rather than rush and dump.
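If you reuse this pattern a lot, the scaffold itself is trivial to generate: a header, the task, then your numbered reasoning steps. A minimal sketch (the function name is mine, just for illustration):

```python
def chain_of_thought(task, steps):
    """Prefix a task with explicit, numbered reasoning steps."""
    lines = ["Think step by step.", task]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, start=1)]
    return "\n".join(lines)

prompt = chain_of_thought(
    "Write a marketing plan for my product launch.",
    [
        "Define the target audience.",
        "Outline the core messaging and positioning.",
        "List three distribution channels that fit this type of product.",
        "Propose a specific campaign idea for each channel.",
    ],
)
print(prompt)
```

Swap in your own steps; the structure is what matters, not the wording.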
Let me show you how this works using the same Substack post example we covered earlier. This time, we’ll design the prompt with more intention, not just telling it what to write, but how to think through it.
You’re a writer crafting a blog-style Substack post for an audience of productivity lovers, indie creators, and tool-fatigued readers, people who’ve tried every shiny app and are craving something that actually works.
The tone should be friendly, authentic, and practical, like you’re chatting with a smart friend over coffee. Keep it around 700 words: not too long, but enough to go into depth.
Structure the post with clear sections, headers, and bullet points where needed, so it’s easy to skim.
Title: “The 3 Productivity Tools That Actually Stuck With Me (And Why)”
Start with a short, punchy intro about testing dozens of tools, falling for the hype, and eventually realizing only a few actually stuck.
Then, for each of the three tools, share how you discovered it, how you actually use it in real life (not the theoretical features), what made it stick for you over time.
End the piece with a short reflection on what it means to “find tools that work for you,” and invite your readers to share their ride-or-die tools in the comments.
Here’s the context about the 3 tools between ### context ###.
### context ###
This works for almost any creative task, whether you’re building strategies, writing outlines, or solving problems. So next time the output feels too fast or too fluffy, remember this: If you want better thinking, prompt it to think better. Start with “Think step by step”, and see what unfolds.
Want even better responses? Feed the machine a few examples
If you’ve ever felt like ChatGPT just doesn’t get your style, know that AI isn’t a mind reader (yet). It doesn’t know your tone, your preferences, or what “good” looks like… unless you show it.
This is where Few-Shot Learning comes in.
It’s one of the most powerful (and most underused) prompting techniques. The idea is simple: instead of just telling the model what to do, you give it a few examples of how you do it.
Think of it like onboarding a new colleague who’s here to take some things off your plate. You wouldn’t just give them a vague task and walk away; you’d share past projects, examples, or notes that reflect your best practices. Same thing here. You’re giving the AI something to model itself on.
Let’s say you write tweets in a very specific style. Maybe you always lead with a personal hook, drop one insight, and end with a reflective line. You could just say:
Write a tweet about why AI can’t replace original thinking.
But the model might give you something flat, like:
“AI is powerful, but it can’t replace human creativity. #OriginalThinking”
Accurate? Sure. Memorable? Not really.
Instead, use the few-shot approach:
Here’s one of my most well-received posts:
###post###
First, study the tone, structure, and pacing.
Then, write a new post in a similar style about why AI can’t replace original thinking.
Match the voice and rhythm as closely as possible.
The more clearly you define your style, the better the model can imitate it.
Whether it’s tweets, newsletters, landing pages, or product updates, don’t just ask for content. Teach the AI your voice. Show it what great looks like. This is where it starts to feel personal.
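A few-shot prompt always has the same shape: your examples first, the task last. If it helps to see that shape spelled out, here’s a rough Python sketch (the helper and the sample tweet are both made up for illustration):

```python
def few_shot_prompt(examples, task):
    """Show the model worked examples of your voice, then state the task."""
    parts = []
    for i, example in enumerate(examples, start=1):
        parts.append(f"Example {i}:\n###post###\n{example}\n###post###")
    parts.append(
        "First, study the tone, structure, and pacing of the examples above.\n"
        f"Then, {task}\n"
        "Match the voice and rhythm as closely as possible."
    )
    return "\n\n".join(parts)

prompt = few_shot_prompt(
    ["Shipped my first AI workflow today. One insight: the tool matters "
     "less than the question you ask it. Still thinking about that."],
    "write a new post in a similar style about why AI can't replace "
    "original thinking.",
)
print(prompt)
```

Two or three examples usually beat one; past five or so, you hit diminishing returns.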
Use delimiters to bring order to your prompts
When your prompt is messy, your output will be too. It’s not that the model is broken, it’s that it doesn’t know where one idea ends and another begins unless you show it.
That’s where structure comes in.
One of the simplest ways to clarify your prompt is to use triple-dash (---) delimiters to split it into clean sections.
This gives the model clear boundaries between context, goals, and instructions. Delimiters help the AI know what’s what (what’s background, what’s the task, what’s the format), so it can reason more cleanly and respond more accurately.
Messy version:
Help me write a business plan for my startup idea. I don’t know where to start but I want it to be clear and convincing.
Structured with delimiters:
You are a startup consultant helping a new founder write their first business plan.
---
My startup idea: An AI-powered note-taking app for students with ADHD, designed to summarize lectures in real time.
---
Goal of the business plan: Create a 1-page plan I can pitch to early-stage investors.
---
Format: Use clear sections with headings. Keep the tone professional but enthusiastic.
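If you assemble prompts in code, this pattern is literally one line: join your sections with a delimiter. A minimal sketch, reusing the business-plan example (the function name is my own):

```python
def structured_prompt(*sections):
    """Join prompt sections with '---' lines so the model sees clear boundaries."""
    return "\n---\n".join(sections)

prompt = structured_prompt(
    "You are a startup consultant helping a new founder write their "
    "first business plan.",
    "My startup idea: An AI-powered note-taking app for students with ADHD, "
    "designed to summarize lectures in real time.",
    "Goal of the business plan: Create a 1-page plan I can pitch to "
    "early-stage investors.",
    "Format: Use clear sections with headings. Keep the tone professional "
    "but enthusiastic.",
)
print(prompt)
```

Any consistent delimiter works (---, ###, XML-style tags); the key is that it never appears inside your content by accident.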
How to know if your AI response is any good
Once you’ve gotten a response, how do you know it’s solid? That’s where the LARF checklist comes in.
LARF =
Logical consistency - Does it contradict itself?
Accuracy - Is the info correct?
Relevance - Does it answer what you asked?
Factual correctness - Is anything made up? Be especially careful with events after the model’s training cutoff.
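You can even turn LARF into a reusable follow-up prompt that asks the model to audit its own answer. A small sketch (the checklist wording is my paraphrase of the four LARF questions):

```python
LARF_CHECKS = {
    "Logical consistency": "Does the answer contradict itself anywhere?",
    "Accuracy": "Is the information correct?",
    "Relevance": "Does it answer what was actually asked?",
    "Factual correctness": "Are specific facts verifiable, especially recent ones?",
}

def larf_followup():
    """Build a follow-up prompt asking the model to audit its own answer."""
    lines = ["Review your previous answer against this checklist:"]
    lines += [f"- {name}: {question}" for name, question in LARF_CHECKS.items()]
    lines.append("Flag anything you are unsure about instead of guessing.")
    return "\n".join(lines)

print(larf_followup())
```

Self-checks aren’t foolproof (the model can confidently pass its own audit), so treat this as a first filter, not a verdict.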
What to do when it messes up
Sometimes a response sounds great, but then you realize one of the “facts” is made up. That’s called a hallucination.
Use follow-ups like:
Double-check this answer against recent news. If your cutoff doesn’t include it, say so.
Provide your answer by only referencing and citing reliable sources.
Prompting is a power skill, but most people treat it like a shortcut
Every prompt you write is a blueprint. You refine it by doing, testing, and tweaking. The first draft is never the final one, so treat prompting like a skill, one you can sharpen with practice.
The model is only as smart as the instructions you give it. Learning how to prompt well isn’t “cheating”; it’s the difference between getting a tool to help with surface tasks and getting it to collaborate with you on real thinking, creation, and decision-making.
If you’re stuck on a prompt or want feedback on something you’re working on, drop it in the comments or reply, I’d love to help you sharpen it.
I’ve been playing with AI tools and prompts for a while now. I’ve built a bunch of things, broken a bunch more, and honestly, I still don’t feel like I’ve mastered any of it. But that’s kind of the point.
If you’re curious about AI, not in a hypey way, but more like how can this actually help me think, create, and work smarter, I’d love to have you on this ride with me.
Each week I test AI tools, prompts, and workflows, and share the ones that blow my mind, specifically for curious, non-technical professionals who want to stay relevant, work smarter, and keep up without the overwhelm.