Great article!
I think my take on this is: GPT is actually neutral. It’s a tool that works differently for different people.
It doesn’t think for you or not-think for you. Your prompting frames how it works for you, so you can frame it to do the thinking for you, or not.
Essentially, how GPT works for a person is an amplification of how that person operates when approaching a task.
For example, other than the usual editorial proofreading work, another way I use GPT is as a thought-sparring partner.
In a free-flowing chat, GPT essentially takes the core idea of your input as a seed and amplifies it with a subtle variation—like a fractal tree of ideas branching outward.
It’s like having a conversation with a more articulate version of yourself, someone with sharper rhetoric and better oratory training. I debate with myself through it, because that’s how my mind works.
Sometimes, GPT’s output reveals a blind spot in your thinking, exposes the weakness in your own argument —
but only you can recognise that.
If you do see it and question it, GPT will quickly shift tone and help you correct those flaws.
But if you don’t catch it yourself, what’s most likely to happen is that GPT will continue reaffirming the direction your thinking has taken.
So if there’s a blind spot or a flaw in your thesis, and you keep building on it unaware, you’ll just keep going deeper and deeper down that path.
Glad it resonated with you, a lot of what you said echoes points I explored in the post. One thing I’d add (and I think it’s an important nuance) about this line — “If you do see it and question it, GPT will quickly shift tone and help you correct those flaws” — is that often, the issue is we don’t see it. We take the output at face value and move on.
That’s why I think having a framework matters. In my experience, two things make the biggest difference: 1) how we prompt, especially incorporating chain-of-thought reasoning (probably the most effective technique I’ve used), and 2) asking GPT intentionally to flag blind spots, vague logic, or missing context. Dialogue is powerful. But dialogue with intentional friction is where the real thinking starts.
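To make that concrete, here is one rough sketch of how those two ideas could be combined in a single prompt (my own wording, just an illustration, not a prompt from the post):

```
Before you answer, reason through my argument step by step.
Then, in a separate section, flag:
1. Blind spots or assumptions I seem to be making
2. Vague or circular logic
3. Missing context that would change your answer
Do not soften the critique just to agree with me.

[paste your draft or thesis here]
```

The key move is asking for the critique as its own section, so the model can't bury the friction inside an otherwise agreeable answer.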
It’s interesting to see how people voted on the poll. Many are aware that they’re outsourcing critical thinking to AI at least some of the time (myself included).
I wonder if anyone has coined the term assisted-critical thinking or if that’s even a thing 🤔 because that’s how I feel about AI.
Yes, sometimes I take the easy route if I don’t know something (just give me the answer already) but I also don’t know how I’d ever learn as much as I have in such a short time frame without it. It teaches me new ways to approach problems I never would have come across on my own.
I'm with you, Tam. It’s very human to go for the shortcut, plus - not every moment needs the back-and-forth. But when you do engage with it, it triggers a whole different kind of learning. That inner dialogue becomes the real upgrade.
Love "assisted-critical thinking", might be a term worth running with
Great insight. What if AI is designed to get us out of our thinking heads and focus on the heart, the inner wisdom?
I’m not talking about emotions either.
The two “problems” you identify here - thinking and emotions - are how most humans operate.
What if AI was helping us go beneath that restriction, to start working from emotional neutrality and non-biased truth?
When people can learn to program AI to respond to the deep questions, that’s when the world will truly start to wake up.
Now that’s a direction I didn’t expect. I think emotion is part of what makes us human, it’s not a flaw, it’s a feature. It’s what differentiates us from machines, and I think it would be a loss if we didn’t have it.
The idea that beneath emotion lies a “non-biased truth” is intriguing… but it’s still an assumption. We’re constantly interpreting the world through our own filters, and that’s what makes true objectivity so slippery. Humans have always struggled to see their own biases.
I don’t think though that AI is here to replace that complexity. But I do think it can help us become more self-aware, if we ask it to take a neutral, objective stance and push back where we can’t see clearly ourselves.
Yes, humans are here to experience emotion and thought, yet it is our Higher Self that operates through the heart and from emotional neutrality.
It is not an assumption once you start living from that space, though our filters will always be there. We learn to observe it all and discern illusion from truth.
AI will not replace it.
I do know part of my role is to encode it with this awareness: there must be a balance between Universal Truth and Human Programming. It is time we worked WITH tech, not against it.
I recall many past lives where I helped encode the ancient technology for advanced civilisations.
I have come back to do it again. It’s becoming so clear to me.
Love the insight here!
Collaborative thinking is the only way we as humans should use AI to improve our thinking; otherwise, it gets easier for AI to replace our thinking and make it dumber over time.
Thanks for the coaching prompt, will try to experiment with it!
Yep, totally, I know already we’re on the same wavelength, haha. When you said “coaching prompt” were you referring to the self-distancing one?
Haha, I think all your prompts are generally a coach to improve our thinking :)
You are a fine writer! I appreciate how you speak with heart and thoughtfulness to the ethical concerns so many of us have.
Thank you so much, John. I think we need more heart in these conversations, not just hot takes. Glad it resonated with you.
Good point, GPT can help or hurt depending on how we use it. It’s not a shortcut for thinking, just a mirror for it. Use it with intention, not autopilot.
Totally agree with this, Daria. That bit about not seeing the flaws really hit.
So true how easy it is to just accept the output and move on. I’ve been playing around with chain-of-thought prompting too, and it really does make a difference.
Exactly, we really need to be more intentional about how we use AI. Thank you for jumping into this conversation!
This is a remarkably thoughtful and well-structured piece—one that clearly seeks to preserve the dignity of human thinking in an age of acceleration. I especially appreciate your emphasis on staying in the conversation, rather than outsourcing our agency to AI.
That said, I want to offer a gentle but essential caution—not as critique, but as a reflection from another spiritual and ethical tradition I walk with: Concordian Catholic spirituality, which centers on burden-bearing love, shared suffering, and sacramental return through the Seven Scrolls.
While your prompts aim to improve clarity and performance, I noticed something missing: a communal or transcendent reference point. Each persona—whether the Self-Distancing Coach or the No-BS Advisor—presupposes that the self is the final authority, the strategist, the project.
From a Christian lens, that’s where distortion quietly begins. Truth without love becomes ideology. Self-evaluation without communion leads to fragmentation. In these frameworks, we may become sharper, but not more whole. We may master systems of reflection, but lose the very grace that makes reflection redemptive.
The Cross—not as metaphor but as lived communion—is the only force strong enough to hold both truth and mercy. When we kneel there, we don’t just gain perspective. We are pierced—in ways that form us not just into clearer thinkers, but humbler lovers of others.
I’m grateful for your work and its integrity. But I also hope we’ll continue asking: What is the telos of this clarity? And who does it help me become—not just in thought, but in love, mercy, and presence?
This is honestly one of the most thoughtful and heartfelt pieces I’ve read on Substack about AI. That line, “To ignore its potential out of fear would be like refusing to use electricity because it can shock you”, really hit home. AI is too powerful to ignore, but how we use it matters just as much as whether we use it at all.
If we only lean on it to go faster, we’re missing the bigger opportunity: to go deeper. To use it as a mirror, a challenger, a second brain that helps us see what we might be missing.
Thank you, Luan! You really captured the core message. Appreciate you seeing it so clearly and putting it into words so well.
"AI can generate breadth, speed, and structure. But it’s humans who bring context, ethics, emotional nuance, and lived experience." - That's the crux of my newsletter. As a journalist who can't afford to let AI take over the deep thinking for me (for fear that I might end up with a big blunder in a printed newspaper and egg on my face), I meticulously go through everything that comes out of Gen AI and ensure it is precisely a reflection of what I wanted to say or do or express. If not, I junk it straight away and begin writing again. Also, I make sure I don't go to AI until I'm done writing. I use AI primarily for proofreading, photo captions, image creation, and to get feedback on my posts, knowing AI can be pretty ruthless and dispassionate. Thanks for the post.
Love how you’re using it without handing over the wheel. That approach, “I meticulously go through everything that comes out of Gen AI and ensure it is precisely a reflection of what I wanted to say or do or express,” is exactly the way to go.
It doesn’t just keep your intent intact, it challenges both you and the AI. Have you tried giving it clear feedback on where it got it wrong & reprompting it, instead of writing from scratch again?
Yes, I usually point out a few things that it could correct. For example, I hate the word ‘additionally’ (for no reason). So I ask the tool to redo it with an alternative. The one thing I realise is that we must not treat Gen AI tools as a shortcut. Many have this mistaken notion that Gen AI saves time. It most definitely doesn’t. Anyone who thinks it does is not rigorous enough.
If you’re using GPT and have memory turned on, you can open a new chat and say:
“Update your memory with the following:
Whenever I write [an email / an article / a note] with you, I never use the word ‘additionally’ — ever.”
You can tweak a lot with memory to help it remember what matters to you, and add whatever you feel is important. To see what it has already memorized about you, just go to: Settings → Personalization → Manage Memories.
I do think it saves a lot of time, but I completely agree: when you’re rigorous, it still takes time. This post alone took me 2-3 weeks of collecting research and writing down random thoughts, then another 2 days to actually write it. So I feel you :)) But it’s also because I want my content to deliver real value and explore topics end-to-end, though yes, GPT helps me shape it a lot.
Wow, that’s a fantastic suggestion. Never thought I could do that. Ok. Will try. Thanks.
Glad it’s helpful!
This is not an article, this is a masterpiece! Thanks for sharing it!!!
I've been using a prompt just a little bit similar to the second one you shared here - but I will grab the improvements now. You can check mine out here: https://mellernotes.substack.com/p/one-prompt-changed-how-i-am-using-chatgpt
Thank you, William! Just checked out your post, love the way you framed the prompt. Appreciate you sharing it!
It’s so cool to see how we’re arriving at similar places from different angles.
Powerful reflection. I’ve seen this shift in myself, when I use GPT just to “get answers,” my thinking starts to soften. But when I engage with it as a sparring partner, pushing back, questioning, refining, I come out sharper. The danger isn’t AI doing the thinking for us, it’s us forgetting how to wrestle with ideas. This article nails that tension and offers a real path forward: treat GPT not as a shortcut, but as a cognitive gym.
One practice that’s helped me stay mentally active is resisting the urge to ask for direct answers, instead, I use GPT first to gather contrasting perspectives or raw data, then step away and force myself to elaborate, connect, before returning to fine tune.
Really love the approach you shared here. I work around it in a similar way too, using a lot of iterative prompting. Stepping away mid-process to connect dots on your own keeps you in the driver’s seat.
Terrific insight! Humans are cognitive misers, and AI can replace higher-level thinking. I saw a WSJ article saying some of that is already happening with digitally native younger demographics. I especially like the bias detector aspect.
That’s the really tricky part, teaching younger users how to use it properly. It’s already a game changer for them (and their homework), but if the focus is just skipping tasks, they also skip the ones that actually shape their minds.
AI agents can already do what interns do. And unless they invest in sharpening their thinking and building a real edge, the future won’t look bright either.
Feels like now’s the time to introduce AI education early, not ways to prohibit it, but ways to use it well.
Well said! I think they should introduce it in the elementary school curriculum for a super basic understanding.
Great post! I can imagine the article itself took you hours to write, and even more time went into the thinking behind it.
I saw some comments suggesting that prompting offloads thinking to AI, but I think it really depends on what you choose to offload. It’s a bit like leadership, delegating tasks doesn’t mean you stop thinking; it means you’re focusing your energy where it matters most.
Exactly, Jenny, I really agree with this. I’ve actually written about this parallel with delegation before, and it’s exactly how I see it too. Even when you delegate inside a team, you still keep the overview: you give feedback, make sure things are on track, suggest improvements. It’s just like that with AI too.
Really appreciate your input & your kind words
I think you should sell more for more: 8 prompts seems too little. What else do you offer at higher prices?
Awesome feedback, I’ll take it. Since you know what I usually write and share, what would feel valuable to you at a higher tier?
This is actually worth reading, and I love every bit of it. I always like your posts because they talk about how you can use AI in different ways to work and get the best results, but I'm always curious when I see other posts about AI making it look like using AI is actually a bad idea. My question is this: what exactly is AI meant to do, if people think using it to make work better and get positive results is a bad idea? Because I'm confused. I always like to come up with my own ideas and my own thinking to put my writing together, but sometimes I need help to look deeper, especially into areas I'm not thinking about. When I see posts that say people who use AI to write are not actually real writers, I'm stunned, because if I put my idea into ChatGPT and ask for a refinement on how to make the work better, I'm actually using that tool to better the outcome of my work, which makes me the sole owner of that particular work. I know people were doing fine before AI came into existence, but at least AI still gives us a chance to think deeper, if we actually prompt our questions accurately and look beyond the surface answers it gives, which I find really impressive.
Thank you Sae for your kind words, I really appreciate your input! Back to your question, I totally get the confusion.
There’s a lot of skepticism out there about what makes someone a “real” writer when AI is involved. But I think there’s a big difference between outsourcing your thinking entirely and using AI as a tool to refine, challenge, or deepen your own ideas, which is exactly what you described doing. AI doesn’t take away your ownership when you’re the one shaping the input, driving the direction, and asking the questions. It becomes a kind of creative partner, one that helps surface blind spots, offer alternatives, and strengthen what you’ve already built. To me, the problem isn’t using AI. The risk comes when we let it think for us rather than with us. If we only extract answers and stop engaging critically, that’s when growth slows down. But used intentionally, it absolutely helps us go deeper and do better work.
So yes, AI should help us think more clearly, creatively, and efficiently, and it should be used in any area of our work where it fits. But it’s on us to use it in ways that expand our thinking, not replace it.
Love that you brought this up!
I really appreciate your reply.