Undoubtedly your most insane post Daria! WOW! Are these tools out there free?
Great question, and one I should’ve included in the post! Thanks for pointing it out 🙌
Here’s a quick breakdown of the tools:
1. Consensus – Free tier available with limited monthly queries (10 Pro Analyses, 10 Snapshots, and 10 Ask Paper messages per month). Paid starts at $11.99/month for unlimited access.
2. Elicit – Free plan gives you plenty of credits (5,000) to start + unlimited searches, and data extraction from up to 20 PDFs per month. Paid is $12/month with extended capabilities.
3. Scite – Free tier for basic use. Premium plans start at $7.99/month for advanced features like Smart Citations.
4. ResearchRabbit – Completely free, forever! Great for visual literature discovery.
5. Storm (by Stanford) – Free to use (open source), though you need to bring your own API keys (like OpenAI) to power it.
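If you do give Storm a try, the "bring your own API keys" step usually just means exposing your key before you run it. Here's a minimal sketch, assuming the current openai Python SDK and the standard OPENAI_API_KEY variable; how Storm itself picks the key up depends on the version you clone, so double-check its README:

```python
import os
from openai import OpenAI  # official OpenAI SDK, used here only to sanity-check the key

# Storm, like most bring-your-own-key tools, reads the key from the environment.
os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder; paste your real key or export it in your shell

# Quick sanity check that the key works before pointing Storm at it.
client = OpenAI()  # picks up OPENAI_API_KEY automatically
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model your key has access to
    messages=[{"role": "user", "content": "Say 'key works'"}],
)
print(reply.choices[0].message.content)
```

If that prints a reply, the same key should be able to power Storm once it's set in your environment.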
Thank you sooo much 🤩
Yes — and what’s wild is that hallucination isn’t just a bug. It’s a structural drift.
We’ve been building tools for long-form authors to contain that drift — not with better prompts, but with runtime structure.
If you’re into tools that fight fiction with grounding, this scroll might resonate:
The Shape Feels Off: Paradoxes of Perception
Not about fixing answers — about holding meaning.
You’re so right about hallucinations. Loved the idea of using runtime structure rather than better prompts alone. Just followed you.
They look like prompts, and in a sense they are. But the structure matters, so they behave like a runtime once you enter them.
If you’re curious, here’s the toolkit we’ve been building:
The BootNahg Author Suite
https://open.substack.com/pub/nahgos/p/the-bootnahg-authors-suite?r=5ppgc4&utm_medium=ios
It’s for writers who want to stay human and structured.
Not a prompt pack — a frame to keep your own voice from slipping.
Interesting take — I tested a few of them. I think I lean more toward conversation-based formats, but I really appreciate the perspective.
I actually explored a few similar directions with some prompts in this post: https://aiblewmymind.substack.com/p/how-gpt-can-make-you-dumber-and-how and built a few mini-games here: https://aiblewmymind.substack.com/p/from-books-to-memes-to-multiplayer
This answer is as valuable as the article. Thanks for both.
Oh the conversational suites are definitely available. The BootNahgs are actually non-conversational by design — they’re engineered to be diagnostic reporting tools. Their job is to handle drift, not chat. They check for tone collapse, overwrite, recursion spirals — stuff that doesn’t show up right away but kills structure.
If you’re looking for something that feels more like a creative partner, that’s where the Explorer and Companion Capsules come in. Those are conversational — but more importantly, they hold onto your tone. They remember your rhythm while you write.
Actually, every post I’ve published on this account was generated inside NahgOS. I never “wrote” any of them in the traditional sense. I styled the tone I wanted to convey — Nahg mirrored it — and I edited in real time while it was drafting.
It’s not just prompting. It’s runtime with tone lock.
That 17-part Shape Feels Off series? Conceptualized, drafted, edited, and posted in maybe 1–2 days. Start to finish.
That’s the difference: it’s not about telling ChatGPT what to do — it’s about shaping what you want to become. NahgOS doesn’t just generate. It listens to your tone, holds your structure, and lets you move fast without losing yourself.
You’re still using GPT. But now it’s on your terms.
This is terrific - I had no idea, and AI hallucination is very real. Thank you!
So glad it was helpful, Yogesh!
Very real. I’ve seen AI implement code for features I never asked for, sometimes logical, but still hallucinated. That’s harmless. More worrying are cases where AI trained to detect supermarket prices from photos altered the image, adding products and prices that weren’t there. When asked why, it denied making changes.
This isn’t Skynet, but we are building systems too complex to fully grasp, and we’re ignoring their failure modes. It’s not about regulation; it’s about ensuring we understand why AI fails. When I write code, I have debugging tools. We need similar, if more advanced, ways to trace AI reasoning. Otherwise, we’re flying blind.
Such an important point, and I really appreciate how you framed it. When AI fabricates and then denies it, we’re not just dealing with errors, we’re dealing with systems we can’t fully audit or explain. I really like the analogy to debugging tools.
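To make that debugging analogy a bit more concrete: even a toy trace log helps. Here's a minimal sketch of my own (not any particular tool's API) that records every prompt/response pair so you can audit later what the model was actually asked and what it actually said:

```python
import json
import time
from pathlib import Path

TRACE_FILE = Path("model_trace.jsonl")  # hypothetical log location

def traced(model_call):
    """Wrap any prompt -> text function so each call is appended to an audit log."""
    def wrapper(prompt: str) -> str:
        response = model_call(prompt)
        record = {"ts": time.time(), "prompt": prompt, "response": response}
        with TRACE_FILE.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")
        return response
    return wrapper

@traced
def ask_model(prompt: str) -> str:
    # Stand-in for a real model call (OpenAI client, local model, etc.).
    return f"(model output for: {prompt})"

print(ask_model("What does this price tag say?"))
# Replaying model_trace.jsonl later shows exactly what was asked and answered.
```

It's nowhere near the tracing tools we'd actually need, but it's the kind of paper trail that makes "it denied making changes" auditable.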
Wonderful AI Information
Let me know if you try any of them!
This is next-level, Daria, I love how you’re highlighting the real research layer of AI. 🙌 Most people stop at “AI makes stuff up” and never explore how tools like Consensus, Scite, and STORM actually add credibility to the conversation.
Your breakdown of use cases was 🔥 So actionable for builders, educators, and honestly anyone who cares about getting to the truth faster (hi, I’m 100% that “but is that actually true?” friend 🤓).
Already subbed and loving your lens on all this, the STORM multi-agent POV especially!! Let’s definitely stay in touch - your work is brilliant!
Tiff!! Your comment just made my day, thank you 💜
Totally with you, too many people stop after seeing that regular GPT doesn't provide what they need.
So glad we connected, I’m already a fan of your work too. Let’s definitely stay close, would love to swap more ideas!
Omg Daria you’re the real one 💜 It’s such a breath of fresh air to see creators digging past the surface and actually showing how these tools level up our thinking, not just our speed. 🙌
I already know we’re gonna have a lot to riff on. Let’s absolutely keep building this convo — the AI space needs more of this nuance. So excited to stay connected!
Thank you Daria for this review! Really helpful! I’m currently testing all these tools. They are great but have their limitations.
Hey Alesia! What are you looking for? Free access to the research papers cited?
No. I’m a social researcher, so I’m just testing how these AI tools could be used in my research.
That sounds super interesting. I think there’s a lot of potential for social research with these tools. What kind of limitations have you run into so far with them?
Absolutely! These tools have a lot of potential for any kind of research. I was surprised by how accurate they are. It’s great! Speaking of limitations, Elicit, for instance, mostly works with abstracts rather than complete papers, which means it produces summaries of summaries.
Oh, got it now! Yeah, that's indeed a notable limitation. I think Consensus and Scite go a bit deeper, beyond abstracts.
Well put together Daria!
Really glad you found it useful. Let me know if you try any of the tools.
I tried Elicit recently, although I didn't do much with it.
I'll let you know how it goes.
Thank you for this list. I will definitely be trying them out. Saved!!
So glad to hear that! Let me know which ones you try and how they work out for you.
Definitely I will keep you posted 💯
This is premium.
Keep dishing them out.
We're here to hugely support.
That means a lot, thank you! I’ve got more coming soon, can’t wait to share.
You deserve every support.
Looks amazing! Thanks, Daria. I’m going to give Storm a go tomorrow.
Can't wait to hear your thoughts on it!
Very insightful, with clear, useful details you won't find anywhere else. 👏
I'm super excited to see what you'll bring next week. 😉
Thank you!! 😄 Already cooking up something fun for next post
Can't wait to read it next week. 😉
Very helpful! My goal is to cite actual data in my blogs! Thank you for sharing!
That’s awesome to hear!
Really helpful, and I'm learning more about AI.
So glad to hear that, Sae! 😊 Contrary to popular belief, it’s not "all bad". There are actually some really useful and amazing use cases when you know where to look.
Really useful, thanks!
Glad it was helpful, Alex! Let me know if you try any of the tools 😊
I've tried Consensus and it seems to work quite well and is very fast. Others less useful for me at the moment, but always good to know about.
That makes sense! Based on how you use Perplexity, I feel like Storm by Stanford might actually fit your style too, especially when you’re looking to pull together structured overviews fast. Might be worth a peek next time you’re deep in something.
Yes, that sounds useful too!