Your AI doesn’t need better prompts. It needs to make you think.
What if your AI interviewed you instead of writing for you? Not “generate a blog post about X” but an actual interview — informed, opinionated, and surprisingly good at follow-up questions. A rubber duck with a research budget. You talk, it pushes back, and the blog post writes itself as a side effect. This one did. The reason I started doing this is depressingly simple: the better AI gets at writing for us, the worse we get at thinking. I decided I’d rather argue with a well-read duck than slowly forget how to have ideas.
Advait Sarkar from Microsoft Research recently gave a TED talk where he described what happens when you give an entire workforce access to the same AI. He called it a hive mind. Except the hive is really boring and keeps suggesting the same five ideas. Meanwhile Brandon Sanderson gave a keynote about writing six terrible novels before he wrote a good one, and his point was: the books aren’t the art. You are. The struggle is what made him a writer. Skip the struggle, skip the becoming.
These two ideas kept bumping into each other in my head. Sarkar saying we’ve outsourced our thinking. Sanderson saying the thinking is the product. And me, staring at a blog post I’d just had Claude draft from a three-sentence prompt, realizing with the quiet horror of a man who just hit “reply all” that I was part of the problem.
Because here’s the thing. Everyone’s using AI the same way: type prompt, get draft, publish. And the output is fine. It’s fine the way elevator music is fine. It’s the literary equivalent of a firm handshake from someone whose name you immediately forget. Researchers have measured this.[1] The more people use the same AI the same way, the more everything converges into the same beige paste. We’re all slowly becoming the same author, and that author is not someone you’d want to sit next to at dinner.
Worse: it’s not just the output that gets boring. We get boring. People put less effort into thinking when AI does the heavy lifting.[2] You don’t build muscle watching someone else do push-ups. You don’t learn to cook by ordering DoorDash. And you don’t develop expertise by nodding along to a language model like a dashboard bobblehead.
So I tried something stupid.
The Expert Interview
I stopped prompting the AI to write. Instead, I made it interview me.
That’s it. That’s the revolutionary idea. I know. Hold your applause.
Three steps. First, the AI does deep research and fills its brain with context. Second, it uses that context to ask me 15-30 provocative questions, one at a time, each designed to make me go “well actually…” and then talk for five minutes straight while gesticulating at nobody in my apartment. Third, it takes the research plus everything I said and turns it into whatever I need.
I’ll give you a moment to absorb the sheer technological sophistication of “what if the computer asked YOU the questions.” I know. Groundbreaking. Someone alert the Nobel committee.
But here’s why it works: in a normal AI workflow, you struggle to write a good prompt and the AI generates mediocre content from it. In the Expert Interview, the AI struggles to ask good questions and you generate the content by answering them. The effort is reversed. You’re doing the interesting work. The AI is basically a well-prepared talk show host and you’re the guest who forgot they were on camera.
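For the programmers in the room, the reversed-effort loop above can be sketched in a few lines. This is a hypothetical sketch, not the actual skill: `ask_model` is a stand-in for whatever chat API you use (Anthropic, OpenAI, anything with a message history), and the human does the interesting work inside `get_answer`.

```python
def ask_model(history):
    # Placeholder for a real chat-API call: a real implementation would
    # send `history` to the model and get back the next researched,
    # hot-take-laden question. Here we just count the turns.
    n = sum(1 for turn in history if turn["role"] == "assistant")
    return f"Question {n + 1}: what do you actually mean by that?"

def expert_interview(get_answer, num_questions=15):
    """Run the interview loop and return the full transcript.

    The effort is reversed: the model produces questions, the human
    produces the content. The transcript is the raw material for step 3.
    """
    history = [{"role": "user",
                "content": "You've done the research. Now interview me, "
                           "one question at a time, each with a hot take."}]
    for _ in range(num_questions):
        question = ask_model(history)
        history.append({"role": "assistant", "content": question})
        # The human answers here -- ideally by voice, tangents included.
        history.append({"role": "user", "content": get_answer(question)})
    return history
```

The point of the shape: the model only ever appends questions, and everything substantive in the transcript came out of your mouth. That is what makes step 3 an editing job instead of a ghostwriting job.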
This blog post was written using the Expert Interview. I went in with a vague notion. I came out with a framework, three ideas I didn’t have going in, and the slightly unsettling realization that I understood my own idea better after explaining it to a machine than I did before. Somewhere around question 12, the AI asked me whether I was building a content tool or a thinking tool, and I said “huh” out loud to nobody. The blog post is a side effect. The thinking was the point.
Step 1: Deep Research (or: the AI studies for your exam)
When a journalist prepares to interview a surgeon, they don’t study so they can perform surgery. They study so they can ask better questions. The research isn’t for the expert. It’s for the interviewer.
Same thing here. The AI reads papers, fetches posts, cross-references sources, goes down rabbit holes you didn’t know existed. You never see any of it. It exists for one reason: to make the questions in step two actually good instead of “so, tell me about your topic” delivered with the energy of a podcast host reading your bio off a teleprompter while clearly thinking about lunch.
An AI that has read 30 sources on your topic asks fundamentally different questions than one working cold. It knows the controversies. It knows where the bodies are buried. It can call you on your bullshit with citations. That’s the difference between a conversation and an interrogation, and you want the interrogation. Trust me.

Step 2: The interview (or: the Feynman trick with teeth)
This is the fun part. Genuinely. I-look-forward-to-this fun.
The AI asks you one question at a time. Each comes with researched context and a hot take. Something slightly wrong on purpose. Something calibrated to activate that deep primal part of your brain that cannot physically let an incorrect statement go uncorrected. You know the feeling. Someone is wrong about your thing. On the internet. And now you must explain, at length, with passion and hand gestures, exactly why.
Congratulations. You just produced content. Good content. The kind with actual opinions in it.
The moment you start explaining why the AI is wrong, your real expertise comes out. Not the polished LinkedIn version. The raw one. The weird analogies. The half-formed ideas. The “oh wait, that connects to something I hadn’t thought of” moments. The stuff no language model would produce because language models don’t have epiphanies in the shower at 2am.
Feynman said you only truly understand something when you can explain it. The Expert Interview is the Feynman trick, except instead of a rubber duck you have a duck that has read 30 papers and has opinions. An opinionated, well-read duck. I know this analogy is getting away from me. Stay with me.
It gets better. Each time the AI evaluates your answer, you see your own thinking reflected back in a way you didn’t expect. That sparks new connections. By question 10 you understand your own idea better than when you started. By question 15 it has changed shape in your head. You walked in with a vague notion and you’re walking out with a framework. Not because the AI gave you one. Because explaining forced you to build one. It’s like watching a Polaroid develop, except the Polaroid is your own thoughts, and yes I know Polaroids are from 1972, just go with it.
Now: use voice dictation for this. I will die on this hill. This is not a nice-to-have. This is the load-bearing wall of the entire operation.
When you type, your inner editor wakes up. That little voice that says “that sounds dumb, rephrase that.” You polish before you finish thinking. You delete the weird tangent. You smooth everything into a shape that is professional and correct and sounds exactly like what the AI would have written anyway. Congratulations: you just did the hive mind’s job for it. You homogenized yourself. For free. While sober.
Speaking bypasses all of that. You ramble. You go on tangents. You say “I’m not sure this is right but…” and then say something that turns out to be the best idea in the entire piece. The AI gets every glorious, unfiltered, slightly embarrassing moment of you being a human who has thoughts. That’s the signal. That’s what the beige paste is missing.
Pretend you’re explaining your idea to a very attentive friend at a bar who has read everything ever published on your topic and will not judge you for the tangent about that time in 2019.
Step 3: The artifact (or: the receipt)
The AI takes everything from the first two steps and assembles whatever you need. Blog post, coding guidelines, project brief, love letter to your architecture decisions, whatever.
Every idea traces back to something you actually said. The AI is organizing, not originating. It’s an editor working from interview tapes, not a ghostwriter working from a Post-it note that says “write something about AI, make it sound smart, I have a meeting in ten minutes.”
You can push back on the draft, change angles, tell it “that section is boring” or “you completely missed the point of the duck thing.” The AI remembers what you said and why you said it. Which is more than can be said for most colleagues, let alone most meetings.
To borrow Sanderson’s framing: this step produces the receipt. Step two is where you actually grew. The blog post is proof that thinking happened. A receipt for the struggle. Frame it if you want.

What this is actually for
I built this for blog posts. Turns out that’s the smallest thing it does. It’s like inventing a machine to open jars and discovering it also does your taxes and walks your dog.
The Expert Interview is a thinking tool. The artifact at the end is just the receipt. And once you realize that, the use cases get ridiculous.
I’ve used it to develop a corporate identity for my company. Not by asking AI to “generate a brand identity” (shudder), but by having it interview me about what the company actually stands for, who we are, what annoys us about everyone else in our space, and what we’d want someone to feel after working with us. Turns out I had strong opinions about all of this. I just hadn’t organized them yet because nobody had asked.
I’ve used it to create a writing style guide. The AI researched style guides, then grilled me on my preferences until a consistent voice emerged from my contradictions. I’ve used it to design a new website for my company by talking through what we need, who visits, and what matters. I’ve used it to shape features from vague customer requests into something an engineering team can actually build, by interviewing the person who talked to the customer (often me) about what the customer really meant versus what they literally said. If you’ve ever worked in software, you know those are two very different things.
I’ve used it to just… think. About new product ideas. About strategy. About whether a thing I’ve been mulling over for weeks actually makes sense or if I’ve been fooling myself. You talk, the AI pushes back, and by the end you either have a plan or you’ve saved yourself months of working on something that falls apart the moment you try to explain it.
And then you get a receipt. A document, a brief, a blog post, a set of guidelines. Something you can hand to other people and say “here’s what I think and why.” The thinking happened in the interview. The receipt lets you share it without making everyone sit through your 45-minute monologue about corporate values. They’ll thank you.
You can also interview multiple people on the same research baseline and merge the results. That’s a panel discussion minus the scheduling nightmare and the guy who unmutes just to agree louder.
You know how everyone says “this meeting could have been an email”? Well. Now your meeting can be an interview, and the interview actually becomes the email.
Try it
You need an AI that can search the web (Claude, ChatGPT, or anything with research capabilities), a topic you care about, and your voice. If you use Claude Code — especially for priming context before a development session — there’s a full skill you can install that handles the entire flow. But you can do this manually in two prompts.
Step 1: Research + Interview
Open a new conversation. In Claude, turn on Research (the button at the bottom left — it turns blue when active). In ChatGPT, use Deep Research mode or at minimum enable web browsing. Then start with your opinion and hand it the interview instructions in one go:
I think tabs are better than spaces for coding and I'm tired of pretending this is still a debate. I want to turn this into a blog post.
First, research this topic. Find academic studies, practitioner blog posts, talks, and counterarguments — especially from people who disagree with me. Don't summarize anything back to me. Keep it all in your context.
Then interview me. Ask one question at a time. Each question should include a sentence or two of research context and a hot take — something provocative or slightly wrong that makes me want to correct you. After each answer, tell me what you understood and how it connects to your research or my previous answers. Don't tell me I'm right. Then ask the next question. Adapt based on what I say. 15 questions.
When the first question arrives, answer it with voice input.
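If you end up running this workflow often, it is worth templating the interview prompt so you only swap in the opinion and the artifact type. A minimal, hypothetical helper (pure string assembly, no API calls, names are mine):

```python
# Template paraphrasing the interview instructions from the example above.
INTERVIEW_TEMPLATE = """{opinion} I want to turn this into a {artifact}.

First, research this topic. Find academic studies, practitioner blog posts, \
talks, and counterarguments -- especially from people who disagree with me. \
Don't summarize anything back to me. Keep it all in your context.

Then interview me. Ask one question at a time. Each question should include \
a sentence or two of research context and a hot take that makes me want to \
correct you. After each answer, tell me what you understood and how it \
connects to your research or my previous answers. Don't tell me I'm right. \
Then ask the next question. Adapt based on what I say. {num_questions} questions."""

def interview_prompt(opinion, artifact="blog post", num_questions=15):
    """Assemble the step-1 prompt for a new topic."""
    return INTERVIEW_TEMPLATE.format(opinion=opinion, artifact=artifact,
                                     num_questions=num_questions)
```

Paste the result into a fresh conversation with research mode enabled, then switch to voice and start talking.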
Step 2: Artifact
After the interview, tell it what you need:
Take your research and everything I just said and write a blog post. Every claim should trace back to something I told you. Make it funny. I want people to laugh out loud at least twice. Think Douglas Adams writing about software engineering after three pints.
Push back on the draft. The AI remembers the full interview, so it knows what you mean.
The one thing to remember
The output is only as good as the creativity you dump into your answers. Don’t overthink it. Just start talking. Let the weird ideas out. Say the thing that starts with “this might be stupid but…” because in my experience it never is.
The blog post will figure itself out. You’ll be too busy having new ideas to notice.
Further reading:
- Elio Struyf arrived at a similar workflow independently for technical blog writing
- Advait Sarkar’s research on AI provocations and critical thinking (the academic backing for why this works)
- Brandon Sanderson’s full keynote: We Are The Art
[1] Doshi & Hauser, “Generative AI enhances individual creativity but reduces the collective diversity of novel content”, Science Advances, 2024
[2] Sarkar et al., “The Impact of Generative AI on Critical Thinking”, CHI 2025