Talk to Think
Why I've Been Talking to Myself a Lot Lately
My wife caught me in the kitchen last Tuesday, pacing around the island, talking to no one. Hands gesturing at the air. Voice rising. Mid-sentence about an org design I’d been stuck on for weeks.
She stared at me for a good ten seconds. “Are you arguing with yourself again?”
I was. And in fifteen minutes of talking to myself, I’d cracked a problem I’d been staring at in a Word document for three hours.
About a year ago, I wrote my very first post in this series. It was called “Write to Think.” The argument was simple:
In the AI era, writing isn’t about communication, it’s about thinking. Writing forces clarity. It’s a gym for your brain.
I still believe every word of it. But the argument was incomplete.
Over the past year, something shifted in how I work. I stopped writing as my first step. I started talking. Not into a meeting. Not to a colleague. To myself — and increasingly, to an AI that listens, pushes back, and helps me iterate at a speed my keyboard never could.
And no, this isn’t about dictation. Apple has had that since 2011. Google since 2010. Those tools transcribe your words. What I’m describing is different: a thinking cycle that runs faster when you speak it than when you type it. The AI tools that matured this past year made that gap impossible to ignore.
Speaking Forces Thinking
Here’s what nobody tells you about typing: you self-edit before the thought fully forms. You stare at a blinking cursor. (No, not that Cursor, though I love that one too.) You type half a sentence. Delete it. Retype. The thought never fully arrives because it keeps getting filtered through the bottleneck of your fingers.
When you speak, you can’t do that. You have to push the idea forward, start to finish, in real time. Speaking demands that you organize, commit, and articulate — out loud, in the moment.
And then you catch yourself mid-sentence: “Actually, no, what I really mean is...” You reframe. You self-correct. You discover what you actually think by hearing yourself say it.
That self-correction loop is the entire point.
Turns out, psychologists have known this for nearly a century. Vygotsky showed in the 1930s that talking to yourself isn’t a quirk. It’s how humans develop higher-order thinking. He called it “private speech,” and decades of research have confirmed it improves planning, problem-solving, and self-regulation well into adulthood.
There’s even a name for it: the “self-explanation effect.” People who verbalize their reasoning show up to a 20% improvement on complex tasks compared to those who just re-read the material.
We’ve always known this works. We just spent forty years typing instead.
So the quality of thinking improves when you speak. But that’s only half the story. The other half is speed.
The Conversation Loop — A Faster Flywheel
In “Write to Think,” my thinking cycle was: brain dump -> shape your thoughts -> spar with AI -> make it actionable. Good cycle. But every turn of that flywheel required typing. And typing runs at 40-50 words per minute for most people.
Speaking? About 200 words per minute — roughly 4x faster.
That’s not just a 4x improvement in input speed. It’s a 4x improvement in iteration.
Let’s do the math:
The write-to-think loop: think -> type your question (slow) -> wait for AI -> read -> think -> type your follow-up (slow again). Each full cycle: 8-10 minutes.
The talk-to-think loop: speak your question -> read the AI’s response -> speak your reaction right back -> read the next response. Each full cycle: 2 minutes.
Same flywheel from “Write to Think.” But now the cycle runs 4-5x faster. More iterations in less time. Better thinking, compressed.
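If you want to sanity-check that claim, the arithmetic is simple enough to script. A back-of-the-envelope sketch in Python, using the speeds and cycle times above (rough assumptions, not measurements):

```python
# Back-of-the-envelope math from above. All numbers are the rough
# assumptions stated in the post, not measurements.
TYPING_WPM = 45        # typical typing speed (40-50 wpm)
SPEAKING_WPM = 200     # typical speaking speed

WRITE_CYCLE_MIN = 9    # think -> type -> wait -> read -> type again (8-10 min)
TALK_CYCLE_MIN = 2     # speak -> read -> speak right back

print(f"Input speedup: {SPEAKING_WPM / TYPING_WPM:.1f}x")             # ~4.4x
print(f"Write-loop iterations per hour: {60 / WRITE_CYCLE_MIN:.0f}")  # ~7
print(f"Talk-loop iterations per hour:  {60 / TALK_CYCLE_MIN:.0f}")   # 30
```

Roughly 4x the input speed, and 4-5x the iterations per hour. The compounding is in the iterations, not the typing.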
Here’s what that actually looks like. Last week I needed to prepare a strategy brief. Normally I’d open a Word doc, stare at it, type for two hours, delete half of it, try again.
Instead, I put on my AirPods and opened Wispr Flow (and no, this is not a sponsored post. Nobody’s paying me to say this. It just genuinely changed how I work). I paced my office and spoke my rough thinking for about three minutes. Clean, formatted text appeared on screen. No “ums,” no filler, no cleanup needed.
I pasted that into Claude. “Challenge this reasoning.” Read the pushback. Spoke my response to the pushback. Another round. Three rounds total, fifteen minutes. I had a structured first draft that would have taken an entire afternoon of typing and staring.
Brain dump -> shape -> spar. The cycle I described a year ago. Just running at a completely different speed.
Now, you might be thinking: ChatGPT has a voice mode. So does Gemini. Why not just talk to them directly?
I tried. You start explaining your thinking, pause to collect a thought, and the AI jumps in. It mistakes your pause for a finished sentence. You restart. It happens again. The turn-taking is still clunky enough to break the exact thinking flow you’re trying to protect.
But there’s a bigger reason I don’t use them. Those voice modes lock you into one app. I don’t want to do all my thinking inside ChatGPT.
I want to speak into whatever I’m already working in. Cursor has voice input; I use it when I’m coding (or, better yet, for org design, as I discovered recently). Notion when I’m planning. Claude when I’m sparring. Slack when I’m replying to someone. A system-wide dictation layer means I speak, clean text shows up wherever my cursor is, and I pick the best tool for the job.
That’s the real shift. Not “use this one AI’s voice feature.” It’s making your entire workflow voice-native.
Why Now? From Transcription to Comprehension
We’ve had voice dictation for fifteen years. But every tool until recently did one thing: transcribe. Every “um,” every false start, every “so basically” was faithfully captured and dumped on your screen. You’d spend twenty minutes cleaning it up. The tool created more work, not less.
What showed up this past year is something different: comprehension.
Tools like Wispr Flow, and others emerging in this space, don’t just hear your words. They run LLMs under the hood to understand what you mean. They know what app you’re in and adapt the tone. They read the context around your cursor. You say “so the main thing is we need to, like, rethink how we onboard people from scratch” and it outputs: “We need to fundamentally rethink our onboarding approach.”
The filler is gone. The intent stays.
Old dictation gave you a transcript you had to fix. AI-powered voice gives you a clean first draft you can build on. That’s the difference between a tool that slows you down and one that accelerates the entire talk-to-think cycle.
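To make that difference concrete: I have no visibility into how Wispr Flow is actually built, but the general transcribe-then-rewrite pattern is easy to sketch. Here’s a minimal version assuming OpenAI’s Python SDK for both steps; the model names, prompt, and function are illustrative, not anyone’s production pipeline:

```python
# A minimal transcribe-then-clean sketch. Model names and prompt are
# illustrative assumptions, not any vendor's actual pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def speech_to_clean_text(audio_path: str, app_context: str) -> str:
    # Step 1: transcription. Old-style dictation stops here,
    # fillers and false starts included.
    with open(audio_path, "rb") as audio:
        raw = client.audio.transcriptions.create(
            model="whisper-1", file=audio
        ).text

    # Step 2: comprehension. An LLM rewrites the transcript into
    # clean text, using the target app to pick the right tone.
    cleaned = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "Rewrite this voice transcript as clean, well-formed text. "
                "Drop fillers ('um', 'like', 'so basically') and false "
                f"starts, keep the intent, and match the tone of {app_context}."
            )},
            {"role": "user", "content": raw},
        ],
    )
    return cleaned.choices[0].message.content

# e.g. speech_to_clean_text("ramble.m4a", "a Slack reply")
```

Step 1 alone is 2010-era dictation. Step 2 is what makes the output a first draft instead of a cleanup job.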
And it’s getting faster. The speech models powering these tools improved dramatically in the past year. Real-time processing. Better accuracy across accents and technical vocabulary. The gap between what you say and what shows up on screen is now nearly zero.
Which means the bottleneck was never your brain. It was never even your voice. It was the technology between the two. And that bottleneck just disappeared.
One Year Later
A year ago, I wrote “Write to Think” and told you to write more in the AI era. I still believe that. Writing is still the gym. It still forces clarity. It still makes your thinking sharper.
But if I’m honest about what actually changed my productivity this past year, it wasn’t writing more. It was talking more. Speaking rough thinking out loud, letting AI turn it into structured text, then sparring with that text at the speed of conversation instead of the speed of typing.
The “Write to Think” flywheel didn’t break. It just got a motor.
So here is my question to you: when was the last time you solved a hard problem by typing about it, and when was the last time you solved one by simply talking it through?
#Leadership #Productivity #AI #VoiceAI #WriteToThink #TalkToThink #FutureOfWork
Previously in this series: “Write to Think - Why I Write Even More In the AI Era”