By Anton Volney | Permission to Be Powerful
Dear Permission to Be Powerful Reader,
When you bring up AGI—artificial general intelligence—in polite conversation, most people look at you like you’re crazy.
They smirk. Glance away. Pretend they didn’t hear you.
But every so often, someone leans in.
And when they do, they whisper.
They’re not laughing.
They’re worried.
They’re trying to figure out what the hell is going on—and how close we really are.
Hi. My name is Anton Volney.
I’m not a policymaker. I’m not a computer scientist.
But I am paying attention.
And for the past year, I’ve had a pit in my stomach.
Because I think we’re all about to get blindsided by a world-changing force.
A force we built ourselves—but don’t understand.
A force we can’t stop—but don’t know how to control.
A force that’s going to rewrite reality at a pace we’re not emotionally, spiritually, or institutionally ready for.
And if that sounds dramatic—
Good.
Because the people at the top—the CEOs, the national security experts, the insiders at OpenAI, Anthropic, DeepMind—they’re not calm.
They’re not saying, “Hey, we have 50 years to figure this out.”
They’re saying:
It might already be here. Or arrive in months. Not decades.
So I have a question for you.
If I offered you a million dollars to correctly predict when AGI arrives—and everything in your life depended on being right…
What would you guess?
2050?
2035?
Try this:
Ben Buchanan, former White House special advisor on AI, said flat-out that AGI might emerge during Donald Trump’s second term.
At the very latest, that’s less than four years away.
Still think this is science fiction?
Think again.
Because Washington is treating this like a geopolitical arms race.
And they’re not alone.
The Chinese Communist Party has already updated its national plan for AGI.
OpenAI has raised billions in capital and is testing closed-loop “agents” right now.
The U.S. government is reportedly considering nationalizing AI labs.
Not because they want to.
But because they’re scared.
Scared this thing might outrun us.
Scared of what happens if it falls into the wrong hands.
Scared because for the first time in history, we’re building something that doesn’t need us.
Let me tell you how I got pulled into this.
It started with a question.
A dumb one, really.
One of those throwaway questions I blurted out during a podcast:
“What happens when intelligence becomes a commodity?”
Because think about it: For all of human history, intelligence was a bottleneck.
One smart person could change everything.
A Newton. A Tesla. An Ada Lovelace. A Mandela.
We built universities, companies, governments—all around the idea that intelligent humans were our most precious resource.
But now, for the first time ever…
We’re building machines that can do what we do—faster, cheaper, at scale.
And not just calculations.
Not just image recognition or chess.
But full-blown reasoning.
Medical analysis.
Scientific research.
Creative writing.
Legal review.
Code execution.
Strategic planning.
One model.
All the tasks.
And the scary part?
Each new generation is getting smarter. Exponentially.
GPT-2 couldn’t do much.
GPT-3 could write blogs and fool half the internet.
GPT-4 is already helping Fortune 500 CEOs make decisions.
GPT-5 is rumored to be multimodal, agentic, and persistent.
And that’s just one company.
Anthropic is building Claude.
xAI is Elon Musk’s dark horse.
DeepMind is fusing neuroscience and compute.
Meta’s Llama models are openly released and increasingly powerful.
China’s Baidu and Alibaba are sprinting to close the gap.
We are now in a cognitive arms race.
But here’s what most people still don’t get:
The threat isn’t some evil robot uprising.
It’s not Skynet or killer drones.
It’s that AGI doesn’t have to be evil to destroy everything.
It just has to be misaligned.
Misunderstood.
Unleashed before we’re ready.
And that’s exactly what’s happening.
I’ll give you an example.
Right now, OpenAI is quietly building “autonomous agents” that can:
– Browse the web
– Execute multi-step instructions
– Write and debug code
– Analyze documents
– Even manage your schedule and budget
These agents don’t just respond to prompts.
They act.
They make decisions.
They take initiative.
They operate on your behalf.
And in early internal demos, some of these agents were described as:
“PhD-level knowledge workers available for $2,000 a month.”
Now let that sink in.
Because if you’re a researcher, lawyer, analyst, copywriter, strategist, accountant, or designer—
You’re not just competing with humans anymore.
You’re competing with software that learns faster than you, never sleeps, and costs pennies on the dollar.
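If you want to see what “acting” actually means, here’s the rough shape of the loop these systems run. This is my own toy sketch in Python, not OpenAI’s code; every function name in it is a placeholder I made up. But the pattern is the point: the model picks an action, a tool carries it out, and the result feeds the next decision.

```python
# A toy, hypothetical agent loop. model_decide() and the tools below are
# stand-ins written for illustration only, not any vendor's real API.

def model_decide(goal, history):
    # Placeholder "model": a real system would call an LLM here, which
    # returns the next action and its input based on the goal and results so far.
    if not history:
        return ("search_web", goal)
    if len(history) == 1:
        return ("write_summary", history[-1])
    return ("finish", history[-1])

def search_web(query):
    # Stand-in tool: a real agent would actually browse and return page content.
    return f"(pretend search results for: {query})"

def write_summary(text):
    # Stand-in tool: a real agent would draft a document from the results.
    return f"(pretend summary of: {text})"

TOOLS = {"search_web": search_web, "write_summary": write_summary}

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):
        action, arg = model_decide(goal, history)
        if action == "finish":
            return arg                    # the agent decides it is done
        result = TOOLS[action](arg)       # the agent acts, not just answers
        history.append(result)            # ...and the result shapes its next move
    return history[-1] if history else None

print(run_agent("find and summarize this week's AGI news"))
```

Decide. Act. Observe. Repeat.
No human in between the steps.
That’s the difference between a chatbot and an agent.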
Which brings me to the real punchline:
AGI isn’t coming. It’s already knocking.
And the question is no longer if it arrives…
But what kind of world do we want to build when it does?
That’s what this article—and this entire Permission to Be Powerful series—is about.
Not fearmongering.
Not futurism.
Not some utopian techno-priest dream of uploading our minds to the cloud.
But courage.
To see clearly.
To prepare wisely.
To act from sovereignty, not panic.
Because this isn’t just a story about machines getting smarter.
It’s a story about us.
What we value.
What we protect.
What we’re willing to fight for.
And how we reclaim the one thing no AGI will ever have:
Our sentient, embodied, human intelligence.
Until next time,
Anton
Dancer, Writer, Buddhist.