Transitions and Transformations: Educating in the Age of “Powerful AI”
Why knowledge, judgment, and human discernment matter as AI grows more powerful
Entry-level job postings in the US are down by around 35% since January 2024. Unemployment for recent graduates sits at 5.8%, one of the worst figures in recent years. Harvard Business Review suggests that AI can already handle 50–60% of typical junior-level tasks. In Europe, the picture is similarly stark, with tech companies cutting entry-level hiring by more than 70%.
I have also been noticing this shift much closer to home. In recent conversations with friends who are senior business leaders, I have heard remarkably similar descriptions of how AI is already changing the shape of work.
“Claude is already doing what my junior analysts used to do.”
“We’re replacing functions with AI.”
Work that once took weeks is now happening in days. Teams are shrinking. Roles are shifting. Hiring is being rethought, not towards those who can simply do the analytical work, but towards those who understand how to work alongside AI tools, who can ask better questions, and who can catch what is wrong. These are not predictions. They are descriptions of what is already happening inside organisations.
Then I had an experience that made all of this feel more visceral. A colleague offered to show me her fully autonomous vehicle. I was sceptical. But watching a system navigate urban streets, making decisions, managing space, signalling, braking, and parking with calm precision, was surreal. It wasn’t a prototype. It was operational, deployed, and, in many ways, better than most human drivers.
And then, while the Grand Prix was unfolding in Shanghai, I found myself reading about the new Formula 1 regulations and the response from James Vowles.
It struck me how closely this mirrored the idea of transformation in a very different field.
In the lead-up to major regulatory shifts, teams are often forced to dismantle systems that are currently working, not because they are failing, but because they know those systems will not carry them forward. As Vowles describes, transformation at that level is not incremental. It is deliberate deconstruction. Teams take apart processes, assumptions, and ways of working, even the parts that feel successful, in order to rebuild in response to a new reality.
That idea stayed with me.
Because what I had just experienced, the autonomous system, the conversations with business leaders, and new Formula 1 regulations, felt less like improvement and more like the early signs of something being fundamentally reconfigured.
A System Being Reconfigured
In January 2026, Dario Amodei, CEO of Anthropic, published an essay titled The Adolescence of Technology. He describes “powerful AI” as a model that is not simply a tool you interact with, but something far more capable: a system that can operate across domains, complete complex tasks autonomously over extended periods of time, interact with the digital world much like a human worker, and scale into millions of parallel instances working simultaneously.
It’s worth reading his definition carefully, because this isn’t speculation; it’s a description of systems his team believes are coming.
By “powerful AI,” I have in mind an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:
In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields: biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.
In addition to just being a “smart thing you talk to,” it has all the interfaces available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.
It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.
It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory, it could even design robots or equipment for itself to use.
The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10–100x human speed. It may, however, be limited by the response time of the physical world or of software it interacts with.
Each of these million copies can act independently on unrelated tasks, or, if needed, can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.
We could summarize this as a “country of geniuses in a datacenter.”
— Dario Amodei, “The Adolescence of Technology” (January 2026)
The Constraint Has Shifted
Let’s be honest about what Amodei is describing. This isn’t AI that can help with homework or draft essays. This is AI that can prove unsolved mathematical theorems, write advanced codebases, direct scientific experiments, work autonomously for extended periods, and do so at scale and speed far beyond human capacity.
Every one of these capabilities challenges the traditional educational promise: work hard, master this skill, secure your future. Because if AI can do it faster, better, and cheaper, that promise becomes unstable.
But what matters is this: AI cannot decide what questions are worth asking. It cannot determine what problems matter. It cannot choose what kind of future to build. It cannot act with wisdom, judgment, and discernment about what matters.
Those capacities remain human. That’s where education needs to shift.
This shift requires understanding three things:
1. The Jobs Won’t Disappear—The Nature of Work Will
Amodei doesn’t predict that work disappears. He predicts that work *transforms*. The humans who thrive won’t be those competing with AI on speed and capability. They will be those asking: What should we create? What problems matter? How do we want to live? What constitutes a good society?
These are fundamentally different questions. And they require fundamentally different capacities: judgment, values, discernment, the ability to ask powerful questions, and the capacity to think about second and third-order consequences.
2. Fluency With AI Becomes Foundational
Just as literacy became foundational in the industrial age, fluency with powerful AI becomes foundational now. But not in the sense of “learning to use AI tools.” Literacy didn’t mean “learning to use the printing press.” It meant developing the capacity to read, write, think, and communicate in a world where those abilities mattered.
AI fluency means learning to work alongside superintelligent systems. It means understanding what they’re good for, what they’re not, when to trust them, when to question them, and how to delegate to them while maintaining human responsibility and judgment.
This is a skill educators can teach right now. It doesn’t require waiting for perfect technology or perfect understanding. It requires cultivating intellectual habits: curiosity, scepticism, the willingness to test, and the ability to verify. Anthropic offers a great 4D AI Fluency framework for this work.
3. Character and Values Become Decisive
In a world where capability is commodified, where any question can be answered, and AI can solve any problem, what distinguishes human contribution is character: integrity, ethical reasoning, the ability to care about long-term consequences, and the willingness to stand for something even when it’s inconvenient.
These aren’t new virtues. But they become newly urgent when raw intelligence and speed are no longer scarce.
It is worth pausing on Amodei’s closing phrase: “a country of geniuses in a datacenter.” He isn’t suggesting that knowledge becomes obsolete. He is pointing to something more subtle.
Where We Risk Getting It Wrong
The constraint has shifted.
For a long time, what limited us was access to information. Schools existed, in part, to provide it. Information was a scarce resource, and schools were its primary distribution mechanism. That scarcity produced a familiar paradigm.
Teacher role: Knowledge imparter—you memorise what you are told
Curriculum: Coverage-based—breadth of content
Learning: Passive consumption of teaching
Assessment: External exams testing recall
Governance: Bureaucratic control
But AI now handles access to information effortlessly. When information is abundant and free, what limits us changes, and a shift in paradigm is required to accommodate this.
Teacher role: Facilitator—helping you learn how to think
Curriculum: Depth-based—real competency
Learning: Active; you have agency in what and how you learn
Assessment: Real-world, competency-based
Governance: Participatory, peer networks
The new constraint becomes our ability to interpret what we see, to question it, and to decide whether it is valid, meaningful, or incomplete. And that requires knowledge.
The Illusion of Understanding
Anthropic’s most recent AI Fluency Index offers an important insight. Nearly 10,000 real conversations with Claude were analysed to understand how people actually interact with AI. When AI produces something polished, people are less likely to question it. When they iterate—asking follow-up questions, testing reasoning—they engage more critically. But when the output looks complete, many assume the thinking is done. They stop asking: Is this true? What is missing? Does this hold up?
Why does this happen? Perhaps it is cognitive offloading, but more fundamentally, it is because you cannot question what you do not understand.
You cannot evaluate a mathematical proof without mathematics. You cannot assess a historical argument without historical knowledge. You cannot recognise when an AI response is incomplete, misleading, or biased without expertise in the field.
In a world where information is instantly available, knowledge does not become less important. It becomes the foundation for discernment. It is what allows us to recognise when something makes sense and when it doesn’t.
What Deep Knowledge Enables
An incredible living example of this is Demis Hassabis, a British AI researcher and co-founder of Google DeepMind, whose work bridges neuroscience, game design, and artificial intelligence, demonstrating what becomes possible when deep knowledge is integrated across disciplines and guided by purpose. Watching The Thinking Game, Google DeepMind’s documentary about his work, I realised his story isn’t just about brilliance. It’s about what happens when knowledge, curiosity, and values integrate.
Demis was a chess prodigy at four. But he didn’t just get better at chess. He developed pattern recognition, systems thinking, and an understanding of how complex systems work. He moved to neuroscience, studying how the brain works, then to AI. Each domain built on the last. By the time he tackled protein folding, he had integrated knowledge across multiple fields. That’s not luck. He had developed the intellectual foundation.
But here’s the part that stayed with me: when his team solved protein folding—200 million protein structures—they gave it away. They could have built a proprietary platform, charged for access, and made billions. Instead, they understood what protein folding meant for medicine. They saw the cascading possibilities: diseases understood, new drugs developed, research accelerated by decades. They chose long-term human benefit over profit.
That decision required deep knowledge, the capability to solve the problem, and values that prioritised human good.
And that, to me, is what education should be developing.
Why Struggle Still Matters
In this same space, the insights of Mustafa Suleyman, Head of AI at Microsoft, become crucial. When asked what parents and educators should focus on in preparing students for an AI world, his answer cut through everything:
“The discipline of being able to teach yourself. That’s a meta skill. And that comes with friction. You have to introduce discipline and friction into the process because if it’s always on tap, then the child could get used to having everything instantly available and doesn’t learn from hard work.”
This is the meta skill: learning how to learn. And it requires struggle. This is why friction matters. This is why finishing what you start matters. When everything is instantly available, the neurological changes that constitute learning, the struggle, the effort, the productive difficulty, disappear. And with them goes the cognitive resilience needed to think alongside powerful AI. Schools must protect the struggle. Not reduce it. Protect it.
This Is a Transition — and Possibly a Transformation
If I step back, what we are seeing is not simply a shift in tools or practices. It is a shift in the underlying conditions that have shaped education for over a century. And that matters, because when conditions change at that level, what follows is not improvement—it is transformation.
I’ll admit—I can imagine Amodei’s prediction being possible.
Perhaps because I’ve already started to see glimpses of it. Watching robots move in China with a level of fluidity that no longer feels mechanical. Seeing a robot act as a tennis partner and coach, returning serves, adjusting position, and volleying seamlessly with a human. It doesn’t feel like a distant future. It feels like something that is already here.
Too often, we describe incremental adjustments as a transformation. We layer new ideas onto existing systems without questioning whether those systems are still fit for purpose. But real transformation does not work like that.
It is not additive.
It is not comfortable.
And it is rarely clean.
In the work of Brené Brown, transformation begins with breaking. Breaking apart assumptions, systems, and ways of thinking that no longer serve. And that breaking is not easy. It creates uncertainty. It creates resistance. It often feels like loss because what is being dismantled is not just structure, but familiarity, identity, and the sense of competence we have built within existing systems.
It requires the courage to question what has worked, the discipline to sit in uncertainty, and the clarity to understand what we are building towards. And this is where the connection to AI becomes sharper. Because powerful AI is not simply making tasks faster or more efficient. It is exposing the limits of the systems we have built. It is revealing that many of the structures we have optimised (curriculum, assessment, pathways into work) are grounded in a world where information was scarce and human cognitive labour was the constraint.
A Question Worth Sitting With
That world is shifting. And so the question is not whether we adapt. The question is whether we are willing to do what transformation requires: to look closely at what we have built, to identify what no longer holds, and to be willing to take it apart.
Return to Amodei’s description. Read it again.
Share it with your colleagues, with your teams, with educators.
And ask:
Do we believe this could be the world our students are entering?
Because if there is even a possibility that it is, then the questions we ask of education, and of ourselves, need to shift accordingly. Amodei describes this moment as a kind of technological adolescence, a period where our capability is accelerating faster than our wisdom. And perhaps that is what makes this moment feel less like a reform and more like a transition that may become a transformation.
The question is not simply how we respond. But what we are willing to let go of.
Further Reading:
Amodei, D. (2024). “Machines of Loving Grace.” darioamodei.com
Amodei, D. (2026). “The Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI.” darioamodei.com
Anthropic (2026). AI Fluency Index. Analysis of 9,830+ real conversations examining adoption, fluency, and critical thinking patterns.
The Thinking Game (documentary). Google DeepMind. Explores Demis Hassabis’s intellectual journey.

