Harari, a professor at the Hebrew University of Jerusalem, is renowned for his thought-provoking explorations of human evolution, technology, and the future of civilization. His latest warning both reflects on past technological revolutions and offers a foreboding glimpse into an uncertain future.
AI: The Shift from Tool to Autonomous Decision-Maker
In the video, Harari draws a stark contrast between traditional tools, such as hammers or even nuclear bombs, and AI. “A tool is something in your hands,” he explains. “A hammer is a tool. An atom bomb is a tool. You decide to start a war and who to bomb. It doesn’t walk over there and decide to detonate itself. AI can do that.”
Unlike conventional technology, which requires human input to function, AI can act independently, Harari argues. He notes that AI systems are already making autonomous decisions in various fields, including warfare. “We already have autonomous weapon systems making decisions by themselves,” he says, warning that AI can go a step further by inventing new weapons or creating more advanced AI systems beyond human control.
A Future Beyond Human Control?
Harari’s concerns are not just hypothetical. AI-driven autonomous systems are increasingly being deployed in military and security sectors, raising ethical and existential dilemmas. If AI continues to evolve unchecked, the possibility of machines deciding the fate of nations is no longer a science fiction trope; it is a looming reality.
His concerns align with themes he has explored in his books, particularly Homo Deus: A Brief History of Tomorrow, where he examines the prospect of AI surpassing human intelligence. In Sapiens, he explored how Homo sapiens became the dominant species by mastering storytelling, cooperation, and technological innovation. Now, he warns, we may be on the brink of creating something that could outpace us entirely.
What Comes Next?
Harari’s warning serves as a call to action. If AI is truly an agent and not just a tool, then the responsibility to regulate and control its growth falls on humanity. The questions that arise are urgent: How do we ensure AI remains aligned with human values? Can we prevent AI from surpassing our control? And most importantly, have we already crossed the point of no return?
As AI continues to advance at an unprecedented pace, Harari’s message is clear—this is no longer a theoretical debate. The future of intelligence may no longer be in human hands, and whether that leads to progress or peril remains an open question.