The Anthropomorphic Projection Hypothesis of Artificial Intelligence
A new year has begun and is already settling into its routine. Yet humanity remains neither enslaved nor annihilated by artificial intelligence. For all the predictions and warnings about an imminent AI takeover, no such event has occurred. I know that some people expect it, some predict it, and a few may even dream of it. While there are legitimate concerns about AI displacing jobs or being used as a weapon, no AI supreme being has emerged, at least to our knowledge, to take control of the world.
Setting aside simulation theory and the broader universe of conspiracy thinking, ideas that often resemble science fiction dressed up as religion, I believe that even the most thoughtful and credible advocates of AI doomsday scenarios are mistaken. Human beings have always been fascinated by end-of-the-world narratives. That fascination may be deeply rooted in our psychology. Across cultures and centuries, people have imagined countless ways the world might end. Each theory tends to reflect the fears and personalities of those who create and believe in it.
This does not mean that genuine threats do not exist. Nature itself presents serious risks. Asteroids, supervolcanoes, and emerging viruses remind us that civilization is fragile. Humanity also possesses the capacity to destroy itself through its own technology. The possibility of nuclear war remains one of the most immediate and devastating dangers. These are threats worth taking seriously.
What I struggle to accept, however, is the popular scenario in which a conscious, self-aware, godlike artificial intelligence suddenly appears and decides that humanity must be eliminated. To me, this idea resembles another form of religion. Once again, we are imagining gods, this time man-made ones, and then fearing, predicting, or even hoping that they will rise up to dominate or destroy us.
One fundamental problem with these narratives is that there is still no universally accepted definition of consciousness. Neuroscience and philosophy continue to debate what consciousness actually is. Without that understanding, there is no clear pathway explaining how a machine might simply switch on and become conscious. Some argue that discoveries can occur accidentally, that consciousness might emerge without our fully understanding how. In many areas of science, that may be true. In this case, however, I remain skeptical.
None of this means that AI poses no danger. The real risk is not a godlike machine deciding to exterminate us. The risk is that human beings will use powerful AI systems to create new weapons or enhance existing ones. A government might design pathogens, engineer more destructive nuclear weapons, or develop technologies even more terrifying than those we already possess. Such outcomes would not represent the rise of an artificial deity. They would simply reflect humanity using new tools in old ways.
Human beings have always created their own gods and often their own monsters. My hypothesis is simple. If humanity eventually destroys itself without the help of nature, it will likely happen by human hands. It will not require a superintelligent machine with the intellect of a million Einsteins. We are more than capable of doing the job ourselves.
Humanity is addicted to the low-hanging fruit of doomsday forecasts, and we prefer to disregard the more likely threats.
Why AI Consciousness Is Still a Scientific Mystery
Much of the fear surrounding artificial intelligence assumes something that has not yet been demonstrated. It assumes that machines will eventually become conscious. The problem with that assumption is simple. Scientists still do not agree on what consciousness actually is.
Neuroscience, psychology, and philosophy have spent decades attempting to define the phenomenon. Despite enormous progress in brain science, there is still no universally accepted definition of consciousness. Without a working definition, it becomes extremely difficult to claim that an artificial system has achieved it.
Several major theories attempt to explain consciousness. Each approaches the problem from a different direction. None of them provides a clear pathway for demonstrating that an artificial intelligence system has become self-aware.
One of the most influential ideas is phenomenal consciousness, as articulated by the philosopher Thomas Nagel. This concept refers to subjective experience. It is the idea that there is something it is like to be a conscious being. The difficulty here is obvious. Subjective experience cannot be measured directly from the outside. Even if an AI system behaves exactly like a human, we would have no reliable means of determining whether it experiences anything at all.
Another concept is access consciousness, associated with philosopher Ned Block. This idea focuses on information that becomes available to reasoning, speech, and decision-making. Modern AI systems already process information and produce responses that appear thoughtful and coherent. Yet information processing alone does not demonstrate awareness. A calculator processes information as well. Few people would argue that it possesses a conscious mind.
Neuroscientist Antonio Damasio has written extensively about self-awareness and the construction of the self in the brain. In his work, consciousness emerges from complex interactions between the brain, the body, and the organism’s environment. Artificial intelligence systems do not possess bodies, biological drives, or the evolutionary history that shaped human consciousness. Replicating those conditions in a machine remains an open question.
Other scientific theories approach consciousness through large-scale brain dynamics. Global Workspace Theory, developed by Bernard Baars and expanded by Stanislas Dehaene, proposes that consciousness emerges when information becomes globally available across multiple brain systems. While this framework describes aspects of human cognition, reproducing such processes in artificial systems does not automatically imply that subjective awareness has appeared.
Integrated Information Theory, proposed by Giulio Tononi, suggests that consciousness corresponds to the degree of integrated information within a system. In theory, any sufficiently complex system might possess some degree of consciousness. In practice, measuring integrated information in large systems is extraordinarily difficult. Even if it were measurable, it remains unclear whether high integration truly produces subjective experience.
Philosopher David Rosenthal has proposed Higher Order Thought Theory, which argues that mental states become conscious when a system forms thoughts about its own mental states. Artificial intelligence can already generate descriptions of its internal processes. However, generating descriptions is not the same as experiencing them.
All of these theories attempt to explain consciousness. None of them demonstrates that an artificial intelligence system has achieved it. Until scientists can clearly define consciousness in the human brain, claims that machines will inevitably become conscious remain speculative.
For now, the idea of a conscious artificial intelligence that decides to conquer humanity belongs more to fiction than to science.
Important Caveats and Counterarguments
Before introducing my hypothesis, a few important caveats are necessary. None of what I have written here should be taken as a claim that catastrophic outcomes involving artificial intelligence are impossible. History has repeatedly shown that powerful technologies can produce consequences that their creators did not anticipate. Artificial intelligence will almost certainly transform industries, governments, and daily life in ways that are difficult to predict. That uncertainty alone deserves serious attention.
Some researchers in artificial intelligence safety argue that the real danger does not require a conscious machine. Their concern is that highly capable systems might pursue goals that are poorly aligned with human interests. In that scenario, the system does not become malicious. It simply carries out its assigned task with extreme efficiency while ignoring consequences that humans failed to anticipate. A powerful system that optimizes the wrong objective could still cause significant harm. This is a serious technical concern and one that deserves careful research.
It is also important to keep the current state of artificial intelligence in perspective. Most modern AI systems are statistical models trained on large datasets. They generate outputs by recognizing patterns and probabilities within the data they have learned. They do not possess desires, drives, or intentions. They do not experience curiosity, ambition, or fear. Their capabilities can appear impressive because they operate at enormous scale and speed, but their behavior still depends on goals and structures defined by human designers.
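The point above, that modern systems generate outputs by recognizing patterns and probabilities in data rather than by wanting anything, can be made concrete with a deliberately tiny sketch. Everything here is invented for illustration: the miniature corpus stands in for a large training dataset, and the `most_likely_next` helper is a toy stand-in for what real models do at vastly greater scale and sophistication.

```python
from collections import Counter, defaultdict

# A stand-in for a training dataset: real systems learn from billions
# of documents, but the principle of counting patterns is the same.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Record how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the statistically most frequent successor of `word`."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("on"))  # prints "the": pure frequency, no intention
```

The toy model "predicts" the next word without desires, drives, or awareness of what the words mean. Scaled up enormously, this is closer in spirit to how current systems behave than the image of an ambitious machine mind.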
None of these realities removes the possibility that artificial intelligence will become a disruptive force. It almost certainly will. Technologies that increase efficiency and concentration of power rarely leave society unchanged. Artificial intelligence may influence labor markets, information systems, political communication, and military strategy. Those changes could produce both benefits and risks.
What remains uncertain is whether any of those developments will lead to the appearance of a conscious machine that chooses to dominate humanity. That scenario is often treated as inevitable in popular discussions. Yet the scientific foundations for such a conclusion remain weak. Until consciousness itself is better understood, predictions about conscious machines remain speculative. Recognizing that uncertainty is not an attempt to dismiss legitimate concerns. It is simply an attempt to separate plausible risks from narratives that may say more about human imagination than about artificial intelligence itself.
The Anthropomorphic Projection Hypothesis of Artificial Intelligence
Throughout history, human beings have repeatedly imagined the end of the world. Nearly every major religion contains some form of final reckoning, collapse, or cosmic transformation. In Christianity, the Book of Revelation describes a final battle and the judgment of humanity. Islam teaches about the Day of Judgment and the arrival of the Mahdi before the end of time. Judaism speaks of a messianic age that follows turmoil and upheaval. In Hindu traditions, there is the concept of the Kali Yuga, the final age of decline that ends with destruction and renewal. Buddhism describes cycles of degeneration and rebirth in which the teachings eventually disappear before being rediscovered. Even traditions that are less focused on apocalypse still contain ideas of cosmic cycles that end and begin again.
These themes are not limited to religions that are widely practiced today. Ancient civilizations also imagined catastrophic endings. Norse mythology described Ragnarök, a final battle between gods and monsters that destroys the world before it is reborn. In ancient Mesoamerica, the Aztecs believed the universe had already passed through several destroyed worlds before the current one. Zoroastrianism described a final purification of the world through fire. Many indigenous traditions also contain stories of floods, fires, or cosmic resets that mark the end of one age and the beginning of another.
Outside of organized religion, the pattern continues. Modern history contains many secular doomsday movements and predictions. Some small religious sects predicted specific dates for the end of the world and gathered followers around those expectations. The Heaven’s Gate group believed that a spacecraft hidden behind a comet would carry them to a higher existence. Other movements predicted nuclear annihilation or global collapse. At the turn of the millennium, many people feared that computer systems would fail and bring down the infrastructure of modern civilization in what became known as Y2K. In more recent years, the idea of artificial intelligence destroying humanity has taken on a similar cultural role.
The persistence of these narratives suggests something important about the human mind. People tend to interpret the unknown using familiar patterns. When we imagine powerful forces, we often assign them human motives. We assume that intelligence will behave the way human intelligence behaves. We assume that power will pursue domination because human power has done so throughout history.
This tendency is difficult to escape. Human beings evolved to interpret the world through social behavior. We are constantly evaluating intentions, threats, alliances, and rivalries. These habits help us navigate relationships with other people. They also lead us to attribute intention where none may exist. A storm becomes angry. A malfunctioning machine becomes stubborn. A powerful technology becomes ambitious.
Historical experience reinforces these assumptions. The rise of centralized power has often produced conquest and control. Empires expand. Rulers dominate rivals. Military technology is used to impose authority. When societies encounter unknown forces, they often imagine those forces behaving in the same way. The conquerors are usually the ones who arrive first. The history of colonial expansion provides a clear example. When Europeans reached the Americas, they did not arrive as neutral observers. They arrived with armies, diseases, and systems of domination that reshaped entire continents.
That historical memory shapes modern imagination. When people speculate about extraterrestrial civilizations, they often assume that aliens would arrive as conquerors. When people speculate about superintelligent machines, they imagine a similar pattern. A powerful intelligence appears. It seeks control. It eliminates threats. These narratives follow familiar human behavior because that is the behavior we understand.
The difficulty lies in separating human history from the nature of the unknown. Intelligence does not automatically imply conquest. Power does not necessarily imply hostility. Yet our experience with human institutions and human rulers makes those outcomes seem inevitable. In many cases, we may not be predicting the behavior of machines, aliens, or future technologies at all. We may simply be projecting the record of human power onto whatever new force we encounter.
I would argue that my Anthropomorphic Projection Hypothesis of Artificial Intelligence accounts for this entire pattern. We continue to project our own motives and historical patterns onto technologies that we do not yet fully understand. The deeper problem is that we still lack a clear definition of consciousness itself, which makes confident predictions about conscious machines premature. At the same time, we cannot fully predict how disruptive technologies will reshape society. Artificial intelligence is already beginning to alter the structure of work, communication, and creativity, and we are witnessing the early stages of those changes now. Yet uncertainty about the future should not automatically lead us to assume that machines will inherit the worst traits of human power. In many cases, those fears may tell us far more about human nature than about the technologies we are creating.
What AI Fears Reveal About Human Nature
None of this is meant to suggest that catastrophic outcomes involving artificial intelligence are impossible. History has taught us to be cautious about making confident predictions. Technological breakthroughs have a way of surprising the societies that create them. Artificial intelligence will almost certainly reshape industries, economies, and daily life in ways that we cannot fully predict. It may produce consequences that are disruptive and even dangerous. Ignoring that possibility would be careless.
What I am arguing is something narrower and more grounded. The popular scenario of a conscious artificial intelligence rising up to conquer or eliminate humanity is not impossible. It is simply improbable. It assumes a chain of developments that science has not demonstrated and cannot yet explain. We do not have a clear definition of consciousness. We do not know how it emerges in biological systems. We have even less reason to believe that it will suddenly appear inside a machine and immediately adopt the ambitions of a human tyrant.
What we do understand very well is human behavior. History provides a long record of competition for power, control of resources, and the concentration of authority in the hands of a few. New technologies have often amplified those tendencies rather than eliminated them. Printing presses spread ideas, but also propaganda. Industrial machinery increased productivity but also created new forms of exploitation. Nuclear technology produced both energy and weapons capable of destroying civilization. Artificial intelligence will almost certainly follow the same pattern. It will be a powerful tool. Like every powerful tool before it, it will reflect the intentions of the people who build and control it.
This is where the Anthropomorphic Projection Hypothesis becomes useful. When people imagine hostile superintelligent machines, they often picture something that behaves exactly like a human conqueror. The machine seeks dominance. It eliminates rivals. It centralizes power. In other words, it behaves like the kinds of leaders and empires that have shaped human history. The imagined motivations of the machine mirror the motivations we already understand.
There is a certain irony in this. We spend a great deal of time worrying about the possibility that machines will become our tormentors. Meanwhile, humanity has spent thousands of years demonstrating a remarkable talent for tormenting itself. We have fought wars with swords, muskets, artillery, and nuclear weapons. We have built systems of surveillance, censorship, and control long before computers entered the picture. If the future ever becomes dystopian, it will not require a godlike machine to get us there. Human greed and the pursuit of power have already proven quite capable of doing the job.
The real question may not be whether artificial intelligence will decide to dominate humanity. The real question is how humanity will choose to use the tools it creates. Technology amplifies intention. It does not replace it. If we want to understand the future risks of AI, we may not need to imagine the psychology of machines at all. We may simply need to look honestly at the psychology of the species that is building them.