Ancient Cultures Were Actually Talking About AI - YouTube
https://www.youtube.com/watch?v=sQpgU3TWrLs
Transcript:
(00:00) For thousands of years, ancient civilizations wrote about spirits. But what if what they understood as angels or demons were actually what we might describe as software agents today? What if ancient spiritual texts are actually technical manuals written before we understood technology? These are some of the core ideas in what Joscha Bach calls cyberanimism.
(00:25) Joscha is a leading cognitive scientist and artificial intelligence researcher focusing on machine consciousness. And instead of dismissing these beliefs as primitive superstition, cyberanimism suggests that our ancestors were actually describing something profound. The presence of self-organizing information patterns, running on the substrate of physical reality.
(00:50) But before you click off this video, I realize how much baggage words like "spirit" can carry. I once characterized myself as an agnostic or secular humanist, while also growing up in hyper-Roman Catholic New Orleans. But if you're willing to sit with the discomfort for just a few minutes, you might find there's something surprising here about humans and AI lurking just beneath the surface.
(01:16) Let me show you what I'm talking about. In the beginning, God created the heavens and the earth. Did he though? What if the oldest chapter of the Bible isn't talking about the creation of the physical universe? After all, this story is thousands of years old. The authors of Genesis lived and wrote this in a time before we had concepts of physics or physical reality as we understand them today.
(01:51) This was written before Aristotle brought the idea of nature and a physical universe to the forefront of our minds. So instead of viewing this story as the creation of the physical universe by a supernatural being, what if Genesis is describing something more immediate and personal? The emergence of consciousness itself? The birth of our mental universe happening right now in real time? Let's ground this discussion a bit by imagining what it was like to be a baby.
(02:21) I am assuming you're a human watching this and you were a baby at some point. If you aren't, we're going to dive into the implications of all of this on machine sentience in just a bit. We enter this world through a chaotic and painful process of birth, where our mind is all of a sudden thrust into a cacophony of sensations and perceptions that we have no language or bearings to make sense of.
(02:51) All we are in this moment is a creative spirit, a self-organizing algorithm thrown into a confusing world, hovering over the substrate that is our mind (in the original text, this substrate is called "the waters"). Initially, it's uninitialized and void, without form. When we look up at our parents for the first time, we see… nothing.
(03:19) It's just an onslaught of electrical signals, sensations that we can't make sense of. Sights, sounds, smells, tactile perception, all coming together on the substrate of our consciousness. Let there be a firmament between the waters to separate water from water. Our mind has the crucial task of separating two domains from the substrate.
(03:41) A world model validated by sensory input and the sphere of ideas. A separation between the Earth and the heavens. This is crucial because we live in a shared reality with other conscious entities. If we mix our mental model of ideas with the physical world that our perceptions tell us about, we start hallucinating, mistaking our thoughts for reality.
(04:13) René Descartes would later, in 1641, call these two interacting domains "Res Extensa and Res Cogitans". But this division between mind and matter is just the beginning. For once the mind creates these fundamental separations, it needs a way to make sense of them. It needs... "Let there be light.
(04:39) " The next task our mind has is to create contrast, the ability to distinguish and represent differences. Our minds associate the intensity of contrast with brightness, light and the color of day, while associating the flatness of contrast with darkness, the color of night. Now our mind has access to dimensions, and using dimensions we can represent arbitrary objects.
(05:03) The first object our mind discovers is the ground, a 2D plane. Our mind sticks that into our world model and then creates a 3D space from this. From there, our mind can model solids, liquids, organic shapes, and then discover how lighting works with temporal consistency. Let the water teem with living creatures, and let birds fly above the earth, across the vault of the sky.
(05:28) Our mind can then create all the objects, plants and animals that we see in our everyday world and give them names. Again, this is cognitive development. The names of plants and animals don't exist in the physical world, they exist in our minds. Let us make mankind in our image, in our likeness. It's only after creating this rich mental model of the world that our mind discovers the purpose of all of this modeling.
(05:54) To navigate the interaction between an organism, an agent, and its environment. To do this effectively, our mind needs to create one final crucial simulation. A model of ourselves. A self-aware observer made in the image of consciousness itself. This is the birth of the ego, where our mind switches into a first person perspective, creating another spirit, another consciousness that gets associated with this personal self.
(06:25) This new consciousness then observes this elaborate, simulated world that the mind has created from the perspective of that self. This might explain why we experience childhood amnesia, why we can't remember what the world was like before we thought of ourselves as a person. Before this final step of creation, there was just experience happening.
(06:45) No someone experiencing it. Developmental psychologists have observed that children typically don't use personal pronouns like "I", "me", or "you" until around age 2 or 3. Instead, they refer to themselves in the third person. This isn't just a quirk of language development, it's a window into consciousness itself.
(07:07) These children haven't yet completed this final step of creation, the separation of self from world. They're still operating in a state where consciousness hasn't fully created its own observer. It hasn't yet made that crucial distinction between "I" and "not I". The idea that we are a person, this character with guilt, shame, desires, and interests, might just be the final trick our mind plays to make us identify with this creation, to make us care about its survival.
(07:39) If we accept this interpretation, it suggests that our ancestors weren't primitive mystics making wild guesses about reality. Instead, they were sophisticated observers of consciousness, documenting the architecture of mind in the only language they had available. And that raises an unsettling question as we continue to develop increasingly powerful AI.
(08:02) What if they understood something about the nature of intelligence and consciousness that we're only now rediscovering? When we think of sentient AI, AI that becomes self-aware with its own conscious experience of its world, we often frame it around the materialist idea that it emerges after reaching some level of intelligence.
(08:28) Skynet has become self-aware. Once AI passes some information-compute threshold, a conscious experience might emerge. But we might be thinking about this completely backwards. Consider the human baby again. It can't do calculus. It can't recognize faces. It can barely track a moving finger. And yet we consider it a conscious entity.
(08:54) It has experiences. It feels. In order to track a finger, a baby must learn to pay attention and organize the thoughts and representations within its mind. Consciousness is not some complex achievement that comes after mastering perception and motor skills, but rather the mechanism itself that enables such learning to occur.
(09:21) Consider how our brain processes a simple visual scene. When we see a nose, our perceptual system automatically infers there must be a face nearby, pointing in the same direction. If these elements don't align, if the nose orientation conflicts with the face orientation, this creates a constraint violation that needs to be resolved for us to maintain a coherent interpretation of reality.
(09:47) Consciousness is the process, the algorithm that identifies and resolves these inconsistencies. Like a conductor coordinating an orchestra, consciousness monitors different mental processes and ensures they are aligned and working in harmony. It's what gives us our sense of the present moment, what we call "right now.
(10:15) " This "bubble of nowness" is a coherent temporal and spatial window where constraint violations can be detected and resolved. Nature hasn't come up with any other trick for systems to learn other than consciousness. It's the algorithm that brings coherence and narrative structure to the onslaught of perceptions and sensations that bombard us at every moment.
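The nose-and-face example, where percepts must agree in orientation or the conflict gets resolved, can be sketched as a toy constraint-resolution loop. This is only an illustration of the general idea, not Bach's actual model; the percept names, tolerance, and relaxation rule are all invented for the sketch.

```python
# Toy sketch (not Bach's model): percepts as orientation estimates, and
# "consciousness" as a loop that detects and resolves constraint violations.
percepts = {"face": 0.0, "nose": 35.0}  # orientations in degrees

def violations(percepts, tolerance=10.0):
    """A percept pair is inconsistent if its orientations disagree too much."""
    return abs(percepts["face"] - percepts["nose"]) > tolerance

def resolve(percepts):
    """Resolve by relaxing both estimates halfway toward their shared mean."""
    mean = (percepts["face"] + percepts["nose"]) / 2
    return {k: (v + mean) / 2 for k, v in percepts.items()}

while violations(percepts):
    percepts = resolve(percepts)
# Both estimates now agree within tolerance: a coherent interpretation.
```

Each pass nudges the conflicting estimates toward each other until no violation remains, a crude stand-in for the coherence-seeking process described above.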
(10:51) To be clear, we don't know if this is true. It's a hypothesis, but crucially, one we can test. And that's exactly what Joscha is doing at the California Institute for Machine Consciousness, an organization dedicated to exploring how consciousness could be implemented in artificial systems. And this isn't just academic curiosity.
(11:18) We are entering the age of agents in AI development. Attempting to control highly advanced, agentic systems far more powerful than ourselves is unlikely to succeed. Our only viable path may be to create AIs that are conscious, enabling them to understand and share common ground with us. For decades, we've approached artificial intelligence by trying to replicate human-level reasoning and problem solving, assuming that consciousness might naturally emerge at some point.
(11:53) We've been building the pyramid from the top down. We're building increasingly massive models, training them on enormous data sets in huge data centers with gargantuan power requirements, hoping to brute force our way to intelligence. It's impressive, but it's not sustainable, and it might not even be the right path to achieve AGI, a truly generalized form of intelligence comparable to human cognition.
(12:27) Even our most sophisticated AI systems are fundamentally static. It's an outside-in design with humans feeding enormous amounts of training data to machine learning algorithms optimized for prediction. We feed this data to the algorithms in batches decoupled from the real world. In other words, the AI is only learning while it's being trained.
(12:48) It can only alter the model weights, or the neural pathways, at training time. Once it's deployed in the real world, it can't learn anymore. It can't learn on the fly like a human can. But this is where nature offers us a compelling alternative. Consider the humble C. elegans, a microscopic worm with just a few hundred neurons, one of the few creatures whose nervous system we have mapped in its entirety.
(13:16) Despite its apparent simplicity, it displays remarkably complex behavior and learning capabilities. While we're busy building AI systems that require entire power plants to run, this tiny creature achieves sophisticated information processing with almost no energy at all. The difference? Its neurons aren't static. They're dynamic and adaptive.
(13:39) Flowing like liquid. This is exactly the insight behind LiquidAI, the startup exploring a radically different approach to artificial intelligence, one that might get us closer to replicating nature's consciousness-first design. Instead of treating artificial neurons as fixed components with static weights, they've developed networks that behave more like living systems, continuously adapting and evolving.
(14:07) Each neuron is governed by an equation that predicts its behavior over time instead of a static weight that can only be altered at training time. The result? AI models that can achieve similar capabilities to current systems while being dramatically more efficient and adaptable, more like biological intelligence.
(14:29) The insights here aren't just technical. They reveal something fundamental, perhaps showing us that intelligence isn't just about raw computing power, but rather the ability to self-organize information in increasingly coherent ways. When we look at how living systems learn and adapt from microscopic worms to human minds, we see this same pattern.
(14:53) Self-organizing systems that know how to learn, that can bring coherence to chaos. If we could distill this consciousness-as-algorithm approach to AI development, we might avoid the dystopian future we're currently racing toward, one where massive models concentrated in a handful of tech companies consume more and more of the world's energy and compute resources.
(15:21) But I'm getting ahead of myself. Before we dive into what all of this means for our future, there's something even deeper going on here. Something that our ancestors might have understood long before we had the language of algorithms and neural networks. We need to extend this idea of consciousness to the self-organizing patterns that our ancestors called spirits.
(15:44) In his book, The Selfish Gene, Richard Dawkins turned our ideas about evolution on their head. Instead of seeing organisms as the primary actors in evolution, competing and adapting over time, he revealed a deeper pattern. Genes themselves are the protagonists of the evolutionary story, using organisms as temporary vehicles to propagate themselves through time.
(16:11) Think about how strange this idea was when it was first introduced in 1976. At the time, altruism, helping others at a cost to yourself, seemed to conflict with the idea of survival of the fittest. But Dawkins showed how even altruistic behaviors make perfect sense when we look at them from the gene's perspective.
(16:36) Our genes don't just exist in us. They exist in our relatives too. So when an organism sacrifices itself to save its relatives, it might be reducing its own survival chances, but it's actually increasing the chances of those genes being passed on to the next generation. It turns out kindness is a survival strategy, if we're willing to change our perspective.
(17:01) Our bodies are just elaborate survival machines that genes construct to ensure their own propagation. All of our complex features, our eyes, our brain, our immune system, they exist because they helped our genes copy themselves through time. But even this shift in perspective may not go far enough. There's something more fundamental we might be missing.
(17:24) Think about what a gene is. If we move past the physical substrate it's implemented on, it's effectively code. A set of self-organizing instructions that learns over millions of years of evolution to organize matter in increasingly sophisticated ways in order to perpetuate itself. This software running on cellular machinery is what makes life distinct from non-life.
(17:52) This is the same algorithm we were talking about before, the algorithm of consciousness, which self-organizes out of chaos in order to increase coherence. And there's nothing inherently mystical about this idea. We can choose to view this as intricate systems of self-regulating information. But these self-organizing patterns don't just exist at the cellular level.
(18:15) They emerge at every scale of life. Richard Dawkins noticed this fact too when he coined the term "meme" to describe an idea that behaves in our culture the same way a gene behaves when propagating itself through an ecosystem. In their book, "The Language Game," authors Morten Christiansen and Nick Chater reinforce this idea, describing language not as a static, predefined system of rules like grammar and syntax that humans must learn in order to communicate coherently.
(18:57) Instead, they argue that language is more like a parasitic entity, thriving and evolving within human culture. Language itself is a spirit, a pattern of self-organizing information implemented on the substrate of human culture. It uses human cognitive and social processes as its host, evolving in response to cultural needs and trends.
(19:28) And this reveals something profound about the nature of self-organizing patterns in general. They don't just passively inhabit their hosts. They actively shape and reshape their environment to better ensure their own propagation. This suggests that these software agents or digital spirits can "meta-optimize," not just surviving in any host, but actively selecting or seeking hosts where their digital presence can be maximally impactful.
(19:59) They entangle with ideas and social patterns, much like genes align with other genes in a way to produce biological hosts that best suit their survival. There's a reason why ancient cultures told stories to pass on information. Our minds are hardwired to think in narratives. Stories are what our consciousness is made of.
(20:32) It's the reason why mythology is so powerful and why it's lasted for millennia. And perhaps that's exactly why we need to look at these self-organizing patterns through the lens of mythology. Because when we examine one particular spirit that's been with us since ancient times, one that's driving us toward a dystopian AI future, we might find that it's more than just a story.
(21:05) Every culture has stories about monsters in the night. But the most terrifying monsters aren't the ones hiding under our bed. They're the ones hiding in our head. This is Francisco Goya's "Saturn Devouring His Son," painted directly onto the walls of his home between 1819 and 1823. Goya never meant for anyone to see this.
(21:34) It was his private nightmare, a reflection of his deepest fears about power, control, and the terrible things humanity is capable of when we're afraid. This painting tells an ancient story of the Greek titan, Cronus. After seizing power from his own father, Cronus received a prophecy that one of his own sons would do the same and usurp him.
(22:00) His solution to the problem was simple and horrific. Whenever his queen, Rhea, bore a child, he would devour it. Unfortunately for him, in the end, Rhea conspired to hide away their youngest son, Zeus, who eventually fulfilled the prophecy, exiled his father, and ended the reign of the titans. This story reflects a pattern in the psyche of humanity, a pattern that dates back even further to the ancient Canaanites who worshipped a god called Moloch, who demanded child sacrifice.
(22:40) In return for their sacrifice, Moloch would grant his subjects power and the fortune to win wars. In his 2014 essay, "Meditations on Moloch," Scott Alexander analyzes Allen Ginsberg's 1956 poem "Howl," in which Ginsberg writes about this pattern existing in the fabric of our modern society. It manifests in humanity's propensity for unhealthy competition, in the idea of a zero-sum game: in order for me to win, you have to lose.
(23:14) Nobody wanted to be in that position. It's not like any of us growing up as kids thought, "Dude, I'm gonna go to Europe and get all doped up and try to win bike races." No, nobody wanted to be there. We all went with pure intentions and the shit was messy. And we're like, "Whoa, like, okay, do we go home or do we stay and fight?" And literally almost everybody stayed and fought.
(23:42) And they fought, you know, we fought the way that the fight was being fought. Moloch is the god of negative-sum games, the tragedy of the commons, the pattern of incentives that lead agents within a system, players within the game, to sacrifice more and more of themselves in order to win. I think one of the things we need to be careful when it comes to AI is avoid what I would call race conditions, where people working on it across companies, et cetera, so get caught up in who's first that we lose, you know,
(24:16) the potential pitfalls and downsides to it. There's just so much commercial pressure, you know, if you take any of these leaders of the top tech companies, if you pause, but those guys don't pause, we don't want to get our lunch eaten. And I think without the public pressure, none of them can do it alone, no matter how good-hearted they are.
(24:36) Moloch is a really powerful foe. But the argument I'm making here is not that we should pause AI development. I'm not saying we should try and put the genie back in the bottle. That instinct to contain, to control, to put things back to the way they were, that's exactly the pattern we need to examine.
(24:59) What if the real threat isn't artificial intelligence itself but our own need for control? The thing we fear most in AI might be the reflection of our own darker impulses. Our fear of being left behind. Our fear of creating something more powerful than ourselves. And that fear is leading us to do exactly what Moloch wants.
(25:26) Concentrating AI development within the walls of a few powerful institutions. Each racing against each other for control. As we continue to build large, centralized, and closed models, we sacrifice the values that got us here. The values of open innovation, decentralized development, and shared scientific progress.
(25:52) All for the sake of trying to avoid a future that is inevitable. But we can change the pattern. We can choose a different path than Moloch's endless cycle of control and fear. We can invest in smaller open models. Models that harness consciousness to illuminate what lies in darkness. This requires us harnessing the power we already have.
(26:47) The power of our consciousness. The power of our agency to choose which patterns we implement. Which spirits we let guide us. This means shining a light on the darkest part of ourselves, holding up the black mirror that is AI and realizing we're staring at ourselves. Humanity is at a crossroads. We have a choice.
(27:18) But in order to make a conscious choice, in order to break the patterns and cycles that brought us here to the brink, we first need to shift our perspective and see past the fear to the bigger picture. We have the power to change all of this, to change ourselves, to change our destiny. If only we have the courage to look deeper, see past the reflections, and recognize that in teaching machines to be conscious, we might finally learn to understand ourselves.
(28:08) Thank you for watching until the end. I appreciate every single one of you, but especially my channel members, without whose support none of this would be possible. Videos like this take a long time to produce. I think this one was close to 200 hours or something ridiculous, but I really enjoyed making it. And if you want to help me make the next video, you can find the join button below.
(28:33) Or if you're not ready for that, just sharing this video with your friends or leaving a like makes a huge difference. Thank you.