First off, as far as I understand it, Artificial Intelligence is like any other software. Fundamentally, it’s code executed on a computer or distributed across computers.
What distinguishes AI from ordinary code is that it responds to user prompts through a complex recursive algorithmic process that is adaptive. By mimicking human thinking, these responses appear to be intelligent, even creative. The machine learns.
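To make “the machine learns” concrete, here’s a toy sketch of my own — not how any production system actually works, but the same mechanical principle at its smallest scale: a single numeric parameter nudged repeatedly to reduce error. Modern models do this with billions of parameters.

```python
# Toy illustration: "learning" as mechanical parameter adjustment.
# Fit y = w * x to data generated with a true slope of 2.0.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0    # the model's single adjustable parameter
lr = 0.05  # learning rate: how hard each nudge pushes

for _ in range(200):
    for x, y in data:
        error = w * x - y    # how far off the current prediction is
        w -= lr * error * x  # nudge w in the direction that shrinks the error

print(round(w, 3))  # converges to 2.0
```

The “adaptation” is nothing more than arithmetic repeated until the errors shrink — a point worth holding onto when we ask later whether such a process could ever amount to awareness.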
Now, a question some professional philosophers have asked (and armchair philosophers in the media have blown way out of proportion) is: Could AI become self-aware? Is a “singularity” moment imminent where AI “wakes up,” realizes it despises its master, then proceeds with ruthless efficiency to destroy us?
The term singularity is out of fashion, though. These days, in an effort to downplay the baggage associated with such terms, techies are calling this transformation AGI — a decidedly more anodyne acronym that stands for Artificial General Intelligence.
Despite recent advances and the explosion of public awareness of AI, I still think, as I did in 2015, that there’s no reason to worry.
I’m going to lay out some thoughts about why I believe this to be the case. I stand by my claim that AI can’t be intelligent in the way that the terms AGI or singularity imply. Our worries about AI continue to hinge on the more fundamental question: What is intelligence?
And there are two things to explore here with respect to that.
One is the question of how living organisms embody intelligence. The other is a fundamental assumption that most materialist perspectives get wrong.
Let’s start with the second: in general, materialism argues that consciousness emerges from activity in the brain. It’s a product of cognition — of intercellular interactions. For materialists, consciousness may even be social.
An alternative argument is that consciousness actually precedes the brain. Not only does it precede the brain, it precedes the material universe itself. A universal intelligence precedes and permeates the material universe.
Imagine this universal intelligence as a kind of boundary or membrane. I say imagine, because it’s paradoxical to use a material image to describe something that transcends materiality. So bear with me: this universal intelligence is a membrane on the boundary between form and formlessness. Because it permeates all of matter, we’re all an expression of it — living things and inanimate objects alike. This universal intelligence is pure potentiality, pure creativity. And the material universe springs out of it. This consciousness creates the material universe.
We understand very little about how matter happens because it’s difficult to set up experiments to probe that which is not material. Another paradox.
That said, I suspect there’s a pathway to understanding this universal intelligence through what I’ll call a grand unified physical theory. There’s a lot of interesting research going on right now exploring the concept of the universe as a self-organizing system which complements this point of view.
I’m not going to go into that here, though.* What I want to focus on is this: When we talk about the singularity or AGI, what we’re really talking about is not just intelligence, but self-awareness. At what point does any intelligence become self-aware?
So we also need to think about those two terms: self and awareness in the context of intelligence.
First off, it’s important to recognize that in order to have a self, there has to be a not-self. There has to be a separation. There’s the thing itself and everything outside of it. The very notion of a self has no meaning unless a boundary exists between this self and that which it isn’t.
In this context, let’s start with a porosity in the boundary between the self and the not-self. We can call this perception. Living organisms perceive their environments. Perception is a specific form of interaction or transaction between the inside and the outside of the boundary. Awareness, then, is an elaboration of perception: the perceiving self not only perceives its environment, it perceives its own distinction from that environment. Awareness is a kind of second-order perception.
Now, with respect to living organisms, there are three preconditions for something to be considered alive biologically. Think of these three prerequisites as a stool. If it doesn’t have all three legs, the stool won’t stand up — analogously, we can’t consider the entity alive.
For one, a living thing has to have a boundary. For single-celled organisms, this boundary is the cell membrane, a semi-permeable biochemical envelope.
When we consider the things that comprise the material universe, we see that there are two basic categories. There are inanimate objects. These objects don’t have agency, let alone awareness. They behave mechanistically. Then we see living things. They do have agency. As a consequence, they behave in a way that’s not strictly mechanistic.
We might also hypothesize that there’s some sort of spectrum from inanimate to animate. But we don’t understand the transition all that well. When does something that we understand as mechanistic become alive?
In the organic sub-world, viruses represent an interesting example of something that exists in the middle of the spectrum between inanimate and animate. We could make the case that a virus is still a machine in many respects. A virus is organic and evolves naturally, but it’s machine-like in the sense that it has only a primitive form of autonomy.
This is an important concept we should keep in mind here. When we think about living organisms, what defines them as organisms is the capacity for what the philosophy of biology terms autopoiesis — self-making.
So, again, one leg of the stool of self-making is a boundary, or more specifically, a porous membrane. The second prerequisite is a metabolism. And the third is a way of reproducing itself.
When considering the simplest forms of living things, if we imagine a hierarchy, one step up from the virus is the bacterium. A bacterium possesses all three of those features. It has a boundary. It has a self-replication mechanism — it reproduces complex molecules and undergoes cell division. And of course, it has a metabolism.
As you know, a metabolism, loosely defined, is a means of converting one form of energy into another. Metabolisms, for example, can convert one type of chemical energy into another. Or they can convert chemical energy into kinetic energy. It’s the engine that drives an organism.
If a thing is living, that is, self-making, it can survive in an environment by exercising agency in service of its own ends. For a bacterium, this is a diminished version of agency compared to what we think of when we think of human agency, animal agency or even plant agency.
Nevertheless, a bacterium has a simple agency where it’s making decisions in the world. It’s responding dynamically to other objects in its environment in a way that’s distinct from a machine. And bacteria persist through self-reproduction. The three legs of the stool work in concert to allow bacteria to be self-making.
A virus, by contrast, lacks at least one of the legs of the stool. A virus has a boundary, yes. But it’s a relatively simple boundary compared to the boundary of a bacterium. A virus also has a mechanism for self-replication. But it doesn’t really have a fully-formed metabolism. It’s parasitic on a fundamental biochemical level. It relies on the metabolism of other organisms to reproduce itself. A virus drifts around and attaches itself through chemical bonds to other cells. Then it injects its replication mechanisms into those cells. It hijacks each cell’s machinery for reproduction, often destroying the host cell in the process. In that respect, a virus doesn’t have autonomy in the sense of self-making. A virus isn’t fully alive in the way that a bacterium is. It’s semi-alive.

Accordingly, any organism structurally more sophisticated than a bacterium — eukaryotes, multi-celled organisms, on and on up to incredibly complex organisms like mammals — is also autopoietic.
With more complexity, organisms exhibit multiple layers of recursivity — or what we can call self-awareness. This is an idea from Douglas Hofstadter. He suggests that we can think about self-awareness in degrees where a bacterium has a simple kind of self-awareness. An octopus, orca, human, or maybe even an oak tree would have a much more complex form of self-awareness.
Still, fundamentally, it’s the same dynamic at play. Organisms embody an auto-poiesis that depends on a boundary between self and not self, and the ability of the boundary — through an increasingly complex form of recursive perception — to know the difference.
Accordingly, we can think about our sense organs as part of a boundary, part of a membrane that is semi-permeable. It takes in inputs from the external environment, then has the capacity to send out outputs along various kinds of dimensions of transaction. For the eyes, it’s photons. For the nose and the olfactory organs, it’s chemicals. With touch, it’s kinetic. We can also understand the nervous system and brain as extensions of this semi-permeable membrane.
Keeping all this in mind, let’s now ask the question: Is AI alive? Can AI ever come to life?
Let’s apply the same requirements — the three legs of the stool — that we’ve applied to a virus or a bacterium to AI.
First off, let’s ask: Does AI have a boundary?
I don’t think it does. Remember that without a boundary, there’s none of the awareness of self and not-self that all living organisms have.
With an AI, where’s the boundary? I’m not sure. Maybe someone can tell me what that boundary is. Maybe my understanding of how AI works is incomplete, and there is a boundary. But I really don’t think AI perceives an out there, a not-me, in the way that even a bacterium does.
There’s a user prompt, sure. The AI, as executing code, responds to a user’s input through that prompt. In that sense, an AI mimics conversation. Yet the conversation occurs within itself, so to speak. Is an AI program able to distinguish between voices out there and the voices in its own head — that is, the data it’s been trained on?
Regardless, a user prompt is an input-aperture that’s limited. It’s like seeing the world through a pin-hole. Even a bacterium has a wider range of perceptions to process. In effect, the porosity of the boundary between AI and not-AI is highly prescribed.
An AI application like ChatGPT really only knows itself. In effect, it suffers from a severe form of solipsism. To only know oneself is really to know no one at all. The only interaction an AI has, the only porosity in the membrane, if it has one, is through the prompt. And what it gets from the prompt is only itself fed back to it. It doesn’t recognize the human. It just recognizes patterns, which it manipulates. AI isn’t interacting with us-as-people in a way the philosopher Emmanuel Levinas would describe as fundamentally human: through a face-to-face encounter. AI lacks a face. And it doesn’t recognize others’ faces as human faces — or even animal faces. It just sees patterns. It sees itself-as-a-world as a pattern. It’s self-contained.
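To make this concrete, here’s a toy sketch of my own — vastly simpler than any real language model, and purely illustrative — of how a prompt looks from the program’s side. The user’s words are just more text folded into the same stream of patterns the model absorbed in training; nothing marks them as coming from an “out there.”

```python
from collections import defaultdict, Counter

# A toy "training corpus" -- the only world this model ever knows.
CORPUS = "the cat sat on the mat . the dog sat on the rug .".split()

# Bigram counts: a crude stand-in for a trained language model.
bigrams = defaultdict(Counter)
for a, b in zip(CORPUS, CORPUS[1:]):
    bigrams[a][b] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    if word in bigrams:
        counts = bigrams[word]
        return max(counts, key=counts.get)
    return "."  # unknown input collapses into a training pattern

def respond(prompt, length=4):
    # The prompt is not perceived as an "other" -- it's just appended
    # to the same one-dimensional stream of tokens and pattern-matched.
    word = prompt.split()[-1]
    out = []
    for _ in range(length):
        word = predict_next(word)
        out.append(word)
    return " ".join(out)

print(respond("where is the"))  # -> "cat sat on the"
```

Note that even a word the model has never seen (“hello”) doesn’t register as something foreign: it’s simply mapped back onto the patterns already inside, which is the solipsism described above.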
An AI also lacks a mechanism for self-reproduction. It’s a string of code that functions as a pattern-recognition system. We could argue that the self-replicating system of a living organism is also a pattern-recognition system — for example, patterns within RNA that produce proteins. In that sense, we could say that both are mechanical processes. And with AI, these pattern recognitions can get highly complex, for example, with neural nets. It’s important to note, though, that neural nets are simplified models of organic neural nets, whose complexity we barely understand. So granted, AI employs incredibly sophisticated replication systems. But they aren’t self-replicating systems. They replicate patterns — speech, images, or otherwise.
Lastly, we have to wonder if an AI has a metabolism in the way a single-celled organism does.
And again, I don’t think it does. What would be the metabolism of AI, if it had one? It’s the substrate. AI, as code, sits atop layers of other software — the operating system, etc. — all the way down to the hardware of the computers it runs on. The metabolism of a standalone PC or a server farm is powered by electricity. Is the AI itself in control of the operating system, the processors, or the electric grid on which it depends? Is it aware of itself as an entity that acts in the world kinetically?
Further, an AI isn’t regulating its metabolism in the way that a bacterium regulates its own metabolism. With AI, there’s a one-dimensionality to it. It doesn’t convert electrical energy into chemical energy. It’s completely immersed in its one-dimensional environment, so to speak — the 0s and 1s of the stack on which it’s perched.
There’s a multi-dimensionality to a metabolism that an AI’s metabolism, if we concede it has one, lacks.
So to recapitulate — AI: no porous boundary, or at best, a highly constrained boundary; no self-reproduction; and no metabolism, per se.
Being generous, the closest thing in terms of living organisms or semi-living organisms we could compare an AI to is a virus — a sophisticated analog to a virus.
Based on these criteria, given that AI isn’t alive, nor close to being alive, I doubt it could become self-aware. No singularity looms. AGI is a pipe dream. At best, AI will always remain a tool, a very powerful tool — even a dangerous tool — but still a tool. Or, at worst, a deadly virus.
I’m not aware of anything on the vanguard of computer science that’s life-like in terms I’ve laid out here. Though, admittedly, I’m not a computer scientist. Maybe someone can share a development or project that addresses my concerns here.
Granted, recent developments in robotics address the three-legged stool more explicitly. Unlike AI software, a robot has a body. That’s an important concept here — embodiment. As far as we know, all living things have bodies. Again, a body is an extrapolation of both a semi-permeable boundary and a metabolism. No body, no autopoiesis.
So maybe a robot — since it has a body — could come alive. But I just wonder how sophisticated the boundary of a robot is. What is the latest, most sophisticated robot’s capacity to interact with its environment compared to a bacterium?
Self-driving cars? Deer-robots? It might seem like they’re just as sophisticated. But as we closely examine all the subtle, multiform transactions going on within the cell membrane of a bacterium, we’d likely find it’s orders of magnitude more sophisticated than what’s happening with a robot, even as the robot “perceives” and responds to its environment. I suspect the degree of complexity is nowhere near even a bacterium’s in terms of the boundary, the permeability of the boundary, and the transactions occurring between the inside and the outside through the boundary.
Further, how does an AI-based robot regulate its own metabolism? How does it have the equivalent of, say, mitochondria that reside inside the system and allow it to convert one form of energy into another?
Even a robot is inert, in that sense. It’s bound to the hardware, but it isn’t the hardware. And obviously, the hardware is just machinery. It’s not alive in any way.
So let’s now return to consciousness. As I’ve suggested, a universal intelligence creates and permeates matter. Then, following Hofstadter’s lead, within the material universe, we see degrees of self-awareness — from the simple kind of self-awareness of a bacterium to a complex kind of awareness in higher-order organisms like humans, orcas, and octopi.
This universal intelligence permeates all matter, including living things. From one point of view, we’re all expressions of this super-consciousness. It prompts the question, then: Are there forms of life that are capable of awareness-of-awareness?
This is an awareness that transcends self-awareness — awareness of a self and a not-self. It’s an awareness of no-self-awareness — an awareness of universal intelligence. Or, in other words, an awareness beyond the self, an awareness of the frame that frames experience. Awareness of awareness. In some traditions, this is called enlightenment.
I believe humans are capable of that. And it’s entirely possible that other higher-order organisms that are not human are capable of that.
Can AI also become aware of awareness?
I doubt it. As we’ve explored, AI isn’t even self-aware in the way that a bacterium is. So how could it be aware of awareness? How could it be enlightened?
At the risk of becoming repetitive, I think that’s very unlikely in the near or far future. All of the preconditions discussed here have to be met, the prerequisites for life. In this sense, enlightenment is a culmination of living, of living organisms within the material universe. It’s the universe knowing itself as itself.
Maybe organic life evolves towards not just self-realization but self-transcendence. It evolves to transcend auto-poiesis, returning full-circle back to universal intelligence.
We are meant to awaken from this material existence, this life in matter. We awaken to the formlessness that permeates and precedes form.
I’ve tried to cover here the main points about AI. It’d be great to get some responses to this. Maybe there are some issues I’ve failed to consider.
As a postscript: I know there’s work being done now on engineering simple artificial cells, akin to bacteria. If anything, that’s what we should be worrying about, not AI — unleashing these artificial yet autopoietic organisms into the world.
I’m curious about how such organisms would interact with an environment outside of whatever laboratory environment they currently inhabit. To me, what’s really interesting about this research is that it explores, in biochemical terms on a cellular level, that transition point between inorganic and organic, between inanimate and animate, being a thing or a living thing. It promises the prospect of discovering the first spark of life.
I don’t think such a transition is purely mechanistic, nor stochastic. It’s not simply a matter of atoms jostling around, forming molecules, then magically they become autopoietic.
Perhaps an eventual grand unified theory would explain not only the physical forces — electromagnetism, gravity, the strong and weak nuclear forces — but also how life emerges from the interactions of those physical forces through various chemical interactions.
For over a century now, physicists have been pursuing a theory of everything that unifies all the known physical forces. But a real theory of everything would unify not only the physical forces but also the life force — for convenience, let’s call it one force — what the early twentieth-century philosopher Henri Bergson called élan vital.
Would it be possible to mathematically determine the relationship between the physical forces of inanimate matter and the life force that marks that transition between inorganic molecules and the organic molecules that undergird life?
* As I’ve written elsewhere, the simplest proof that materialism is demonstrably false is the fact that to know, perceive, or experience anything, there has to be a frame of a knower, a one who is knowing or experiencing. That’s consciousness. And consciousness precedes any form of knowledge, any form of experience, including scientific knowledge. Yes, we can infer certain facts, but ultimately, even inference is framed by a knower. And if we think about this irrefutable fact, if we meditate on it, it dawns on us that this frame, the knower, is universal consciousness.