AGI

Can AI Gain Consciousness by Itself?

Consciousness in AI is a topic debated not only by computer and cognitive scientists, but also by philosophers. Philosophers like John Searle and Hubert Dreyfus have argued against the idea that a computer can gain consciousness; for example, Searle's Chinese Room argument was proposed against the idea of strong AI. But there are also philosophers, like Daniel Dennett and Douglas Hofstadter, who have argued that computers can gain consciousness.

Although there are debates about how to create a conscious machine, in this article I choose to look at the creation of machine consciousness in another way. Do we have to design consciousness into the AI's architecture from the beginning to make a conscious AI? Or will the AI be able to gain consciousness on its own? Will consciousness emerge from the AI's architecture once it has gained sufficient complexity through evolution or self-modification, without human interference?

Consciousness without Human Design

Although consciousness is an important quality, defining it clearly is a somewhat difficult task. But we can roughly define it with two main components: awareness (phenomenal awareness) and agency. Awareness is the ability to perceive the external world and also to feel or sense the contents of one's own mind. Agency is control over the external world and also control over oneself or one's mental states; that is, control over both the behavioral aspects (moving external organs such as hands and feet) and the mental aspects. We should also be aware of that control for it to count as conscious: we should know or feel that we have the control, or that we are the ones doing it. Actions we are not aware of, like the beating of the heart or breathing, or things we do without thinking (for example, walking or driving while concentrating on something else), aren't taken as conscious actions. So, putting all of this together, we can define consciousness (or at least the definition I'm using for this article) as awareness of and control over external objects, together with awareness of one's own mental content. Another way of putting it is having a sense of selfhood.

According to the above definition of consciousness, we can see that the concept of self is also linked with consciousness. So, what is the self? The self can be defined as the representation of one's identity, or the subject of experience. In other words, the self is the part that receives the experiences, the part that has the awareness. The self is an integral part of human motivation, cognition, affect, and social identity.

The concept of self may not be something we are born with. According to the psychoanalyst Sigmund Freud, the part of the mind which creates the self develops later in the psychological development of the child. In the beginning a child only has the Id: a set of desires which cannot be controlled by the child and which only seeks pleasure (the pleasure principle). Later in the development process, a part of the Id is transformed into the Ego, and it is this Ego that creates the concept of self in the child. Now the question is: can an AI develop to a stage where it, too, creates something like the Ego of the human mind? If the AI has a structure with the necessary similarities to a human mind, or an artificial brain similar to the human brain and nervous system, then it may be able to undergo a process which creates some sort of Ego similar to the human one. In humans, this Ego is created through the interactions a child has with the external world; in the same way, perhaps the influences an AI is exposed to could trigger the creation of an Ego in the AI.

According to the theory of Jacques Lacan, the process of creating a child's self happens in a stage called the mirror stage. In this stage the child (at 6-18 months of age) sees an external image of his or her body (through a mirror, or as represented to the child by the mother or primary caregiver) and identifies it as a totality. In other words, the child realizes that he or she is not an extension of the world but a separate entity from the rest of it, and the concept of self develops through this process. So, can an AI go through this kind of process or stage and develop a self? Regardless of whether the structure of the AI is similar to a human mind or not, realizing for the first time that it is a separate individual would be a new and revolutionary experience for the AI (if the AI is sophisticated enough to process that kind of realization or experience properly). Such an experience might produce a change in the AI that gives it an idea of self. But if this stage is similar to the mirror stage, then the AI must also have a way of seeing its own reflection in order to undergo such a process. If the AI has a body (a robot, perhaps) and doesn't extend beyond that body, this won't be a problem. But if the AI can be copied onto new hardware, or extend itself across a network or additional hardware, then defining its boundaries becomes somewhat difficult, and seeing itself as something unfragmented with clear boundaries will be a bit tricky. Still, if the architecture of the AI allows a different way of defining boundaries so that it can see itself as an individual, then this could work.

When we consider other animals, we can see that an animal must have a certain complexity to have self-awareness (or consciousness). Methods like the red spot technique (the mirror test) have shown that some species, such as certain apes and dolphins, display self-awareness while others do not. So we can assume that an AI must also have an architecture of sufficient complexity to develop consciousness. At some point in its process of evolution, then, the AI must achieve the necessary complexity in order to become conscious. But if the evolution of the AI is similar to the evolutionary process in Darwinian theory, then the AI which finally achieves consciousness won't be one of those the process began with, because each new generation of AI is built by merging the best architectures of the old generation and mutating them. For this merging and mutating process, the AI may need human assistance.
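To make the merge-and-mutate idea above concrete, here is a minimal, hypothetical sketch of such a Darwinian loop in Python. Everything in it is an assumption made for illustration: complexity() is a toy stand-in for whatever would really measure an architecture's sophistication, and an "architecture" is reduced to a dictionary of modules and a connection count.

    import random

    # Hypothetical stand-ins: an "architecture" is a dict of modules and a
    # connection count; complexity() is a toy fitness score.
    def complexity(arch):
        return len(arch["modules"]) + arch["connections"]

    def merge(a, b):
        # Crossover: take half the modules from each parent architecture.
        modules = a["modules"][: len(a["modules"]) // 2] + b["modules"][len(b["modules"]) // 2 :]
        return {"modules": modules, "connections": (a["connections"] + b["connections"]) // 2}

    def mutate(arch):
        # Small random change: add a module or a connection.
        child = {"modules": list(arch["modules"]), "connections": arch["connections"]}
        if random.random() < 0.5:
            child["modules"].append("m%d" % random.randint(0, 999))
        else:
            child["connections"] += 1
        return child

    def evolve(population, generations=10):
        for _ in range(generations):
            population.sort(key=complexity, reverse=True)
            survivors = population[: len(population) // 2]   # keep the "best" architectures
            offspring = [mutate(merge(random.choice(survivors), random.choice(survivors)))
                         for _ in range(len(population) - len(survivors))]
            population = survivors + offspring
        return max(population, key=complexity)

    seed = [{"modules": ["core"], "connections": 1} for _ in range(8)]
    print(evolve(seed))

The point is only the shape of the loop: selection, merging, and mutation are applied from outside the architectures being evolved, which is why this route seems to call for human (or at least external) assistance.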

But a single AI can also undergo a sort of evolutionary process of its own: self-improvement, or more precisely, recursive self-improvement. Recursive self-improvement is the ability of an AI to reprogram its own software or add parts to its structure or architecture (perhaps hardware-wise too). This process could also allow the AI to reach the necessary complexity at some point.
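By contrast, recursive self-improvement is a loop running inside a single system. The toy sketch below is again purely hypothetical (capability and improvement_rate are made-up quantities); it only illustrates the defining feature, namely that each step improves not just the system but also the thing doing the improving, so the effects compound.

    # Toy model of recursive self-improvement: the agent improves itself and
    # also improves its own improvement routine, so later steps are larger.
    class Agent:
        def __init__(self):
            self.capability = 1.0        # hypothetical measure of how capable the AI is
            self.improvement_rate = 0.1  # how much each self-modification helps

        def improve_self(self):
            self.capability += self.capability * self.improvement_rate
            self.improvement_rate *= 1.05  # the improver itself gets better

    agent = Agent()
    for step in range(20):
        agent.improve_self()
    print(round(agent.capability, 2))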

In this way, perhaps an AI will be able to produce consciousness by itself, through self-modification or through a stage in its own psychological development, without humans specifically designing it to be conscious from the beginning.
