Artificial Intelligence vs Artificial Consciousness: Does it matter?
Consciousness plays an essential role in debates about the mind-body problem, the controversy over strong versus weak artificial intelligence (AI), and bioethics. Strikingly, it is largely absent from current debates about the ethical implications of AI and robots. This article explores that gap and makes two claims: we need to talk more about artificial consciousness, and we need to talk about the fact that today’s AI and robots lack consciousness.
Is it possible for machines to have consciousness?
The debate over whether machines can have consciousness is not new; proponents of strong artificial intelligence (strong AI) and weak artificial intelligence (weak AI) have been exchanging arguments for decades. John R. Searle, although himself a critic of strong AI, characterizes it as the view that “the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states” (Searle, 1980, p. 417). Proponents of weak AI, by contrast, hold that machines lack consciousness, mind, and sentience, and merely simulate thought and understanding.
When it comes to artificial consciousness, a number of issues arise (Manzotti and Chella, 2018). The most basic is the problem of explaining consciousness, that is, how subjectivity can emerge from matter, often called the “hard problem of consciousness” (Chalmers, 1996). Furthermore, our understanding of human consciousness is shaped by our own subjective experience. Whereas we know human consciousness from the first-person perspective, artificial consciousness will only be accessible to us from the third-person perspective. Related to this is the question of how to know whether a machine is conscious at all.
The concept of artificial consciousness rests on the assumption that consciousness can be realized in the physical world of machines and robots (Manzotti and Chella, 2018). Furthermore, any human definition of artificial consciousness will have to be given from the third-person perspective, without relying on subjective experience.
One approach is to avoid giving an overly narrow definition of machine consciousness, or to avoid giving one at all. David Levy (Levy, 2009, p. 210), for example, favors a pragmatic approach for which a general agreement about what we mean by consciousness is sufficient, and recommends, “Let us simply use the word and get on with it.”
Others concentrate on self-awareness. In the context of self-aware robots, Chatila et al. (2018, p. 1) consider relevant “…the underlying principles and methods that would enable robots to understand their environment, to be cognizant of what they do, to take appropriate and timely initiatives, to learn from their own experience, and to show that they know that they have learned and how.” Kinouchi and Mackin, by contrast, emphasize system-level adaptation (Kinouchi and Mackin, 2018, p. 1): “Consciousness is considered as a function for successful system-level adaptation, based on matching and arranging the individual outputs of the underlying parallel-processing units.” This consciousness is thought to correspond to how our minds are “aware” when making decisions in our daily lives.
When trying to answer questions about artificial consciousness, it is worth considering the philosophical reflection on consciousness, which focuses on human (and animal) consciousness. Consciousness can be defined in a variety of ways. We usually distinguish between (a) an entity that is conscious, i.e., sentient, awake, and possessing self-consciousness and subjective qualitative experiences, (b) being conscious of something, such as a rose, and (c) conscious mental states, i.e., mental states an entity is aware of being in, such as being aware of smelling a rose (Van Gulick, 2018; Gennaro, 2019).
Dehaene et al. (2017) identify two essential aspects of conscious computation: global availability (C1) and self-monitoring (C2). They equate global availability (C1) with Ned Block’s access consciousness, that is, information being globally available to the organism (Block, 1995). Self-monitoring (C2) they describe as “a self-referential connection in which the cognitive system is able to monitor its own processing and receive knowledge about itself” (pp. 486–487), which they take to correspond to introspection.
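To make the C1/C2 distinction more concrete, the following is a minimal, purely illustrative Python sketch; it is not taken from Dehaene et al. (2017) and is not a model of consciousness. The class names GlobalWorkspace and SelfMonitor, and the simple confidence heuristic, are hypothetical choices made only for this example: one component broadcasts information so that it is available to the whole system (C1-style global availability), while the other produces reports about the system’s own processing (C2-style self-monitoring).

```python
# Illustrative sketch only (not from Dehaene et al., 2017):
# C1 = information made globally available to the whole system;
# C2 = the system monitors its own processing and reports on it.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class GlobalWorkspace:
    """Stand-in for C1: published information becomes accessible to every module."""
    broadcast: Dict[str, float] = field(default_factory=dict)

    def publish(self, label: str, value: float) -> None:
        # Once published, the information is globally available.
        self.broadcast[label] = value


@dataclass
class SelfMonitor:
    """Stand-in for C2: the system keeps a record about its own processing."""
    log: List[str] = field(default_factory=list)

    def assess(self, label: str, evidence: float) -> float:
        # Crude confidence estimate derived from the strength of the evidence.
        confidence = min(1.0, abs(evidence))
        self.log.append(f"{label}: confidence {confidence:.2f}")
        return confidence


if __name__ == "__main__":
    workspace = GlobalWorkspace()
    monitor = SelfMonitor()

    # A perceptual module produces a weak signal; the system both
    # broadcasts it (C1) and evaluates its own reliability (C2).
    signal = 0.3
    workspace.publish("stimulus_detected", signal)
    monitor.assess("stimulus_detected", signal)

    print(workspace.broadcast)  # globally available content (C1)
    print(monitor.log)          # report about the system's own processing (C2)
```

The point of the toy example is only that the two notions come apart: a system could broadcast information widely without keeping any record about its own processing, and vice versa.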
As the approaches outlined above show, different authors emphasize different aspects of artificial consciousness. There is clearly room for further reflection and research on what third-person concepts of artificial consciousness might involve.
Human-Robot Interaction and Artificial Consciousness
Despite numerous science fiction portrayals that suggest otherwise, experts broadly agree that current machines and robots are not conscious. Nevertheless, in a survey of 184 students, the question “Do you believe that current electronic computers are conscious?” was answered with “No” by 82%, “Uncertain” by 15%, and “Yes” by 3% (Reggia et al., 2015). Notably, the question asked about current electronic computers, not about AI or robots.
Consciousness-related questions arise most often in connection with social robots and human-robot social interaction (Sheridan, 2016). According to Kate Darling’s definition (Darling, 2012, p. 2), a social robot is “a physically embodied, autonomous entity that communicates and interacts with people on a social level.” Examples of social robots include MIT’s Kismet, Aldebaran Robotics’ NAO, and Hanson Robotics’ humanoid social robot Sophia.
Social robots have several characteristics that distinguish them from other machines: they can make limited decisions and learn, they display behavior, and they interact with people. Furthermore, the nonverbal immediacy of robot social behavior (Kennedy et al., 2017), speech recognition and verbal communication (Grigore et al., 2016), facial expressions, and a perceived robot “personality” (Hendriks et al., 2011) all play a part in how people respond to robots.
As a result, people tend to form unidirectional emotional bonds with social robots, project lifelike qualities onto them, ascribe human characteristics to them (anthropomorphizing), and attribute intentions to them (Scheutz, 2011; Darling, 2012; Gunkel, 2018). The granting of Saudi Arabian citizenship to the social humanoid robot Sophia in 2017 is a telling illustration, if not a culmination, of this tendency (Katz, 2017).
All of this raises questions about the status of robots and about how to respond to and interact with them (Gunkel, 2018). Are social robots mere objects? Or are they, as Peter Asaro suggests, quasi-agents or quasi-persons? Social others? Quasi-others? Should robots be granted rights?
Although most experts agree that current robots lack sentience or consciousness, some authors (such as Coeckelbergh, 2010; Darling, 2012; Gunkel, 2018) have argued that robots should be given rights. Drawing on studies of violent behavior toward robots, Kate Darling, for example, argues that treating robots more like pets than like ordinary objects would be in line with our social values.
While the specific arguments in favor of giving rights to robots vary, they generally center on the social roles humans assign to robots, the relationships and emotional bonds people form with them, or the social context in which humans interact with them. That is, they advocate rights based on the role robots play for humans, not a status ascribed on the basis of the robots’ capabilities.
However, this “social roles” approach has a fundamental flaw: its recommendations for how to interact with robots are inconsistent with how we interact with humans (see also Katz, 2017). Applied to humans, the “social roles” view implies that a person’s value or rights depend heavily on his or her social roles or on the interests of others. This runs counter to the widely shared view that human beings have moral status independently of their social roles. On this view, an entity has moral status “if and only if it or its interests morally matter to some degree for the entity’s own sake” (Jaworska and Tannenbaum).
Personhood is central to the attribution of status and rights to human beings. Key features of the concept of a person include rationality, consciousness, the personal stance (the attitude taken toward an entity), the capability of reciprocating the personal stance, verbal communication, and self-consciousness (Dennett, 1976). According to Daniel C. Dennett, all of these are necessary conditions of moral personhood.
The “social roles” approach, by contrast, assigns rights to robots on the basis of the social roles they play for others, not on the basis of their moral status or capabilities. This explains why consciousness does not matter within this approach: since current robots lack sentience and consciousness, there is no basis for arguing that they matter morally for their own sake.
This may change in the future, however. It would then make sense to consider a concept of “robothood” and to ascribe moral status to future robots on the basis of their capabilities. There is already a lively and controversial debate about whether robots should be granted legal personhood (Bryson et al., 2017; Solaiman, 2017). The discussion of the moral and legal status of robots, and the broader question of how to respond to and interact with machines, requires a deeper understanding of artificial consciousness, artificial rationality, artificial sentience, and related concepts. We need to talk more about artificial consciousness and about the fact that current AI and robots are not conscious. Focusing on third-person concepts of artificial consciousness, and on access consciousness in particular, will be especially helpful here.