Can AI Dream? Rethinking Consciousness Through the Lens of Evolution

Authors:
(1) Michele Farisco, Centre for Research Ethics and Bioethics, Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden and Biogem, Biology and Molecular Genetics Institute, Ariano Irpino (AV), Italy;
(2) Kathinka Evers, Centre for Research Ethics and Bioethics, Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden;
(3) Jean-Pierre Changeux, Neuroscience Department, Institut Pasteur and Collège de France, Paris, France.
Table of Links
- Abstract and Introduction
- 1. Extents and ways in which AI has been inspired by understanding of the brain
  - 1.1 Computational models
  - 1.2 Artificial Neural Networks
- 2. Embodiment of conscious processing: hierarchy and parallelism of nested levels of organization
- 3. Evolution: from brain architecture to culture
  - 3.1 Genetic basis and epigenetic development of the brain
  - 3.2 AI and evolution: consequences for artificial consciousness
- 4. Spontaneous activity and creativity
- 5. Conscious vs non-conscious processing in the brain, or res cogitans vs res extensa
- 6. AI consciousness and social interaction challenge rational thinking and language
- Conclusion, Acknowledgments, and References
Abstract
We here analyse the question of developing artificial consciousness from an evolutionary perspective, taking the evolution of the human brain and its relation to consciousness as a reference model or benchmark. This analysis reveals several structural and functional features of the human brain that appear key to reaching human-like complex conscious experience, and that current research on Artificial Intelligence (AI) should take into account in its attempt to develop systems capable of human-like conscious processing. We argue that, even if AI is limited in its ability to emulate human consciousness for both intrinsic (i.e., structural and architectural) and extrinsic (i.e., related to the current stage of scientific and technological knowledge) reasons, taking inspiration from those characteristics of the brain that make human-like conscious processing possible and/or modulate it is a potentially promising strategy towards developing conscious AI. Also, it cannot be theoretically excluded that AI research might develop partial or potentially alternative forms of consciousness that are qualitatively different from the human form, and that may be either more or less sophisticated depending on the perspective. Therefore, we recommend neuroscience-inspired caution in talking about artificial consciousness: since the use of the same word “consciousness” for humans and AI becomes ambiguous and potentially misleading, we propose to specify clearly which level and/or type of consciousness AI research aims to develop, as well as what would be common to, and what would differ between, AI conscious processing and human conscious experience.
Introduction
Since Helmholtz and du Bois-Reymond pledged the solemn oath (1842) that “no other forces than the common physical chemical ones are active within the organism”, a commitment later embraced by Freud, there has been wide scientific agreement that the brain is a “physico-chemical system” and that “consciousness” is one of its most sophisticated features, even if there is no consensus on explaining how, specifically, this is the case. Therefore, it can be argued, it is theoretically possible that sooner or later one should be able to artificially emulate the brain’s functions, including consciousness, through physico-chemical methods. Yet the situation is analogous to the case of “life in the test tube” with the simplest living organisms: all their molecular components are known, but up to now nobody has been able to reconstitute a living organism from its dissociated components. The issue is not only theoretical but also, importantly, practical.
The prospect of developing artificial forms of consciousness is increasingly gaining traction as a concrete possibility, both in the minds of lay people and of researchers in the fields of neuroscience, robotics, AI, neuromorphic computing, philosophy, and their intersection (Blum & Blum, 2023; Butlin et al., 2023; LeDoux et al., 2023; Oliveira, 2022; VanRullen & Kanai, 2021). The challenge of artificial conscious processing also raises social and ethical concerns (Farisco, 2024; Farisco et al., 2023; Hildt, 2023; Metzinger, 2021). It is therefore very timely to critically evaluate the feasibility of developing artificial conscious processing from a multidisciplinary perspective, as well as to analyse what that concept might mean. Relevant attempts in this direction have recently been proposed (Aru, Larkum, & Shine, 2023; Godfrey-Smith, 2023; Seth, 2024).
Current discussions about the theoretical conceivability and the technical feasibility of developing artificial conscious processing hinge, to begin with, upon the semantic ambiguity and polysemy of the word “consciousness”, including the distinction between phenomenology (i.e., subjective first-person experience) and underlying physiology (i.e., third-person access to consciousness) (Evers & Sigman, 2013; Farisco, Laureys, & Evers, 2015; Levine, 1983), and the fundamental distinction between conscious and non-conscious representations (Piccinini, 2022). Also, conscious processing may have different meanings depending on the context of analysis, and it has different dimensions, which may exhibit different levels resulting in different profiles of conscious processing (Bayne, Hohwy, & Owen, 2016; Dung & Newen, 2023; Irwin, 2024; Walter, 2021). Etymologically, “consciousness” comes from the Latin conscientia, from cum scire: knowledge in common, oscillating between confidence and connivance, up to the classic “faculty that man has of apprehending his own reality” (Malebranche, 1676) or, for the neuropsychiatrist Henri Ey, “the knowledge of the object by the subject and reciprocally, the reference of the object to the subject itself”. Accordingly, the individual is both the subject of his knowledge and its author. Lamarck, in 1809, spoke of a singular faculty with which certain animals and even humans are gifted, which he called “sentiment intérieur”, approximately “inner feeling”. More recently, Ned Block introduced the distinction between access and phenomenal consciousness. Access consciousness refers to the interaction between different mental states, particularly the availability of one state’s content for use in reasoning and in rationally guiding capacities like speech and action; phenomenal consciousness is the subjective feeling of a particular experience, “what it is like to be” in a particular state (Block, 1995). Accordingly, cognition and subjective experience are two central components of conscious processing, which may basically be defined as “sensory awareness of the body, the self, and the world” (Lagercrantz & Changeux, 2009), including “inner, qualitative, subjective states and processes of sentience or awareness” (Searle, 2000). Among the embodied components of conscious processing we may also consider, at the individual level, the ability to express emotions, memory, symbols, language, the capacity for autobiographical report and mental time travel, as well as the capacity to introspect and report about one’s mental state, and, at the social level, sustained inter-individual interactions which give access to various kinds of social relationships such as empathy and sympathy (Lagercrantz & Changeux, 2009).
Among the many theories and computer-science models currently proposed, none, in our assessment, reaches the overall species-specific aspects of the human higher brain functions (van Rooij et al., 2023). The question arises: can these models reach those aspects with time, when further developed, or is the gap irremediable? In parallel, more and more citizens are confronted with AI simulations of human behaviour, including conscious processing, and feel concerned about it (Lenharo, 2024): the prospect of artificial conscious systems risks impacting human self-understanding, for instance if AI were to replace humans in performing tasks that require a capacity for awareness. It thus appears necessary to challenge AI models with actual representations of human brain organization and human cognition and behaviour. The question, therefore, is whether any theoretical computer-science representation of human conscious processing can lead to human-like artificial conscious systems: could machines ever develop a human-like consciousness, or rather a different kind of consciousness, or is it impossible for them to develop consciousness at all? Does the notion of artificial consciousness even make sense, and if so, how? To paraphrase Voltaire: can a machine awaken?[1]
In the past decades, a large number of models have been elaborated, mainly by neuroscientists, with a humbler aim: to reconstruct elementary functions of the nervous system (e.g., swimming in the leech (Stent et al., 1978) or the lamprey (Grillner et al., 1995)) from known anatomical and physiological building blocks. Some of these models have even been designed to simulate more elaborate cognitive tasks, like the Wisconsin Card Sorting Task (Dehaene & Changeux, 2011) and even trace vs. delay conditioning (Grover et al., 2022). It is necessary to further develop the interface between AI, philosophy and neuroscience, which thus far has resulted in a mutual epistemic and methodological enrichment (Alexandre et al., 2020; Farisco et al., 2023; Floreano, Ijspeert, & Schaal, 2014; Floreano & Mattiussi, 2008; Hassabis, Kumaran, Summerfield, & Botvinick, 2017; Momennejad, 2023; Poo, 2018; Zador et al., 2023). Although significant, however, this collaboration is still insufficient to address the issue of artificial consciousness. The crucial, still open question is: what kinds of concrete similarities and differences between AI and the brain need to be examined and accounted for to more adequately approach artificial conscious processing? In other words, what is the right ‘level of description’ at which to model, or even generate, artificial conscious processing, given what we know about conscious processing in the human brain?
Moreover, in the neuroscience field, the word “consciousness” remains rather ill-defined and, as we shall see below, human conscious processing is not an all-or-none irreducible feature but one that develops stepwise (Changeux, 2006, 2017; Lagercrantz & Changeux, 2009; Tomasello, 2022; Verschure, 2016). Given these different possible developmental stages, attempts to develop artificial conscious processing should specify precisely which (if any) of these stages is targeted.
In this paper we want to re-evaluate the issue of artificial consciousness within the context of our present knowledge of the biological brain, taking a pragmatic approach to the conceivability and feasibility of developing artificial consciousness and using the human brain as a reference model or benchmark. We aim to complement recent attempts in this direction (Aru et al., 2023; Godfrey-Smith, 2023) with a more encompassing analysis of the biological multilevel complexity of the human brain in relation to its evolution, not only in order to progress in the understanding of conscious processing itself but also to eventually inspire ongoing AI research aimed at developing artificial conscious processing. Accordingly, our aim is theoretical and philosophical but also highly practical as an engineering issue: we review scientific evidence about some features of the brain that are key in enabling human consciousness or in modulating it (or both), and we argue for the utility of taking inspiration from these features for advancing towards the development of conscious AI systems.
We do not claim that it is necessary to integrate the mechanisms identified for conscious processing in the human brain in order to develop artificial consciousness. In fact, we recognize that artificial features of conscious processing different from those of the brain cannot be theoretically excluded offhand. What we propose, rather, is to take the presently identified brain mechanisms of conscious processing as a benchmark in order to advance pragmatically in building artificial models able to simulate accessible features of conscious processing in humans. Given the considerable controversy around the possibility of building an artificial consciousness unrelated to brain mechanisms, and the related risk of ending up with overly abstract views insufficiently informed by empirical data, we think that starting from the biology of consciousness is a more productive strategy.
A question we may nevertheless ask is what the benefits of pursuing artificial consciousness are in the first place, for science or for society at large. There are different possible answers. On an epistemological level, consistent with the medieval scholastic view reiterated by, inter alia, Paul Valéry that “we can actually understand only what we can build”, elaborating artificial models of some concrete features of conscious processing could eventually allow us to better understand biological consciousness in general, whether in terms of similarities or differences. At a technical level, it is possible that the development of artificial consciousness would be a game-changer in AI, for instance giving AI the capacity for intentionality and theory of mind, and for anticipating the consequences of its own “actions”. At the societal and ethical level, the last points especially could arguably help AI to better inform humans about potential negative impacts on society, and to help avoid them while favouring positive impacts. Of course, on the negative side, intentionality in machines might not favour human interests any more than human intentionality has favoured out-group individuals or species, or indeed the planet as a whole. This discussion would merit deeper analysis, but it is beyond the aim of the present paper. In the following, we summarize relevant evolutionary, structural, and functional properties of the human brain that are of specific relevance to this discussion (for a recent overview, see Barron, Halina, & Klein, 2023). Against that background, we outline what inspiration the brain may offer to current AI research for advancing towards artificial conscious systems.
Finally, concerning the conceivability and feasibility of developing artificial consciousness, we will distinguish between:
(a) the replicability of human consciousness (which we exclude, at least in the present state of AI development, a stance that is scarcely controversial);
(b) the possibility of developing an artificial conscious processing that may bear some resemblance to, but is still profoundly different from, human consciousness (which we do not exclude in principle, but consider difficult to elaborate for both conceptual and empirical reasons).

In the end, this paper starts from a selective examination of data from the brain sciences with the aim of proposing an approach to AI consciousness alternative to what appears to be the leading one today. That leading approach may be qualified as theory-based because it relies not upon experimental data but on selected components of a priori scientific theories, which are then applied to AI systems (Butlin et al., 2023). Our approach, by contrast, consists in starting from empirically established brain mechanisms and processes that are directly relevant to human consciousness and inferring from them hardware building blocks or algorithms that are relevant, and perhaps even necessary (if not sufficient), to the development of artificial conscious processing.
[1] The philosophers of the Enlightenment already wondered: what in the brain’s architecture might explain why and how it became conscious? What made matter awaken? Cf. e.g., a letter from Voltaire to d’Alembert, November 28, 1762.