
Robots Don’t Improvise: The Art of Spontaneity from Brains to Bots

Abstract and Introduction

  1. Extents and ways in which AI has been inspired by understanding of the brain

    1.1 Computational models

    1.2 Artificial Neural Networks

  2. Embodiment of conscious processing: hierarchy and parallelism of nested levels of organization

  3. Evolution: from brain architecture to culture

    3.1 Genetic basis and epigenetic development of the brain

    3.2 AI and evolution: consequences for artificial consciousness

  4. Spontaneous activity and creativity

  5. Conscious vs non-conscious processing in the brain, or res cogitans vs res extensa

  6. AI consciousness and social interaction challenge rational thinking and language

Conclusion, Acknowledgments, and References

4. Spontaneous activity and creativity

To date, computers (and AI in general) operate predominantly, even if not exclusively, in an input-output mode. This is strikingly not the case for the human brain, which works in a projective, or predictive, mode, constantly testing hypotheses (or pre-representations) about the world, including itself (Changeux, 1986; Friston et al., 2016; Pezzulo, Parr, Cisek, Clark, & Friston, 2024). This projective/predictive mode relies on the fact that the brain possesses an intrinsic, spontaneous, activity (Dehaene & Changeux, 2005), i.e., a baseline activity independent of external stimulation. Even the earliest forms of animals exhibit nervous systems in which spontaneously active neurons can be recorded (e.g., jellyfish, hydra). Such spontaneous oscillators (or pacemakers) result from universal molecular circuits consisting of two fluctuating channels, conserved throughout the animal world up to the human brain. In the human brain, EEG (Berger, 1929) and, more recently, other recording techniques have demonstrated the existence of a “resting state” or “default” activity distributed across a set of regions involving mostly association cortex and paralimbic regions (Buckner & Krienen, 2013; Lewis, Baldassarre, Committeri, Romani, & Corbetta, 2009), which may even be correlated across multiple brain systems (Biswal, Yetkin, Haughton, & Hyde, 1995).
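To make the pacemaker mechanism concrete, here is a minimal sketch using the classic FitzHugh-Nagumo relaxation oscillator as an idealization of a two-channel spontaneous oscillator: a fast depolarizing variable coupled to a slow recovery variable. The parameter values and the constant intrinsic current are our illustrative assumptions, not a model taken from the cited studies.

```python
import numpy as np

# Minimal sketch (our illustration): a FitzHugh-Nagumo relaxation oscillator
# as an idealization of a two-channel pacemaker. v ~ fast depolarizing
# channel (membrane potential), w ~ slow recovery channel. I_INTRINSIC is an
# assumed constant inward "pacemaker" current, not an external stimulus, so
# the cell fires rhythmically on its own.

A, B, EPS = 0.7, 0.8, 0.08   # classic FitzHugh-Nagumo parameters
I_INTRINSIC = 0.5            # assumed intrinsic depolarizing drive
DT, STEPS = 0.1, 5000

def simulate_pacemaker(v0=-1.0, w0=-0.5):
    v, w = v0, w0
    trace = np.empty(STEPS)
    for t in range(STEPS):
        dv = v - v**3 / 3.0 - w + I_INTRINSIC   # fast channel dynamics
        dw = EPS * (v + A - B * w)              # slow channel dynamics
        v += DT * dv
        w += DT * dw
        trace[t] = v
    return trace

if __name__ == "__main__":
    v = simulate_pacemaker()
    # Count upward threshold crossings: a nonzero count in the absence of
    # any external input is the signature of spontaneous rhythmic spiking.
    spikes = np.sum((v[:-1] < 1.0) & (v[1:] >= 1.0))
    print(f"spontaneous spikes in {STEPS * DT:.0f} time units: {spikes}")
```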

At the conceptual level, the brain’s spontaneous activity may be compared to a concept elaborated by the philosopher Spinoza, referred to as conatus, according to which “each thing, as far as it lies in itself, strives to persevere in its being” (Ethics, part 3, prop. 6). In this way, the Dutch thinker emphasized that living organisms have an intrinsic predisposition to survive. More recently, Maturana and Varela (1991) similarly claimed that a characteristic feature of living systems is autopoiesis, that is, their capacity to literally “build themselves”, or to constantly self-realise, fighting against entropy and the dissipation of their own energy.

The importance of such spontaneous activity for the brain, especially for conscious perception, was already noted in the first simulations of the GNW (Dehaene, Kerszberg, & Changeux, 1998) and shown to be necessary for conscious access in a simple cognitive task (Dumas, Nadel, Soussignan, Martinerie, & Garnero, 2010). Lastly, in agreement with our views, active inference theory formalizes how autonomous learning agents (whether artificial or natural) should be endowed with a spontaneous, active motivation to explore and learn (Friston et al., 2016), which other studies have confirmed to be sufficient for the emergence of complex behavior without the need for immediate rewards (https://arxiv.org/abs/2205.10316).
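As a toy illustration of reward-free, epistemically driven learning, loosely inspired by active inference but not Friston et al.’s formalism, consider an agent that acts solely to reduce its own uncertainty about the world. The environment, priors, and selection rule below are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical environment: three "arms" with unknown Bernoulli outcome
# probabilities. The agent receives NO external reward; it acts purely to
# reduce its own uncertainty (an epistemic, curiosity-style drive).

true_p = np.array([0.2, 0.5, 0.9])        # hidden outcome probabilities
alpha = np.ones(3)                         # Beta(1, 1) priors per arm
beta = np.ones(3)

def posterior_variance(a, b):
    # Variance of a Beta(a, b) posterior: the agent's uncertainty per arm.
    return a * b / ((a + b) ** 2 * (a + b + 1))

for step in range(300):
    # Epistemic action selection: sample where the model is most uncertain.
    arm = int(np.argmax(posterior_variance(alpha, beta)))
    outcome = rng.random() < true_p[arm]   # observe the world
    alpha[arm] += outcome                  # Bayesian belief update
    beta[arm] += 1 - outcome

print("posterior means:", np.round(alpha / (alpha + beta), 2))
print("true p:        ", true_p)
```

With no reward signal anywhere in the loop, the agent nonetheless builds an accurate model of its world, the behavioral signature the cited studies attribute to intrinsically motivated exploration.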

Neural network models designed to perform continuous, life-long learning have, by design, spontaneous activity that is required for learning (Lewis et al., 2009; Parisi, Kemker, Part, Kanan, & Wermter, 2019), and even classical models of AI, namely Boltzmann machines, have spontaneous activity at their very core, which has recently and fruitfully inspired neuromorphic computing (Dold et al., 2019; Petrovici, Bill, Bytschok, Schemmel, & Meier, 2016); see the sketch after this list. For instance, neuromorphic research has attempted to exploit the spontaneous generation of activity, including the generation of world models and the self-evaluation that are at the core of brain activities like the phases of wakefulness and sleep (Deperrois, Petrovici, Senn, & Jordan, 2022, 2024). Yet several features of spontaneous activity in humans remain rather unique compared to artificial systems:

  1. Spontaneous activity is not a level of “noise” in the system (as it presently and prevalently is in AI, notwithstanding attempts to achieve it without noise (Dold et al., 2019)), but well-defined, stochastic, and globally distributed activity all over the brain, without evident signs of large-scale correlations but following well-defined patterns of brain areas (Lewis et al., 2009).

  2. Reward and emotions play an important role in the biological brain, linking the internal world to itself and to the physical, social, and cultural environment.

  3. The brain has the ability to link together far-distant activities through, for instance, brain-scale modulatory systems (Dehaene & Changeux, 1997, 2000), which may include cortical and non-cortical (e.g., limbic) neurons, leading to the stabilization of adequate (“harmonious”) relationships consciously perceived by the subject.

The interconnections established between these neurons may include epigenetically stored information unique to the lifetime experience of every individual subject. This implies a gigantic capacity for flexibility and creativity in personal and cultural artefacts, as in science (Changeux & Connes, 1995), art (Changeux, 1994), or ethics (Changeux, 2023; Evers & Changeux, 2016).
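As flagged above, here is a minimal sketch of the spontaneous activity at the core of a Boltzmann machine: with no units clamped to any input, Gibbs sampling lets the network free-run, stochastically visiting states according to its Boltzmann distribution. This free-running (“dream”) phase is loosely analogous to the sleep-like phases exploited in the neuromorphic work cited above; the small symmetric weight matrix is our arbitrary example.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny Boltzmann machine: binary units, symmetric weights, no inputs.
# Gibbs sampling alone generates ongoing, structured stochastic activity.

N = 5
W = rng.normal(0, 1, (N, N))
W = (W + W.T) / 2                    # symmetric coupling
np.fill_diagonal(W, 0.0)             # no self-connections
b = rng.normal(0, 0.5, N)            # unit biases

def gibbs_step(s):
    # Update each binary unit given all the others (sequential Gibbs sweep).
    for i in rng.permutation(N):
        activation = W[i] @ s + b[i]
        p_on = 1.0 / (1.0 + np.exp(-activation))
        s[i] = 1.0 if rng.random() < p_on else 0.0
    return s

s = rng.integers(0, 2, N).astype(float)
counts = np.zeros(N)
for _ in range(2000):                # free-running phase: nothing is clamped
    s = gibbs_step(s)
    counts += s

print("spontaneous firing rates per unit:", np.round(counts / 2000, 2))
```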

Embodiment, that is, the sensorimotor abilities that humans use to explore the world through multisensory integration within a constitutive interaction between brain, body, and environment (Pennartz, 2009), plays a crucial role in making the three features above possible in humans. In fact, as a result of the interaction between brain, body, and environment, humans are capable of flexible and online sensorimotor learning (i.e., of developing a realistic representation of the world that is adapted in real time), which may be intentionally (i.e., consciously) exploited. Values and the capacity for evaluation are essential for learning: without any values, or any capacity to evaluate stimuli, a system cannot learn or remember, since it has to prefer some stimuli to others in order to learn. This classical idea in learning theory was expressed in neuronal terms by Dehaene and Changeux (1989, 1991) and by Edelman in his account of primary consciousness (Edelman, 1992), and was taken up in philosophical terms by Evers (2009), who suggested that evaluation, understood as operations of selective responsiveness to reward signals, is a presupposition for human consciousness to develop.
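The point that evaluation gates learning can be made concrete with a toy three-factor (reward-modulated Hebbian) rule. This is our illustration in the spirit of the cited work, not Dehaene and Changeux’s actual model: with the evaluative signal silenced, the very same stream of experience leaves no trace in the weights.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy three-factor rule: weight change = rate * value * pre * post.
# Weights change only when an evaluative signal tags an outcome as
# mattering; without evaluation, identical experience teaches nothing.

def train(value_signal_on, trials=500, eta=0.05):
    w = np.zeros(4)
    for _ in range(trials):
        x = rng.integers(0, 2, 4).astype(float)   # stimulus (pre-synaptic)
        y = 1.0 if x[0] == 1 else 0.0             # "good" stimuli: feature 0 on
        pred = float(w @ x > 0.5)                 # response (post-synaptic)
        # The evaluative (value) signal: +1 right, -1 wrong, or absent.
        reward = (1.0 if pred == y else -1.0) if value_signal_on else 0.0
        w += eta * reward * x * (pred - 0.5) * 2  # three-factor update
    return w

print("with evaluation:   ", np.round(train(True), 2))
print("without evaluation:", np.round(train(False), 2))   # stays at zero
```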

In biological organisms, embodiment is intrinsically related to the capacity to evaluate the world, discriminating between what is good and what is bad. This evaluation is mediated by emotions, reward systems, and preferences, which eventually allow the organism to discriminate between different affordances, assigning them either positive or negative values (e.g., pleasure and pain) calibrated on the Umwelt. We agree with Aru et al. (2023) that the Umwelt of present AI (e.g., LLMs), if any, is very limited because of its design, specifically because of how it processes information (i.e., abstract statistical processing). This impacts the prospect of conscious AI. Importantly, as argued by a number of cognitive scientists and philosophers, conscious processing does not depend only on the processed information, but also on internal properties of the system, as well as on its embodiment, and particularly on its emotionally charged motivations and goals, which are crucially linked to the embodiment of the agent (Damasio & Damasio, 2022; Damasio & Damasio, 2023; Roli, Jaeger, & Kauffman, 2022; Shapiro & Spaulding, 2024; Varela, Thompson, & Rosch, 2016). If so, merely optimizing the current AI capacity for processing information is likely insufficient to achieve artificial conscious processing (Aru et al., 2023), especially if not combined with embodiment and the related reward-based and emotionally charged motivations. Present AI systems may be equipped with the capacity for evaluation (Dromnelle et al., 2023), eventually instrumental for achieving the goals they have been trained for, but it is not clear if and how these goals have the same emotional correlates as in humans, so that in the end it would really feel like something for an AI to achieve a goal (Boden, 2016). A relevant approach, different from traditional views of AI based on outward-directed perception and abstract problem-solving, has recently been proposed by Man, Damásio, and Neven (2022). They start from the premise that the central problem and drive of life is homeostasis (i.e., the regulation of internal states instrumental to maintaining conditions compatible with life). Therefore, sense-data are meaningful if relevant to homeostatic needs. On this premise, they propose a homeostatic neural network in which a classifier has a needful and vulnerable relation with the objects it computes.
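A toy sketch of that premise (our illustration, not Man, Damásio, and Neven’s architecture): an agent whose classifier is existentially coupled to what it classifies, so that its own integrity, and even the integrity of its weights, depends on classifying well. All quantities below are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# "Needful and vulnerable" classification, sketched: the agent's scalar
# integrity ("health") rises when it correctly accepts nutrients and falls
# when it accepts toxins; low health corrupts its own weights with noise,
# so classifying well is literally self-maintenance (homeostasis).

w = rng.normal(0, 0.1, 3)
health = 1.0

for step in range(1000):
    x = rng.normal(0, 1, 3)
    nutrient = x[0] > 0                  # hidden rule: feature 0 signals food
    eat = (w @ x) > 0                    # the agent's decision
    if eat:
        # Consequence of the act: homeostatic gain or damage.
        health += 0.01 if nutrient else -0.03
        # Learning is driven by the bodily consequence, not an abstract label.
        w += 0.05 * (1.0 if nutrient else -1.0) * x
    health = float(np.clip(health, 0.0, 1.0))
    # Vulnerability: a degraded body corrupts the classifier itself.
    w += rng.normal(0, 0.05 * (1.0 - health), 3)

print(f"final health: {health:.2f}, weight on nutrient cue: {w[0]:.2f}")
```

The design choice worth noticing is the feedback loop: sense-data acquire meaning only through their bearing on the agent’s continued viability, which is exactly the homeostatic premise of the cited proposal.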

From this perspective, the limitation of LLMs is that they presently lack the multidimensional and multisensory representation of the world that in humans is mediated by the body, and they are therefore intrinsically exposed to limited and distorted knowledge, eventually insufficient for a multi-modal (i.e., conscious) representation of the world. Robotics, including its recent combination with LLMs (Zhang, Chen, Li, Peng, & Mao, 2023), as well as neuromorphic AI, promises to provide AI with a form of embodiment that could in principle make an artificial form of multimodal experience possible. For example, significant results have been obtained with robotic systems endowed with the capacity for self-monitoring, which replicates at least some aspects of human self-consciousness (Chella, Frixione, & Gaglio, 2008), particularly by relying on the capacity for inner speech (Chella, Pipitone, Morin, & Racy, 2020; Pipitone & Chella, 2021).
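To fix ideas, here is a minimal sketch (ours, not Chella and colleagues’ architecture) of an inner-speech self-monitoring loop: the agent renders its own state as a sentence and then re-consumes that sentence as an additional observation, so its next decision is conditioned on a description of itself. Every name and rule below is hypothetical.

```python
from dataclasses import dataclass

# Hypothetical inner-speech loop: self-model -> language -> back into the
# policy. All class and function names here are our invention.

@dataclass
class RobotState:
    battery: float      # 0.0 .. 1.0
    goal: str
    last_action: str

def verbalize(state: RobotState) -> str:
    # Self-model rendered as language: the "inner voice".
    return (f"I am trying to {state.goal}; my last action was "
            f"{state.last_action}; my battery is at {state.battery:.0%}.")

def decide(state: RobotState, inner_speech: str) -> str:
    # Policy conditioned on the self-description, not only raw sensors.
    if "battery is at" in inner_speech and state.battery < 0.2:
        return "return_to_charger"
    return "continue_" + state.goal

state = RobotState(battery=0.15, goal="map_the_room", last_action="turn_left")
for step in range(3):
    speech = verbalize(state)          # monitor: describe own state
    action = decide(state, speech)     # control: act on the description
    print(f"[inner speech] {speech}\n[action] {action}")
    state.last_action = action
    state.battery = max(0.0, state.battery - 0.05)
```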

Yet the apparent lack of any emotional stake for AI in its interaction with the world remains. Given the important role that the evaluative capacity of the human organism, mediated by embodied features like emotions, plays in conscious experience, this lack is a potential stumbling block to further investigate, and to overcome, on the path towards conscious AI. In conclusion, the brain features that are only partially implemented in current AI, and that current research on conscious AI should try to better simulate, are: an embodied sensorimotor experience of the world; the spontaneous activity of the brain, which is more than the simple noise found in current AI systems; autopoiesis, as the capacity for constant self-realisation; and emotion-based reward systems.

Authors:

(1) Michele Farisco, Centre for Research Ethics and Bioethics, Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden and Biogem, Biology and Molecular Genetics Institute, Ariano Irpino (AV), Italy;

(2) Kathinka Evers, Centre for Research Ethics and Bioethics, Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden;

(3) Jean-Pierre Changeux, Neuroscience Department, Institut Pasteur and Collège de France Paris, France.

