Silicon Doesn’t Dream: Why Computers Lack Brains’ Biochemical Magic

Table of Links

- Abstract and Introduction
- 1. Extents and ways in which AI has been inspired by understanding of the brain
  - 1.1 Computational models
  - 1.2 Artificial Neural Networks
- 2. Embodiment of conscious processing: hierarchy and parallelism of nested levels of organization
- 3. Evolution: from brain architecture to culture
  - 3.1 Genetic basis and epigenetic development of the brain
  - 3.2 AI and evolution: consequences for artificial consciousness
- 4. Spontaneous activity and creativity
- 5. Conscious vs non-conscious processing in the brain, or res cogitans vs res extensa
- 6. AI consciousness and social interaction challenge rational thinking and language
- Conclusion, Acknowledgments, and References
2. Embodiment of conscious processing: hierarchy and parallelism of nested levels of organization
The hardware of most common computers is made up of microprocessors, which are fabricated from semiconductor transistors (including oxide-based memristors, spintronic memories, and threshold switches) or neuromorphic substrates (Billaudelle et al., 2020; Pfeil et al., 2013) integrated into brain-inspired circuit chips. These elementary physical components of the hardware are made up of a few chemical elements and compute at high speed, much higher than the brain (see below), even though information transfer between two logical components in a system requires multiple stages of encoding, refreshing, and decoding, in addition to the sheer velocity of electrical signals (about half the speed of light in standard circuit boards). In contrast, the brain is built, across multiple nested levels of organization, from highly diverse chemical elementary components, far more diverse than in any computer hardware. It is true that computational models like neuromorphic systems may be designed to account for such multiple nested levels of the brain's organization (Boybat et al., 2018; Sandved-Smith et al., 2021; Wang, Agrawal, Yu, & Roy, 2021), but the fact remains that their primary outcome is a limited emulation of brain functions: these computational models cannot account for the biochemical diversity of the human brain and its pharmacology, including the diseases that can affect conscious states and conscious processing (Raiteri, 2006). In fact, it is possible that the high biochemical diversity characteristic of the brain plays a role both in enabling and in modulating conscious processing (e.g., with psychotropic drugs). This possible role of the brain's biochemical diversity in enabling conscious processing should be further explored in order to eventually identify relevant architectural and functional principles to be translated into the development of conscious AI systems.
In the real brain, the molecular level and its constraints play a critical role that is often insufficiently recognized. Proteins are the critical components. These macromolecules are made up of strings of amino acids that fold into a highly sophisticated organization able to create specific binding sites for a broad diversity of ligands, including metabolites, neurotransmitters, lipids, and DNA. Among them are the enzymes that catalyse key reactions of cell metabolism, the cytoskeleton and motor elements, DNA-binding proteins like transcription factors, ion channels, and, most of all, neurotransmitter receptors together with the many pharmacological agents that interact with them. The number of different proteins in all organisms on earth is estimated at 10^10–10^12. In the human brain, 3,710–3,933 proteins are involved in cognitive trajectories (Wingo et al., 2019), a significant fraction of the total number of genes, which is approximately 20,000 in humans. There is a fundamental heterogeneity of the protein components of the organism that contribute to brain metabolism and regulation. This also holds for the control of the states of consciousness and their diversity, as assessed by the global chemical modulation of sleep and wakefulness or by the diversity of drugs that cause altered states of consciousness (Jékely, 2021). In all cases the modulation ultimately takes place at the level of proteins. Access to conscious processing in the human brain thus requires a rich and dedicated biochemical context that is largely absent from current AI attempts to produce artificial conscious processing: computers have no pharmacology and no neuropsychiatric diseases. These data indicate the crucial significance of molecular components for the understanding of life, and thus of biological conscious processing, as increasingly recognized in both neuroscience and philosophy (Aru et al., 2023; Godfrey-Smith, 2023; Seth, 2021, 2024; Thompson, 2018).
Indeed, the crucial role played by pharmacological factors in human consciousness does not, in principle, preclude AI systems from instantiating alternative, non-human-like forms of consciousness (Arsiwalla et al., 2023). Our reference to the abovementioned molecular and pharmacological conditions is meant to identify factors that current AI systems lack and that may serve as inspiration for advancing towards artificial consciousness.
Another factor related to the different physical architectures of biological brains and computers, and one that presently distinguishes them, is the velocity of information processing. As mentioned above, the speed of signal propagation in the brain is lower than the speed of sound (~343 m/s), while signals in computers travel at a speed approaching the speed of light (~3×10^8 m/s). This creates an insurmountable difference in the speed of propagation of information in computers vs. brains. The conduction velocity of the nerve impulse may reach 120 m/s in myelinated axons (depending on axon diameter) and is largely a consequence of the allosteric transitions of ion channels. Additional constraints are imposed by the synapse, which introduces delays in the millisecond range, including the local diffusion of the neurotransmitter and the allosteric transition of the receptor activation process (Burke et al., 2024). Because several of these successive delays accumulate, psychological timescales are on the order of 100 ms (50–200 ms) (Grondin, 2001). Last, access of a peripheral sensory signal to consciousness may incur delays on the order of 300 ms (Dehaene & Changeux, 2011). In short, standard computers process information up to 8 orders of magnitude faster than the human brain. For highly standardized tasks – such as playing chess – the computer is therefore expected to be more efficient. This is in part due to different computational strategies: the computer can test millions of possibilities in parallel, while the human brain can test only a few within psychological timescales. As a consequence, computers are able to process amounts of data so huge that they are not accessible to the brain in a human lifetime.
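A back-of-the-envelope calculation makes these orders of magnitude concrete. The figures for axonal conduction, board-level signal speed, and conscious access come from the text above; the ~1 GHz clock cycle is an assumed illustrative value, not a claim from the paper:

```python
import math

# Signal propagation speeds (m/s)
axon_speed = 120.0        # fastest myelinated axon conduction (from text)
signal_speed = 1.5e8      # ~half the speed of light in circuit boards (from text)

# Processing timescales (s)
conscious_access = 300e-3 # delay for a sensory signal to reach consciousness
cpu_cycle = 1e-9          # one clock cycle of a ~1 GHz processor (assumed)

propagation_ratio = signal_speed / axon_speed
timescale_ratio = conscious_access / cpu_cycle

print(f"propagation: ~10^{round(math.log10(propagation_ratio))}x faster")  # ~10^6x
print(f"timescale:   ~10^{round(math.log10(timescale_ratio))}x faster")    # ~10^8x
```

The timescale ratio, not raw signal speed, is what yields the "up to 8 orders of magnitude" gap cited in the text.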
Moreover, despite significant improvements obtained through neuromorphic systems, including increased computational power per unit of energy consumed (Cramer et al., 2022; Esser et al., 2016; Göltz, Kriener, Sabado, & Petrovici, 2021; Park, Lee, & Jeon, 2019), the human brain consumes orders of magnitude less energy than standard computers, and the gap widens further for massive models like GPT-4. The brain is not only much more energy-efficient but also much more sample-efficient: deep-learning models are considered sample-inefficient compared to human learning because of the huge amounts of training data they require to achieve human-level performance in a task (Botvinick et al., 2019; Botvinick, Wang, Dabney, Miller, & Kurth-Nelson, 2020; Waldrop, 2019).
Thus, the exceptional success of AI in simulating brain processes results largely from the processing speed of increasingly sophisticated algorithms and computer programs. This huge difference in processing speed may be interpreted as an argument against artificial consciousness (Pennartz, 2024): since AI processes much more data much faster than human brains, parallel processing of data may suffice for AI to accomplish tasks that require integrated (i.e., conscious) processes in humans. In fact, one of the defining features of conscious processing is the integration of information processed in different neuronal populations, which eventually results in a unified multimodal experience. This integration can be fundamentally defined as a broadcasting of representations mediated by an "ignition" pattern of activity, i.e. of organized neuronal activity patterns (Mashour et al., 2020b). From an evolutionary point of view, because of intrinsic biological constraints that determine the quantity and complexity of the sensory modalities received, as well as the timescale of operation of the cognitive and control systems, this integrative and broadcasting activity serves the need of handling complex information, that is, information composed of multiple elements that cannot be processed in parallel by the brain. In the case of AI, the capacity to process complex information produces integration and broadcasting that are not at the same level or in the same modality as in the human brain. In fact, modern AI systems are highly modular and are based on the broadcasting of representations, but the constraints that characterize AI modules, and ultimately the capacity of the whole AI system to compute in parallel, differ significantly from those characterizing the human brain.
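The "ignition and broadcast" idea can be sketched as a toy program. This is a minimal illustration of global-workspace-style broadcasting, not the authors' model or any published architecture: hypothetical modules post candidate representations with a salience score; the most salient one, if it crosses a threshold, "ignites" and is broadcast to every module:

```python
# Toy sketch of global-workspace-style broadcasting (all names hypothetical).

class Module:
    """A specialized processor that can receive globally broadcast content."""
    def __init__(self, name):
        self.name = name
        self.received = []  # representations broadcast to this module

    def receive(self, representation):
        self.received.append(representation)

def ignite(candidates, modules, threshold=0.5):
    """Pick the most salient (content, salience) pair; if it crosses the
    threshold, broadcast it to all modules, mimicking the all-or-none
    'ignition' that makes information globally available."""
    content, salience = max(candidates, key=lambda c: c[1])
    if salience < threshold:
        return None              # no ignition: processing stays local
    for m in modules:
        m.receive(content)       # global broadcast to every module
    return content

modules = [Module("vision"), Module("audition"), Module("motor")]
candidates = [("red circle", 0.9), ("faint hum", 0.3)]
winner = ignite(candidates, modules)
print(winner)                    # → red circle
print(modules[2].received)       # → ['red circle']
```

The sketch shows only access-style availability: content is selected and made available to all modules, with nothing corresponding to the experiential valence discussed below.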
This point may be illustrated in terms of quality (i.e., the kind of operations the modules are capable of performing) and in terms of quantity (i.e., the amount of data the modules are capable of processing). From a qualitative point of view, the diversity of the modules composing an AI system may consist in the specific tasks they perform and may approximate the diversity of sensory modalities the brain has access to, but those modules cannot rely on the different valence attributed to the processed information and its representation, as the human brain can. This is why, for instance, deep convolutional networks are ultimately collections of "abstract concepts that lack experiential content, imagination and integral perception of the environment" (Pennartz, 2024). Therefore, at least to date, AI modules differ in nature from those of the human brain because of their inability to attribute experiential value to information. To use the abovementioned distinction between access and phenomenal consciousness, modular AI systems may reach the capacity to broadcast information among different modules (i.e., they may approximate access consciousness), while they appear presently unable to replicate phenomenal conscious processing or subjective experience. Dehaene, Lau, and Kouider propose a similar interpretation (Dehaene, Lau, & Kouider, 2017). They conclude that current AI systems may have the capacity for global availability of information (i.e., the ability to select, access, and report information) while lacking the higher-level capacity for metacognition (i.e., the ability for self-monitoring and confidence estimation) (Lou, Changeux, & Rosenstand, 2016).
This interpretation converges with the distinction between the implementation of attention and access consciousness in AI and the (current) impossibility of engineering phenomenal consciousness. Along the same lines, Montemayor (2023) points out that present AI systems are limited to representational values that are instrumental in satisfying epistemic needs, while current AI has no experience of moral and aesthetic needs, which are based on the intrinsic value of phenomenal consciousness and subjective awareness.
From a quantitative point of view, the capacity for memory storage and the computing power of AI systems are so much greater than those of the human brain that the system can perform, at a completely non-conscious level, the same tasks that in humans require a conscious effort. In fact, conscious processing may be considered an evolutionary strategy to enhance the survival of biological systems with limited capacities for memory storage and data processing.
In conclusion, different qualitative and quantitative constraints result in different operations (conscious vs non-conscious). Interestingly, when the 'right kind of constraints' are applied to an artificial model, some properties can develop as a result (Ali et al., 2022). It may be that conscious processing derives, as a necessity, from the constrained computational capacities of the human brain.
An additional question is what still makes the brain so highly performant on psychological timescales, up to conscious processing. The answer lies mainly in the hardware architecture, which in turn may shape the kinds of computations biological brains are capable of. In the brain, the neurons and their neurites, as well as their sophisticated connectivity, are far more complex than idealized artificial neurons. Biological neurons originate from definite supramolecular assemblies, which create an important chemical complexity and a diversity of cell types (perhaps up to a thousand), the neuron types differing also in their shape, their axonal-dendritic arborizations, and the connections they establish. An important chemical difference lies in the neurotransmitters (up to hundreds) they synthesize and release (Jékely, 2021).
Depending on the neurotransmitter they release, two main classes of neurons may be distinguished (i.e., excitatory vs. inhibitory). To pass even a simple conscious task, such as the trace vs. delay conditioning mentioned above, several superposed levels of organization are required, and the contribution of both inhibitory and excitatory neurons appears necessary (Grover et al., 2022; Volzhenin, Changeux, & Dumas, 2022). It is true that in deep networks, too, the weight matrix contains as many negative (inhibitory) entries as positive (excitatory) ones, and that there are neurobiologically plausible backpropagation algorithms for deep neural networks that include both excitatory and inhibitory neurons (Guerguiev, Lillicrap, & Richards, 2017; Lillicrap, Santoro, Marris, Akerman, & Hinton, 2020; Scellier & Bengio, 2017; Whittington & Bogacz, 2017). But, as mentioned above, the diversity of the types of biological neurons is so vast that we still do not completely account for it, so those computational algorithms may miss architectural and functional details of the brain that are crucial for conscious processing. Inhibitory and excitatory neurons establish basic neuronal network levels, which themselves establish higher levels of nested networks of neurons, and so on. Such sophisticated connectomic relationships might contribute to the genesis of cognitive functions and even consciousness. For instance, a cellular theory of consciousness has been suggested (Aru et al., 2020) based upon the dendritic integration of bottom-up and top-down data streams originating from thalamo-cortical neuronal circuits. Such hardware complexity is most often absent from even the most elaborate supercomputers, though exceptions do exist, notably in neuromorphic hardware making use of cortical microcircuits (Haider et al., 2021; Max et al., 2023; Senn et al., 2023).
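The contrast between mixed-sign artificial weights and sign-segregated biological neurons can be made concrete in a few lines. This is an assumed illustrative setup, not taken from the paper: a standard dense layer's weight matrix mixes excitatory and inhibitory entries within each unit's outgoing weights, whereas a biological neuron's outgoing synapses share one sign (a constraint often called Dale's principle):

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard dense layer: each row (one presynaptic unit's outgoing weights)
# freely mixes positive and negative entries.
W = rng.normal(size=(4, 6))

# Biological-style constraint: assign each presynaptic unit a fixed type,
# +1 for excitatory or -1 for inhibitory, and force its whole row to that sign.
cell_type = np.array([1, 1, 1, -1])       # 3 excitatory, 1 inhibitory neuron
W_dale = np.abs(W) * cell_type[:, None]   # each row is now single-signed

# Every outgoing weight now matches its neuron's type, unlike in W.
print(np.all(np.sign(W_dale) == cell_type[:, None]))  # → True
```

The sketch captures only the sign constraint; the hundreds of neurotransmitters and thousand-odd cell types described above have no counterpart in such a matrix, which is the point made in the text.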
Also, it is true that systems like LLMs show emergent abilities, likely deriving from emergent structures present in their learned weights. At an architectural level, they are arranged as a large number of layers, which potentially allows for the sorts of organization (i.e., hierarchical, nested, and multilevel) that arguably play a crucial role in conscious processing in the brain. Therefore, we cannot exclude in principle that large AI systems have the potential to develop some forms of conscious processing, with characteristics shaped by the specific components of the AI systems' structural and functional organization.
In conclusion, the hierarchical, nested, and multilevel organization of the brain points to architectural and functional features that current AI only partially emulates and that, we propose, should be further clarified and engineered in order to advance towards the development of conscious AI, including massive biochemical and neuronal diversities (i.e., excitatory vs inhibitory neurons, connectomic relationships, circuits, thalamo-cortical loops) and a variety of bottom-up/top-down regulations between levels. Among other things, these organizational features constrain the speed of information processing in the brain to several orders of magnitude slower than in AI: this difference may ultimately make conscious processing unnecessary for AI to execute tasks that require conscious effort in humans.
Authors:
(1) Michele Farisco, Centre for Research Ethics and Bioethics, Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden and Biogem, Biology and Molecular Genetics Institute, Ariano Irpino (AV), Italy;
(2) Kathinka Evers, Centre for Research Ethics and Bioethics, Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden;
(3) Jean-Pierre Changeux, Neuroscience Department, Institut Pasteur and Collège de France Paris, France.