1. Introduction
Consciousness, especially perception, is an everyday experience that seems so trivial that humans take it for granted. However, have you ever wondered why you can feel a cut in your finger in the finger itself? Assuming that your finger is not conscious (i.e., it does not itself contain the capacity to generate consciousness), this thought can be quite inspiring. If the brain is understood to be the seat of consciousness, then at least one’s own body parts, to which the brain is connected, can be made to feel as if they could generate their own consciousness. Or, in other words, a conscious feeling can be projected onto (or into) a part of the body to which the brain is connected. The skin, especially in sensitive areas like our fingertips, is full of different types of sensory receptors. When you cut your finger, these receptors (specifically nociceptors, which detect painful stimuli) are activated by the tissue damage. These activated receptors generate electrical signals (voltage fluctuations along cellular membranes) that travel along sensory nerves carrying the information from your finger, up your arm, through your spinal cord, and finally to your brain. There, they end up at the somatosensory cortex, ready to be perceived (consciously felt). Unless these nociceptors are capable of experiencing a feeling themselves, the only solution for the brain is to project the felt experience down to the finger. This all sounds strange, but it makes sense; if you just felt a general feeling of pain in your head, you would not know where the problem was. The brain undertakes the processing, generates a feeling, and “projects” it back to the site of the initial stimulation. From an evolutionary perspective, one could speculate that the feeling of body pain might have been the very first form of perception, but that is no more than an interesting thought. More importantly, this notion of consciousness projection does not even seem to depend on neural connections, which has the potential to explain the existence of the phenomenon known as phantom limb pain (PLP) [
1]. Kern et al. [
2] found that 74.5% of 537 amputees suffered from PLP. There seems to be a strong need to understand this phenomenon; the idea presented here might provide a new avenue for thinking about PLP. If one pursues the idea of the projection of consciousness further, the brain’s ability to project the sensation of sound to its point of origin in space, known as auditory localization [3], shows that a constructed perception can be projected, even without any physiological connection, to a location far away.
Auditory localization [
4] is a remarkable result of the complex processing of subtle cues received by our two ears. While the neural processing behind the phenomenon has long been researched [3], it is the fact that perceived sound is projected to its brain-calculated source that is of particular interest here. The perception does not occur at the site of the auditory receptors in our ears; instead, it is projected to the calculated origin of the air particle vibrations. Imagine a projector displaying an image on a screen. The image originates from the projector (your brain), but it appears on the distant screen (your finger or the source of sound waves). The screen itself does not generate the image; the screen is just where the projection appears. Similarly, the conscious feeling of finger pain originates in your brain, just like the perceived sound originates in your brain, but your brain then projects the subjective experience to your finger or to the source from which the sound waves originated. Of course, this also applies to an object that we see somewhere in the distance. Our brain calculates the distance and projects the recognized (perceived) object, in the form of a conscious percept, to the place where it is calculated to be. The author acknowledges that this sounds strange, but believes it to be a very helpful and potentially thought-provoking way of thinking about consciousness and perception.
The auditory localization phenomenon teaches us that this is not a physiological process in which the brain sends something tangible back to the place of action. While this may be contentious in the case of the finger, it is not contentious in the context of sound localization and seeing. Instead, it becomes evident that the projection of a constructed conscious percept to the location where the initial, basic sensory information (the so-called distal stimulus; the physical object itself [
5]) originated is a purely mental phenomenon, created by the brain based on patterns of neural activity. While subjective experience is undoubtedly a remarkable phenomenon, it raises the question: is it truly a wonder? It could be argued that subjective experience is an extraordinary event that contradicts the laws of nature and should therefore be attributed to the direct influence of a divine power or supernatural forces. While this article seeks to enhance our comprehension of the underlying mechanisms of consciousness by drawing on established scientific knowledge, it does not dismiss the possibility that subjective experience is not an evolutionary consequence but attributable to an alternative causative factor. However, the following discussion will emphasize that, if subjective experience is indeed created by the brain itself through neural activations, then it can be broken down into very rapid voltage changes (millisecond range) on cellular membranes, underlying everything from basic organ control up to perception and even self-referential processing [
6,
7,
8]. At least, it is possible to demonstrate very robust correlations between brain activity and perception, which is one of the most remarkable forms of consciousness. Finally, and most importantly in the current context, there is the notion of a physiological “threshold”. Numerous brain imaging data show that the human brain processes a lot of information non-consciously (subliminally, or below the threshold of awareness) [
9,
10,
11].
Most interestingly, it seems that information entering the brain needs to elicit enough “neurophysiological energy” (i.e., sufficiently large voltage values) in order to become conscious (i.e., to be perceived). Thus, a certain energy threshold needs to be crossed, which is true at the level of sensory neurons as well as within the brain, with all its parallel and hierarchical processing levels. Exactly this notion forms the basis for the current theoretical paper, which uses known neurophysiological phenomena (single-cell- and neural-circuit-related phenomena) to introduce stochastic resonance as a supporting mechanism that turns sub-threshold signals into conscious perceptions. Even though stochastic resonance is a more complex physical phenomenon, all it means in the current context is that the addition of noise might result in sub-threshold signals crossing the threshold and finally becoming perceived content. Of course, this does not explain subjective experience itself, but it may offer an interesting perspective on the underlying neural mechanisms that mediate subjective experience. As will be outlined later, stochastic resonance has already been introduced to the neurosciences, but the possibility that it is an important underlying phenomenon in the occurrence of perception is something new that will be discussed in the frame of this paper.
The next section provides basic insight into the fact that the voltage changes occurring at the cellular membranes of neurons are the “signals” (or information) that are processed: signals that can become conscious or stay non-conscious (i.e., unconscious or subconscious) while still being processed. All this will finally lead to the idea of stochastic resonance as an important mechanism related to the phenomenon of perception. The hypothesis stated here is that stochastic resonance could play a crucial role in bringing some subconscious information into consciousness. Many pieces of information that are processed by the brain might generate neural signals that are too weak to trigger the widespread neural activity associated with conscious perception; the addition of noise could represent the mechanism that causes perception. Because it might be difficult to imagine that noise (which, by definition, does not contain any “information”) can be supportive at all, an analogy is presented further below to demonstrate how this can, and actually does, happen. It will be shown how a completely white image can turn into an image depicting an object simply by adding noise to the white image. However, first things first.
1.1. Information (Signals) Coded as Varying-Voltage Values and as Quantities of Constant-Voltage Values
The human brain processes vast amounts of information through all its heavily connected neurons. At first, this statement seems straightforward; however, a second thought results in the following question: what does the term “information” (or signal) actually mean in this context? It certainly deserves clarification, which is crucial for understanding the proposed idea that stochastic resonance contributes to the generation of perception. Let us begin with the fact that the human brain does not house an actual image, nor does it contain any sound or music or any other thing that we finally perceive through the amazing subjective phenomenon of perception. Perception is a conscious construct of something that is literally made up by the psyche in response to sensory “information” processing, just like the above-mentioned finger pain or sound localization. Crucially, sensory information is nothing more than voltage changes generated by receptive elements on the cellular membranes of sensory neurons [
12]. While perception is the result at the conscious end, at the beginning of the spectrum are neurons that “communicate” with each other. This communication requires the exchange of something that contains content, or, in other words, “information”.
Quite impressively, we have gained a lot of knowledge about what neurons receive, transmit, and exchange, which makes the whole brain a perfect organ for producing adapted behavior (this is, by the way, its main function, besides controlling general life-supporting functions such as breathing, pumping blood, sleeping, and being awake) [
13,
14]. Neurons integrate, process, and transmit voltage changes or fluctuations. A volt is the “standard unit of potential difference and electromotive force”, as defined in the International System of Units [
15]. One volt is the difference in electric potential between two points of a conductor carrying a constant current of one ampere when the power dissipated between those points is one watt. It is, loosely speaking, the amount of force pushing charged electrons through a circuit. Most importantly, it is exactly what can be measured as neurophysiological activity via electroencephalography (EEG). While this is further explored later in this paper, we first dive a bit deeper into the small world of voltage changes on the cellular membranes of neurons, also introducing the fact that thresholds play a crucial role in neurophysiology. It is challenging to conceptualize that the fundamental basis on which human beings perceive a distinct, vibrant image of the surrounding environment consists of voltage changes on cellular membranes; yet this does seem to be a fact, and it represents knowledge gained over decades [
16]. These voltage changes occur as a result of alterations in ion concentrations along the cellular membranes of neurons. Ions, as charged particles, can flow passively, or via well-understood selective transport mechanisms, through specific channels in the membranes of neurons, the so-called ion channels [
17]. Because ions carry negative or positive charges, their flow in and out of a neuron changes its membrane potential [
12].
One such fluctuating potential forms at the receiving end of a neuron. It is the sum of often thousands of incoming small potential changes (i.e., postsynaptic potentials, PSPs; one at each synaptic connection) that is able to trigger the generation of an all-or-nothing potential with a constant voltage amplitude, a so-called action potential (AP) [
18]. Such an AP can then carry information over longer distances via the axon (the cable-like structure of a neuron, which can be more than 1 m long) to other areas that contribute to the overall processing. Critically, a single neuron will only generate an AP if the sum of all postsynaptic potentials (also known as graded potentials) received by its dendrites and cell body (i.e., the receiving end of a neuron) results in a membrane potential that crosses a certain threshold at a specific membrane location on the neuron [
19]. This location is the so-called axon hillock, the part of the cell body from which the axon protrudes, reaching out to connect with the following neurons. An AP can be generated only if the sum of all the graded potentials exceeds that threshold (the threshold potential) [
20]. Only if an AP is generated can information be carried on to the following neural networks or, finally, to muscles, letting them contract and execute behavior. Exceeding the threshold means depolarizing the membrane at the axon hillock, that is, making it less negative (inside relative to outside). A neuron at rest (i.e., in its non-firing state, in which no APs are generated) has a negative membrane potential at the axon hillock. There are more negatively charged ions inside the neuron than outside, and an AP can be generated only if the ion concentrations change so that this difference becomes smaller. Anything from 1 up to around 250 APs per second then carry information to other neural circuits, influencing their summed PSPs [
21].
Figure 1 visualizes all of this, while also localizing these potentials on a classical, standard neuron. Importantly, the location where the crucial threshold for generating an AP applies is marked in red. With respect to the coding of information, it is important to know that, before the axon hillock, “information” is coded in the form of voltage amplitudes (amplitude-modulated coding); after the axon hillock, it is coded in the form of AP frequencies (frequency-modulated coding).
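To make this threshold logic concrete, the following minimal Python sketch shows a toy leaky integrate-and-fire neuron; all parameter values are arbitrary illustrative assumptions (not a model of any dataset in this paper). Summed graded input is integrated as an amplitude-coded membrane potential, and an all-or-nothing AP is emitted only when the threshold at the virtual axon hillock is crossed, so the output AP count reflects the frequency-modulated coding described above.

```python
import numpy as np

# Toy leaky integrate-and-fire neuron: summed PSPs before the axon hillock
# are amplitude-coded; the output is frequency-coded as a train of
# all-or-nothing action potentials (APs).
# All numbers below are arbitrary illustrative choices, not measured values.

rng = np.random.default_rng(0)
dt = 1.0                          # time step (ms)
T = 1000                          # simulate 1 s
v_rest, v_thresh = -70.0, -55.0   # resting and threshold potentials (mV)
tau = 20.0                        # membrane time constant (ms)

v = v_rest
spike_times = []
psp_input = rng.normal(loc=0.6, scale=0.5, size=T)  # summed PSP drive per ms

for t in range(T):
    # Leaky integration of the summed postsynaptic input (amplitude coding)
    v += (-(v - v_rest) / tau) * dt + psp_input[t]
    if v >= v_thresh:             # threshold crossed at the "axon hillock"
        spike_times.append(t)     # all-or-nothing AP is emitted
        v = v_rest                # reset after the spike

print(f"{len(spike_times)} APs in 1 s (frequency coding of the input strength)")
```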
This can all be quantified in a manner analogous to measuring a battery’s potential with a voltmeter, as has been done in numerous experiments in the past. On a larger scale, amplitude-modulated processing can be measured by using electroencephalography (EEG) (see further below). As a side note, the knowledge gained in this way has enabled the creation of psychotropic substances that are capable of modifying affective states, including the ability to alter feelings, which are subjective phenomena, such as those experienced while taking antidepressants. These substances can also induce unusual perceptions, while the known effects of such chemicals amount to nothing more than influencing the above-mentioned potential changes in understood and largely controllable ways. All this demonstrates that such voltage changes are indeed essential to virtually everything that is processed in the brain; most impressively, they form the very basis for phenomena like perception, general consciousness, and even self-reflection. The aim of this opinion paper is to propose the idea that stochastic resonance is an important basis for perception, understood as the constructed subjective experience it is. Crucially, this paper links single-cell, threshold-related neurophysiological phenomena, supported by neural activation systems, with subconscious information becoming conscious. In the end, it might be that the neuroscience-related aspect of stochastic resonance explains the occurrence of illusions.
The next section aims to demonstrate how well electroencephalography (EEG), and event-related potentials (ERPs) in particular, can show that measured graded potentials (amplitude modulation) indeed reflect threshold-related perceived phenomena and finally even mark the border between non-conscious and conscious information processing. From a methodological perspective, it is helpful to know how ERPs are generated, because their components are directly subject to threshold phenomena, as can be seen in the three datasets presented below.
1.2. Electroencephalography (EEG), Event-Related Potentials (ERPs), and Their Sensitivity to Fine-Graded Potential Differences
1.2.1. Theoretical Background of Event-Related Potentials (ERPs)
Event-related potentials (ERPs) [
22] represent small voltage fluctuations in the electroencephalogram (EEG) that are time-locked to repetitive sensory, affective, cognitive, or motor events. Theoretically, ERPs (just like raw EEG signals) arise from the synchronized post-synaptic potentials of large populations of cortical neurons in response to an event, rather than from the firing of individual action potentials [
23]. When thousands or millions of neurons in a certain brain region (mainly, but not exclusively, cortical regions) are activated in a similar temporal pattern following a stimulus, their summed electrical activity creates a measurable voltage change on the scalp. The generation of an event-related potential (ERP) from raw electroencephalogram (EEG) signals relies critically on a process called signal averaging. This technique is necessary because the neural activity specifically related to a particular event (the ERP) is extremely small, typically in the microvolt range, and is buried within much larger, ongoing background EEG activity and various forms of physiological and environmental noise. For this purpose, first, the continuous raw EEG recording is segmented into smaller time windows, known as “epochs” or “trials”. Each epoch is time-locked to the onset of a specific event (e.g., the presentation of a stimulus or a motor response). These epochs typically include a period before the event (the “baseline” period) and a period after the event (in the present case, 1 s). Baseline correction is performed before averaging. This involves calculating the average voltage within the pre-event baseline period (in the present case, 100 ms) and then subtracting this average from all data points within that specific epoch. This sets the average voltage of the baseline to zero, removing any DC offsets and making the ERP components comparable across trials and participants [
24].
After adequate artifact rejection or correction, averaging is performed. For each specific stimulus category, all the corresponding epochs are summed together point-by-point across time and then divided by the total number of epochs. This averaging procedure works because the ERP component (the “signal”) is consistent and time-locked to the event across all trials, whereas the background EEG activity and other noise sources are random and asynchronous with respect to the event. When many trials are averaged, the random fluctuations of the noise tend to cancel each other out, while the consistent, time-locked ERP signal sums coherently. Mathematically, the amplitude of the random noise in the averaged waveform decreases proportionally to the square root of the number of trials. This technique offers excellent temporal resolution, reflecting brain activity with millisecond precision, and provides a non-invasive window into the neural processes underlying perception, affect, cognition, and action [
25]. Different ERP components, characterized by their polarity (positive/negative), latency, and scalp topography, are associated with distinct stages of information processing. For instance, early components like the N100 and P200 typically reflect sensory processing [
26,
27], while later components such as the P300 are linked to attention allocation, stimulus evaluation, and memory updating [
26]. The N400, another well-known component, is specifically associated with semantic processing and expectancy violation [
28]. By analyzing these components, researchers can infer the cognitive operations engaged by a task and their temporal progression in the brain. Affective processing, on the other hand, is harder to measure via ERPs. Nevertheless, due to the heavy interconnections between neural circuits involved in affective information processing and those involved in cognition, one can still investigate affective neural responses as well (one might call this second-hand affection).
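As a schematic illustration of the epoching, baseline correction, and averaging steps described above, the following Python sketch uses simulated data; all amplitudes, latencies, and trial counts are arbitrary assumptions, not the parameters of the studies reported below. It shows how a small time-locked component emerges from much larger random background activity, with residual noise shrinking roughly with the square root of the number of trials.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_samples, srate = 200, 1100, 1000     # 1.1 s epochs at 1000 Hz
t = np.arange(n_samples) / srate - 0.1           # 100 ms baseline, 1 s post-stimulus

# Simulated "ERP": a small positive deflection peaking around 300 ms (a few
# microvolts), buried in much larger random background EEG on every trial.
erp_true = 3.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
trials = erp_true + rng.normal(0, 20.0, size=(n_trials, n_samples))

# Baseline correction: subtract each epoch's mean over the pre-stimulus window.
baseline = trials[:, t < 0].mean(axis=1, keepdims=True)
trials -= baseline

# Averaging: the time-locked signal sums coherently, random noise cancels out.
erp_avg = trials.mean(axis=0)
residual_noise = np.std(erp_avg[t < 0])
print(f"residual baseline noise after {n_trials} trials: {residual_noise:.2f} µV "
      f"(single-trial noise ~20 µV, expected ~{20/np.sqrt(n_trials):.2f} µV)")
```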
1.2.2. Examples of Correlations Between Voltage Amplitudes and Perceptions
Through ERPs, it is possible to show a clear correlation between voltage amplitudes and distinct perceptions. At this stage, any causative connection might still be uncertain; however, as a matter of fact, there is no perception without such voltage amplitudes, which at least strongly suggests a causal role. For instance, information regarding brightness is coded in varying voltage amplitudes, which can be compared across different brightness conditions in the form of ERPs over the occipital lobe in human subjects, where “seeing” happens. The brighter the light that enters the eyes, the higher the potential change (see
Figure 2).
Figure 2 shows the result of an unpublished study with 63 participants (the author expresses thanks to Minah Chang for respective data collection). On a black background, a dark circle, a light circle, and a bright circle were presented in random order (50 presentations for each circle-brightness condition). At two distinct time points (68 ms and 120 ms), t-tests revealed significant differences between the mean ERP amplitudes elicited by the dark circles and the light circles (
p = 0.28), the dark circles and the bright circles (
p = 0.001), and between the light and the bright circles (
p = 0.020) (
Figure 2). Without going into further detail, this tells us that the physical property of the number of photons (i.e., the physical dimension of brightness) is translated into corresponding graded potential changes. What can be inferred from this is that the perception of brightness (i.e., the psychological dimension) correlates directly with a distinct potential value. One could easily differentiate between brighter and darker stimuli by looking only at brain activity data in the form of ERPs, without having seen the stimuli themselves. This relationship is well known and has been shown in the past (e.g., [
29]).
While this example demonstrates a correlation (perhaps even causation) between the voltage in the brain and perception on a purely physical-sensory level, the following example (also from unpublished data, collected in the lab of the author of this paper) demonstrates that graded, summed potential amplitudes also vary as a function of psychological phenomena like the perception of faces. It can be shown that the so-called N170 ERP amplitude [
30], which is well known for its connection with face perception [
31] (measured near the fusiform gyrus of the right hemisphere), varies as a function of “how much face is consciously perceived” in response to controlled face image presentations. Crucially, real faces, as well as simple face drawings, buildings with face-like features, and standard buildings without such features, were used as visual stimuli, while brain potential changes were recorded with EEG (see
Figure 3; unpublished data from the lab of the author of this publication). Besides the very obvious amplitude differences elicited by the different visual stimuli varying in “face intensity”, the most interesting finding was that explicit ratings of how much of a face was perceived when viewing each of the different images correlated directly with the N170 amplitudes (the corresponding organ-pipe-like changes seen in
Figure 3). In other words, the higher the measured N170 amplitude, the more a face was perceived when viewing an image. The fact that the N170 ERP component varies as a function of face-likeness has been shown in the past [
32]. However, such fine-graded ERP differences have not been published before. In summary, it can be inferred that the degree of face perception, too (i.e., conscious awareness of “face”), depends very directly on (or is even caused by) brain potential amplitudes, just like the perception of brightness.
But there is even more evidence contributing to these ideas: in a recently published study about subliminal word processing, Pavlevchec et al. [
9] demonstrated, with clearly measurable ERP amplitudes, that conscious recognition of verbal information depends on a certain potential threshold. Even though they could show that small signals in the brain still represent word-related information processing in the non-conscious mind, the essence of their study is that a certain potential threshold needs to be crossed in order to elicit conscious awareness of the respective information in the brain, which in this case is semantic information (elicited by visually presented words). In this study, when words were presented for only 17 ms (solid black line in
Figure 4), no recognition occurred, meaning that participants were not aware that words were presented on the monitor in front of them. Nevertheless, ERPs elicited by words still differed from ERPs elicited by shapes (dashed black line) in a brain region known for semantic information processing (electrode position P7), which indicates that the brain performed word processing in the absence of the respective awareness (i.e., subconscious semantic processing). In this study, proper word recognition started in conditions where stimuli were presented for at least 67 ms (solid green line versus dashed green line).
Figure 4 shows the respective ERPs. In other words, recognizing words (conscious awareness of words being presented) is associated with larger amplitudes (i.e., suprathreshold amplitudes), which in turn means higher neurophysiological energy.
Strikingly, this finding is absent when we look at the corresponding electrode location on the right hemisphere (P8) (see
Figure 5), which highlights that even subliminal word processing has a left-hemisphere dominance, corresponding to the well-known left-hemisphere dominance of language processing in general.
In summary, these three examples, and the conclusions drawn from them, form the most important basis for this paper. It is clear and objectively measurable that crossing certain potential thresholds (i.e., certain brain activity levels, or neurophysiological energy levels) is key for perception to occur. At the same time, it can also be concluded that sub-threshold “information” exists and is processed below the level of awareness (most likely constituting the dominant share of processing). However, the connection proposed here between stochastic resonance (the addition of noise) and neurophysiological phenomena focuses on perception, and perception is always linked to consciousness.
In order to derive a solid impression of the way in which the addition of noise can indeed turn non-conscious information into conscious information, a Python-based software tool was created (the author expresses endless thanks to Samuil Pavlevchev for this software) for the purpose of modifying the pixel values of grayscale images. The following section explains how the respective algorithm was used to demonstrate that stochastic resonance can turn a completely white image into an image depicting a recognizable object (see also [
33]).
2. Visualizing the Power of Noise
In order to visualize the idea that stochastic resonance supports the generation of consciousness, including perception, a grayscale image composed of 307,200 pixels (640 × 480) was used as a starting point (
Figure 6). Each pixel has a value between 0 (black) and 255 (white).
The crucial idea here is to understand this “original” image as an actual object in the environment that can be seen. The following procedure adds a fixed number to all pixel values, with the result of finally lifting all values up to or above the threshold of 255. This can be achieved by adding the number 255 to all values. What happens is that pixels with the original value 0 (black) now become 255, which is white, while all other pixels take values above 255. The result is an image without any visible pixels (it is all white); consequently, nothing can be recognized.
In analogy to consciousness, a pixel value of 255 can be understood as the threshold between conscious and non-conscious information. All the pixels of the resulting white image carry their new values, at or above 255, but nothing can be seen, because all the pixels are outside the visible range. However, the relative differences between the pixel values stay the same, which means the information still exists, but outside awareness, which is analogous to subliminal information.
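A minimal Python sketch of this lifting step is given below (using NumPy and Pillow; the file name is a placeholder, and this is an illustrative reconstruction rather than the exact software mentioned above). Adding 255 pushes every pixel to 255 or higher, so the rendered image is uniformly white even though the relative differences between pixel values are fully preserved.

```python
import numpy as np
from PIL import Image

# Load a grayscale image (placeholder file name) as a float array of values 0-255.
original = np.asarray(Image.open("mantis.png").convert("L"), dtype=np.float64)

# Lift every pixel to or above the "visibility threshold" of 255.
lifted = original + 255.0

# For display, values are clipped to the 0-255 range: everything >= 255 renders
# as white, so the image looks completely blank ...
Image.fromarray(np.clip(lifted, 0, 255).astype(np.uint8)).show()

# ... yet the relative differences (the "information") are still fully intact.
assert np.array_equal(lifted - lifted.min(), original - original.min())
```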
Now, stochastic resonance comes into play. In 1981, Benzi et al. [
34] published a letter explaining the effect, which they named “stochastic resonance”. They found that noise is not necessarily detrimental (especially in nonlinear systems), but that it instead can actually increase the signal-to-noise ratio in a system [
35]. While the original explanation is rather complex and very mathematical, the authors pointed out that their newly described phenomenon is likely to have interesting applications [
36]. Indeed, stochastic resonance has been introduced to the neurosciences, which has led to very interesting results (see the Discussion section). However, it has not yet been introduced (to the best of my knowledge) as a possible overall mechanism underlying the subjective experience of perception in combination with the existing activating systems in the human brain. The current theoretical paper uses this phenomenon to develop the notion that, for a brain that holds most of its information below the consciousness level, adding noise can make this non-conscious information cross a threshold and become conscious. In order to build a better understanding of this idea, a visualization is now provided in which random numbers (noise) are added to each of the pixels with values at or above 255. In fact, negative random numbers have been added, or, in other words, positive numbers have been subtracted. Either way, noise has been introduced to the completely white image, and three different results are shown in
Figure 7. The difference between the three results is the varying range set for the random number generator. It was once set between 20 and 200, then between 20 and 250, and finally between 20 and 300. These ranges were chosen without any underlying concept: they could have been different and merely represent examples. Depending on these ranges, the resulting images look slightly different, but in all of them, the chosen insect can be recognized very easily (
Figure 7). In essence, through this procedure (adding noise to invisible information), the existing information that was outside the range of visual perception was lifted across a threshold, becoming visible and recognizable.
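Continuing the sketch above, the noise step can be illustrated as follows (again an illustrative reconstruction, using the same noise ranges as in the text). Subtracting a random number from each lifted pixel (i.e., adding negative noise) pulls enough values back below 255 for the insect to become recognizable again.

```python
# Add noise: subtract a random value from every lifted pixel (i.e., add
# negative noise), following the procedure described for Figure 7.
rng = np.random.default_rng(2)

for low, high in [(20, 200), (20, 250), (20, 300)]:
    noise = rng.uniform(low, high, size=lifted.shape)
    noisy = lifted - noise
    # Pixels that fall back below 255 re-enter the visible range; the insect
    # becomes recognizable although only noise was added.
    visible_fraction = np.mean(noisy < 255)
    Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8)).show()
    print(f"noise range {low}-{high}: {visible_fraction:.0%} of pixels visible again")
```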
The idea that the human brain mostly contains information outside awareness was strongly promoted by Sigmund Freud [
37,
38]. Meanwhile, since the advent of brain imaging technology, numerous studies have demonstrated that the human brain indeed processes information outside consciousness, as mentioned in the Introduction. In summary, or, more accurately, in analogy, the addition of noise lifted the sub-threshold signals above the threshold, generating perception. The author of this paper plans to undertake experiments to show that subliminal word presentations become conscious percepts if the right amount of the right noise is added, as well as experiments including subliminally presented images. Determining the optimal kinds and quantities of noise requires empirical investigation. However, prior research reports that pink noise amplifies the output signal in an artificial neuron up to twenty times more than white noise [
39].
3. Discussion
The human brain is a highly complex nonlinear system, constantly processing vast amounts of information, much of which remains below the threshold of conscious awareness. Besides large amounts of unconscious information that does not have the potential to become conscious at all, there is “subconscious” information, which includes sensory inputs that are too faint to perceive consciously, semantic content that is processed outside awareness (subconscious thinking), memories that are not readily accessible, subtle internal states, and so on. The hypothesis stated here is that stochastic resonance could play a crucial role in bringing some of this subconscious information into consciousness. Many pieces of information processed by the brain might generate neural signals that are too weak to trigger the widespread neural activity that is associated with conscious perception. Adding noise in the form of neurophysiological energy might be an appropriate mechanism for lifting those signals across a respective threshold to induce perception.
The brain is inherently noisy. This is not just random static; it includes background neural activity, spontaneous firing of neurons, and fluctuations in neurotransmitter levels. Traditionally, this “noise” has been viewed as detrimental to information processing. While some of it may indeed be mere noise, it is also possible that some of the known activating systems actually provide such noise activity (or, in other words, neurophysiological energy). The brain’s “activating systems” (known as the ARAS, which stands for Ascending Reticular Activating System) are often seen as crucial for arousal, attention, and general consciousness. These systems originate from small but mighty clusters of neuron cell bodies in the brainstem and basal forebrain, sending widespread projections (axons) throughout the cerebral cortex [
40]. Brown et al. [
41] wrote about the control of sleep and wakefulness in general; given that the ARAS is known to be involved in consciousness in the sense of being awake, it is asked here whether it could also be involved in that particularly interesting form of consciousness, namely perception, by providing noise (i.e., “neurophysiological energy”) to the neural networks that process sensory input.
The key players of the ARAS mainly include four systems that operate with different neurotransmitters: (i) the noradrenergic system (Locus Coeruleus) located in the pons (this nucleus is the primary source of norepinephrine, projecting broadly to the cortex to enhance arousal, attention, and stress responses [
42]); (ii) the serotonergic system (raphe nuclei), which is distributed along the midline of the brainstem and releases serotonin, influencing mood, sleep–wake cycles, and sensory processing [
43]; (iii) the dopaminergic system (ventral tegmental area and substantia nigra), which is primarily known for reward and motor control, but also has projections to the prefrontal cortex, where it contributes to attention, motivation, and cognitive functions [
44]; (iv) the cholinergic system (pedunculopontine tegmental nucleus, laterodorsal tegmental nucleus, basal forebrain), the nuclei of which release acetylcholine, which is critical for cortical activation, learning, memory, and REM sleep [
45].
These brainstem nuclei act like control centers, modulating the excitability and activity of vast cortical networks, and allowing us to shift between states of sleep and wakefulness, focus attention, and respond to our environment. Most critically in the context of the current paper, their functional contribution to turning non-conscious information, including semantic information, into conscious, fully perceived content might be the provision of random voltage changes (neurophysiological energy), following the phenomenon or concept of stochastic resonance. In other words, if a weak, subconscious signal coincides with an optimal level of this inherent neural noise, the combined input could be strong enough to push the signal across the firing threshold of neurons or neural networks, resulting in full, conscious perception. Once a sub-threshold signal crosses the threshold due to stochastic resonance, it can then be amplified and propagate through wider neural networks. This propagation, particularly to areas associated with higher-order processing and attention, could lead to the information becoming consciously accessible.
Besides the general idea that the remarkable phenomenon of perception might make use of stochastic resonance effects, an important part of this paper is the visualization of the way in which the addition of noise can indeed turn an image (i.e., visible content or information) that is clearly outside awareness into information that can be seen and even recognized. This is already shown in a chapter that has been accepted and is currently in press [
33], and it has been replicated for this paper. The starting point was a grayscale image of a praying mantis. A certain tone of gray is coded by a number between 0, which means pure black, and 255, which means pure white. In the first step, a constant number was added to each pixel of that image, lifting all pixel values to 255 or above. In the new image, nothing could be seen, because only pixels with values below 255 are visible as a certain gray tone. However, the information describing the mantis was still there, just outside the visible range. The crucial step came when a random number (noise) was subtracted from each pixel (in other words, negative numbers were added). Despite the randomness, enough pixel values fell back below the threshold of 255, and the mantis became visible again. In summary, a completely white image was turned into an image showing a mantis, simply by adding noise. Likewise, information in the human brain that is processed below the consciousness threshold could become conscious through a neural network that simply adds noise to it. This is obviously just a theoretical idea. However, its innovative character might assist us in thinking in a distinct direction when it comes to the search for the origins of consciousness.
It is important to mention that the phenomenon of stochastic resonance has already been introduced to the neurosciences. Gammaitoni et al. [
46] wrote a seminal review that provides a comprehensive overview of SR across various fields, including its early applications and theoretical underpinnings, which are relevant for understanding its application in neuroscience. Simonotto et al. [
47] reported on stochastic resonance as a measuring tool that can be used to quantify the ability of the human brain to interpret noise-contaminated visual patterns. They conducted a psychophysics experiment, through which they showed that the brain is able to quantitatively interpret details in a stationary image that has been obscured with time-varying noise. Zeng et al. [
48] mention in their article that stochastic resonance has been described in a variety of physical and biological systems, whereas its functional significance in human sensory systems remained mostly unexplored at the time. They report psychophysical data showing that signal detection and discrimination can be enhanced by noise in human subjects. Their focus was on hearing, using either normal acoustic stimulation or electric stimulation of the auditory nerve or the brainstem. The authors suggested that noise might be an integral part of the normal sensory process and that stochastic resonance effects should be added to auditory prostheses. McDonnell and Ward [
49] mentioned that understanding the diverse roles of noise in neural computation will require the design of experiments based on new theory and models. McDonnell and Abbott [
50] published a review paper that clarifies the definition of SR and its various manifestations in biological systems, including detailed discussions relevant to neuroscience. The authors argue that it would be more surprising if the brain did not exploit randomness provided by noise, via stochastic resonance or otherwise, than if it did.
Kitajo et al. [
51] provide evidence that stochastic resonance within the human brain can enhance behavioral responses to weak sensory inputs. The study participants were asked to adjust their handgrip force to a slowly changing, sub-threshold gray-level signal presented to their right eye. They found that participants’ behavioral responses were optimized by presenting randomly changing gray levels separately to the left eye. The authors also mentioned that their findings might be useful in designing optical human interfaces that could help in the response to weak visual inputs (e.g., while driving a vehicle in twilight). Manjarrez et al. [
52] show psychophysical evidence in a yes–no paradigm for the existence of a stochastic-resonance-like phenomenon in auditory–visual interactions. They found that the detection of a weak visual signal was an inverted U-like function of the intensity of different levels of auditory noise. Breen et al. [
53] investigated the question of whether sub-sensory electrical noise stimulation enhances somatosensory function. They applied vibration (50 Hz) to certain aspects of the foot in the presence or absence of sub-sensory electrical noise and measured the vibration perception thresholds (VPTs). They found a considerable improvement (∼16%) in vibration detection in the presence of noise. Collins et al. [
54] reported on the use of SR in enhancing somatosensation and in improving the performance of the balance control system in humans. Lefebvre et al. [
55] analyzed a computational model of the thalamocortical system in two distinct states (rest and task-engaged) to identify the mechanisms by which endogenous alpha oscillations (8–12 Hz) are modulated by periodic stimulation. They found that the different responses to stimulation observed experimentally in these brain states can be explained by a passage through a bifurcation combined with stochastic resonance, a mechanism by which irregular fluctuations amplify the response of a nonlinear system to weak periodic signals. In their article, Moss et al. [
56] concluded that SR is a phenomenon compatible with the neural theories of brain function, and the available evidence they report justifies further research. Encouraged by all this promising previous work, the current theoretical paper links SR, representing the addition of noise, with a possible mechanism underlying perception as a conscious phenomenon. Noda and Takahashi [
57] explored the role of ongoing spontaneous neural activity (which can be considered internal noise) in enhancing the detection of weak sensory inputs in the auditory cortex, suggesting a form of “sparse network stochastic resonance” in rats. The effect of added noise on vowel-like supra-threshold discrimination in cochlear-implant listeners was studied in 2003 [
58]. The instruction for participants was to detect different tones in the absence (control) and presence of a white noise presented over a 20–35 dB range, from inaudible to loud. The authors found that discrimination of supra-threshold harmonic stimuli was significantly enhanced in combination with supra-threshold noise. Quite interestingly, Schwarzkopf et al. [
59] suggest in their paper that transcranial magnetic stimulation (TMS) might, at least at times, act by adding noise to neuronal processing. With most parameter settings, TMS impairs behavior, but occasionally it can induce behavioral facilitation.
Perhaps most importantly, a study by Méndez-Balbuena [
60] investigated the phenomenon of multisensory stochastic resonance, specifically seeking to determine whether mechanical tactile noise could enhance the amplitude of visual evoked potentials (VEPs) in humans. The researchers found that presenting tactile noise to the fingertip at a specific, optimal intensity significantly increased the amplitude of VEPs evoked by visual stimuli, particularly those that were near the perceptual threshold. This effect demonstrated a characteristic inverted U-shaped relationship, a hallmark of stochastic resonance, where both too little and too much noise were less effective than an intermediate level. Harper [
61] demonstrated that visual flicker sensitivity was an inverted U-like function of the intensity of different levels of auditory noise from 50 to 90 dB (SPL). Here, we can imagine a subtle signal that is too weak to cross a detection threshold on its own. In a linear system, adding noise would simply make it harder to discern the signal. However, in a nonlinear system with a threshold (like a neuron firing, or a sensory receptor activating), adding an amount of noise that is “just right” could intermittently push the sub-threshold signal over that threshold. This makes the signal detectable, improving the overall signal-to-noise ratio. If too little noise is added, the signal remains undetected. If too much noise is added, the signal would be drowned out again. This creates the above-mentioned “inverted U-shaped” curve where performance (signal detection) peaks at an optimal noise level.
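This inverted-U relationship can be reproduced with a simple threshold-detector simulation in Python (a generic, textbook-style sketch with arbitrarily chosen signal, threshold, and noise values; not a model of any of the cited experiments): a sub-threshold periodic signal is transmitted poorly with too little or too much noise, and best at an intermediate noise level.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, f_sig = 1000, 5                 # sampling rate (Hz), signal frequency (Hz)
t = np.arange(0, 10, 1 / fs)        # 10 s of data
signal = 0.8 * np.sin(2 * np.pi * f_sig * t)   # sub-threshold periodic signal
threshold = 1.0                     # detector fires only above this value

def detection_score(noise_sd: float) -> float:
    """Correlation between the input signal and the thresholded (0/1) output."""
    noisy = signal + rng.normal(0, noise_sd, size=t.shape)
    output = (noisy > threshold).astype(float)
    if output.std() == 0:           # nothing ever crossed the threshold
        return 0.0
    return float(np.corrcoef(signal, output)[0, 1])

# Too little noise: no crossings; too much: random crossings; an intermediate
# level yields the best signal transmission (the inverted-U curve).
for sd in [0.01, 0.1, 0.3, 0.6, 1.0, 2.0, 5.0]:
    print(f"noise sd = {sd:4.2f}  ->  signal/output correlation = {detection_score(sd):.3f}")
```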
Anyway, the findings by Méndez-Balbuena [
60] suggest that noise in one sensory modality (tactile) can cross-modally enhance neural processing and signal detection in another modality (visual), supporting the idea that stochastic resonance is a general principle contributing to sensory perception across different sensory systems. A very recent article by Herrmann [
62] highlights how minimal background noise can enhance the neural tracking of the amplitude-onset envelope, which can be understood as a generalized enhancement of neural speech tracking due to SR. In other words, the neural representation of speech, especially with respect to the auditory system, can improve through the addition of noise. All of this seems quite supportive of the idea that stochastic resonance is utilized by the brain to generate consciousness in general and perception in particular.