Article

Inducing and Detecting Hallucination-like Auditory Experiences in Healthy Subjects via Conditioning and Electroencephalogram Analysis: A Proof of Concept

1 Electrical and Computer Engineering, Lamar University, Beaumont, TX 77705, USA
2 Speech and Hearing Science, Lamar University, Beaumont, TX 77705, USA
3 Electrical & Computer Engineering and Computer Science, Jackson State University, Jackson, MS 39213, USA
* Author to whom correspondence should be addressed.
Electronics 2026, 15(5), 931; https://doi.org/10.3390/electronics15050931
Submission received: 30 January 2026 / Revised: 16 February 2026 / Accepted: 24 February 2026 / Published: 25 February 2026
(This article belongs to the Section Bioelectronics)

Abstract

Background: Auditory hallucinations (AHs)—perceptions of sound without external stimuli—are common in clinical populations but rarely investigated in healthy individuals. This study aimed to employ Pavlovian conditioning to induce AH-like experiences in healthy subjects and to examine their neural correlates using electroencephalography (EEG). Methods: Seven healthy volunteers were exposed to auditory, non-auditory, and conditioned non-auditory stimuli while their EEG was recorded with a 32-channel system. Results: When comparing the “sound” (auditory) and “conditioned no sound” (conditioned non-auditory) scenarios, the differences in average EEG power were much less pronounced than in the regular sound/no-sound comparison. However, significant alterations (p < 0.05) in β and γ rhythms were observed in bilateral temporal regions when comparing the “no sound” and “conditioned no sound” scenarios, resembling the spectral patterns observed during real sound perception. These EEG alterations indicate successful induction of hallucination-like auditory experiences through Pavlovian conditioning. A three-class k-nearest neighbor (kNN) classifier detected AH-like events with >80% accuracy in six of the seven participants. Conclusions: Pavlovian conditioning can induce AH-like perceptions in healthy individuals, accompanied by measurable EEG alterations. EEG-based methods therefore have the potential for objective detection and assessment of auditory hallucinations and offer a foundation for future research on their neural mechanisms.

1. Introduction

Auditory hallucinations (AHs) are defined as “the perception of sounds (particularly voices) in the absence of auditory stimuli” [1]. This phenomenon is common among individuals with schizophrenia—occurring in approximately 60% to 80% of patients [2]—and is also reported in other psychiatric and neurological disorders, such as Alzheimer’s disease [3,4]. AHs frequently occur among hearing-impaired individuals [5,6], and their prevalence appears to increase with the degree of hearing loss [7]. Moreover, hallucinations are often associated with traumatic experiences [8] and have been described as a posttraumatic response, particularly among veterans [9]. Hallucinations are intrusive experiences that can substantially increase the burden of illness, and some patients may struggle to reliably distinguish between real and internally generated speech.
Importantly, auditory hallucinations are not limited to clinical populations. They may also occur in healthy individuals under certain conditions [10]. Powers and colleagues demonstrated that even people who report never having experienced AHs can manifest them experimentally. This finding builds on earlier work by Ellson, who showed that repeated pairing of auditory and visual stimuli could induce AH-like experiences in healthy participants [11]. Despite such evidence, most hallucination research continues to focus on populations with diagnosed mental health disorders.
Several theoretical models have been proposed to explain the origin of AHs, with the corollary discharge hypothesis being among the most widely accepted [12,13]. According to this model, when a person speaks or engages in inner verbal thought, speech-production regions in the frontal lobes send a “corollary discharge” signal to the auditory cortex, preparing it to recognize the resulting auditory input as self-generated. This predictive signal normally suppresses self-produced sensory activity. Similar mechanisms help distinguish between self-generated and external stimuli across other sensory systems. A failure in the corollary discharge process, however, may lead individuals to misinterpret self-generated speech as an external auditory event.
Regardless of their underlying cause, numerous studies have identified functional alterations in the brains of individuals experiencing AHs. Various neuroimaging techniques have been used to examine these changes, with electroencephalography (EEG) being one of the most accessible tools for such analysis [1,14,15]. Event-related potential (ERP) studies have reported reduced P300 amplitudes during hallucination episodes [16], though ERP analysis—requiring the averaging of multiple stimulus-evoked epochs—is not ideal for real-time detection. In contrast, EEG studies of spontaneous activity have observed reduced alpha power in Broca’s area and the left primary auditory cortex during AHs [17,18,19]. Other findings suggest a disturbed interaction between speech production and perception regions [19], including increased bilateral coherence between the left and right auditory cortices during hallucinations [1].
Recent advances in wearable EEG technology have enabled reliable data collection during everyday activities. For example, García-Martínez and colleagues used a portable EEG headset (EPOC+ by Emotiv) to detect auditory hallucinations in schizophrenia patients, finding greater activation in right frontal regions during hallucinations and stronger left-hemisphere activity during hallucination-free periods [20]. However, given that schizophrenia profoundly affects brain function, these EEG patterns may not generalize beyond clinical populations. It is therefore of interest to investigate whether similar neural signatures can be observed in healthy individuals.
In the broader field of biomedical signal classification, recent work has emphasized hybrid paradigms that integrate physiological signal modeling with artificial intelligence techniques. For instance, Laganà and colleagues [21] developed an integrated hardware–software system for surface electromyographic signal acquisition and analysis using convolutional neural networks, demonstrating the value of coupling circuit-level signal processing with AI-based pattern recognition. Similarly, hybrid deep learning architectures combining convolutional neural networks with temporal modeling (e.g., LSTM or Transformer layers) have shown improved EEG classification performance compared to purely data-driven approaches [22]. These hybrid paradigms highlight the potential benefits of combining domain-specific signal models with machine learning classifiers, an approach that could be explored in future extensions of the present work.
Currently, studies of auditory hallucinations rely primarily on subjective reports, where individuals must self-identify and describe their experiences. To our knowledge, no objective methods exist for detecting or assessing hallucination events in real time. The primary goal of this study is to explore the feasibility of an objective assessment approach. Specifically, we aim to:
  • Develop stimulation protocols capable of inducing perceptual experiences resembling auditory hallucinations in healthy participants;
  • Demonstrate that such conditioned hallucinations can be correlated with distinct EEG markers that may serve as potential classification features.

2. Materials and Methods

The experimental procedure was based on classical (Pavlovian) conditioning, in which a conditioned stimulus was repeatedly paired with an unconditioned stimulus [23]. This paradigm enables controlled induction of perceptual experiences through associative learning, modeling conditions that may contribute to AHs. A simultaneous conditioning approach [24,25] was employed, with visual and auditory stimuli presented concurrently [11] across multiple trials to induce AH-like experiences in healthy participants [10]. Only individuals with no prior history of auditory hallucinations were recruited to ensure that any perceptual effects resulted solely from conditioning. Following the experiment, participants completed a brief interview regarding their subjective experiences. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of Lamar University (IRB-FY23-275) on 9 October 2023. Each participant signed the informed consent form (see Appendix A).
For the purposes of this study, an “AH-like experience” is operationally defined as a perceptual event in which a participant reports hearing a sound in the absence of an actual auditory stimulus, occurring as a consequence of conditioning. This definition is supported by converging evidence from two sources: (1) post-experiment self-report, in which participants indicated whether they believed they heard the tone during visual-only stimulus presentations; and (2) indirect EEG-based inference, whereby the spectral power patterns observed during conditioned no-sound trials were statistically compared to those observed during actual sound perception. The conjunction of subjective report and objective neural similarity to real auditory processing constitutes the basis for identifying AH-like experiences in this paradigm. We acknowledge that no real-time behavioral measure (e.g., button press) was employed during stimulus presentation, which represents a limitation that should be addressed in future studies.

2.1. Hardware and Software

Auditory stimuli were delivered via insert earphones (EAR-TONE ER-3A, 10 Ω by Etymotic, Elk Grove Village, IL, USA) connected to a multimedia receiver (Sony STR-DN1080 by Sony, Tokyo, Japan) interfaced with the stimulation computer. Visual stimuli were projected using an overhead projector (Epson Home Cinema 8350, by Epson, Suwa, Nagano, Japan) onto a white screen positioned approximately 2 m in front of the participant. The projected image measured 2.2 m × 1.25 m, covering most of the participant’s visual field with horizontal and vertical viewing angles of approximately 57° and 34°, respectively. Audio and visual stimuli were presented and synchronized with EEG acquisition using Eevoke software, ver. 3.1 (ANT Neuro, Hengelo, The Netherlands).
Continuous EEG was recorded with an ASA-Lab 40 system (ANT Neuro, Hengelo, The Netherlands) using 32 electrodes placed according to the extended International 10–20 system. Signals were pre-filtered between 0.3 Hz and 50 Hz, notch-filtered at 60 Hz, and sampled at 512 Hz.
It is important to note that EEG, while offering excellent temporal resolution (on the order of milliseconds), has inherent limitations as a non-invasive sensing modality. Its spatial resolution is on the order of centimeters due to volume conduction effects, whereby electrical signals propagate through the skull and scalp tissues, resulting in spatial smearing of the recorded potentials. EEG recordings are also susceptible to physiological artifacts (e.g., eye blinks, muscle activity, cardiac signals) and environmental noise (e.g., line frequency interference), which can compromise signal quality if not adequately controlled. Despite these limitations, EEG remains one of the most practical and widely used tools for studying cortical dynamics in real time, particularly in paradigms requiring high temporal precision. For a comprehensive review of the capabilities and constraints of biomedical electronic sensing devices, including EEG-based systems, the reader is referred to Coluccio et al. [26].

2.2. Stimulation

Auditory stimuli consisted of a 3 s, 1 kHz pure tone (sinusoid). Visual stimuli were black-and-white checkerboard patterns composed of 8 × 8 squares at a resolution of 1000 × 1000 pixels.
Before the conditioning experiments, each participant’s individual hearing threshold was determined using the adaptive psychophysical method described by Watson and Pelli [27]. During this procedure, the same 1 kHz tone was presented at random intervals and varying intensities, and participants indicated when they detected the sound. The stimulus intensity corresponding to approximately 100% of the detection threshold was then used throughout the remainder of the experiment.
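The adaptive procedure of Watson and Pelli (QUEST) maintains a Bayesian posterior over the threshold. As an illustration of the general idea only, the sketch below implements a much simpler 1-up/1-down staircase, a hypothetical stand-in for the procedure actually used: the tone level is lowered after each detection, raised after each miss, and the threshold is estimated from the reversal points.

```python
import statistics

def staircase_threshold(respond, start_level=40.0, step=2.0, n_trials=40):
    """Simple 1-up/1-down adaptive staircase (illustrative stand-in for
    QUEST): lower the level after a detection, raise it after a miss,
    and estimate the threshold as the mean level at the reversals."""
    level = start_level
    last_heard = None
    reversals = []
    for _ in range(n_trials):
        heard = respond(level)           # True if the listener reports the tone
        if last_heard is not None and heard != last_heard:
            reversals.append(level)      # response flipped -> reversal point
        last_heard = heard
        level += -step if heard else step
    return statistics.mean(reversals) if reversals else level

# Simulated listener whose true threshold is 25 dB:
est = staircase_threshold(lambda lvl: lvl >= 25.0, start_level=40.0)
print(round(est, 1))  # → 25.0
```

Note that a 1-up/1-down staircase converges to the 50% detection point, whereas the study used a level corresponding to approximately 100% of the detection threshold; the sketch illustrates the adaptive-tracking principle, not the exact criterion.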
The conditioning stage followed the protocol described by Powers and colleagues [10] and consisted of repeated presentations of paired audio-visual (AV) stimuli. Each AV stimulus lasted 3 s and was followed by a 3 s interval of silence and a blank (black) screen. Each stimulation sequence contained 20–25 AV stimuli, followed by a single visual-only (V) stimulus—the checkerboard image presented without accompanying sound. This sequence was repeated twelve times, resulting in a total of 137 AV and 12 V stimuli per participant.
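The block structure of the conditioning sequence described above (20–25 AV pairings followed by a single visual-only probe, repeated twelve times) can be sketched as follows; the function name, the random seed, and the use of a uniform draw for the per-block AV count are illustrative assumptions.

```python
import random

def build_stimulus_schedule(n_blocks=12, av_per_block=(20, 25), seed=0):
    """Sketch of the conditioning sequence: each block holds a random
    number of paired audio-visual (AV) stimuli followed by one
    visual-only (V) probe; blocks are repeated n_blocks times."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_blocks):
        schedule += ["AV"] * rng.randint(*av_per_block)
        schedule.append("V")             # conditioned visual-only probe
    return schedule

schedule = build_stimulus_schedule()
print(schedule.count("V"))  # → 12
```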
Prior to conditioning, the V stimulus was presented alone for 30 s to establish a baseline for comparison. EEG data were recorded during both baseline and conditioning stages. To reduce fatigue, the experiment duration was limited to 20 min.

2.3. Participants

Seven healthy adults (three females, four males; age range 23–30 years) participated in the study. All participants reported normal hearing, no fatigue, no history of mental or neurological disorders, and no prior experience of auditory hallucinations. Participants were seated comfortably in a private, dimly lit room with only the experimenter present to minimize distractions.
To reduce EEG artifacts, participants were instructed to remain still, keep their eyes open, and avoid blinking or body movements during stimulus presentation. Eye blinks were permitted only during the interstimulus intervals, when a black screen was displayed.

2.4. EEG Analysis

EEG data were analyzed using MATLAB, ver. 7.6.0 (R2008a). Because the experimental protocol was designed to minimize artifacts, no additional artifact removal was implemented. Preprocessing included DC offset removal and application of a common average reference (CAR) spatial filter.
Continuous EEG recordings were segmented into epochs time-locked to the stimulus presentations. To avoid onset-related evoked potentials, the first 300 ms and last 200 ms of each epoch were discarded. The remaining data were resampled to 128 Hz to suppress high-frequency noise and divided into non-overlapping 0.5 s fragments. After DC removal from each fragment, spectral estimation was performed using a parametric autoregressive (AR) modified covariance method of order 17. The mean power was then computed for the following frequency bands: δ (1–4 Hz), θ (4–8 Hz), α1 (8–10 Hz), α2 (10–12 Hz), β1 (12–20 Hz), β2 (20–30 Hz), γ1 (30–40 Hz), and γ2 (40–50 Hz). The estimated average EEG power within each band was treated as a random variable drawn from an unknown statistical distribution.
To test for differences across stimulation conditions, the average spectral power was analyzed using the Kruskal–Wallis test, a nonparametric alternative to ANOVA appropriate for non-normal data. The H-statistic was computed as described in [28] and compared to the χ² distribution with one degree of freedom [29]. At a significance level of p = 0.05, H > 3.84 was considered evidence against the null hypothesis of equal population medians. To account for the inflated Type I error rate arising from multiple simultaneous comparisons across frequencies, channels, and conditions, the false discovery rate (FDR) correction of [30] was applied to the p-values estimated by the Kruskal–Wallis test. Both uncorrected and FDR-corrected p-values are reported in the corresponding tables in Results; corrected values are indicated by italic font, and those remaining significant after correction (p < 0.05) are indicated by bold italic. This dual reporting allows the reader to assess both the raw statistical outcomes and the results after controlling for multiple comparisons. Effect sizes for the Kruskal–Wallis test were evaluated via η²H, as described in [31].
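The statistical pipeline (Kruskal–Wallis H followed by Benjamini–Hochberg FDR adjustment) can be sketched in a few lines of Python; tie correction is omitted for brevity, and these are generic textbook formulas rather than the authors' exact implementation.

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction, for brevity).
    For two groups, H > 3.84 rejects equal medians at p = 0.05
    (chi-square distribution with one degree of freedom)."""
    data = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(data)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(data, start=1):
        rank_sums[gi] += rank
    return 12.0 / (n * (n + 1)) * sum(
        rs ** 2 / len(g) for rs, g in zip(rank_sums, groups)) - 3 * (n + 1)

def fdr_bh(pvals):
    """Benjamini-Hochberg FDR correction: returns adjusted p-values,
    enforcing monotonicity from the largest p-value downward."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    prev = 1.0
    for rank_from_end, i in enumerate(reversed(order)):
        rank = m - rank_from_end
        prev = min(prev, pvals[i] * m / rank)
        adjusted[i] = prev
    return adjusted
```

For two clearly separated samples, e.g. `kruskal_wallis_h([1, 2, 3, 4, 5], [6, 7, 8, 9, 10])`, the statistic exceeds the 3.84 critical value, mirroring the per-channel, per-band tests reported in the tables.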
The mean spectral power values were then used as features for a three-class k-nearest neighbors (kNN) classifier. Each sample represented a point in a 30-dimensional space, with one dimension per EEG channel (mastoid electrodes excluded). The number of neighbors, k, was varied from 4 to 150. The classifier distinguished between three conditions:
  • Class I: baseline (30 s visual-only stimulation; 59 samples);
  • Class II: initial 12 epochs of audio–visual (AV) conditioning stimuli (60 samples);
  • Class III: conditioned visual-only (V) stimuli (60 samples).
The classification objective was to identify Class III instances, corresponding to the conditioned no-sound percept; the problem was thus reduced to the detection of Class III. A modified leave-one-out cross-validation approach was used: data from one Class III epoch (5 samples) served as the validation set, while the remaining samples (59 from Class I, 60 from Class II, and 55 from Class III) formed the training set. All classifications were performed within-subject, a design chosen deliberately: given the well-documented inter-individual variability in EEG spectral characteristics [1,20] and the small sample size (n = 7), cross-subject generalization would be unreliable and potentially misleading. Within-subject classification is a standard and accepted approach in brain–computer interface (BCI) research when between-subject variability is high. Detection sensitivity was computed for each participant and frequency band. Only sensitivity is reported, as confusion matrices and other binary metrics provide limited value in multi-class problems.
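A minimal sketch of the three-class kNN detector with the modified leave-one-epoch-out scheme described above follows (Euclidean distance, majority vote). The function names and the 2-D toy data in the usage example are illustrative placeholders for the 30-dimensional band-power features.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=5):
    """Plain kNN: majority label among the k nearest training points
    (Euclidean distance in the band-power feature space)."""
    dists = sorted((math.dist(x, xt), yt) for xt, yt in zip(train_X, train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

def class3_sensitivity(X, y, epoch_ids, k=5):
    """Modified leave-one-out from the text: hold out one Class III
    epoch (all of its samples) per fold, train on everything else,
    and report the fraction of held-out samples detected as Class III."""
    hits = total = 0
    for ep in sorted({e for e, lab in zip(epoch_ids, y) if lab == 3}):
        train = [(x, lab) for x, lab, e in zip(X, y, epoch_ids)
                 if not (lab == 3 and e == ep)]
        test = [x for x, lab, e in zip(X, y, epoch_ids)
                if lab == 3 and e == ep]
        tX, ty = zip(*train)
        for x in test:
            hits += knn_predict(tX, ty, x, k) == 3
            total += 1
    return hits / total

# Toy usage: three well-separated 2-D clusters, Class III split into epochs.
X = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
     (5.0, 0.0), (5.1, 0.0), (5.0, 0.1)] + \
    [(0.0, 5.0 + 0.01 * i) for i in range(6)]
y = [1, 1, 1, 2, 2, 2] + [3] * 6
epoch_ids = [0] * 6 + [1, 1, 1, 2, 2, 2]
print(class3_sensitivity(X, y, epoch_ids, k=3))  # → 1.0
```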
The kNN classifier was selected for several reasons appropriate to this proof-of-concept study. First, kNN is non-parametric and makes no assumptions about the underlying data distribution, which is advantageous when sample sizes are small and distributional properties are unknown. Second, it is simple and highly interpretable, with minimal hyperparameters (only k), reducing the risk of over-tuning. We acknowledge that alternative classifiers such as support vector machines (SVM) with appropriate kernel functions or regularized linear discriminant analysis (LDA) may offer better generalization performance in small-sample settings and should be explored in future comparative studies. For a detailed discussion of generalization and validation challenges in biomedical signal analysis systems, including the importance of robust cross-validation strategies, the reader is referred to Laganà et al. [21].

3. Results

The estimated hearing thresholds were consistent among all participants; no significant deviations were observed. During the post-experiment interview, all participants reported that they believed they heard the tone each time the checkerboard image was presented.
Table 1, Table 2 and Table 3 present both uncorrected and FDR-corrected p-values evaluated by the Kruskal–Wallis test between different experimental scenarios and for individual EEG rhythms. The corrected p-values are indicated by italic font; those below 0.05 are indicated by bold italic.
We observe in Table 1 that FDR-corrected p-values are below 0.05 for many EEG channels at higher frequencies, i.e., for the α1 through γ2 rhythms. We may conclude that EEG spectral power within these rhythms differs statistically (at the 0.05 significance level) between the sound and no-sound perception scenarios.
To better visualize differences in EEG spectral power across conditions, the H-statistics obtained from the Kruskal–Wallis test were color-coded and projected onto a schematic brain map. These topographic representations, referred to as H-maps, display the spatial distribution of statistical differences between experimental conditions. The corresponding color bars indicate the numerical values of the H-statistic. At a significance level of p = 0.05, regions with H > 3.84 were considered to exhibit statistically significant differences in spectral power. Thus, the H-maps provide a visual summary of cortical areas showing condition-dependent changes within specific EEG frequency bands.
Figure 1 illustrates example H-maps comparing average EEG spectral power between the sound and no-sound scenarios for two representative participants (Subject 2: male, 25 years; Subject 7: female, 24 years) as well as for the group average (bottom row). The sound scenario corresponds to the perception of audio-visual (AV) stimuli during the conditioning stage, whereas the no-sound scenario represents the baseline period when participants viewed the visual-only (V) stimulus for 30 s prior to conditioning.
As shown in Figure 1, the H-statistics exceed the critical value of 3.84 in EEG channels associated with specific rhythmic bands. In particular, frontal channels show high H-values within the θ rhythm for both participants, while the β2 and γ rhythms exhibit strong effects in bilateral temporal channels, surpassing the critical threshold in individual and averaged data.
Beyond these consistent patterns, some subject-specific differences in spectral power suggest substantial inter-individual variability in the neurophysiological processes underlying sound perception and processing. η²H varied from 0.002 to 0.0023, suggesting a small effect.
We see in Table 2 that the majority of FDR-corrected p-values greatly exceed 0.05; therefore, EEG spectral power is not statistically different (at the 0.05 significance level) for most EEG electrodes and rhythms between the sound and conditioned no-sound perception scenarios.
Figure 2 shows the corresponding H-maps comparing average EEG spectral power between “sound” and “conditioned no sound” scenarios. As in Figure 1, the “sound” scenario reflects perception of audiovisual (AV) stimuli during conditioning, whereas the “conditioned no sound” represents the post-conditioning response to visual-only (V) stimuli.
As shown in Figure 2, spectral power still differs significantly between the “sound” and “conditioned no sound” scenarios across several EEG channels and frequency bands, although these differences are generally less pronounced—H-statistic values are 10–20 times lower than in the “non-conditioned” case shown in Figure 1. Notably, the α1, α2, and β1 power in the right parieto-temporal region, as well as the α1 and α2 power in the left frontal region, differ between the experimental conditions on average, while individual participants may show stronger effects in other rhythms or channels. η²H varied from 0.002 to 0.0868, suggesting a moderate effect.
We observe in Table 3 that FDR-corrected p-values are below 0.05 for many EEG channels in the α2, β2, γ1, and γ2 rhythms. We may conclude that EEG spectral power within these rhythms and for these electrodes differs statistically (at the 0.05 significance level) between the no-sound and conditioned no-sound perception scenarios.
Figure 3 presents H-maps comparing average EEG spectral power between the “no sound (baseline)” and “conditioned no sound” scenarios. Both involve visual-only (V) stimuli, but the “conditioned no sound” case reflects post-conditioning activity, recorded when the same image was shown without accompanying sound following repeated audiovisual pairings (concurrent presentation) during the conditioning.
As shown in Figure 3, the H-statistics exhibit a notable resemblance to those in Figure 1. On average (bottom row of Figure 3), the β1, β2, γ1, and γ2 spectral powers differ significantly in both temporal EEG channels. In addition to these consistent effects, conditioning also produces subject-specific alterations in EEG spectral power. η²H varied from 0.002 to 0.3206, suggesting a large effect. These results suggest that conditioning may modify neural activity, potentially giving rise to perceptual experiences resembling auditory hallucinations.
Table 4 summarizes the percentage of correct detections of Class III (hallucination-like experiences) across all seven participants, based on the average power in each EEG rhythm. Values exceeding 80% accuracy are indicated in bold. Given the substantial inter-individual variability in the effects of audiovisual conditioning, all classifications were performed separately for each participant.
As shown in Table 4, spectral power across all EEG rhythms may contribute to the detection of AH-like events, though the specific rhythms yielding the highest accuracy vary among participants. Detection accuracy exceeded 80% for six subjects, while for Subject 6, it reached only 76.7% in the γ2 rhythm and remained relatively low across other bands.

4. Discussion

The main limitation of the present study is the small sample, which precludes generalization to the broader population. With only seven participants, the statistical power to detect subtle effects is limited, and individual differences may disproportionately influence the group-level results. Small sample sizes (N = 5–10) are common in exploratory EEG studies involving novel experimental paradigms, where the primary goal is to establish proof of principle rather than population-level inference [10]. As such, the findings reported here should be interpreted as preliminary evidence supporting the feasibility of the proposed approach, rather than as definitive claims about neurophysiological mechanisms. Nevertheless, the findings of this study yield several important conclusions.
First, perceiving a sound under regular conditions produces statistically significant changes in frontal EEG θ activity (Figure 1), likely reflecting cognitive processes associated with higher-level auditory processing. In contrast, differences in θ, β2, γ1, and γ2 spectral power originating from the temporal channels may relate to lower-level sound perception, as these channels are positioned over the primary auditory cortices (Brodmann areas 41 and 42).
Second, the statistical comparisons between the baseline no sound and conditioned no sound perception (Table 3 and Figure 3) closely resemble those observed between the baseline no sound and sound scenarios (Table 1 and Figure 1). Combined with the observation that differences between sound and conditioned no sound (Table 2 and Figure 2) are much less pronounced, this suggests that cortical activity during the conditioned no sound perception more closely resembles that of real sound perception than the baseline no sound event.
These findings indicate that conditioning during sound perception induces significant EEG alterations, consistent with hallucination-like auditory experiences. This interpretation is further supported by participants’ reports that they perceived a sound whenever the image was displayed (even when the image was not accompanied by actual sound).
It is important to note that β and γ band activity is not exclusively specific to auditory perception; these frequency bands are also associated with general cognitive processes, including attention, working memory, and multisensory integration. What supports an auditory interpretation in the present study is the topographic distribution of the statistically significant effects: the β and γ alterations were concentrated in bilateral temporal channels (T7, T8 and neighboring electrodes), which overlie the primary auditory cortices (Brodmann areas 41 and 42). This spatial specificity, combined with the fact that these spectral patterns emerged only after auditory conditioning and closely resembled those observed during actual sound perception, supports the interpretation that the observed activity reflects auditory-related processing. Nevertheless, contributions from general cognitive processes such as heightened attention or expectation cannot be fully excluded.
A related limitation is that the present study did not include a dedicated active control condition specifically designed to dissociate conditioning-induced perception from effects of attention, expectation, or voluntary auditory imagery. The conditioning protocol was based on the validated paradigm established by Powers et al. [10], and our baseline condition (pre-conditioning visual-only stimulus) serves as a within-subject control insofar as the same visual stimulus was presented both before and after conditioning. Any differences in neural activity can therefore be attributed to the conditioning process rather than to the visual stimulus itself. However, an active control condition—such as unpaired stimulus presentations, a different visual stimulus not associated with sound, or a condition in which participants are explicitly informed that no sound will accompany the image—would strengthen causal inference by isolating the specific contribution of conditioning. Such control conditions should be incorporated in future studies.
At the same time, the H-maps in Figure 2 and p-values in Table 2 demonstrate some statistically significant differences in EEG spectral power between the sound and conditioned no sound scenarios (i.e., potential AH-like events), suggesting that EEG may be used to detect auditory hallucinations or related experiences. However, it remains unclear whether such induced hallucinations share similar cognitive mechanisms with the spontaneous ones, and whether both are associated with similar alterations in EEG patterns.
Detection results presented in Table 4 are promising. The finding that optimal EEG rhythms for detection vary among subjects highlights the high inter-individual variability of cortical processes underlying sound perception. Future work should examine whether these EEG alterations persist over time—i.e., whether repeated testing after several hours or days yields similar results within individuals. Further studies could also explore whether combining multiple rhythms or judiciously selecting EEG channels enhances detection accuracy, potentially through AI-based classifier optimization. Additionally, alternative classifiers such as support vector machines (SVM) with radial basis function kernels or regularized linear discriminant analysis (LDA), which are known to perform well in small-sample settings, should be systematically compared to kNN in future work to determine the optimal classification approach for this application.
Notably, the present study did not assess the persistence or stability of the conditioning-induced hallucination-like experiences beyond the experimental session. It remains unknown whether the conditioned association between the visual stimulus and the perceived auditory experience would persist after minutes, hours, or days, or whether it would extinguish rapidly once the audio-visual pairings ceased. Evaluating the temporal durability of these effects—for example, through recall testing at 1 h, 24 h, and multi-day intervals—is an important direction for future research. Such data would clarify whether the induced experiences reflect a transient perceptual bias or a more robust learned association, and would have direct implications for understanding the persistence of conditioned perceptual phenomena in both healthy and clinical populations.
While this pilot study provides valuable preliminary insights, its small sample size limits generalizability. Future research with a larger and more diverse participant pool is needed to confirm these findings and explore individual differences in susceptibility to AHs. Comparative studies between individuals with and without prior AH experiences could offer deeper insight into neural susceptibility. Additionally, testing with more complex auditory stimuli—such as voices or music instead of pure tones—could determine the generality of these effects.
An important open question is the extent to which findings obtained in healthy subjects using conditioned hallucination-like experiences can be transferred to clinical populations with pathological AHs. The use of healthy subjects in this study was intentional: it allows investigation of the perceptual and neural correlates of hallucination-like experiences in isolation, without the confounding effects of psychiatric medication, comorbid conditions, or chronic illness-related alterations in brain function. However, conditioned hallucination-like experiences may not share identical cognitive or neurological mechanisms with the spontaneous, often distressing AHs experienced by individuals with schizophrenia or other psychiatric disorders. Future studies should include direct comparative analyses between healthy conditioned subjects and clinical populations to determine whether convergent EEG signatures are present. If similar neural markers are identified across both groups, this would substantially strengthen the case for developing objective, EEG-based assessment tools applicable across populations.
If similar EEG alterations are observed across diverse populations and stimuli, it may become possible to develop objective electrophysiological markers for hallucinations, reducing reliance on subjective self-reports. Such an approach could transform the assessment of auditory hallucinations and improve clinical management for affected individuals. Finally, the high inter-individual variability observed in EEG responses suggests that subject-specific approaches may be more effective than universal models for assessing AH-related neurophysiological changes. This pilot methodology could be extended to study expectation effects in perception, but clinical application requires validation in larger samples with active control conditions.

5. Conclusions

This study demonstrates that hallucination-like perceptual experiences can be experimentally induced and that such experiences correspond to detectable EEG alterations. These findings represent an important step toward understanding and objectively characterizing the neural basis of auditory hallucinations. As a proof of concept, this work establishes the feasibility of combining Pavlovian conditioning with EEG spectral analysis and machine learning classification for the objective assessment of AH-like events. Future research should extend these findings with larger samples, active control conditions, and comparative analyses across healthy and clinical populations.

Author Contributions

Conceptualization, G.T. and K.A.; methodology, G.T. and L.F.; software, G.T.; validation, G.T., L.F. and K.A.; formal analysis, G.T.; investigation, K.A.; resources, G.T. and L.F.; data curation, G.T.; writing—original draft preparation, G.T.; writing—review and editing, G.T., L.F. and K.A.; visualization, G.T.; supervision, K.A.; project administration, G.T.; funding acquisition, G.T., L.F. and K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to privacy concerns.

Acknowledgments

The authors would like to thank the study participants who contributed to the EEG data collection.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AHs	Auditory Hallucinations
EEG	Electroencephalogram
AV	Audio-Visual
ERP	Event-Related Potential
IRB	Institutional Review Board
V	Visual-only
CAR	Common Average Reference
AR	Autoregressive
FDR	False Discovery Rate
kNN	k-Nearest Neighbors

Appendix A

Informed Consent Form
Lamar University, Drayer Department of Electrical Engineering, College of Engineering ID: ____________________________
SUBJECT CONSENT TO PARTICIPATION IN RESEARCH
Title of Study: EEG-based study of sound/voice perception
Name of Investigators: Dr. Gleb Tcheslavski
Phone Numbers: 409-880-7622
I understand that I am agreeing to participate in electroencephalography (EEG) recording for development of novel sound perception methods. Data acquisition will be performed in a private area with only the researcher and/or his/her assistant present. Before the EEG acquisition, a neutral non-allergic conductive gel may be applied to my scalp to improve the quality of signals recorded. If necessary, a random identification number may be assigned to the experimental data. My name or other identities will not be available to anyone but the investigator(s). The entire procedure will take at most one hour.
Project Summary:
We aim to develop a novel EEG-based model of the human hearing system that includes responses to both auditory and non-auditory stimuli. EEG is a cost-effective approach in comparison to MRI and fMRI. If we are able to determine the utility of this measure, it will provide insight into the mechanisms involved in the way we process sound information.
Risks:
The EEG collection is a standard non-invasive procedure widely used in clinical practice. Therefore, the procedure does not entail any foreseeable risks. I understand that I may quit at any time. All data will be maintained in a password-protected computer with no public access. Documents containing my name or other identities will be maintained in a locked cabinet in the investigator's office. Consent forms may be forwarded to the Office of Research and Grants, Lamar University.
Benefits:
The main benefit to individuals in the experimental group is that they will learn about electroencephalography and will contribute to the advancement of experimental research.
Participation:
I specify that I have no known history of epilepsy or related neurophysiological disorders. I understand that my participation in this study is voluntary and that I may withdraw from the study at any time. My refusal to participate will involve no penalty or loss of benefits to which I am otherwise entitled. I understand that I will not be compensated for my participation. An offer has been made to answer all of my questions and concerns about the study. If desired, I will be given a copy of the dated and signed consent form to keep.
Consent to participate in study:
Signed_______________________________________________________ Date___________
Investigator___________________________________________________ Date___________

References

  1. Sritharan, A.; Line, P.; Sergejew, A.; Silberstein, R.; Egan, G.; Copolov, D. EEG coherence measures during auditory hallucinations in schizophrenia. Psychiatry Res. 2005, 136, 189–200. [Google Scholar] [CrossRef] [PubMed]
  2. Lim, A.; Hoek, H.W.; Deen, M.L.; Blom, J.D. Prevalence and classification of hallucinations in multiple sensory modalities in schizophrenia spectrum disorders. Schizophr. Res. 2016, 176, 493–499. [Google Scholar] [CrossRef] [PubMed]
  3. El Haj, M.; Roche, J.; Jardri, R.; Kapogiannis, D.; Gallouj, K.; Antoine, P. Clinical and neurocognitive aspects of hallucinations in Alzheimer’s disease. Neurosci. Biobehav. Rev. 2017, 83, 713–720. [Google Scholar] [CrossRef] [PubMed]
  4. Ismail, Z.; Creese, B.; Aarsland, D.; Kales, H.C.; Lyketsos, C.G.; Sweet, R.A.; Ballard, C. Psychosis in Alzheimer disease-mechanisms, genetics and therapeutic opportunities. Nat. Rev. Neurol. 2022, 18, 131–144. [Google Scholar] [CrossRef]
  5. Marschall, T.M.; Ćurčić-Blake, B.; Brederoo, S.G.; Renken, R.J.; Linszen, M.M.J.; Koops, S.; Sommer, I.E.C. Spontaneous brain activity underlying auditory hallucinations in the hearing-impaired. Cortex 2021, 136, 1–13. [Google Scholar] [CrossRef]
  6. Saunders, G.H.; Griest, S.E. Hearing loss in veterans and the need for hearing loss prevention programs. Noise Health 2009, 11, 14. [Google Scholar] [CrossRef]
  7. Linszen, M.M.J.; van Zanten, G.A.; Teunisse, R.J.; Brouwer, R.M.; Scheltens, P.; Sommer, I.E. Auditory hallucinations in adults with hearing impairment: A large prevalence study. Psychol. Med. 2019, 49, 132–139. [Google Scholar] [CrossRef]
  8. Wearne, D.; Curtis, G.J.; Melvill-Smith, P.; Orr, K.G.; Mackereth, A.; Rajanthiran, L.; Hood, S.; Choy, W.; Waters, F. Exploring the relationship between auditory hallucinations, trauma and dissociation. BJPsych Open 2020, 6, E54. [Google Scholar] [CrossRef]
  9. Crompton, L.; Lahav, Y.; Solomon, Z. Auditory hallucinations and PTSD in ex-POWS. J. Trauma Dissociation 2017, 18, 663–678. [Google Scholar] [CrossRef]
  10. Powers, A.R.; Mathys, C.; Corlett, P.R. Pavlovian conditioning-induced hallucinations result from overweighting of perceptual priors. Science 2017, 357, 596–600. [Google Scholar] [CrossRef]
  11. Ellson, D.G. Hallucinations produced by sensory conditioning. J. Exp. Psychol. 1941, 28, 1–20. [Google Scholar] [CrossRef]
  12. Ford, J.M.; Palzes, V.A.; Roach, B.J.; Mathalon, D.H. Did I do that? Abnormal predictive processes in schizophrenia when button pressing to deliver a tone. Schizophr. Bull. 2014, 40, 804–812. [Google Scholar] [CrossRef] [PubMed]
  13. Ford, J.M.; Mathalon, D.H. Corollary discharge dysfunction in schizophrenia: Can it explain auditory hallucinations? Int. J. Psychophysiol. 2005, 58, 179–189. [Google Scholar] [CrossRef] [PubMed]
  14. Barros, C.; Roach, B.; Ford, J.M.; Pinheiro, A.P.; Silva, C.A. From Sound Perception to Automatic Detection of Schizophrenia: An EEG-Based Deep Learning Approach. Front. Psychiatry 2022, 12, 813460. [Google Scholar] [CrossRef] [PubMed]
  15. Dauwan, M.; Linszen, M.M.; Lemstra, A.W.; Scheltens, P.; Stam, C.J.; Sommer, I.E. EEG-based neurophysiological indicators of hallucinations in Alzheimer’s disease: Comparison with dementia with Lewy bodies. Neurobiol. Aging 2018, 67, 75–83. [Google Scholar] [CrossRef]
  16. Aubonnet, R.; Banea, O.C.; Sirica, R.; Wassermann, E.M.; Yassine, S.; Jacob, D.; Magnúsdóttir, B.B.; Haraldsson, M.; Stefansson, S.B.; Jónasson, V.D.; et al. P300 Analysis Using High-Density EEG to Decipher Neural Response to rTMS in Patients With Schizophrenia and Auditory Verbal Hallucinations. Front. Neurosci. 2020, 14, 575538. [Google Scholar] [CrossRef]
  17. Allen, P.; Modinos, G.; Hubl, D.; Shields, G.; Cachia, A.; Jardri, R.; Thomas, P.; Woodward, T.; Shotbolt, P.; Plaze, M.; et al. Neuroimaging auditory hallucinations in schizophrenia: From neuroanatomy to neurochemistry and beyond. Schizophr. Bull. 2012, 38, 695–703. [Google Scholar] [CrossRef]
  18. Arora, M.; Knott, V.J.; Labelle, A.; Fisher, D.J. Alterations of Resting EEG in Hallucinating and Nonhallucinating Schizophrenia Patients. Clin. EEG Neurosci. 2021, 52, 159–167. [Google Scholar] [CrossRef]
  19. van Swam, C.; Dierks, T.; Hubl, D. Electrophysiological Exploration of Hallucinations (EEG, MEG). The Neuroscience of Hallucinations; Jardri, R., Cachia, A., Thomas, P., Pins, D., Eds.; Springer: New York, NY, USA, 2013. [Google Scholar]
  20. García-Martínez, B.; Fernández-Sotos, P.; Ricarte, J.J.; Sánchez-Morla, E.M.; Sánchez-Reolid, R.; Rodriguez-Jimenez, R.; Fernández-Caballero, A. Detection of auditory hallucinations from electroencephalographic brain-computer interface signals. Cogn. Syst. Res. 2024, 83, 101176. [Google Scholar] [CrossRef]
  21. Laganà, F.; Pratticò, D.; Angiulli, G.; Oliva, G.; Pullano, S.A.; Versaci, M.; La Foresta, F. Development of an Integrated System of sEMG Signal Acquisition, Processing, and Analysis with AI Techniques. Signals 2024, 5, 476–493. [Google Scholar] [CrossRef]
  22. Nguyen, A.H.P.; Oyefisayo, O.; Pfeffer, M.A.; Ling, S.H. EEG-TCNTransformer: A Temporal Convolutional Transformer for Motor Imagery Brain–Computer Interfaces. Signals 2024, 5, 605–632. [Google Scholar] [CrossRef]
  23. Pavlov, I.P. Conditional Reflexes; Dover Publications: New York, NY, USA, 1927. [Google Scholar]
  24. Miller, J. The rate of conditioning of human subjects to single and multiple conditioned stimuli. J. Gen. Psychol. 1939, 20, 399–408. [Google Scholar] [CrossRef]
  25. Prével, A.; Krebs, R.M. Higher-order conditioning with simultaneous and backward conditioned stimulus: Implications for models of Pavlovian conditioning. Front. Behav. Neurosci. 2021, 15, 749517. [Google Scholar] [CrossRef]
  26. Coluccio, M.L.; Pullano, S.A.; Vismara, M.F.M.; Coppedè, N.; Perozziello, G.; Candeloro, P.; Gentile, F.; Malara, N. Emerging Designs of Electronic Devices in Biomedicine. Micromachines 2020, 11, 123. [Google Scholar] [CrossRef]
  27. Watson, A.B.; Pelli, D.G. QUEST: A Bayesian adaptive psychometric method. Percept. Psychophys. 1983, 33, 113–120. [Google Scholar] [CrossRef]
  28. Tcheslavski, G.V.; Gonen, F.F. Alcoholism-related alterations in spectrum, coherence, and phase synchrony of topical electroencephalogram. Comput. Biol. Med. 2012, 42, 394–401. [Google Scholar] [CrossRef] [PubMed]
  29. Pearson, E.S.; Hartley, H.O. Biometrika Tables for Statisticians, 2nd ed.; Cambridge University Press: Cambridge, UK, 1958; Volume 1. [Google Scholar]
  30. Benjamini, Y.; Hochberg, Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B (Methodol.) 1995, 57, 289–300. [Google Scholar] [CrossRef]
  31. Pinnegar, J.K.; Tomczak, M.T.; Link, J.S. How to determine the likely indirect food-web consequences of a newly introduced non-native species: A worked example. Ecol. Model. 2014, 272, 379–387. [Google Scholar] [CrossRef]
Figure 1. H-maps between EEG spectral power estimates for sound vs. no sound (baseline), eyes open, and the same visual stimuli; top row: Subject 2, middle row: Subject 7, bottom row: average for all subjects.
Figure 2. H-maps between EEG spectral power estimates for sound vs. conditioned no sound, eyes open, and the same visual stimuli; top row: Subject 2, middle row: Subject 7, bottom row: average for all subjects.
Figure 3. H-maps between EEG spectral power estimates for no sound (baseline) vs. conditioned no sound, eyes open, and the same visual stimuli; top row: Subject 2, middle row: Subject 7, bottom row: average for all subjects.
Table 1. Uncorrected and FDR-corrected p-values estimated by Kruskal–Wallis test between EEG spectral power estimates for sound vs. no sound (baseline), eyes open, the same visual stimuli, and for all subjects. Each cell lists the uncorrected / FDR-corrected p-value.

Channel | δ | θ | α1 | α2 | β1 | β2 | γ1 | γ2
Fp1 | 0.7833 / 0.8392 | 0.6492 / 0.7213 | 0.0144 / 0.0196 | 0.0148 / 0.0158 | 0.0000 / 0.0000 | 0.0001 / 0.0002 | 0.0000 / 0.0000 | 0.0000 / 0.0000
Fpz | 0.2679 / 0.4918 | 0.2441 / 0.3854 | 0.0508 / 0.0609 | 0.0007 / 0.0009 | 0.1245 / 0.1624 | 0.0129 / 0.0258 | 0.0000 / 0.0000 | 0.0000 / 0.0000
Fp2 | 0.9575 / 0.9575 | 0.0012 / 0.0092 | 0.3354 / 0.3354 | 0.0425 / 0.0425 | 0.0015 / 0.0032 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000
F7 | 0.0136 / 0.0561 | 0.1985 / 0.3308 | 0.0673 / 0.0777 | 0.0012 / 0.0013 | 0.3824 / 0.4419 | 0.0011 / 0.0026 | 0.0000 / 0.0000 | 0.0000 / 0.0000
F3 | 0.2410 / 0.4821 | 0.0025 / 0.0124 | 0.0120 / 0.0174 | 0.0000 / 0.0000 | 0.3830 / 0.4419 | 0.0001 / 0.0003 | 0 / 0 | 0.0000 / 0.0000
Fz | 0.0003 / 0.0028 | 0.0062 / 0.0233 | 0.0438 / 0.0548 | 0.0000 / 0.0000 | 0.0001 / 0.0005 | 0.2580 / 0.3518 | 0.0055 / 0.0066 | 0.0009 / 0.0012
F4 | 0.3636 / 0.5428 | 0.1168 / 0.2235 | 0.0122 / 0.0174 | 0.0001 / 0.0002 | 0.5428 / 0.6031 | 0.0001 / 0.0004 | 0 / 0 | 0.0000 / 0.0000
F8 | 0.0045 / 0.0226 | 0.1267 / 0.2235 | 0.0919 / 0.1021 | 0.0013 / 0.0014 | 0.1029 / 0.1403 | 0.0002 / 0.0005 | 0.0000 / 0.0000 | 0.0000 / 0.0000
FC5 | 0.0023 / 0.0141 | 0.0340 / 0.0927 | 0.0008 / 0.0016 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0 / 0 | 0 / 0 | 0 / 0
FC1 | 0.0000 / 0.0006 | 0.0001 / 0.0024 | 0.0972 / 0.1041 | 0.0000 / 0.0000 | 0.9616 / 0.9616 | 0.0273 / 0.0479 | 0.0000 / 0.0000 | 0.0000 / 0.0000
FC2 | 0.0011 / 0.0083 | 0.3742 / 0.5103 | 0.0100 / 0.0166 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.2419 / 0.3455 | 0.0000 / 0.0000 | 0.0000 / 0.0000
FC6 | 0.8357 / 0.8645 | 0.0290 / 0.0871 | 0.0114 / 0.0174 | 0.0000 / 0.0000 | 0.0935 / 0.1335 | 0.0000 / 0.0000 | 0 / 0 | 0.0451 / 0.0521
T7 | 0.3426 / 0.5409 | 0.0002 / 0.0024 | 0.2743 / 0.2838 | 0.0374 / 0.0387 | 0.0000 / 0.0000 | 0 / 0 | 0 / 0 | 0 / 0
C3 | 0.2222 / 0.4821 | 0.0022 / 0.0124 | 0.0008 / 0.0016 | 0.0000 / 0.0000 | 0.8655 / 0.9274 | 0.0000 / 0.0000 | 0 / 0 | 0.0000 / 0.0000
Cz | 0.0001 / 0.0013 | 0.0003 / 0.0033 | 0.0006 / 0.0013 | 0.0000 / 0.0000 | 0.0002 / 0.0006 | 0.4244 / 0.5272 | 0.2304 / 0.2304 | 0.0000 / 0.0000
C4 | 0.5854 / 0.7318 | 0.0479 / 0.1105 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0016 / 0.0033 | 0.0100 / 0.0215 | 0 / 0 | 0 / 0
T8 | 0.0149 / 0.0561 | 0.0053 / 0.0227 | 0.0435 / 0.0548 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0 / 0 | 0 / 0 | 0 / 0
CP5 | 0.2105 / 0.4821 | 0.4508 / 0.5635 | 0.0000 / 0.0001 | 0.0000 / 0.0000 | 0.1523 / 0.1904 | 0.0002 / 0.0005 | 0.0000 / 0.0000 | 0.0000 / 0.0000
CP1 | 0.0237 / 0.0790 | 0.0418 / 0.1045 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0100 / 0.0167 | 0.4639 / 0.5310 | 0.0001 / 0.0001 | 0.7432 / 0.7689
CP2 | 0.5064 / 0.6605 | 0.1201 / 0.2235 | 0.0001 / 0.0002 | 0.0000 / 0.0000 | 0.0003 / 0.0007 | 0.9270 / 0.9270 | 0.0065 / 0.0075 | 0.9225 / 0.9225
CP6 | 0.6190 / 0.7427 | 0.6362 / 0.7213 | 0.0000 / 0.0001 | 0.0000 / 0.0000 | 0.9378 / 0.9616 | 0.0000 / 0.0001 | 0 / 0 | 0 / 0
P7 | 0.7833 / 0.8392 | 0.7867 / 0.8139 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0329 / 0.0519 | 0.0655 / 0.1034 | 0.0000 / 0.0000 | 0 / 0
P3 | 0.0978 / 0.2934 | 0.4475 / 0.5635 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0025 / 0.0047 | 0.1989 / 0.2983 | 0.0000 / 0.0000 | 0.0000 / 0.0000
Pz | 0.2862 / 0.4918 | 0.0224 / 0.0748 | 0.0001 / 0.0003 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.4393 / 0.5272 | 0.0084 / 0.0093 | 0.0261 / 0.0313
P4 | 0.6565 / 0.7575 | 0.2587 / 0.3881 | 0.0000 / 0.0001 | 0.0000 / 0.0000 | 0.0095 / 0.0167 | 0.8475 / 0.8767 | 0.0001 / 0.0001 | 0.0011 / 0.0013
P8 | 0.2353 / 0.4821 | 0.7622 / 0.8139 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0765 / 0.1148 | 0.0282 / 0.0479 | 0.0000 / 0.0000 | 0 / 0
POz | 0.2951 / 0.4918 | 0.9368 / 0.9368 | 0.0000 / 0.0001 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.3968 / 0.5175 | 0.0124 / 0.0133 | 0.0000 / 0.0000
O1 | 0.5023 / 0.6605 | 0.3561 / 0.5087 | 0.0020 / 0.0036 | 0.0000 / 0.0000 | 0.0010 / 0.0025 | 0.4779 / 0.5310 | 0.0000 / 0.0000 | 0.1372 / 0.1524
Oz | 0.1617 / 0.4410 | 0.1249 / 0.2235 | 0.0008 / 0.0016 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0287 / 0.0479 | 0.0758 / 0.0784 | 0.2377 / 0.2547
O2 | 0.3800 / 0.5428 | 0.6149 / 0.7213 | 0.0000 / 0.0001 | 0.0000 / 0.0000 | 0.0014 / 0.0032 | 0.8283 / 0.8767 | 0.0000 / 0.0000 | 0.0000 / 0.0000
Table 2. Uncorrected and FDR-corrected p-values estimated by Kruskal–Wallis test between EEG spectral power estimates for sound vs. conditioned no sound, eyes open, the same visual stimuli, and for all subjects. Each cell lists the uncorrected / FDR-corrected p-value.

Channel | δ | θ | α1 | α2 | β1 | β2 | γ1 | γ2
Fp1 | 0.9021 / 0.9871 | 0.7939 / 0.9759 | 0.0661 / 0.1652 | 0.2842 / 0.3875 | 0.6304 / 0.7004 | 0.0963 / 0.3611 | 0.5626 / 0.8623 | 0.7887 / 0.8268
Fpz | 0.9904 / 0.9904 | 0.9814 / 0.9814 | 0.0231 / 0.0891 | 0.1113 / 0.2154 | 0.9697 / 0.9697 | 0.5286 / 0.6099 | 0.7313 / 0.8623 | 0.6329 / 0.8268
Fp2 | 0.1985 / 0.7443 | 0.8801 / 0.9759 | 0.2270 / 0.3574 | 0.5905 / 0.6326 | 0.9217 / 0.9535 | 0.4752 / 0.5916 | 0.6682 / 0.8623 | 0.3924 / 0.8268
F7 | 0.4659 / 0.9435 | 0.3199 / 0.9759 | 0.1405 / 0.2479 | 0.0201 / 0.0837 | 0.0950 / 0.2611 | 0.4563 / 0.5916 | 0.5475 / 0.8623 | 0.5426 / 0.8268
F3 | 0.8944 / 0.9871 | 0.9434 / 0.9759 | 0.0004 / 0.0128 | 0.0003 / 0.0099 | 0.1303 / 0.2793 | 0.2337 / 0.4124 | 0.6013 / 0.8623 | 0.6836 / 0.8268
Fz | 0.1881 / 0.7443 | 0.1585 / 0.9759 | 0.0237 / 0.0891 | 0.0023 / 0.0233 | 0.1196 / 0.2760 | 0.0641 / 0.2748 | 0.6870 / 0.8623 | 0.7722 / 0.8268
F4 | 0.1894 / 0.7443 | 0.7095 / 0.9759 | 0.2502 / 0.3574 | 0.1149 / 0.2154 | 0.2193 / 0.3870 | 0.0036 / 0.0602 | 0.6703 / 0.8623 | 0.9569 / 0.9569
F8 | 0.3393 / 0.9254 | 0.2045 / 0.9759 | 0.5367 / 0.6193 | 0.2250 / 0.3215 | 0.0775 / 0.2584 | 0.1643 / 0.4124 | 0.3903 / 0.8623 | 0.7983 / 0.8268
FC5 | 0.9213 / 0.9871 | 0.9084 / 0.9759 | 0.3245 / 0.4067 | 0.0879 / 0.1905 | 0.3003 / 0.4336 | 0.4201 / 0.5728 | 0.5402 / 0.8623 | 0.4023 / 0.8268
FC1 | 0.6254 / 0.9772 | 0.3048 / 0.9759 | 0.0189 / 0.0891 | 0.0831 / 0.1905 | 0.2572 / 0.4287 | 0.0471 / 0.2748 | 0.1379 / 0.8623 | 0.6971 / 0.8268
FC2 | 0.0343 / 0.6351 | 0.6764 / 0.9759 | 0.2149 / 0.3574 | 0.1853 / 0.3089 | 0.0683 / 0.2584 | 0.0060 / 0.0602 | 0.6388 / 0.8623 | 0.1834 / 0.8268
FC6 | 0.7298 / 0.9772 | 0.8663 / 0.9759 | 0.6129 / 0.6687 | 0.1564 / 0.2760 | 0.0192 / 0.2435 | 0.1476 / 0.4124 | 0.6826 / 0.8623 | 0.4076 / 0.8268
T7 | 0.6644 / 0.9772 | 0.7666 / 0.9759 | 0.6344 / 0.6687 | 0.4782 / 0.5739 | 0.0372 / 0.2435 | 0.3126 / 0.4935 | 0.3392 / 0.8623 | 0.2212 / 0.8268
C3 | 0.4716 / 0.9435 | 0.3883 / 0.9759 | 0.0127 / 0.0891 | 0.0223 / 0.0837 | 0.4639 / 0.5567 | 0.2674 / 0.4456 | 0.0692 / 0.8623 | 0.2256 / 0.8268
Cz | 0.0492 / 0.6351 | 0.7319 / 0.9759 | 0.1348 / 0.2479 | 0.5559 / 0.6176 | 0.3035 / 0.4336 | 0.0060 / 0.0602 | 0.0635 / 0.8623 | 0.1191 / 0.8268
C4 | 0.7309 / 0.9772 | 0.9165 / 0.9759 | 0.6464 / 0.6687 | 0.6885 / 0.7122 | 0.5786 / 0.6676 | 0.4930 / 0.5916 | 0.1397 / 0.8623 | 0.3482 / 0.8268
T8 | 0.8601 / 0.9871 | 0.9177 / 0.9759 | 0.3389 / 0.4067 | 0.0267 / 0.0891 | 0.0297 / 0.2435 | 0.2248 / 0.4124 | 0.3480 / 0.8623 | 0.1246 / 0.8268
CP5 | 0.4717 / 0.9435 | 0.6704 / 0.9759 | 0.3260 / 0.4067 | 0.2188 / 0.3215 | 0.0240 / 0.2435 | 0.6506 / 0.6971 | 0.7008 / 0.8623 | 0.7588 / 0.8268
CP1 | 0.1384 / 0.7443 | 0.2975 / 0.9759 | 0.0298 / 0.0992 | 0.3127 / 0.4078 | 0.8363 / 0.8960 | 0.0581 / 0.2748 | 0.1823 / 0.8623 | 0.7985 / 0.8268
CP2 | 0.8376 / 0.9871 | 0.7219 / 0.9759 | 0.9232 / 0.9232 | 0.8882 / 0.8882 | 0.2881 / 0.4336 | 0.3765 / 0.5378 | 0.7761 / 0.8623 | 0.6595 / 0.8268
CP6 | 0.3011 / 0.9033 | 0.8478 / 0.9759 | 0.0609 / 0.1652 | 0.0557 / 0.1520 | 0.4328 / 0.5410 | 0.9427 / 0.9427 | 0.3417 / 0.8623 | 0.7993 / 0.8268
P7 | 0.6946 / 0.9772 | 0.6312 / 0.9759 | 0.0544 / 0.1632 | 0.0373 / 0.1120 | 0.1531 / 0.3063 | 0.8383 / 0.8672 | 0.7573 / 0.8623 | 0.5377 / 0.8268
P3 | 0.7492 / 0.9772 | 0.7663 / 0.9759 | 0.2686 / 0.3663 | 0.3594 / 0.4492 | 0.1141 / 0.2760 | 0.0395 / 0.2748 | 0.3442 / 0.8623 | 0.6926 / 0.8268
Pz | 0.0635 / 0.6351 | 0.4535 / 0.9759 | 0.1139 / 0.2300 | 0.1962 / 0.3098 | 0.3745 / 0.4885 | 0.2122 / 0.4124 | 0.5756 / 0.8623 | 0.3684 / 0.8268
P4 | 0.9675 / 0.9904 | 0.2880 / 0.9759 | 0.2486 / 0.3574 | 0.5552 / 0.6176 | 0.1978 / 0.3708 | 0.6114 / 0.6793 | 0.9477 / 0.9645 | 0.6768 / 0.8268
P8 | 0.5621 / 0.9772 | 0.0321 / 0.9618 | 0.0021 / 0.0311 | 0.0084 / 0.0628 | 0.0406 / 0.2435 | 0.2175 / 0.4124 | 0.3907 / 0.8623 | 0.5889 / 0.8268
POz | 0.7259 / 0.9772 | 0.3493 / 0.9759 | 0.1150 / 0.2300 | 0.0109 / 0.0631 | 0.0723 / 0.2584 | 0.2074 / 0.4124 | 0.9645 / 0.9645 | 0.4714 / 0.8268
O1 | 0.2438 / 0.8127 | 0.4682 / 0.9759 | 0.0990 / 0.2285 | 0.0889 / 0.1905 | 0.3685 / 0.4885 | 0.3415 / 0.5122 | 0.8091 / 0.8669 | 0.5132 / 0.8268
Oz | 0.4125 / 0.9435 | 0.6816 / 0.9759 | 0.0167 / 0.0891 | 0.0023 / 0.0233 | 0.0536 / 0.2584 | 0.1463 / 0.4124 | 0.7170 / 0.8623 | 0.6835 / 0.8268
O2 | 0.1743 / 0.7443 | 0.3695 / 0.9759 | 0.0168 / 0.0891 | 0.0126 / 0.0631 | 0.0957 / 0.2611 | 0.1651 / 0.4124 | 0.5875 / 0.8623 | 0.3707 / 0.8268
Table 3. Uncorrected and FDR-corrected p-values estimated by Kruskal–Wallis test between EEG spectral power estimates for no sound (baseline) vs. conditioned no sound, eyes open, the same visual stimuli, and for all subjects. Each cell lists the uncorrected / FDR-corrected p-value.

Channel | δ | θ | α1 | α2 | β1 | β2 | γ1 | γ2
Fp1 | 0.7518 / 0.7777 | 0.5962 / 0.7452 | 0.6026 / 0.7500 | 0.2853 / 0.3170 | 0.0000 / 0.0001 | 0.0000 / 0.0002 | 0.0000 / 0.0000 | 0.0000 / 0.0000
Fpz | 0.4150 / 0.7324 | 0.3893 / 0.5562 | 0.8062 / 0.8340 | 0.1937 / 0.2324 | 0.2498 / 0.3747 | 0.0185 / 0.0308 | 0.0006 / 0.0008 | 0.0000 / 0.0000
Fp2 | 0.3148 / 0.6658 | 0.0100 / 0.0748 | 0.7995 / 0.8340 | 0.2650 / 0.3058 | 0.0236 / 0.0788 | 0.0001 / 0.0003 | 0.0000 / 0.0000 | 0.0008 / 0.0013
F7 | 0.0202 / 0.2038 | 0.1082 / 0.2497 | 0.8348 / 0.8348 | 0.4844 / 0.4844 | 0.0534 / 0.1071 | 0.0034 / 0.0073 | 0.0000 / 0.0000 | 0.0000 / 0.0000
F3 | 0.4433 / 0.7388 | 0.0335 / 0.1437 | 0.3667 / 0.5790 | 0.3652 / 0.3841 | 0.0595 / 0.1116 | 0.0001 / 0.0003 | 0 / 0 | 0.0000 / 0.0000
Fz | 0.0771 / 0.3371 | 0.3649 / 0.5562 | 0.7908 / 0.8340 | 0.0793 / 0.1082 | 0.1162 / 0.1835 | 0.5902 / 0.6796 | 0.0150 / 0.0180 | 0.0072 / 0.0098
F4 | 0.1011 / 0.3371 | 0.4122 / 0.5621 | 0.3417 / 0.5757 | 0.1068 / 0.1393 | 0.6314 / 0.6764 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0000
F8 | 0.1565 / 0.4031 | 0.0403 / 0.1458 | 0.4129 / 0.6193 | 0.1412 / 0.1766 | 0.0112 / 0.0478 | 0.0001 / 0.0003 | 0.0000 / 0.0000 | 0.0004 / 0.0006
FC5 | 0.0204 / 0.2038 | 0.0956 / 0.2391 | 0.0860 / 0.2644 | 0.0495 / 0.0742 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0 / 0 | 0.0000 / 0.0000
FC1 | 0.0057 / 0.1718 | 0.0472 / 0.1458 | 0.4567 / 0.6227 | 0.0696 / 0.0995 | 0.3644 / 0.4555 | 0.0010 / 0.0027 | 0.0000 / 0.0000 | 0.0000 / 0.0000
FC2 | 0.3788 / 0.7103 | 0.7338 / 0.8467 | 0.3454 / 0.5757 | 0.0074 / 0.0138 | 0.0189 / 0.0708 | 0.1814 / 0.2591 | 0.0000 / 0.0000 | 0.0220 / 0.0275
FC6 | 0.9564 / 0.9564 | 0.0833 / 0.2271 | 0.1887 / 0.3774 | 0.0205 / 0.0342 | 0.0030 / 0.0150 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.2695 / 0.3234
T7 | 0.6823 / 0.7777 | 0.0022 / 0.0345 | 0.6250 / 0.7500 | 0.3713 / 0.3841 | 0.0000 / 0.0000 | 0 / 0 | 0 / 0 | 0 / 0
C3 | 0.1612 / 0.4031 | 0.0034 / 0.0345 | 0.6632 / 0.7653 | 0.0177 / 0.0312 | 0.5033 / 0.5807 | 0.0000 / 0.0000 | 0.0000 / 0.0000 | 0.0000 / 0.0001
Cz | 0.1144 / 0.3433 | 0.0029 / 0.0345 | 0.1613 / 0.3723 | 0.0001 / 0.0004 | 0.0473 / 0.1071 | 0.1089 / 0.1634 | 0.4792 / 0.4792 | 0.0209 / 0.0272
C4 | 0.5318 / 0.7777 | 0.1313 / 0.2814 | 0.0002 / 0.0056 | 0.0000 / 0.0000 | 0.0498 / 0.1071 | 0.0182 / 0.0308 | 0.0000 / 0.0000 | 0.0000 / 0.0000
T8 | 0.0520 / 0.3117 | 0.0486 / 0.1458 | 0.4564 / 0.6227 | 0.0423 / 0.0668 | 0.0000 / 0.0000 | 0 / 0 | 0 / 0 | 0 / 0
CP5 | 0.6456 / 0.7777 | 0.3877 / 0.5562 | 0.0111 / 0.0836 | 0.0004 / 0.0013 | 0.4681 / 0.5618 | 0.0024 / 0.0055 | 0.0000 / 0.0000 | 0.0000 / 0.0000
CP1 | 0.5201 / 0.7777 | 0.0232 / 0.1185 | 0.0249 / 0.1127 | 0.0000 / 0.0002 | 0.0734 / 0.1224 | 0.0446 / 0.0704 | 0.0002 / 0.0003 | 0.9644 / 0.9644
CP2 | 0.7405 / 0.7777 | 0.1606 / 0.2865 | 0.0040 / 0.0537 | 0.0000 / 0.0000 | 0.0535 / 0.1071 | 0.5356 / 0.6427 | 0.0260 / 0.0289 | 0.7169 / 0.7416
CP6 | 0.7331 / 0.7777 | 0.8117 / 0.9019 | 0.0881 / 0.2644 | 0.0023 / 0.0063 | 0.6098 / 0.6764 | 0.0011 / 0.0027 | 0 / 0 | 0.0000 / 0.0000
P7 | 0.6328 / 0.7777 | 0.8791 / 0.9419 | 0.0216 / 0.1127 | 0.0054 / 0.0111 | 0.6636 / 0.6865 | 0.2182 / 0.2847 | 0.0000 / 0.0000 | 0.0000 / 0.0000
P3 | 0.3329 / 0.6658 | 0.7296 / 0.8467 | 0.0054 / 0.0537 | 0.0003 / 0.0011 | 0.2889 / 0.3940 | 0.0132 / 0.0251 | 0.0000 / 0.0000 | 0.0003 / 0.0005
Pz | 0.0342 / 0.2565 | 0.0237 / 0.1185 | 0.1039 / 0.2739 | 0.0000 / 0.0003 | 0.0020 / 0.0122 | 0.7240 / 0.7490 | 0.0199 / 0.0229 | 0.3028 / 0.3494
P4 | 0.7107 / 0.7777 | 0.9858 / 0.9858 | 0.0263 / 0.1127 | 0.0000 / 0.0000 | 0.3412 / 0.4450 | 0.6116 / 0.6796 | 0.0028 / 0.0035 | 0.0061 / 0.0087
P8 | 0.1860 / 0.4292 | 0.1624 / 0.2865 | 0.1096 / 0.2739 | 0.0004 / 0.0013 | 0.8447 / 0.8447 | 0.0134 / 0.0251 | 0.0000 / 0.0000 | 0 / 0
POz | 0.6183 / 0.7777 | 0.5140 / 0.6704 | 0.0514 / 0.1928 | 0.0055 / 0.0111 | 0.0353 / 0.1058 | 0.7950 / 0.7950 | 0.0588 / 0.0630 | 0.0000 / 0.0000
O1 | 0.7251 / 0.7777 | 0.9396 / 0.9720 | 0.3060 / 0.5738 | 0.0017 / 0.0051 | 0.0663 / 0.1169 | 0.2154 / 0.2847 | 0.0001 / 0.0002 | 0.5592 / 0.6213
Oz | 0.0969 / 0.3371 | 0.1574 / 0.2865 | 0.6046 / 0.7500 | 0.0054 / 0.0111 | 0.0526 / 0.1071 | 0.6713 / 0.7192 | 0.3047 / 0.3152 | 0.5864 / 0.6283
O2 | 0.1007 / 0.3371 | 0.3297 / 0.5496 | 0.1811 / 0.3774 | 0.0036 / 0.0091 | 0.2675 / 0.3821 | 0.4113 / 0.5141 | 0.0006 / 0.0008 | 0.0000 / 0.0000
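The per-channel, per-rhythm comparisons summarized in Tables 1–3 can be outlined in code. This is a minimal sketch with synthetic data standing in for the per-epoch spectral power estimates: a Kruskal–Wallis test per channel, followed by the Benjamini–Hochberg FDR procedure [30] implemented directly.

```python
# Sketch of the statistical pipeline behind Tables 1-3: one Kruskal-Wallis
# test per channel for a given rhythm, then Benjamini-Hochberg FDR correction.
# The data below are synthetic; real inputs would be EEG power estimates.
import numpy as np
from scipy.stats import kruskal

def fdr_bh(pvals):
    """Benjamini-Hochberg step-up procedure; returns adjusted p-values."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity by taking the running minimum from the largest p down
    adj = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(adj, 0.0, 1.0)
    return out

rng = np.random.default_rng(1)
n_tests = 30  # e.g. one test per EEG channel for one rhythm
raw_p = []
for _ in range(n_tests):
    a = rng.normal(0.0, 1.0, 60)  # power estimates, condition A ("no sound")
    b = rng.normal(0.3, 1.0, 60)  # condition B ("conditioned no sound")
    raw_p.append(kruskal(a, b).pvalue)

adj_p = fdr_bh(raw_p)
print(int(np.sum(adj_p < 0.05)), "of", n_tests, "tests significant after FDR")
```

Pairing each uncorrected p-value with its adjusted counterpart reproduces the two-value cells used in the tables above.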
Table 4. Percentages of correct detections (and corresponding k-values) of Class III for all subjects and EEG rhythms.

Rhythm | δ | θ | α1 | α2 | β1 | β2 | γ1 | γ2
Subj. 1 | 53.3 (56) | 93.3 (141) | 33.3 (15) | 26.7 (4) | 43.3 (21) | 23.3 (9) | 80.0 (105) | 34.0 (15)
Subj. 2 | 45.0 (11) | 60.0 (81) | 83.3 (79) | 36.7 (10) | 90.0 (104) | 95.0 (97) | 96.7 (13) | 80.0 (106)
Subj. 3 | 93.3 (115) | 48.3 (4) | 73.3 (144) | 46.7 (6) | 75.0 (13) | 91.7 (13) | 95.0 (120) | 80.0 (8)
Subj. 4 | 45.0 (32) | 93.3 (150) | 31.7 (9) | 35.0 (7) | 45.0 (17) | 53.3 (4) | 56.7 (5) | 60.0 (5)
Subj. 5 | 83.3 (140) | 30.0 (19) | 26.7 (13) | 30.0 (18) | 25.0 (4) | 75.0 (66) | 40.0 (7) | 91.7 (121)
Subj. 6 | 26.7 (4) | 48.3 (4) | 25.0 (5) | 35.0 (19) | 36.7 (12) | 41.7 (5) | 55.0 (9) | 76.7 (150)
Subj. 7 | 36.7 (6) | 41.7 (6) | 88.3 (77) | 85.0 (57) | 100.0 (113) | 95.0 (81) | 93.3 (97) | 66.7 (7)
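The subject-specific k-values reported in Table 4 suggest a per-subject sweep over k. The sketch below illustrates one way such a sweep could be run, with a hand-rolled Euclidean kNN and synthetic features; the split, class labels, and feature dimensions are illustrative assumptions, not the study's actual protocol.

```python
# Sketch: sweeping k to maximize detection of Class III (conditioned no sound)
# on held-out data, analogous to the per-subject optimal k in Table 4.
# Synthetic three-class features; all specifics here are illustrative.
import numpy as np

def knn_predict(train_X, train_y, test_X, k):
    """Minimal Euclidean-distance kNN with majority vote."""
    d = np.linalg.norm(train_X[None, :, :] - test_X[:, None, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]   # k closest training samples
    votes = train_y[nearest]
    return np.array([np.bincount(v).argmax() for v in votes])

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(m, 1.0, size=(40, 8)) for m in (0.0, 0.6, 1.2)])
y = np.repeat([0, 1, 2], 40)
idx = rng.permutation(len(y))
tr, te = idx[: len(y) // 2], idx[len(y) // 2 :]

best_k, best_acc = None, -1.0
for k in range(1, 31, 2):       # odd k values reduce voting ties
    pred = knn_predict(X[tr], y[tr], X[te], k)
    mask = y[te] == 2           # score only Class III detections
    acc = float(np.mean(pred[mask] == 2))
    if acc > best_acc:
        best_k, best_acc = k, acc
print(f"best k = {best_k}, Class III detection = {best_acc:.1%}")
```

In practice the winning k (and rhythm) would be selected on a validation split rather than the test set, to avoid the optimistic bias of reporting the best post hoc configuration.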
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Tcheslavski, G.; Felipe, L.; Ali, K. Inducing and Detecting Hallucination-like Auditory Experiences in Healthy Subjects via Conditioning and Electroencephalogram Analysis: A Proof of Concept. Electronics 2026, 15, 931. https://doi.org/10.3390/electronics15050931


