Frequency and Time Domain Analysis of EEG Based Auditory Evoked Potentials to Detect Binaural Hearing in Noise

Hearing loss is a prevalent health issue that affects individuals worldwide. Binaural hearing refers to the ability to integrate information received simultaneously from both ears, allowing individuals to identify, locate, and separate sound sources. Auditory evoked potentials (AEPs) refer to the electrical responses that are generated within any part of the auditory system in response to auditory stimuli presented externally. Electroencephalography (EEG) is a non-invasive technology used for the monitoring of AEPs. This research aims to investigate the use of audiometric EEGs as an objective method to detect specific features of binaural hearing with frequency and time domain analysis techniques. Thirty-five subjects with normal hearing and a mean age of 27.35 years participated in the research. The stimuli used in the current study were designed to investigate the impact of binaural phase shifts of the auditory stimuli in the presence of noise. The frequency domain and time domain analyses provided statistically significant and promising novel findings. The study utilized Blackman windowed 18 ms and 48 ms pure tones as stimuli, embedded in noise maskers, of frequencies 125 Hz, 250 Hz, 500 Hz, 750 Hz, and 1000 Hz in homophasic (the same phase in both ears) and antiphasic (180-degree phase difference between the two ears) conditions. The study focuses on the effect of phase reversal of auditory stimuli in noise on the middle latency response (MLR) and late latency response (LLR) regions of the AEPs. The frequency domain analysis revealed a significant difference in the frequency bands of 20 to 25 Hz and 25 to 30 Hz when elicited by antiphasic and homophasic stimuli of 500 Hz for MLRs and 500 Hz and 250 Hz for LLRs. The time domain analysis identified the Na peak of the MLR for 500 Hz, the N1 peak of the LLR for 500 Hz stimuli and the P300 peak of the LLR for 250 Hz as significant potential markers in detecting binaural processing in the brain.


Background
Hearing loss is a prevalent health issue that affects individuals from various cultures, races, and age groups. The negative impact on a person's quality of life can be considerable, particularly if the condition goes undiagnosed [1]. For children, undetected hearing loss can impede development, and early identification and intervention are critical for them to acquire essential skills at a similar pace to their peers. Thus, it is imperative to detect any form of hearing loss as early as possible [2]. Research has revealed that hearing loss can range from mild to profound, and that a broad spectrum of contributing factors exists [3][4][5][6].
Hearing loss can be categorized as either sensorineural or conductive: sensorineural hearing loss affects the inner ear and nervous system, while conductive hearing loss can be caused by malformations or diseases of the outer ear, the ear canal, or the middle ear structure. Both sensorineural and conductive hearing loss can interfere with cognitive development, disrupting auditory processing and the development of binaural hearing [7,8].
The use of AEPs to investigate various hearing disorders has been explored [31,32]. AEPs have also been used to investigate the effects of different types of noise on auditory processing and to evaluate the neural mechanisms underlying sensory integration. Overall, AEPs are a valuable tool for evaluating auditory function and diagnosing hearing disorders [33].
The ABR, MLR, and LLR are the major AEP components, each reflecting different parts of the auditory pathway's electrical activity. AEPs have the potential to provide important information for the management and treatment of hearing disorders, as well as to advance our understanding of the neural mechanisms underlying auditory perception and cognition [34][35][36]. Further research is needed to fully understand the neural mechanisms underlying AEPs and their clinical and scientific implications.
The use of auditory evoked potentials (AEPs) in investigating binaural hearing has led to the identification of MLR and LLR as two main components of interest. Research related to the MLR has primarily focused on its sensitivity to binaural processing. Studies have shown that the MLR is modulated by differences in interaural time and intensity cues, which are important for sound localization and binaural fusion [29]. Furthermore, the MLR has been shown to be associated with the perceptual grouping of sounds, such as the segregation of speech from background noise [37,38]. However, the literature on the MLR in binaural hearing is limited, and more research is required to comprehend its role in binaural hearing. The LLR has also been studied in the context of binaural hearing [39]. Meanwhile, studies have shown that the LLR is modulated by binaural cues, including interaural time and intensity differences, and is sensitive to the spatial location of sounds [40], suggesting its potential role in auditory scene analysis through its sensitivity to changes in binaural cues over time [41]. However, the literature on the LLR in binaural hearing is still limited, demanding further research to fully understand its contribution to binaural processing.
The literature on the EEG and AEPs is diverse, but there is a lack of relevant literature regarding binaural hearing assessment and testing stimuli for AEP responses, indicating an avenue for future studies. A review of the literature reveals a gap in knowledge regarding the use of the EEG for binaural assessment. Specifically, research pertaining to the Auditory MLR and LLR indicates a need for further investigation and contribution to the existing body of knowledge. This study aims to fill this gap by exploring the potential of AEPs for binaural hearing assessment through MLR and LLR analysis.
ERP signals from the brain can be analysed in different ways, including time domain analysis by examining AEPs, or frequency domain analysis using methods such as the Fast Fourier Transform (FFT) or Welch's periodogram (Pwelch) [42][43][44]. The Pwelch method, a spectral decomposition technique, calculates the power spectral density (PSD) of EEG data, providing valuable information about the spectral content and power across different frequency bands. It reduces variance and allows for high accuracy and resolution in PSD estimation, making it useful in ERP data analysis with low signal-to-noise ratios [45]. The Pwelch method can handle signals with non-uniform sampling rates and offers insights into the underlying neural mechanisms of cognitive and sensory processing [44]. AEPs can be analysed in the time domain through averaging techniques, allowing for the examination of peak amplitudes, latencies, and interpeak latency differences under various conditions [46,47]. Overall, AEP analysis in the time domain and the Pwelch method in the frequency domain are essential in understanding brain function and neurological disorders.
This study aims to investigate the use of audiometric EEGs as an objective method to detect binaural hearing. Limited research on the MLR and LLR with binaural hearing stimuli creates an opportunity for novel contributions to the existing knowledge. For this study, the BMLD (Binaural Masking Level Difference) test, which has been recommended for measuring binaural hearing loss, has been taken as a starting point. The stimuli used in the BMLD trials are employed in the current Audiometric EEG study to investigate the impact of binaural phase shifts of the auditory stimuli in the presence of noise.

Materials and Methods
This study was conducted at Charles Darwin University in Australia, and the experimental protocols were approved by the Human Ethics Committee of the University, as outlined in H18014-Detecting Binaural Processing in the Audiometric EEG. Prior to the experiment, written consent was obtained from all volunteers indicating their willingness to participate in the various hearing tests, including EEG measurements. A plain language statement (PLS) was provided outlining the details of the study and giving an overview of the experimental process. A questionnaire was completed by each subject to ensure a healthy otological history. The experimental setup and hardware were the same as explained in [20].

Participants
The sample for the present study included thirty-five participants (23 males and 12 females) between the ages of 18 and 33 years (mean age 27.36), with hearing threshold values between 0 and 20 dB hearing level (HL) in both ears, as confirmed by pure-tone audiometry. Pure-tone audiometry was conducted initially to measure the hearing threshold levels of the participants at different frequencies, in accordance with the relevant Australian Standards, and to determine whether they were acceptable [20]. All thirty-five selected subjects had normal audiograms, with hearing threshold values between 0 and 20 dB HL over the frequency range of 125 Hz to 8 kHz [20].
Participants were also required to read and understand the PLS. Additionally, they were asked to complete a questionnaire to ensure no noticeable otological issues had been detected in the past or present. Exclusion criteria included participants who were under 18, had severe hearing loss conditions or cognitive impairment, had a cochlear implant or other implantable hearing device, or had severe or debilitating tinnitus. Additionally, pregnant women were excluded due to potential risk to the foetus. The inclusion and exclusion criteria were carefully considered to ensure that the trial results were valid and could be applied to the target population. The subjects were prepared for the experimental process as explained in the preliminary study conducted [20].

Auditory Stimuli
The stimuli used in the study were Blackman windowed pure tones of different frequencies: 125 Hz, 250 Hz, 500 Hz, 750 Hz and 1000 Hz. Blackman windowing was performed to ensure a smooth acoustic transition at the start and the end of the stimulus, and thus to reduce spectral splatter of the signal. The tones were embedded in a 10 Hz bandwidth Gaussian noise masker throughout. The centre frequency of the masker and the frequency of the tone were the same for these trials. The stimuli were 518 ms and 548 ms in duration, generated in MATLAB R2017b with a signal of 18 ms and 48 ms, respectively. The durations were chosen to correspond to the duration of signals that can be used for the generation of AEPs: 18 ms for the middle latency response (MLR) and 48 ms for the late latency response (LLR). The level of the masker was set at 20 dB whereas the tone was at 40 dB. The sampling frequency was set to 19.2 kHz. The generated stimuli of 18 ms and 48 ms, for the MLR and LLR respectively, are illustrated in Figures 1 and 2. The stimuli were presented in a predefined sequence: blocks of 25 homophasic stimuli followed by 25 antiphasic stimuli. A total of 1000 trials were carried out per subject, resulting in the generation of 500 antiphasic and 500 homophasic ERPs for each subject. The total time for 1000 trials was 8.633 min and 9.133 min for the 18 ms and 48 ms stimuli, respectively. On average, healthy adults have an attention span in the range of 10 to 20 min, so a short trial reduces the risk of hearing fatigue and adaptation during the experimental trial [48]. Subjects were asked to relax before and between the experiments, in order to ensure quality data acquisition. The stimuli were delivered to the ER-2 insert earphones via an external sound card (Creative Sound Blaster Omni Surround 5.1) at a 60 dB sound pressure level. The pure tone stimuli consisted of a sinusoidal signal, So, and its opposite signal, Spi [20].
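The stimulus construction described above (a Blackman-windowed tone summed into a narrowband Gaussian noise masker, with the tone polarity inverted for the antiphasic condition) can be sketched as follows. The study generated its stimuli in MATLAB; this is an illustrative Python equivalent, and details such as the tone's position within the masker and the relative levels are simplifying assumptions, not taken from the original code.

```python
import numpy as np

def make_stimulus(tone_hz, tone_ms, fs=19200, antiphasic=False,
                  noise_bw_hz=10.0, seed=0):
    """Blackman-windowed tone embedded in a narrowband Gaussian noise
    masker centred on the tone frequency. Returns (left, right).
    Tone placement and relative levels are illustrative assumptions."""
    total_ms = tone_ms + 500                  # 18 ms -> 518 ms, 48 ms -> 548 ms
    n_total = int(fs * total_ms / 1000)
    n_tone = int(fs * tone_ms / 1000)
    t = np.arange(n_tone) / fs
    tone = np.sin(2 * np.pi * tone_hz * t) * np.blackman(n_tone)

    # Narrowband Gaussian noise: keep only FFT bins within +/- bw/2 of
    # the centre frequency, then transform back to the time domain.
    rng = np.random.default_rng(seed)
    spec = np.fft.rfft(rng.standard_normal(n_total))
    freqs = np.fft.rfftfreq(n_total, 1 / fs)
    spec[np.abs(freqs - tone_hz) > noise_bw_hz / 2] = 0
    noise = np.fft.irfft(spec, n_total)
    noise /= np.max(np.abs(noise)) + 1e-12    # normalize masker level

    left, right = noise.copy(), noise.copy()
    start = (n_total - n_tone) // 2           # tone position: assumption
    sign = -1.0 if antiphasic else 1.0        # So vs Spi (180-degree flip)
    left[start:start + n_tone] += tone
    right[start:start + n_tone] += sign * tone
    return left, right
```

In the homophasic condition the two channels are identical; in the antiphasic condition they differ only within the tone window, where the tone polarity is flipped in one ear.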

Electrode Placement
Electrical activity in response to auditory stimuli was recorded from 12 electrode sites on the scalp. The arrangement of specific electrode sites used for the recording of auditory evoked potentials (AEPs) is shown in Figure 3 [49,50]. The reference electrode was positioned on the left earlobe, while the ground electrode was placed in the lower position on the forehead (FPz), following the 10-20 electrode placement system [51,52].

Signal Processing and Analysis
After the data acquisition process was completed, data processing and analysis were carried out offline. Figure 4 illustrates the workflow followed for data processing and analysis. The data were imported into MATLAB as an array containing the captured EEG channels, the trigger channels and the stimulus channels. Further processing of the signals was carried out in MATLAB R2017b using the EEGLAB v2019.1 toolbox. The sampling rate for EEG data acquisition was 19.2 kHz, giving a Nyquist frequency of 9.6 kHz, much higher than required for analysing the frequency ranges in human AEPs. In the pre-processing stage, the EEG data were therefore downsampled to 2048 Hz [53]. Downsampling ensures that the filtering process in the pre-processing stage is computationally efficient and improves the roll-off ability of the filters. The next step in data pre-processing was to extract the accurate trigger times by removing false triggers and checking that 1000 triggers were found in total. Trigger detection was carried out by checking the time between two triggers (0.518 s or 0.548 s); if a shorter delay was found, the corresponding trigger was rejected. The trigger channel data captured by the amplifier were used to synchronize the averaging process. The 18 ms input stimuli, masked in noise in both the homophasic and the antiphasic conditions, were used to evoke the middle latency response (MLR), and the 48 ms input stimuli, masked in noise in both conditions, were used to evoke the late latency response (LLR). As shown in Figure 5, the actual triggers were detected by analysing the right ear stimuli (Stim-R) and left ear stimuli (Stim-L) together with the trigger timing. Figure 5a,b shows that the triggers at the start of each stimulus were detected accurately.
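The false-trigger rejection step (discarding any trigger that follows its predecessor by less than the stimulus duration, then checking the total count) can be sketched as a single pass over the detected trigger times. Function and variable names here are illustrative, not taken from the study's code.

```python
import numpy as np

def clean_triggers(trigger_times, min_interval, expected=1000):
    """Keep only triggers separated from the previously accepted trigger
    by at least min_interval (0.518 s or 0.548 s in this study), then
    check that the expected number of trials (1000) was found."""
    accepted = []
    for t in trigger_times:
        if not accepted or t - accepted[-1] >= min_interval:
            accepted.append(t)
    if len(accepted) != expected:
        print(f"warning: found {len(accepted)} triggers, expected {expected}")
    return np.asarray(accepted)
```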
Once the triggers were detected, artefacts and noise were removed from the EEG signals to obtain clear evoked responses for further processing and analysis [54]. The downsampled data were then filtered using an FIR filter with a Hamming window. A low cut-off frequency of 1 Hz was applied to remove slow drift noise and DC components from the signal. The next stage of data processing was epoch generation. The start of each epoch was determined using the trigger signal, which was delivered synchronously with the stimulus [20], and the duration of each epoch was based on the duration of the signal and the interval between the signals. Using the trigger signal, the responses to homophasic and antiphasic stimuli were identified, and the trial was cut into suitable time frames for analysis; the evoked potentials were thus split into epochs (pre-defined short durations of time) [55]. In the present study, the time windows of interest are the MLR, which ranges from 20 ms to 100 ms, and the LLR, from 50 ms to 500 ms after the start of the input stimuli. Epochs with amplitudes larger than 150 µV were rejected. Baseline correction was conducted for the remaining trials [56].
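The filtering, epoching, rejection and baseline steps above can be sketched for a single channel as follows. This is a hedged Python approximation of the MATLAB/EEGLAB pipeline; the filter length and the use of the whole-epoch mean for baseline correction are simplifying assumptions.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def epoch_eeg(eeg, fs, triggers, t_start, t_end, reject=150.0):
    """High-pass filter at 1 Hz (FIR, Hamming window), cut epochs around
    each trigger sample, reject epochs exceeding the amplitude threshold
    (in microvolts), and baseline-correct the survivors."""
    numtaps = int(fs) | 1                        # ~1 s odd-length filter: assumption
    hp = firwin(numtaps, 1.0, fs=fs, pass_zero=False, window="hamming")
    x = filtfilt(hp, [1.0], eeg)                 # zero-phase filtering

    n0, n1 = int(t_start * fs), int(t_end * fs)
    epochs = []
    for trig in triggers:
        seg = x[trig + n0 : trig + n1]
        if len(seg) != n1 - n0 or np.max(np.abs(seg)) > reject:
            continue                             # incomplete or artefactual epoch
        epochs.append(seg - seg.mean())          # simplified baseline correction
    return np.array(epochs)
```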
The remaining epochs were analysed in the time domain and the frequency domain. As an initial step for the time domain analysis, epoch averaging was carried out separately for each of the twelve electrode locations, and the averaged AEP signals from the accepted trials were then further analysed in the time domain. Frequency domain analysis, in contrast, was first carried out on individual epochs, with averaging performed at a later stage.


Frequency Domain Analysis
Frequency domain analysis is a widely used method for EEG analysis, and it is often conducted first since it is a general and common way to understand the EEG data as a whole [57,58]. It is regarded as a powerful and standard method for EEG analysis compared to other methods [57]. It gives insight into the information contained in the frequency domain of EEG waveforms by adopting statistical and Fourier transform methods [57]. Power spectral analysis is the most used spectral method, since the power spectrum reflects the 'frequency content' of the signal, i.e., the distribution of signal power over frequency [57]. Frequency-domain feature analysis observes the frequency spectrum of an EEG signal of a certain length, from which the distribution of power across frequencies can be obtained [59]. Hence, frequency domain analysis was used in the analysis of the electroencephalogram (EEG), particularly in the context of auditory responses. By transforming time-domain signals into the frequency domain, it may be possible to extract information about the underlying neural processes that generate the signals in response to auditory stimuli [44,60]. In EEG signal analysis, frequency domain analysis typically involves the use of Fourier transform techniques such as the Fast Fourier Transform (FFT), or modified methods such as Welch's periodogram (Pwelch), to estimate the spectral components of recorded EEG signals [45]. This technique provides valuable insights into the frequency components of EEG waveforms and is based on the mathematical principles of Fourier transforms and statistical analysis [42].
The Pwelch method is a common method of spectral decomposition of EEG signals used in ERP data analysis. The PSD calculated by the Pwelch method is computed by dividing the time signal into successive blocks, forming the periodogram for each block, and then averaging the periodogram results.
The epoched AEP data were then transformed into power spectral density (PSD) estimates using Welch's periodogram method with a Hanning window. The Welch periodogram is calculated using Equation (1):

P(f) = (1/K) * sum over k = 1..K of (1/(M*U)) * | sum over n = 0..M-1 of w(n) x_k(n) e^(-j*2*pi*f*n) |^2, (1)

where x_k(n) denotes the k-th of K data blocks of length M, U = (1/M) * sum over n of w(n)^2 is the window power normalization, and w(n) is the Hanning window function, defined as:

w(n) = 0.5 * (1 - cos(2*pi*n/(M - 1))), for 0 <= n <= M - 1. (2)

Welch's method was applied to the MLR and LLR epochs of the EEG signals for each electrode. In other words, Welch's calculation was conducted on all the extracted epochs of MLR and LLR data for each channel and each subject. Welch's method generated PSD values for every calculated frequency on the signal of each epoch. The PSDs from all trials were then averaged. Once the averaged PSD of every subject was calculated, further analysis was conducted on more specific frequency bands; in the current study, the frequency range of 15-50 Hz was the objective [20,50]. Figure 6a,b represent mean PSD values for every EEG channel for one subject in the homophasic and antiphasic conditions, respectively.
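In Python terms, the per-epoch Welch estimate, subsequent averaging, and band-power extraction could be sketched as below. The study used MATLAB's pwelch; `scipy.signal.welch` is the analogous routine, and the segment length is an illustrative choice rather than the study's setting.

```python
import numpy as np
from scipy.signal import welch

def mean_psd(epochs, fs, nperseg=256):
    """Welch PSD (Hann window) computed per epoch, then averaged across
    epochs, mirroring the per-epoch-then-average procedure in the text."""
    psds = []
    for ep in epochs:
        f, p = welch(ep, fs=fs, window="hann", nperseg=min(nperseg, len(ep)))
        psds.append(p)
    return f, np.mean(psds, axis=0)

def band_power(f, psd, lo, hi):
    """Total power in a frequency band, e.g. the 20-25 Hz band of interest."""
    mask = (f >= lo) & (f < hi)
    return np.sum(psd[mask]) * (f[1] - f[0])
```

With the mean PSD in hand, the 15-50 Hz range can be split into 5 Hz bands by repeated calls to `band_power`.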
The mean PSD values were then analysed using a statistical method to evaluate whether the PSD values were normally distributed. For this process, the Shapiro-Wilk test with alpha = 0.05 was employed to evaluate normality of the distribution among subjects for the same channel and frequency band. The results show that the data in general have p values less than 0.05, which means that the PSD value distribution among the subjects for each channel and frequency band is not normal. Figures 7 and 8 show the MLR and LLR plots in response to the 500 Hz stimulus, for seven different bands for the Cz channel. Since parametric tests require normal distribution of data, a non-parametric test was chosen as the alternative for further evaluation. The Wilcoxon signed-rank test was chosen to evaluate the significance of the differences between responses to the antiphasic and homophasic stimulus conditions among subjects for the same channel and frequency band. The Wilcoxon signed-rank test uses two matched samples, comparing their rank within the population. It is known to be more robust for a small sample size, usually under 50 samples. The process uses an alpha value as an indicator to accept or reject the null hypothesis that there is no significant difference between the homophasic and the antiphasic conditions. The most common alpha value for the Wilcoxon test is 0.05, which is also used in this study. If the p-value is less than 0.05, the null hypothesis is rejected, which means that there is a significant difference between the homophasic and the antiphasic conditions. If the p-value is more than 0.05, the null hypothesis cannot be rejected, which means that there is no significant difference between the homophasic and the antiphasic condition. Tables 1-10 show the Wilcoxon signed-rank significance test results for MLR and LLR data for five different frequencies for all twelve channels. 
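The normality check followed by the matched-pairs comparison can be sketched with `scipy.stats`; the two input arrays stand for one channel and frequency band's per-subject PSD values under each condition (names are illustrative).

```python
import numpy as np
from scipy.stats import shapiro, wilcoxon

def compare_conditions(homo, anti, alpha=0.05):
    """Shapiro-Wilk normality check on each condition's per-subject
    values, then a Wilcoxon signed-rank test on the matched pairs.
    Returns the Wilcoxon p-value; p < alpha indicates a significant
    homophasic vs antiphasic difference for this channel and band."""
    normal = all(shapiro(x).pvalue >= alpha for x in (homo, anti))
    if normal:
        print("note: data look normal; a paired t-test would also be valid")
    _, p = wilcoxon(homo, anti)
    return p
```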
The results of the statistical analysis from Tables 1-10 indicate that there exists a significant difference between antiphasic and homophasic PSD data among subjects for the midline channels of the MLR for 500 Hz stimuli.
The difference is significant for the 20 Hz to 25 Hz and 25 Hz to 30 Hz frequency bands. The LLR shows significant differences between the homophasic and antiphasic conditions in various channels for the 500 Hz stimuli. In addition, for the 250 Hz stimuli, the midline and left channels recorded a noticeable difference between the two conditions. Most of the significant differences are in the 20 Hz to 25 Hz and 25 Hz to 30 Hz frequency bands.
The highlighted cells indicate that the p-values are less than alpha, which means that statistically significant differences exist in the PSD bands between the two conditions. Frequency band analysis is a useful tool for identifying the frequency range where the response to the binaural cue of phase reversal is most pronounced. The significant differences observed between the antiphasic and homophasic conditions support the notion that the interaural phase difference (IPD) is important for localizing sound sources. These findings suggest that the brain is particularly sensitive to IPD cues in certain frequency bands, which are used to determine the position of a sound source.

Time Domain Analysis
The time domain analysis aims to: (1) determine the effects of phase-shifted pure tone stimuli masked in noise of different frequencies on the MLRs and LLRs for normal hearing subjects; (2) compare the effects on ERP components with previously reported effects; and (3) determine which ERP components of the MLR and LLR are dominant and show contrast between conditions. The ERP data were processed using time-series domain analysis. The first step averaged the accepted epochs after epoch rejection in preprocessing. Next, a visual inspection was conducted to see whether there was a trend or pattern in the signal. This visual inspection was conducted by channel-wise and subject-wise plotting of the MLRs and LLRs (Figures 9-12). From visual inspection, it was found that the ERPs look similar among channels for each subject, as shown in Figures 11 and 12. This finding is consistent with previous studies that have reported individual differences in topography, latency, and morphology of ERP components [61,62]. However, contradictory results are obtained when the ERPs are plotted for the same channel for all subjects, as shown by Figures 9 and 10. This discrepancy suggests that there may be subject-specific differences in the ERPs [63,64]. For instance, one study [62] analysed ERP traces from 238 scalp channels averaged over 500 EEG epochs in a single subject and found that the ERPs were subject specific.
The next step was the extraction of peaks and peak-to-peak values of the MLR and LLR from every channel and subject. For the MLR, the ERP components to be extracted are Na, Pa, Nb, and Pb, while for the LLR, the ERP components are N1, P1, N2, P2 and P300. A representation of the peak components in the MLR and LLR waves is shown in Figure 13. The results were categorized into 10 MLR variables and 11 LLR variables for homophasic and antiphasic data as shown in Table 11. The mean and standard deviation values of all extracted MLR and LLR peaks are shown in Figure 14. Differences between the homophasic and antiphasic results were not immediately apparent. Nonetheless, some insight can be drawn from the results depicted in Figure 14.
Figure 13. ERP wave peak components of homophase and antiphase for MLR and LLR. Figure 14. Mean peak amplitudes (µV) with standard deviations for the ERP components for MLR.
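Peak extraction of this kind, finding each component's extreme value within its expected latency window, can be sketched as follows. The example windows in the docstring are illustrative textbook ranges, not the study's exact search windows.

```python
import numpy as np

def extract_peak(erp, fs, t_lo, t_hi, polarity):
    """Return (amplitude, latency_s) of the most negative or most
    positive sample inside the latency window [t_lo, t_hi) seconds.
    Example: Na is a negativity near 16-30 ms, N1 a negativity near
    80-120 ms (illustrative windows)."""
    i0, i1 = int(t_lo * fs), int(t_hi * fs)
    seg = erp[i0:i1]
    idx = int(np.argmin(seg)) if polarity == "neg" else int(np.argmax(seg))
    return seg[idx], (i0 + idx) / fs
```

Running this for each component, channel, and subject yields the tables of peak and peak-to-peak variables described above.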
Firstly, the Na peak of the MLR for the 500 Hz stimulus has the highest differences in mean values between the antiphasic and homophasic conditions with a low standard deviation. This suggests that the Na peak of the MLR for the 500 Hz stimulus may be sensitive to phase differences in the stimuli. Secondly, for the LLR, the highest average differences between the homophasic and the antiphasic conditions are found for the N1, N2, and P300 peaks for the 250 Hz stimuli and the N1 peak for the 500 Hz stimuli. This indicates that the N1, N2, and P300 peaks of the LLR for 250 Hz stimuli and the N1 peak for the 500 Hz stimuli may be more sensitive to phase differences. These findings suggest that the differences between the antiphasic and homophasic condition in mean values of ERP peaks vary across different frequencies of the stimuli and may depend on the type of AEPs. Previous studies on auditory middle latency responses (MLRs) have shown that the amplitudes of most MLR peaks increase and their latencies decrease with increasing stimulus intensity [65]. To evaluate the significance of ERP peak amplitudes for stimuli of different frequencies, a statistical test is required [61].
The Shapiro-Wilk test was conducted to examine the normality of the results. Since visual inspection showed similarity across channels for the same subject, the data were grouped subject-wise for the tests, i.e., it was checked whether the distribution across channels was normal for each subject. The normality results show that all peak categories are normally distributed for almost all subjects. Since the peak categories follow a normal distribution, they may be averaged across subjects for each category separately. The peaks averaged across channels for each subject were converted into absolute values before averaging the subject peak values in each category. The use of absolute values when analysing peak amplitudes is consistent with the literature, as it allows peak amplitudes to be compared across different conditions and subjects [66]. From the averaged results, the peaks of the MLR and LLR varied with stimuli of different frequencies.
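The per-subject normality check and absolute-value averaging described above can be sketched as follows; this is a minimal illustration assuming one peak category per call, with the significance level as a hypothetical parameter:

```python
import numpy as np
from scipy import stats

def subjectwise_mean_peak(peaks_by_channel, alpha=0.05):
    """Check channel-wise normality for one subject, then average.

    peaks_by_channel : 1-D array of one peak category (µV) across
    the channels of a single subject.
    Returns (is_normal, mean_abs_peak): the Shapiro-Wilk verdict at
    level alpha, and the mean of absolute peak values, taken before
    averaging as in the analysis described above.
    """
    _, p = stats.shapiro(peaks_by_channel)   # Shapiro-Wilk normality test
    return p > alpha, np.abs(peaks_by_channel).mean()
```

The subject-level means returned here would then be averaged across subjects within each peak category before the group-level comparison.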
It was then evaluated whether there is a significant difference in the MLR and LLR components between the homophasic and antiphasic conditions for different stimulus frequencies. A two-sample t-test was conducted on the mean peaks in the antiphasic and homophasic conditions across subjects. The results show significant differences in the Na and N1 peaks between the antiphasic and homophasic conditions for 500 Hz stimuli. In addition, the P300 peak of the LLR showed significant differences between the antiphasic and homophasic conditions for 250 Hz stimuli. The results are listed in Tables 12 and 13.
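A two-sample t-test of this kind can be expressed compactly as below; the function and its default significance level are illustrative, not the study's exact implementation:

```python
import numpy as np
from scipy import stats

def compare_conditions(homo, anti, alpha=0.05):
    """Two-sample t-test on subject-mean peak amplitudes (µV).

    homo, anti : arrays of per-subject mean |peak| values for the
    homophasic and antiphasic conditions, one value per subject.
    Returns (t_statistic, p_value, significant_at_alpha).
    """
    t, p = stats.ttest_ind(anti, homo)
    return t, p, p < alpha
```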

Discussion
The findings of the current study suggest that for signals masked in noise, phase changes can have a significant effect on binaural processing in the human brain, as measured by auditory evoked potentials (AEPs). Our results showed that there were statistically significant differences between the AEP signals generated by antiphasic and homophasic stimuli, in both time and frequency domain features. The differences suggest that the brain can detect and process interaural phase differences, which may be important for the spatial localization of sound sources and other aspects of auditory processing. The results are consistent with previous studies [67] that have shown that binaural stimulation results in larger cortical responses compared to monaural stimuli, and that the amplitude and latency of AEPs depend on the binaural difference. However, our study is unique in its focus on phase changes and its use of stimuli with controlled frequency and noise parameters. The findings may improve our understanding of binaural processing in the human brain and lead to applications in the development of new objective hearing tests in the future. Our results indicate that the detection of phase differences may be an important factor in the "cocktail party" effect, whereby listeners are able to focus on a particular sound source in a noisy environment [68].
In the frequency domain study, we used Welch's method (pwelch) to calculate the power spectral density values of the MLR and LLR signals in various frequency bands to investigate the significance of phase differences in binaural processing [69]. Our results showed that the 20-25 Hz and 25-30 Hz frequency bands of the MLR and LLR signals differed significantly between antiphasic and homophasic stimuli. These frequency bands correspond to the high beta and low gamma frequency range of the EEG [70]. The finding is consistent with previous research suggesting that sensory integration results in frequencies in the high beta and low gamma range, which may indicate conscious and accurate phase detection of auditory stimuli [70,71]. Further analysis revealed that the stimuli which resulted in statistically significant differences were 500 Hz for the MLR signals, and 250 Hz and 500 Hz for the LLR signals, mainly in the 20-25 Hz and 25-30 Hz frequency bands. These findings suggest that optimal binaural processing occurs at 500 Hz, in line with previous literature predicting that lower frequencies result in larger binaural masking level differences (BMLD) [72,73].
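Band-limited power from a Welch periodogram can be computed along the following lines; the segment length and band edges are assumptions for illustration, not the study's analysis parameters:

```python
import numpy as np
from scipy import signal

def band_power(x, fs, band):
    """Mean power spectral density of x within a frequency band.

    x    : 1-D signal (e.g., an averaged MLR or LLR epoch).
    fs   : sampling rate in Hz.
    band : (low, high) band edges in Hz, e.g. (20, 25) or (25, 30).
    Uses Welch's method (scipy's equivalent of MATLAB's pwelch).
    """
    f, pxx = signal.welch(x, fs=fs, nperseg=min(len(x), 256))
    mask = (f >= band[0]) & (f < band[1])
    return pxx[mask].mean()
```

Comparing such band powers between the antiphasic and homophasic conditions, band by band, is what isolates effects in the high beta/low gamma range discussed above.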
We also analysed which electrode locations provided more significant differences between the homophasic and antiphasic conditions. Our results show that the midline electrodes provided more significant differences in the MLR [74], while both the midline and left electrodes provided significant differences for the LLR signals [75]. This finding may indicate that midline electrodes are more suitable for investigating the processing of pure tone stimuli in binaural hearing [76]. The left hemisphere of the brain, which is known to be important for processing the temporal aspects of sound, may also be involved in this processing [77]. This is also in agreement with the findings of Ross et al. [78], who confirm a hemispheric dominance in the processing of auditory stimuli in noisy environments.
The present study also investigated phase-sensitive binaural hearing using time domain ERP peak analysis. The results revealed that the Na peak of the MLR for 500 Hz stimuli, the N1 peak of the LLR for 500 Hz stimuli, and the P300 peak of the LLR for 250 Hz stimuli show a statistically significant difference between the antiphasic and homophasic conditions in the subject-wise analysis, indicating the importance of phase differences in binaural hearing. It has been suggested that neural functioning at the thalamo-cortical level (bottom up) and neurocognitive functions (top down) are related to phase-sensitive stimuli masked in noise for binaural hearing [79,80]. Furthermore, our study highlights the importance of the N1 and P300 peaks of the LLR in the analysis of binaural hearing and their potential use as measures of cortical processing of interaural phase differences (IPD) [80,81].

The Na peak of the MLR, the N1 peak of the LLR, and the P300 peak of the LLR are important components in analysing the relevance of IPD for binaural hearing. These peaks are believed to reflect the processing of IPD at different levels of the auditory system, from the midbrain regions to the auditory cortex, and finally the attentional and working memory systems. The Na peak of the MLR is believed to represent the processing of IPD in the midbrain regions, specifically in the superior olivary complex and lateral lemniscus [82]. This peak is sensitive to small differences in IPD, making it an important measure for studying spatial hearing in binaural hearing tasks [83,84]. The N1 peak of the LLR reflects cortical processing of IPD, particularly in the auditory cortex. This peak may represent the neural processing of differences in the timing of the sound wave between the two ears at a higher level of the auditory system.
Reduced N1 peak amplitudes may suggest a possible deficit in cortical processing of IPD, which may contribute to difficulties in discriminating tones and non-speech sounds from noise [85]. In the literature, the N1 peak has been identified as a physiological index of the ability to "tune in" one's attention to a single sound source when there are several competing sources in a noisy environment, again referring to the "cocktail party" effect [86,87]. Enhancement of the N1 component for tasks which require selective attention has also been described in the literature [46] and is in line with the current study's findings. The P300 peak of the LLR may reflect cognitive processing of IPD, particularly in attentional and working memory systems. Several studies have stated the importance of the P300 component in analysing binaural hearing in normal adults as well as in adults with central processing disorders [88,89]. Reduced P300 peak amplitudes may suggest a possible deficit in the cognitive processing of IPD and impaired attentional and working memory functions.
The frequency domain analysis results suggest that the brain is capable of detecting and processing phase differences in binaural hearing, particularly in the high beta and low gamma frequency range, and that optimal binaural processing occurs for 500 Hz stimuli, based on the MLR results. Additionally, our results provide guidance on the selection of electrode locations for future binaural hearing studies. For time domain analysis, the Na peak of the MLR and N1 peak of the LLR for 500 Hz stimuli can be used as markers in objective studies of binaural hearing. The P300 peaks of the LLR for 250 Hz stimuli also may contribute to objective measures for binaural hearing.

Conclusions
In conclusion, the current study explored the role of interaural phase differences in binaural processing in noise and their neural correlates in the human brain. The results demonstrated significant differences between auditory evoked potentials (AEP) generated by antiphasic and homophasic stimuli in both time and frequency domains. These findings highlight the brain's ability to detect and process interaural phase differences, crucial for sound source localization and other auditory processing aspects.
Frequency domain analysis revealed significant differences in the middle latency response (MLR) signals for 500 Hz stimuli, while both 250 Hz and 500 Hz stimuli showed significant differences in the late latency response (LLR) signals, particularly in the 20-25 Hz and 25-30 Hz frequency bands. This suggests optimal binaural processing at 500 Hz, specifically in the high beta-low gamma frequency range, known for sensory integration. Additionally, midline electrodes proved more effective for investigating binaural processing of pure tone stimuli, yielding significant differences in MLR signals, while both the midline and left electrodes showed significant differences in LLR signals.
Furthermore, time domain analysis identified the Na peak of the MLR and N1 peak of the LLR for 500 Hz stimuli as significant markers for responses to homophasic and antiphasic stimuli, with potential applications in objective studies of binaural hearing. The P300 peak of the LLR for 250 Hz stimuli also exhibited strong significance between responses to homophasic and antiphasic stimuli, suggesting it might be considered as an objective measure for binaural hearing.
Future research can expand on these findings to explore the clinical implications of binaural processing in hearing disorders and related conditions.

Limitations
It is important to note, however, that the present study has some limitations. For instance, the sample size was thirty-five, and the current study mainly focused on healthy young adults, so the results may not be generalizable to other populations. Additionally, the study used a limited set of auditory stimuli, so future research could explore the effects of different types of stimuli on binaural processing in more depth.