Data Descriptor

Electroencephalogram Dataset of Visually Imagined Arabic Alphabet for Brain–Computer Interface Design and Evaluation

by Rami Alazrai, Khalid Naqi, Alaa Elkouni, Amr Hamza, Farah Hammam, Sahar Qaadan, Mohammad I. Daoud, Mostafa Z. Ali and Hasan Al-Nashash
1 Department of Computer Engineering, School of Electrical Engineering and Information Technology, German Jordanian University, Amman 11180, Jordan
2 Department of Computer Science and Engineering, American University of Sharjah, Sharjah P.O. Box 26666, United Arab Emirates
3 Department of Mechatronics Engineering, School of Applied Technical Sciences, German Jordanian University, Amman 11180, Jordan
4 Department of Computer Information Systems, Jordan University of Science and Technology, Irbid 22110, Jordan
5 Department of Electrical Engineering, American University of Sharjah, Sharjah P.O. Box 26666, United Arab Emirates
* Author to whom correspondence should be addressed.
Data 2025, 10(6), 81; https://doi.org/10.3390/data10060081
Submission received: 17 March 2025 / Revised: 6 May 2025 / Accepted: 21 May 2025 / Published: 22 May 2025

Abstract

Visual imagery (VI) is a mental process in which an individual generates and sustains a mental image of an object without physically seeing it. Recent advancements in assistive technology have enabled the utilization of VI mental tasks as a control paradigm to design brain–computer interfaces (BCIs) capable of generating numerous control signals. This, in turn, enables the design of control systems to assist individuals with locked-in syndrome in communicating and interacting with their environment. This paper presents an electroencephalogram (EEG) dataset captured from 30 healthy native Arabic-speaking subjects (12 females and 18 males; mean age: 20.8 years; age range: 19–23) while they visually imagined the 28 letters of the Arabic alphabet. Each subject conducted 10 trials per letter, resulting in 280 trials per participant and a total of 8400 trials for the entire dataset. The EEG signals were recorded using the EMOTIV Epoc X wireless EEG headset (San Francisco, CA, USA), which is equipped with 14 data electrodes and two reference electrodes arranged according to the 10–20 international system, with a sampling rate of 256 Hz. To the best of our knowledge, this is the first EEG dataset that focuses on visually imagined Arabic letters.
Dataset: The dataset is available at https://figshare.com/s/c287b37708b31ce663f7 (accessed on 22 February 2025).
Dataset License: CC BY 4.0

1. Summary

Visual imagery (VI) is a cognitive process in which an individual creates and maintains a mental image of an object, such as the shape of a letter, without physically seeing it [1,2,3]. This type of imagery has shown great potential for various BCI systems, including brain-typing systems [4,5]. In fact, recognizing visually imagined letters is considered a key element in BCIs designed to enhance communication for individuals with motor impairments caused by conditions such as Parkinson’s disease and amyotrophic lateral sclerosis [6,7,8]. Nonetheless, VI remains one of the least explored types of imagery tasks in the existing literature [9].
Given this gap, the collection of our dataset was motivated by the lack of effective communication methods for individuals with complete paralysis, who often rely on BCIs for interaction. Our dataset was acquired to enhance the understanding of the brain signals produced when an individual visualizes and imagines Arabic letters, thereby contributing to the development of more robust and inclusive BCI systems tailored to diverse linguistic and cultural contexts. This effort aligns with the broader goal of advancing VI research, where developing effective algorithms for healthy participants is a critical first step toward potential applications for individuals with severe motor disabilities.
Although there are several EEG datasets that focus on visually imagined Latin characters, such as the English alphabet [9,10,11,12], this is, to the best of our knowledge, the first EEG dataset for visually imagined Arabic letters. Given the widespread use of the Arabic alphabet, we aim to enrich the existing body of research by contributing an EEG dataset that addresses the underrepresentation of the Arabic alphabet in BCI research. The key features and significance of the dataset presented in this paper are summarized below:
  • This dataset is the first publicly available EEG dataset focusing on visually imagined Arabic letters, contributing to the understanding of cognitive processes in language recognition for non-Latin characters and complementing previous Latin alphabet studies.
  • The EEG dataset was collected from 30 healthy native Arabic-speaking participants, providing a robust sample size for reliable analysis and research.
  • The dataset, comprising 28 distinct classes, offers significant versatility for classification and decoding experiments and applications. Moreover, researchers developing EEG-based BCI systems can utilize this variety in the available classes to design control mechanisms for assistive devices and evaluate their performance.
  • The dataset can be analyzed to investigate the time, frequency, and spatial characteristics of EEG signals during VI tasks. Consequently, this dataset can be used to design new signal processing techniques and machine learning models that can advance the development of EEG-based BCI systems.

2. Methods

2.1. Subjects

Thirty healthy subjects volunteered to participate in this study. All subjects were native Arabic speakers with full literacy in Arabic characters, ensuring that they recognized the letters as linguistic symbols rather than abstract images. Furthermore, all subjects were proficient in both Arabic and English. All subjects were nonsmokers with normal vision and no history of neurological disorders. Table 1 provides demographic information for each subject, including gender, age, and handedness. Before participating in the experiments, each subject signed a written consent form that explained the study’s purpose and experimental procedure.

2.2. EEG Data Acquisition System

The EEG data were recorded using the Emotiv Epoc X wireless EEG headset (San Francisco, CA, USA) [13], shown in Figure 1A. This headset consists of 14 EEG electrodes (AF3, AF4, F7, F8, F3, F4, FC5, FC6, T7, T8, P7, P8, O1, and O2) and 2 reference electrodes (CMS/DRL at P3/P4, respectively). The electrodes are positioned on the scalp according to the international 10–20 electrode placement system, as shown in Figure 1B. The EmotivPro software v3.0 [14] was used to acquire the EEG signals at a rate of 256 samples per second and to record the event markers for each trial. The recorded EEG data were then exported as .mat files using MATLAB R2022b, as described in Section 3.
It is worth noting that the Emotiv headset performs several hardware-level preprocessing and filtering steps, including the following [13]:
  • Sampling Rate: The EEG signals are internally sampled at 2048 Hz and downsampled to either 128 Hz or 256 Hz (user-configurable). For this study, we set the downsampling rate to 256 Hz.
  • Bandwidth: A bandpass filter with a bandwidth of 0.16–43 Hz is applied, along with built-in digital notch filters at 50 Hz and 60 Hz.
  • Filtering: An integrated 5th-order Sinc digital filter is applied for noise reduction.
For a detailed description of the hardware-level preprocessing and filtering steps implemented by the Emotiv headset, readers are referred to the Emotiv technical specifications [13]. In the current study, no additional filtering or preprocessing was applied to the EEG signals beyond the built-in hardware-level processing of the Emotiv device.

2.3. Experimental Procedure

The experiments were conducted in a quiet room with white walls to minimize distractions. The room was well ventilated and maintained at a comfortable temperature of 24 °C using central air conditioning.
At the beginning of the experiment, each subject was seated upright in a comfortable chair positioned 80 cm from a monitor displaying the visual stimuli. To ensure consistency in letter perception across trials, subjects were instructed to align their eye level with the monitor. Figure 2 illustrates the experimental setup used in this study. During the experiment, subjects were instructed to keep their feet flat on the floor and rest their hands on the desk in front of them.
Each trial consisted of three sequential tasks: (1) a relaxation phase, (2) visual observation of an Arabic letter, and (3) mental imagination of the observed letter. Visual stimuli, consisting of black Arabic letters in Calibri font (size 380 pt), were displayed on the monitor to cue subjects. Figure 3 displays representative examples of two stimulus images used in our study. Each subject completed 10 trials for each of the 28 Arabic letters.
The timing structure of each trial included three distinct intervals, as illustrated in Figure 4:
  • Relaxation interval (5 s): A blank white screen was displayed, during which subjects were instructed to relax while keeping their eyes open.
  • Visual observation interval (5 s): A black Arabic letter appeared at the center of the screen against a white background, prompting subjects to carefully observe the displayed letter.
  • Visual imagination interval (8 s): A black screen was shown, and subjects were instructed to close their eyes and mentally visualize the letter they had observed in the previous interval. During this interval, subjects were explicitly instructed to envision the visual pattern of the letter as if it were still present in front of them, rather than thinking of its sound or imagining the act of writing it.
At the end of the third interval, a beep signaled the conclusion of the trial. Throughout the experiment, the experimenter remained seated behind the subject to manage trial recordings and ensure compliance with the experimental protocol. To achieve precise synchronization between the EEG recordings and the presented stimuli, we employed a push-button event marker within the EmotivPro software v3.0. This feature enabled real-time marking of events during stimulus presentation, ensuring accurate alignment between the EEG data and the corresponding stimuli. This method effectively minimized timing discrepancies and provided a reliable foundation for subsequent data analysis.
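For orientation, the nominal interval durations map onto sample counts at the 256 Hz sampling rate as sketched below. This is illustrative arithmetic only; the authoritative per-trial boundaries are the recorded event markers stored in the events field described in Section 3.

```python
# Illustrative only: nominal trial-interval boundaries in samples at 256 Hz.
# The recorded per-trial event markers (see Section 3) are the authoritative values.
FS = 256  # sampling rate in Hz

INTERVALS = [("relaxation", 5), ("observation", 5), ("imagination", 8)]  # seconds

boundary = 0
for name, seconds in INTERVALS:
    start, boundary = boundary, boundary + seconds * FS
    print(f"{name}: samples {start} to {boundary - 1}")
# relaxation:  samples 0 to 1279
# observation: samples 1280 to 2559
# imagination: samples 2560 to 4607  (18 s -> 4608 samples per trial, nominally)
```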

2.4. Quality Control and Artifact Inspection

To ensure data quality, two of the co-authors (R.A. and H.N.), each with more than nine years of EEG analysis experience, performed a manual inspection of the EEG recordings. Specifically, the signals were visually examined for anomalies, including excessive noise, electrode disconnections, and artifacts caused by muscle movements or eye blinks. Trials with clear distortions were removed from the dataset.

3. Data Description

The raw EEG data are organized in a main folder titled “Raw_Imagined_Arabic_Letters_Dataset”, containing 30 subfolders, each representing a unique subject. Each subject’s folder is labeled with an “S” followed by the subject ID (e.g., “S15” for subject 15). Within each subject’s folder, there are 28 subfolders corresponding to the individual imagined Arabic letters. These letter-specific subfolders are named with an “L” prefix followed by the letter ID (e.g., “L03” for letter 3). Table 2 lists the mapping of each letter ID to its respective Arabic letter.
Within each letter’s folder, individual trials are stored as separate MATLAB data files (.mat). The naming format of each file is “Sx_Ly_Tk.mat”, where
  • Sx denotes the subject, with x being an integer between 1 and 30, reflecting the 30 subjects in the study;
  • Ly denotes the Arabic letter, with y being an integer from 1 to 28;
  • Tk denotes the trial number, with k being an integer from 1 to 10.
For example, the file named “S03_L03_T3.mat” contains data for subject 3 imagining the Arabic letter (ت) during the third trial. Figure 5 illustrates the hierarchical folder structure of the recorded dataset.
The data file for each trial includes a data structure titled “EEG”, which comprises eight fields detailed in Table 3.
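To illustrate how the released files can be consumed outside MATLAB, the sketch below loads one trial in Python and slices out the imagination-interval samples. The field names follow Table 3, while the SciPy-based loading route, the variable names, and the example file path are illustrative assumptions rather than part of the dataset.

```python
# Illustrative sketch: load one trial of the dataset with SciPy and extract the
# imagination-interval samples using the recorded event indices (see Table 3).
import numpy as np
from scipy.io import loadmat

# Hypothetical path following the naming scheme described above.
path = "Raw_Imagined_Arabic_Letters_Dataset/S03/L03/S03_L03_T3.mat"
mat = loadmat(path, squeeze_me=True, struct_as_record=False)
eeg = mat["EEG"]  # structure holding the eight fields listed in Table 3

data = np.asarray(eeg.data)                          # shape: (14 channels, Nsp samples)
fs = int(eeg.samplingrate)                           # 256 Hz
channels = list(eeg.channels)                        # channel names, e.g., "AF3", "F7", ...
events = np.asarray(eeg.events, dtype=int).ravel()   # four event sample indices

# Events 3 and 4 bound the letter imagination interval (see Table 3).
# If the stored indices follow MATLAB's 1-based convention, subtract 1 before slicing.
imagination = data[:, events[2]:events[3]]
print(len(channels), fs, imagination.shape)
```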

4. Validation of the Dataset’s Utility

To validate the practical utility of the dataset, we conducted an empirical evaluation using classical feature extraction techniques and a machine learning model, as outlined below.
Initially, the acquired raw EEG signals were preprocessed as described in [10]. Moreover, the automatic artifact rejection (AAR) MATLAB toolbox [15] was employed to reduce muscular artifacts.
Next, the preprocessed EEG signals were divided into non-overlapping segments of 128 samples using a sliding window approach [16]. For each segment, Continuous Wavelet Transform (CWT) was computed across all 14 EEG channels, generating a time-frequency representation for each channel [17,18]. From these representations, we extracted 12 distinct time-frequency features per channel, as outlined in references [10,18]. These features included the sum of logarithmic amplitudes, median absolute deviation, root mean square, inter-quartile range, mean, variance, skewness, kurtosis, flatness, flux, normalized Renyi entropy, and energy concentration. Combining the features from all 14 channels resulted in a 168-dimensional feature vector (14 channels × 12 features) for each window.
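As a rough illustration of this feature-extraction step, the sketch below computes a CWT-based time-frequency representation per channel and a reduced subset of the listed features for one 128-sample window. The Morlet wavelet, the scale range, and the eight-feature subset are assumptions for demonstration and do not reproduce the exact feature definitions of [10,18].

```python
# Illustrative sketch: CWT time-frequency features for one 128-sample window.
# The wavelet, scale range, and reduced feature subset are assumptions,
# not the authors' exact pipeline.
import numpy as np
import pywt
from scipy.stats import skew, kurtosis, iqr, median_abs_deviation

def window_features(window, fs=256, wavelet="morl", scales=np.arange(1, 33)):
    """window: array of shape (14, 128) -> flat feature vector (8 features per channel)."""
    feats = []
    for channel in window:
        coeffs, _ = pywt.cwt(channel, scales, wavelet, sampling_period=1.0 / fs)
        mag = np.abs(coeffs)                        # time-frequency magnitude
        feats.extend([
            np.sum(np.log(mag + 1e-12)),            # sum of logarithmic amplitudes
            median_abs_deviation(mag, axis=None),   # median absolute deviation
            np.sqrt(np.mean(mag ** 2)),             # root mean square
            iqr(mag, axis=None),                    # inter-quartile range
            mag.mean(), mag.var(),                  # mean and variance
            skew(mag.ravel()), kurtosis(mag.ravel()),  # skewness and kurtosis
        ])
    return np.asarray(feats)

# Example: 8 features x 14 channels = 112-dimensional vector for a random window.
print(window_features(np.random.randn(14, 128)).shape)
```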
These feature vectors were then used to train and test a Random Forest (RF) classifier to recognize the class of the visually imagined Arabic letter. To ensure rigorous evaluation of our models, we implemented a subject-specific 10-fold cross-validation procedure [10,17,18]. For each subject, the EEG data were partitioned into 10 equal-sized folds, with models iteratively trained on 9 folds and tested on the remaining held-out fold. This process was repeated 10 times such that each fold served once as the test set, yielding 10 performance estimates per subject. The results presented in Figure 6 show the mean classification accuracy for each subject, computed by averaging across their 10 test folds, along with the corresponding standard deviation (STD) as a measure of performance variability. Across all subjects, we achieved an overall mean accuracy of 74.8% with an STD of 4.9%.
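A corresponding evaluation sketch is shown below, assuming the windowed feature matrix and letter labels for a single subject have already been assembled. It uses scikit-learn’s Random Forest with 10-fold cross-validation and approximates, but is not necessarily identical to, the authors’ evaluation code.

```python
# Illustrative sketch: subject-specific 10-fold cross-validation with a Random Forest.
# X: (n_windows, 168) feature matrix and y: letter labels (1..28) for one subject.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

def evaluate_subject(X, y, n_estimators=200, seed=0):
    clf = RandomForestClassifier(n_estimators=n_estimators, random_state=seed)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    scores = cross_val_score(clf, X, y, cv=cv)    # one accuracy per held-out fold
    return scores.mean(), scores.std()            # mean and STD, as reported in Figure 6

# Example with synthetic placeholder data (replace with real per-subject features).
X = np.random.randn(2800, 168)
y = np.repeat(np.arange(1, 29), 100)
print(evaluate_subject(X, y))
```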
These preliminary results highlight the dataset’s reliability and practical value, demonstrating that meaningful features can be extracted to effectively distinguish between imagined Arabic letters. This not only validates the dataset’s utility but also underscores its potential for real-world applications in EEG-based classification tasks.

5. Conclusions, Limitations, and Future Work

The primary objective of this study is to present an EEG dataset of visually imagined Arabic alphabet characters, designed for use in BCI applications. The preliminary results, presented in Section 4, demonstrate the potential for recognizing different visually imagined Arabic letters, highlighting the applicability of this dataset in various BCI-related fields. Despite these promising findings, there are some associated limitations.
One potential limitation of our study is the inclusion of a small number of left-handed participants. Previous research indicates that left-handed individuals may exhibit different patterns of hemispheric dominance, especially in motor and cognitive tasks. Since our study focuses on letter-based visual imagery rather than motor execution, we do not expect handedness to significantly influence our results. However, we acknowledge that individual variations in brain activity lateralization could play a role. Future studies could further explore the potential impact of handedness on visually imagined letter processing, particularly in non-Latin alphabets.
Additionally, the dataset was recorded from subjects aged 19 to 23 years from the Levant and Gulf Cooperation Council (GCC) regions. Therefore, this dataset may not fully represent the broader population of native Arabic speakers, particularly younger and older age groups. To enhance the generalizability of the dataset, we recommend expanding the demographic diversity of participants in future research.
Lastly, a promising direction for future research using this dataset is the integration of time-frequency analysis techniques with deep learning models to advance BCI systems. Additionally, a further exploration of the dataset’s generalization across different sessions and subjects would be valuable in enhancing the reliability and scalability of BCI systems. Investigating how well the neural patterns of visually imagined letters maintain consistency across multiple recording sessions and diverse participants could offer valuable insights into the practical application of the dataset in dynamic, real-world environments. This approach could significantly contribute to improving the robustness and adaptability of BCI technologies.

Author Contributions

Conceptualization, Methodology, Funding Acquisition, Supervision, and Writing—Original Draft Preparation, R.A.; Software, Data Curation, and Writing—Original Draft Preparation, K.N.; Software, Data Curation, and Writing—Original Draft Preparation, A.E.; Data Collection and Writing, A.H.; Data Collection and Writing, F.H.; Visualization and Investigation, S.Q.; Validation and Writing—Reviewing and Editing, M.I.D.; Validation and Writing—Reviewing and Editing, M.Z.A.; Resources, Validation, and Project Administration, H.A.-N. All authors have read and agreed to the published version of the manuscript.

Funding

The work in this paper was supported, in part, by the College of Engineering Faculty Professional Development Grant at the American University of Sharjah. Additionally, partial support was provided through research grant no. RA SEEIT 01/2024 from the German Jordanian University.

Institutional Review Board Statement

This study received approval from the Institutional Review Board (IRB) at the American University of Sharjah (IRB protocol # 19-513, approval on 21 November 2022) and was conducted in compliance with the ethical standards outlined in the Declaration of Helsinki. EEG data collection took place between December 2022 and May 2023.

Informed Consent Statement

Each subject in the experiment provided written informed consent, allowing the recording and use of their EEG signals for research purposes. The subjects were thoroughly briefed on the study’s objectives, methods, duration, procedures, and their roles. Prior to the recording session, each subject was guided through the experimental process to ensure a full understanding of the steps involved.

Data Availability Statement

The dataset described in this paper is available via figshare at https://figshare.com/s/c287b37708b31ce663f7 (accessed on 22 February 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Knauff, M.; Kassubek, J.; Mulack, T.; Greenlee, M.W. Cortical activation evoked by visual mental imagery as measured by fMRI. NeuroReport Rapid Commun. Neurosci. Res. 2000, 11, 3957–3962. [Google Scholar] [CrossRef] [PubMed]
  2. Rademaker, R.L.; Pearson, J. Training visual imagery: Improvements of metacognition, but not imagery strength. Front. Psychol. 2012, 3, 224. [Google Scholar] [CrossRef] [PubMed]
  3. Wilson, H.; Chen, X.; Golbabaee, M.; Proulx, M.J.; O’Neill, E. Feasibility of decoding visual information from EEG. Brain-Comput. Interfaces 2024, 11, 33–60. [Google Scholar] [CrossRef]
  4. Zhang, X.; Yao, L.; Sheng, Q.Z.; Kanhere, S.S.; Gu, T.; Zhang, D. Converting your thoughts to texts: Enabling brain typing via deep feature learning of eeg signals. In Proceedings of the IEEE International Conference on Pervasive Computing and Communications (PerCom), Athens, Greece, 19–23 March 2018. [Google Scholar]
  5. Yang, J.; Awais, M.; Hossain, M.A.; Yee, L.; Haowei, M.; Mehedi, I.M.; Iskanderani, A.I.M. Thoughts of brain EEG signal-to-text conversion using weighted feature fusion-based multiscale dilated adaptive DenseNet with attention mechanism. Biomed. Signal Process. Control 2023, 86, 105120. [Google Scholar] [CrossRef]
  6. Liu, M.; Wu, W.; Gu, Z.; Yu, Z.; Qi, F.; Li, Y. Deep learning based on batch normalization for P300 signal detection. Neurocomputing 2018, 275, 288–297. [Google Scholar] [CrossRef]
  7. Bose, R.; Goh, S.K.; Wong, K.F.; Thakor, N.; Bezerianos, A.; Li, J. Classification of brain signal (EEG) induced by shape-analogous letter perception. Adv. Eng. Inform. 2019, 42, 100992. [Google Scholar] [CrossRef]
  8. Alazrai, R.; Al-Saqqaf, A.; Al-Hawari, F.; Alwanni, H.; Daoud, M.I. A time-frequency distribution-based approach for decoding visually imagined objects using EEG signals. IEEE Access 2020, 8, 138955–138972. [Google Scholar] [CrossRef]
  9. Ullah, S.; Halim, Z. Imagined character recognition through EEG signals using deep convolutional neural network. Med. Biol. Eng. Comput. 2021, 59, 1167–1183. [Google Scholar] [CrossRef] [PubMed]
  10. Alazrai, R.; Abuhijleh, M.; Ali, M.Z.; Daoud, M.I. A deep learning approach for decoding visually imagined digits and letters using time–frequency–spatial representation of EEG signals. Expert Syst. Appl. 2022, 203, 117417. [Google Scholar] [CrossRef]
  11. Ramirez-Quintana, J.A.; Macias-Macias, J.M.; Ramirez-Alonso, G.; Chacon-Murguia, M.I.; Corral-Martinez, L.F. A novel deep capsule neural network for vowel imagery patterns from EEG signals. Biomed. Signal Process. Control. 2023, 81, 104500. [Google Scholar] [CrossRef]
  12. Ahmadieh, H.; Gassemi, F.; Moradi, M.H. A hybrid deep learning framework for automated visual image classification using EEG signals. Neural Comput. Appl. 2023, 35, 20989–21005. [Google Scholar] [CrossRef]
  13. Emotiv Epoc X. Available online: https://www.emotiv.com/products/epoc-x (accessed on 22 February 2025).
  14. EmotivPro Software. Available online: https://www.emotiv.com/products/emotivpro (accessed on 22 February 2025).
  15. Gómez-Herrero, G.; De Clercq, W.; Anwar, H.; Kara, O.; Egiazarian, K.; Van Huffel, S.; Van Paesschen, W. Automatic removal of ocular artifacts in the EEG without an EOG reference channel. In Proceedings of the 7th Nordic Signal Processing Symposium-NORSIG, Reykjavik, Iceland, 7–9 June 2006; pp. 130–133. [Google Scholar]
  16. Alazrai, R.; Alwanni, H.; Daoud, M.I. EEG-based BCI system for decoding finger movements within the same hand. Neurosci. Lett. 2019, 698, 113–120. [Google Scholar] [CrossRef] [PubMed]
  17. Alazrai, R.; Homoud, R.; Alwanni, H.; Daoud, M.I. EEG-based emotion recognition using quadratic time-frequency distribution. Sensors 2018, 18, 2739. [Google Scholar] [CrossRef] [PubMed]
  18. Alazrai, R.; Alwanni, H.; Baslan, Y.; Alnuman, N.; Daoud, M.I. EEG-based brain–computer interface for decoding motor imagery tasks within the same hand using Choi–Williams time–frequency distribution. Sensors 2017, 17, 1937. [Google Scholar] [CrossRef] [PubMed]
Figure 1. (A) The Emotiv Epoc X wireless EEG headset. (B) The locations of the EEG electrodes arranged according to the 10–20 international standard.
Figure 2. The experimental setup used in the current study.
Figure 3. Sample images of the stimuli used in the experiments. (A) shows the visual stimulus used to imagine letter 01 from Table 2, and (B) shows the visual stimulus used to imagine letter 15 from Table 2.
Figure 4. The timing structure of each trial.
Figure 5. Folder structure of the recorded dataset.
Figure 6. Average classification accuracies for each subject, with standard deviations (STDs) represented by black vertical error bars.
Table 1. Demographic information of the subjects in this study, including self-reported handedness.
Subject | Gender | Age | Handedness | Subject | Gender | Age | Handedness
S01 | Male | 21 | Right-handed | S16 | Male | 20 | Right-handed
S02 | Female | 21 | Right-handed | S17 | Female | 21 | Left-handed
S03 | Male | 20 | Right-handed | S18 | Male | 21 | Right-handed
S04 | Male | 20 | Left-handed | S19 | Male | 21 | Right-handed
S05 | Male | 21 | Right-handed | S20 | Female | 22 | Right-handed
S06 | Female | 21 | Right-handed | S21 | Male | 22 | Right-handed
S07 | Female | 21 | Right-handed | S22 | Female | 20 | Right-handed
S08 | Female | 20 | Right-handed | S23 | Female | 21 | Right-handed
S09 | Female | 21 | Left-handed | S24 | Male | 21 | Right-handed
S10 | Male | 20 | Right-handed | S25 | Male | 21 | Right-handed
S11 | Female | 21 | Right-handed | S26 | Male | 21 | Right-handed
S12 | Male | 22 | Right-handed | S27 | Male | 19 | Right-handed
S13 | Male | 20 | Right-handed | S28 | Male | 20 | Right-handed
S14 | Male | 21 | Right-handed | S29 | Male | 23 | Right-handed
S15 | Male | 21 | Left-handed | S30 | Female | 19 | Right-handed
Table 2. Mapping of letter IDs to their respective Arabic letters.
Letter ID | Arabic Letter | Letter ID | Arabic Letter | Letter ID | Arabic Letter
L01 | أ | L11 | ز | L21 | ق
L02 | ب | L12 | س | L22 | ك
L03 | ت | L13 | ش | L23 | ل
L04 | ث | L14 | ص | L24 | م
L05 | ج | L15 | ض | L25 | ن
L06 | ح | L16 | ط | L26 | ه
L07 | خ | L17 | ظ | L27 | و
L08 | د | L18 | ع | L28 | ي
L09 | ذ | L19 | غ | |
L10 | ر | L20 | ف | |
Table 3. Detailed descriptions of the fields contained in the data file for each recorded trial.
Field | Description
timestamp | The recorded timestamp of each sample point in the EEG data, organized as a vector with dimensions Nsp × 1, where Nsp denotes the number of sample points recorded within a specific trial.
data | The recorded EEG data for the trial, organized as a matrix with dimensions Nch × Nsp, where Nch represents the number of EEG channels (14 channels in this study). The order and description of these EEG channels are detailed in the channels field.
channels | A cell array with dimensions 14 × 1, where each cell contains a string representing the name of an EEG channel, arranged in the following order: (1) AF3, (2) F7, (3) F3, (4) FC5, (5) T7, (6) P7, (7) O1, (8) O2, (9) P8, (10) T8, (11) FC6, (12) F4, (13) F8, (14) AF4.
samplingrate | The number of samples measured in one second.
events | A row vector of four values, each representing a sample index that marks the start or end of a specific event within the recorded trial: (1) the beginning of the trial; (2) the end of the relaxation interval and the start of the letter observation interval; (3) the end of the letter observation interval and the start of the letter imagination interval; (4) the end of the letter imagination interval, which also marks the end of the trial.
subjectname | A string representing the subject ID, for example, “S03” for subject 3.
objectname | A string representing the letter ID, for example, “L03” for letter 3.
trialnumber | A string representing the trial number, for example, “T03” for trial 3.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

