Data Descriptor

Pupil Data Upon Stimulation by Auditory Stimuli

by Davide La Rosa 1, Luca Bruschini 2, Maria Paola Tramonti Fantozzi 3, Paolo Orsini 3, Mario Milazzo 4,* and Antonino Crivello 1,*
1 Institute of Information Science and Technologies (ISTI-CNR), 56124 Pisa, Italy
2 Department of Surgical, Medical, Molecular Pathology and Critical Cares, University of Pisa, 56124 Pisa, Italy
3 Department of Translational Research and of New Surgical and Medical Technologies, University of Pisa, 56126 Pisa, Italy
4 Department of Civil and Industrial Engineering, University of Pisa, 56126 Pisa, Italy
* Authors to whom correspondence should be addressed.
Submission received: 3 February 2024 / Revised: 28 February 2024 / Accepted: 29 February 2024 / Published: 5 March 2024

Abstract

Evaluating hearing in newborns and uncooperative patients can pose a considerable challenge. One potential solution is to employ the Pupil Dilation Response (PDR) as an objective physiological metric. In this dataset descriptor paper, we present a collection of data showing changes in pupil dimension and shape upon presentation of auditory stimuli. In particular, we collected pupil data from 16 subjects with no known hearing loss, under different lighting conditions, measured in response to a series of 60–100 audible tones, all of the same frequency and amplitude; these data may serve to further investigate the relationship between hearing capabilities and PDRs.
Dataset: Data is available at https://zenodo.org/doi/10.5281/zenodo.10497437.
Dataset License: CC-BY-4.0

1. Introduction

Assessing human hearing traditionally relies on motor responses and verbal feedback from cooperative patients, which helps determine their ability to detect varying amplitudes and frequencies of pure tones [1]. However, this approach is not always feasible, particularly when dealing with neonates or pre-lingual infants. In such cases, early hearing assessments are essential, as the presence of auditory neuropathy may necessitate early interventions such as hearing aids or cochlear implants to optimize communication development [2].
Dementia patients pose another challenge. While mild cognitive impairment does not necessarily hinder accurate audiologic assessment [3], standard audiometric tests are often incomplete for 41–44% of dementia patients [4]. These challenges have prompted the development of non-behavioral hearing tests [5].
Electrophysiologic auditory tests like the auditory brainstem response (ABR) and the auditory steady-state response (ASSR) are used to assess hearing thresholds. However, these tests may require sedation or spontaneous sleep in children and uncooperative patients [6,7].
A potential solution lies in recording objective physiological variables related to sound detection. Sound inputs trigger an orienting response, including the pupil dilation response (PDR) [8,9]. Although the PDR amplitude is influenced by various factors, including the cognitive resources allocated to the perceptual task [10], it has generally been correlated with the effort associated with a listening task, and is more pronounced when auditory detection is demanding [11,12]. It has been demonstrated that pupil size traces recorded in response to sound input can reflect the PDR, indicating its association with sound detection in both animals [13,14] and humans [15,16,17]. Although the PDR has been thoroughly investigated in population studies [15,16] and documented in typical cases [17], its relationship with sound detection is still unclear. Therefore, PDRs might not be a completely objective measure of hearing ability, since they depend on each subject's individual responses to sound stimuli [11].
In this dataset descriptor paper, we present a dataset of PDRs from 16 subjects with no known hearing loss who, under two different lighting conditions, were subjected to audible tones, all of the same frequency and amplitude. These data may serve to further investigate the relationship between hearing capabilities and pupil dilation.

2. Data Description

The dataset is structured in folders labeled with an anonymized subject ID in the format <sequential_number gender> (e.g., 1F). An additional file, “audio_stimuli.xlsx”, maps each subject ID to the frequency and level of the administered sound stimuli. Each folder contains two spreadsheet files and two subfolders (Figure 1).
Each spreadsheet contains pupil data for both the left (i.e., sx) and right (i.e., dx) sides, considering the subject’s perspective. The two subfolders, named “video without stimuli—baseline” and “video with stimuli”, contain recordings from the left and right pupil cameras during experiments without and with stimuli, respectively. These recordings capture grayscale infrared images of the eyes, collected at a rate of about 60 FPS with the Pupil Labs software tool (Figure 2).
Each spreadsheet is structured in six columns labeled as milliseconds, confidence, diameter, blink, artifact, and audio (Figure 3). In detail:
  • the “milliseconds” field reports the pupil sampling instant as an offset from the beginning of the recording session;
  • the “confidence” column, as defined by the hardware and software producer (https://docs.pupil-labs.com/core/terminology/, accessed on 14 December 2023), contains the “quality assessment” value for a given eye image of the pupil. A value of “0” means that the pupil could not be detected, while “1.0” is the highest possible value, assigned when the pupil is detected with very high certainty. Note that when a blink occurs, or pupil tracking is lost for any reason, the confidence drops to values equal or close to zero;
  • the “diameter” column contains the pupil diameter, expressed in pixels, evaluated by the “Pupil Labs” software for each sample. Our recordings focus on bidimensional pupil detection; we therefore take as the pupil diameter the major axis of the ellipse fitted to the pupil within a given eye image;
  • the “blink” column contains a value of “1” whenever the “Pupil Labs” software detected an eye blink, and “0” otherwise;
  • the “artifact” column annotates the instants at which, although no blink was detected, the pupil tracking confidence was too low. We set the confidence threshold below which samples are marked as artifacts (when no blink is detected) to “0.8”. Artifacts can occur for several reasons, such as tiredness of the subject or sudden difficulty keeping the eyes adequately open. In these conditions, a blink cannot be detected because the pupil is only partially covered by the eyelid, resulting in low detection confidence due to poor tracking of the pupil shape (Figure 4);
  • the “audio” column indicates whether the audio stimulus was being administered to the subject during the given sample. The structure of the spreadsheet is the same for both eye sides (left and right), and for tests with and without the audio stimuli (sheets “audio” and “baseline”, respectively).
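As an illustration of this layout, a sheet could be loaded and filtered along these lines. This is a minimal sketch using pandas: the rows below are synthetic stand-ins for the real data, and with the actual files `pd.read_excel` would replace the hand-built frame.

```python
import pandas as pd

# Synthetic rows mimicking the six-column layout of each sheet
# ("audio" and "baseline"); values are illustrative, not real data.
samples = pd.DataFrame({
    "milliseconds": [0.0, 16.6, 33.2, 49.8, 66.4],
    "confidence":   [0.98, 0.95, 0.03, 0.01, 0.97],
    "diameter":     [41.2, 41.0, 0.0, 0.0, 40.8],   # pixels
    "blink":        [0, 0, 1, 1, 0],
    "artifact":     [0, 0, 0, 0, 0],
    "audio":        [0, 0, 0, 0, 0],
})

# Keep only samples where the pupil was tracked reliably,
# i.e. no blink and no annotated artifact.
reliable = samples[(samples["blink"] == 0) & (samples["artifact"] == 0)]
mean_diameter = reliable["diameter"].mean()
```

With the real files, the same filtering would be applied after something like `pd.read_excel("1F/pupil_data.xlsx", sheet_name="audio")` (the file name here is hypothetical).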
Figure 4. Example of subject tiredness: poor pupil tracking leads to low detection confidence.
Figure 5 shows, as an example, the value of the diameter (in pixels) for subject 1F, acquired during a session with the audio stimulus. The time frames at which the pupil diameter drops to zero indicate that tracking was lost due to either a blink event or an artifact. Interestingly, we observed that blink events are usually preceded and followed by a few samples with low confidence.

3. Methods

3.1. Subjects

We collected data from 16 volunteers (nine females and seven males; 30 ± 4 years old) with normal hearing. Four volunteers repeated the test, for a total of 20 recordings, each comprising sessions both with and without audio stimuli. The study was conducted in accordance with the Declaration of Helsinki; as such, the participants were properly informed and signed a written informed consent.

3.2. Employed Devices

The acoustic stimulus was provided by using the AC 40 audiometer (Interacoustics, Middelfart, Denmark), a commercial device commonly employed in clinics.
Pupil data were collected through a commercial device, the Pupil Core (Pupil Labs GmbH, Berlin, Germany), a pair of eye-tracking glasses endowed with two infrared real-time cameras (one per eye), and an environmental camera focused straight ahead of the subject. The Pupil Core was connected to a laptop via a USB connection to process the recorded videos.
Environmental lighting conditions were controlled using a light meter (MT-912, Shenzhen Flus Technology Co., Ltd., Pinghu Town, Longgang District, Shenzhen, China).

3.3. Experimental Protocol

Each subject sat on a chair in a room with controlled lighting conditions. Two distinct luminance conditions were used to collect pupil data: between 8 and 12 lux (Low Luminance, LL) and between 95 and 115 lux (High Luminance, HL). Six subjects performed the tests under the LL condition only, six under the HL condition only, and four subjects were exposed to both conditions.
In the dataset, subjects from 1 to 5 (both F and M) were tested under LL conditions, while subjects from 6 to 10 (both F and M) were tested under HL conditions. All subjects wore the Pupil Core to record pupil data before, during, and after the acoustic stimulus. To collect a baseline signal, pupil data were first recorded in the absence of acoustic stimuli for approximately 4 min. Afterwards, in the so-called “Audio Condition”, one stimulus was delivered to the subject every 4 s for an additional 4 min, in the form of a 2000 Hz pure tone (in the center of the speech range) lasting 500 ms, with an amplitude of 70 dB HTL (clearly detectable by any subject with no hearing loss).
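The stimulation schedule above implies the expected shape of the “audio” column. As a minimal sketch (assuming an idealized, perfectly uniform 60 FPS sampling rate, which the real recordings only approximate), the on/off pattern can be reconstructed from the protocol constants:

```python
import numpy as np

# Protocol constants: one 500 ms, 2000 Hz tone every 4 s for ~4 min.
FPS = 60.0          # assumed uniform pupil sampling rate
PERIOD_S = 4.0      # inter-stimulus interval
TONE_S = 0.5        # tone duration (500 ms)
DURATION_S = 240.0  # ~4 min of "Audio Condition"

# Sample instants in milliseconds, mirroring the "milliseconds" column.
t_ms = np.arange(0.0, DURATION_S * 1000.0, 1000.0 / FPS)

# A sample falls inside a tone if its offset within the 4 s period
# is below 500 ms; this reproduces the expected "audio" flag.
audio = ((t_ms % (PERIOD_S * 1000.0)) < TONE_S * 1000.0).astype(int)

n_tones = int(DURATION_S / PERIOD_S)  # 60 stimuli over the session
```

In the real data the tone onsets were identified via the synchronization procedure described below, so the recorded “audio” column is the ground truth; this reconstruction only illustrates the nominal timing.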
Acoustic stimuli and PDRs were synchronized by means of the images from the environmental camera, which was pointed towards the audiometer; these images were processed afterwards together with those from the pupil cameras. The synchronization was carried out by manually identifying the appropriate timestamp while examining the video frame by frame. Since the video streams from the pupil cameras and the external world are managed by the same software, the timestamp of a selected frame finds its correspondence in the timestamps of the raw pupil data.
The experimental protocol was supervised by an interdisciplinary team consisting of engineers, physicians and technicians specialized in otolaryngology.

3.4. Data Collection and Processing

Video and signal data were collected at an approximate frame rate of 62 FPS. The software identifies the shape of the pupil in terms of its major diameter (in pixels), providing a confidence index between 0 and 1. When the subject is perfectly aligned with the camera axis, the pupil geometry evaluated by the camera is circular. If the subject slightly moves his or her eyes away from the camera axis, the geometry becomes ellipsoidal, and the software reports the major axis of this ellipse.
The choice of the pupil confidence threshold significantly impacts the accuracy and reliability of the detected events (blinks and artifacts). To select an appropriate value, we relied both on the recommendations provided by Pupil Labs, which indicate 0.6 as the minimum value at which pupil information is generally considered significant, and on the empirical visual validation of several events collected during our experimental campaign. Note that the blink and artifact events are provided as additional information to facilitate data exploration; any blink or artifact detection algorithm can be developed starting from the raw pupil diameter and confidence time series.
Based on preliminary tests, we used an empirical confidence threshold of 0.8 to define a measurement as reliable. Furthermore, attention was paid to eye blinks, defined as time intervals in which the pupils were partially or fully obstructed by the eyelids.
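The artifact annotation rule can be reproduced directly from the raw columns. The following sketch (the function name is ours, not part of the dataset tooling) marks a sample as an artifact when the confidence falls below 0.8 and no blink is flagged:

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.8  # empirical value used for this dataset


def annotate_artifacts(confidence, blink, threshold=CONFIDENCE_THRESHOLD):
    """Mark a sample as an artifact when tracking confidence is below
    the threshold but no blink was detected at that instant."""
    confidence = np.asarray(confidence, dtype=float)
    blink = np.asarray(blink, dtype=int)
    return ((confidence < threshold) & (blink == 0)).astype(int)


# Illustrative samples: low confidence during a blink is NOT an artifact.
conf = [0.95, 0.40, 0.05, 0.90, 0.60]
blnk = [0,    0,    1,    0,    0]
artifacts = annotate_artifacts(conf, blnk)
```

Since the raw diameter and confidence series are published, readers can replace this rule with any detection algorithm of their own, as noted above.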
Regarding the front-view environmental camera, the recordings were resampled at 120 FPS to better match the times at which the acoustic stimuli were delivered to the subjects. A summary of data collection and processing is reported in Figure 6.
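Resampling onto a denser uniform time base can be sketched as follows. The 30 FPS source rate below is an assumption for illustration only (not the camera's actual rate), and `np.interp` applies to scalar per-frame signals; for the video frames themselves, one would instead select the source frame nearest to each 120 FPS target instant.

```python
import numpy as np

# Hypothetical source frame timestamps (seconds) at an assumed 30 FPS,
# with a stand-in per-frame scalar quantity.
src_t = np.arange(0.0, 2.0, 1.0 / 30.0)
src_vals = np.sin(src_t)

# Uniform 120 FPS target grid over the recorded interval.
dst_t = np.arange(src_t[0], src_t[-1], 1.0 / 120.0)

# Linear interpolation of the scalar signal onto the denser grid.
dst_vals = np.interp(dst_t, src_t, src_vals)
```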

4. Conclusions

The experimental protocol aimed at creating a controlled environment to measure sound-evoked PDRs. To remain close to a clinical setting, we used an audiometer commonly employed in clinics, approved by the ethical committee, to deliver pure tones to the subjects. Specifically, we employed a frequency (2000 Hz) in the speech range, with an amplitude (70 dB) that could be clearly heard by the subjects. Moreover, the lighting conditions chosen for the experiments provide useful information on environmental conditions that might hinder data collection.
This dataset provides a collection of physiological pupillary responses upon auditory stimuli in the form of pure tones at a single frequency and amplitude. Such data could be used to assess whether PDRs might be exploited in clinics against, or in substitution to traditional audiometric hearing tests in partially responsive subjects (e.g., children, elders). From a different perspective, physiological data could be studied to further elucidate the cooperation between sight and hearing apparatus to detect and locate external sounds (e.g., [18]).
From the collected data, without any specific post-processing of the results (out of the scope of this work), we can state that: (i) a direct measurement of the pupil diameter variation is highly dependent on the employed cameras and their placement with respect to the patient; as such, we provide only measurements of this feature in pixels and, potentially, its variation with respect to a baseline (recorded without delivering audio stimuli); (ii) we observed different pupil dilation behavior under different lighting conditions, which might have implications for clinical assessment protocols.
However, further data processing, including statistical analyses, is required to quantify other results.
The limitations of this research product, and thus its potential improvements, are: (i) the relatively small number of participants, which future studies could address by using a larger and more diverse sample to validate the findings across different age groups and hearing abilities; (ii) the inclusion of only participants with no known hearing loss, authorized by an ethical committee, who were fully cooperative during the experiments. For partially or non-cooperative participants (e.g., children, elders), a different protocol (to be authorized) might be designed to collect reliable data. In particular, non-wearable devices might be used in addition to (or in place of) the currently employed device to facilitate data acquisition.

Author Contributions

Conceptualization, D.L.R., M.M. and A.C.; methodology, D.L.R., M.M. and A.C.; software, D.L.R. and A.C.; validation, D.L.R. and A.C.; formal analysis, all authors; investigation, all authors; resources, L.B. and A.C.; data curation, all authors; writing—original draft preparation, D.L.R., M.M. and A.C.; writing—review and editing, all authors; visualization, D.L.R., M.M. and A.C.; supervision, D.L.R., M.M. and A.C.; project administration, A.C.; funding acquisition, L.B. and A.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the European Union—Next Generation EU through the Ministero dell’Università e della Ricerca, Italy, under the PRIN Grant 20222XCFA4, project A-Pure Audiometry with Pupil Response, Decreto Direttoriale n. 861.

Institutional Review Board Statement

The Ethical Committee of the local hospital in Pisa (Italy) approved the study with the protocol labeled 45/2023 on 5 December 2023. The study was conducted in accordance with the Declaration of Helsinki.

Informed Consent Statement

Written informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patients to publish this paper.

Data Availability Statement

The data presented in this study are openly available in Zenodo at https://zenodo.org/doi/10.5281/zenodo.10497437.

Acknowledgments

The authors want to acknowledge Diego Manzoni for the fruitful discussions on the topic.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PDR Pupil Dilation Response
FPS Frames Per Second

References

  1. Carhart, R. Clinical determination of abnormal auditory adaptation. AMA Arch. Otolaryngol. 1957, 65, 32–39. [Google Scholar] [CrossRef]
  2. Yoshinaga-Itano, C. From screening to early identification and intervention: Discovering predictors to successful outcomes for children with significant hearing loss. J. Deaf Stud. Deaf Educ. 2003, 8, 11–30. [Google Scholar] [CrossRef] [PubMed]
  3. Lin, F.R.; Ferrucci, L.; Metter, E.J.; An, Y.; Zonderman, A.B.; Resnick, S.M. Hearing loss and cognition in the Baltimore Longitudinal Study of Aging. Neuropsychology 2011, 25, 763. [Google Scholar] [CrossRef] [PubMed]
  4. Gates, G.A.; Gibbons, L.E.; McCurry, S.M.; Crane, P.K.; Feeney, M.P.; Larson, E.B. Executive dysfunction and presbycusis in older persons with and without memory loss and dementia. Cogn. Behav. Neurol. 2010, 23, 218–223. [Google Scholar] [CrossRef] [PubMed]
  5. Scharinger, C.; Kammerer, Y.; Gerjets, P. Pupil dilation and EEG alpha frequency band power reveal load on executive functions for link-selection processes during text reading. PLoS ONE 2015, 10, e0130608. [Google Scholar] [CrossRef]
  6. Thornton, A.R.D.; Kimm, L.; Kennedy, C.R. Methodological factors involved in neonatal screening using transient-evoked otoacoustic emissions and automated auditory brainstem response testing. Hear. Res. 2003, 182, 65–76. [Google Scholar] [CrossRef]
  7. Legatt, A.D. Electrophysiologic auditory tests. Handb. Clin. Neurol. 2015, 129, 289–311. [Google Scholar]
  8. Beatty, J. Task-evoked pupillary responses, processing load, and the structure of processing resources. Psychol. Bull. 1982, 91, 276. [Google Scholar] [CrossRef]
  9. Naylor, G.; Koelewijn, T.; Zekveld, A.A.; Kramer, S.E. The application of pupillometry in hearing science to assess listening effort. Trends Hear. 2018, 22, 2331216518799437. [Google Scholar] [CrossRef]
  10. Laeng, B.; Sirois, S.; Gredebäck, G. Pupillometry: A window to the preconscious? Perspect. Psychol. Sci. 2012, 7, 18–27. [Google Scholar] [CrossRef]
  11. Zekveld, A.A.; Kramer, S.E.; Festen, J.M. Pupil response as an indication of effortful listening: The influence of sentence intelligibility. Ear Hear. 2010, 31, 480–490. [Google Scholar] [CrossRef]
  12. Kramer, S.E.; Kapteyn, T.S.; Festen, J.M.; Kuik, D.J. Assessing aspects of auditory handicap by means of pupil dilatation. Audiology 1997, 36, 155–164. [Google Scholar] [CrossRef] [PubMed]
  13. Bala, A.; Takahashi, T. Pupillary dilation response as an indicator of auditory discrimination in the barn owl. J. Comp. Physiol. A 2000, 186, 425–434. [Google Scholar] [CrossRef] [PubMed]
  14. Montes-Lourido, P.; Kar, M.; Kumbam, I.; Sadagopan, S. Pupillometry as a reliable metric of auditory detection and discrimination across diverse stimulus paradigms in animal models. Sci. Rep. 2021, 11, 3108. [Google Scholar] [CrossRef]
  15. Kramer, S.E.; Lorens, A.; Coninx, F.; Zekveld, A.A.; Piotrowska, A.; Skarzynski, H. Processing load during listening: The influence of task characteristics on the pupil response. Lang. Cogn. Process. 2013, 28, 426–442. [Google Scholar] [CrossRef]
  16. Liao, H.I.; Kidani, S.; Yoneya, M.; Kashino, M.; Furukawa, S. Correspondences among pupillary dilation response, subjective salience of sounds, and loudness. Psychon. Bull. Rev. 2016, 23, 412–425. [Google Scholar] [CrossRef] [PubMed]
  17. Bala, A.D.; Whitchurch, E.A.; Takahashi, T.T. Human auditory detection and discrimination measured with the pupil dilation response. J. Assoc. Res. Otolaryngol. 2020, 21, 43–59. [Google Scholar] [CrossRef] [PubMed]
  18. Tsang, K.Y.; Mannion, D.J. Relating Sound and Sight in Simulated Environments. Multisensory Res. 2022, 35, 589–622. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Structure of the dataset folders showing the content related to subject 1F.
Figure 2. Screenshot of the Pupil Labs software tool used to acquire the recordings of the pupils.
Figure 3. Example of the “audio” sheet content for subject 1F.
Figure 5. Example of the pupil diameter plot for subject 1F. Different colors are used to highlight the blink/artifact instants, and the actual value of the diameter.
Figure 6. Methodology employed to collect and process acoustic and PDR data.

La Rosa, D.; Bruschini, L.; Tramonti Fantozzi, M.P.; Orsini, P.; Milazzo, M.; Crivello, A. Pupil Data Upon Stimulation by Auditory Stimuli. Data 2024, 9, 43. https://doi.org/10.3390/data9030043
