 
 
Review

The Connected Life: Using Access Technology at Home, at School and in the Community

Faculty of Education, York University, Toronto, ON M3J 1P3, Canada
Educ. Sci. 2023, 13(8), 761; https://doi.org/10.3390/educsci13080761
Submission received: 20 March 2023 / Revised: 20 July 2023 / Accepted: 21 July 2023 / Published: 25 July 2023

Abstract

Hearing technologies such as hearing aids, cochlear implants and bone-anchored devices provide students with hearing loss with far greater access to auditory information (and most importantly, to spoken language) than even a decade ago. However, in a student’s daily life, many situations arise where effective communication and participation can be compromised by factors such as distance, noise, reverberation, difficulty hearing peer input, missing or obscured visual information (e.g., due to masks during the COVID-19 pandemic), speakers with accents or poor auditory/visual quality (e.g., on the phone or during online learning). Access technologies such as remote microphone systems, wireless connectivity platforms and captioning can be used to supplement and/or clarify auditory and visual information, so that students can fully participate in all aspects of their lives. This article discusses how access technologies can provide support for students in preschool, elementary, secondary and postsecondary education. The importance of universal design for access to public spaces, such as schools and community spaces, to ensure that individuals with hearing loss live in an equitable and inclusive world is also discussed.

1. Introduction

For deaf children for whom the development of spoken language is a goal, consistent use of effective hearing technologies such as hearing aids, cochlear implants and bone-anchored devices is a cornerstone. (The term “deaf children” will be used as an inclusive term to describe children with any degree, type or configuration of hearing loss.) It is one of the strongest predictors of spoken language development [1,2,3,4] and forms the foundation for the subsequent development of reading and writing skills [5]. In large part due to the implementation of universal newborn hearing screening programs across the globe, provision of hearing technology at a young age is now possible, and the development of typical language and literacy skills is within reach for many deaf children [6,7,8,9]. The access to sound provided to deaf children by 21st-century hearing technologies far eclipses what was possible even 20 years ago. Yet, beyond hearing technologies themselves, there is a wide range of additional assistive technologies available which can enhance the child’s hearing technology or provide visual support for, or a replacement for, auditory information. So, if hearing technologies provide sufficient access for the development of spoken language for most children, most of the time, why might we consider additional access technology, particularly given the additional financial cost?
Despite advancements in hearing technologies, typical hearing can never be restored. All individuals with hearing loss will still experience difficulties understanding speech under adverse listening conditions, no matter how sophisticated their hearing technology [10,11,12]. Hearing technologies are very effective for communication with one speaker in close proximity to the listener, articulating clearly, in a quiet room with good acoustics. However, this rarely represents the real-world communication situations in children’s everyday lives. Deaf children acquire language, literacy and communication skills not only at home and in the classroom but also in the gym, on the playground, on field trips, on the soccer field and in many other situations where they are in communication with others. Children with typical hearing have access to sound 24 h a day; there is never a moment when hearing is turned off. Deaf children are already at a disadvantage in terms of quantity of exposure to sound and spoken language, since hearing devices cannot be worn 24/7; anything that can be done to maximize the quality and quantity of sound input during wearing time is valuable. Southworth [13] coined the term “soundscape” to describe the acoustic environment in which humans are immersed; the term has subsequently been expanded to refer to specific contexts, such as music soundscapes or classroom soundscapes [14,15]. The goal of access technologies for deaf children who use spoken language is to support their ability to hear and experience all aspects of their soundscapes. It must be acknowledged, however, that many deaf children in low- and middle-income countries do not have access to this technology, either because it is simply unavailable due to financial constraints or because of a lack of support in its implementation and use. Even in high-income countries, access to services and technology can be highly variable.
The World Health Organization’s recent World Report on Hearing notes that there is still much work to be done by governments, international and nongovernment organizations, professional groups and other stakeholders to ensure that quality services and technologies are available to all individuals with hearing loss [16].
This paper will provide an overview of the factors which adversely affect spoken language communication, types of access technologies currently available and the benefits and challenges of these technologies. A glimpse into some new technologies currently being explored and/or under development will also be provided.

2. Factors Adversely Affecting Communication for Deaf Children

There are several factors that reliably interfere with the ability to hear, understand and communicate for individuals with hearing loss, even with the use of hearing technology. In 2015, the Fifth Eriksholm Workshop on Hearing Impairment and Cognitive Energy brought together experts from the fields of audiology, engineering, neuroscience, speech perception, gerontology, philosophy and psychology to consolidate the research on the complex nature of listening for individuals with hearing loss [17]. The resulting theoretical model was the Framework for Understanding Effortful Listening (FUEL) [18]. The authors note that simply providing audibility of sounds (the task of hearing technology) is not sufficient to ensure successful communication. When the quality of the auditory information provided to the listener is reduced, individuals will need to expend more cognitive effort. FUEL has also been applied to children, to better understand how to support deaf children’s language and literacy development [19,20].
FUEL proposes that individuals with hearing loss will experience difficulty when a communicative situation is “acoustically adverse or informationally complex”. There are several potential factors contributing to this difficulty; for simplicity, they have been loosely grouped here into three categories—environmental factors (variables that contribute to poor listening environments), speaker factors (variables that create difficulty understanding a particular individual) and listener factors (variables within the individual).

2.1. Environmental Factors

Distance and the Listening Bubble

For all listeners, the overall loudness of a sound decreases with distance from the source; the speech signal also becomes degraded as it travels, as softer speech sounds such as “s” drop off and noise distorts the signal [21]. Madell [22] refers to the optimal listening distance from the speaker as the child’s “listening bubble”, the size of which varies according to factors such as degree of hearing loss, characteristics of hearing technology and room acoustics. Beyond this range, sound decreases in volume at a rate of approximately 6 decibels with every doubling of distance. This is particularly problematic for young deaf children, for whom exposure to the quality and quantity of spoken language is significantly reduced for any conversation taking place at a distance. For infants and toddlers, this might mean difficulty hearing when in a stroller or car seat, at the playground, at daycare or at the dinner table with family. For school-aged children, this might mean difficulty hearing classmates, videos presented in class, information over the public address system, conversations in the cafeteria or at recess, coaches on a soccer field or the teacher on a field trip. For a postsecondary student, this might mean difficulty hearing the professor in a large lecture hall, hearing classmates during small group work or communicating with employers and others during an experiential learning or practicum placement.
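The 6-decibels-per-doubling figure follows from the inverse-square spreading of sound from a point source in a free field (an idealization that ignores room reflections). A minimal sketch of the arithmetic, using a hypothetical speaker level and distances:

```python
import math

def spl_at_distance(level_at_ref_db: float, ref_m: float, distance_m: float) -> float:
    """Free-field estimate of sound pressure level at a given distance.

    Inverse-square spreading: the level drops by 20*log10(d/d0) dB,
    i.e., roughly 6 dB for every doubling of distance from the source.
    """
    return level_at_ref_db - 20 * math.log10(distance_m / ref_m)

# Hypothetical speaker producing 65 dB SPL at 1 m:
for d in (1, 2, 4, 8):
    print(f"{d} m: {spl_at_distance(65, 1, d):.1f} dB SPL")
```

Under these idealized assumptions, a voice at 65 dB SPL at 1 m falls to roughly 53 dB SPL at 4 m, illustrating how quickly a talker’s voice can approach the level of background noise outside the listening bubble.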

2.2. Noise and Reverberation

Distance is a key contributing factor to difficulty in communication, but it is not the only problem. Background noise is everywhere; children are rarely in a communication situation that is completely free of noise. The effects of noise are best described by the concept of signal-to-noise ratio, that is, the ratio of the loudness of the signal of interest (usually a speaker’s voice) to the loudness of the background noise. A positive value indicates that the speaker’s voice is louder than the background noise, while a negative value indicates that the noise is louder than the speaker’s voice. Recommended signal-to-noise ratios for children consistently call for the speaker’s voice to be 15 to 20 decibels louder than the background noise [16,23]. However, research has consistently indicated that the typical signal-to-noise ratio in classrooms is not infrequently a negative value, with noise levels louder than the teacher’s voice (see [24] for a review).
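Since levels are expressed in decibels, the signal-to-noise ratio is simple subtraction; a minimal sketch with hypothetical classroom levels:

```python
def snr_db(speech_level_db: float, noise_level_db: float) -> float:
    """Signal-to-noise ratio in dB: positive means the speech is louder than the noise."""
    return speech_level_db - noise_level_db

def meets_child_recommendation(snr: float) -> bool:
    """Recommended SNR for children: speech at least 15 dB above the noise."""
    return snr >= 15

# Hypothetical levels:
quiet = snr_db(65, 48)   # teacher at 65 dB, noise at 48 dB -> +17 dB (meets recommendation)
noisy = snr_db(60, 65)   # noise louder than the teacher -> -5 dB (a negative SNR)
```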
Reverberation occurs when sound in a room is reflected off walls, ceilings and floors. This reflected sound adds to the total sound level in the room, creating noise and distortion. As with noise, there are published guidelines for reverberation characteristics for classrooms, but again, classrooms rarely meet these standards [24,25,26,27]. Classroom acoustics do not improve at the postsecondary level, where large class sizes and poor room acoustics are common [28,29,30]. In contrast to elementary and secondary students, postsecondary students are likely to encounter a wide variety of physical classrooms, from huge lecture halls to small seminar rooms to laboratories, resulting in the need to adapt to a wide range of communication obstacles over the course of a school day.
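As one illustration of why rooms with hard surfaces are so reverberant, Sabine’s classic formula estimates reverberation time (RT60, the time for sound to decay by 60 dB) from room volume and total surface absorption. The room dimensions and absorption coefficients below are hypothetical:

```python
def rt60_sabine(volume_m3: float, surfaces: list[tuple[float, float]]) -> float:
    """Sabine estimate (metric units): RT60 = 0.161 * V / A,
    where A is total absorption in sabins (surface area * absorption coefficient)."""
    total_absorption = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical 7 m x 9 m x 3 m classroom with mostly hard surfaces:
surfaces = [
    (63.0, 0.02),  # tiled floor
    (96.0, 0.05),  # painted walls
    (63.0, 0.30),  # acoustic ceiling tile
]
rt = rt60_sabine(7 * 9 * 3, surfaces)  # about 1.2 s
```

An estimate of roughly 1.2 s for this sketch is well above the substantially shorter reverberation times that published classroom guidelines typically call for.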
While we might suspect that the listening environment at home for younger children is quieter, this is not necessarily so. There is less research evaluating room acoustics at home (although there has been renewed interest in the topic with the move to work-from-home arrangements because of the COVID-19 pandemic). However, studies indicate that acoustic conditions in homes can be poor, often no better than those seen in typical classrooms [31,32].

3. Speaker Factors

Even if the listening environment acoustics are excellent, other communication variables related to the speaker and to individual characteristics impact speech understanding. Much of our understanding of the effects of speaker factors comes from the literature on deafness and the literature on listening comprehension in English-language learners. The size of the research literature precludes a comprehensive discussion; however, a brief overview of these factors is provided here.

3.1. Clear Speech and Message Complexity

Clear speech refers to a style of speaking characterized by clearly articulated speech provided at slightly slower and slightly louder levels than a more informal conversational style [33,34]. It has been demonstrated to be helpful for individuals with and without hearing loss, particularly in adverse listening conditions such as in noise [35,36,37,38]. In typical conversations, however, rate of speech, articulation and loudness level are not only highly variable across speakers but unpredictable [39,40,41]. Conversational partners will each have their own speech patterns. Some will speak more quickly, some will have less precise articulation, some will have softer voices; the individual with hearing loss has no control over this.
The adverse effects of environmental factors and unpredictable speech patterns act to degrade the acoustic signal to a greater or lesser extent. However, even with the use of clear speech, linguistic complexity of the message plays a part in listening comprehension, with more complex messages requiring greater cognitive effort [42,43]. In addition to linguistic complexity of the message, task complexity should also be considered. For example, a multistep instruction (go upstairs, brush your teeth, put your pajamas on and pick out a book) is not linguistically complex in terms of vocabulary or syntax, but the task to be performed relies heavily on auditory memory and sequencing, skills which have been shown to be an area of difficulty for some deaf children [43,44,45,46]. School-aged children face additional challenges since many of their listening tasks occur in new learning contexts, using decontextualized language [20].
As students move from elementary to secondary to postsecondary education, listening demands only increase, particularly in terms of both message and task complexity. Notetaking, for example, is a significant challenge for deaf students, who not only have incomplete access to the message due to their hearing loss, environmental factors and lack of access to visual cues but are also engaged in a cognitively demanding task under time pressure. Siegel [47] states that notetaking demands “concentration, listening endurance, an active aural vocabulary, some knowledge of the lecture genre, the capacity to prioritize information, and the ability to multi-task”.

3.2. Accented Speech

Even individuals without hearing loss sometimes experience difficulty understanding a communication partner with non-standard pronunciation of the language (i.e., accented speech). In addition, conversation often occurs under the adverse listening conditions described above (distance, noise and reverberation). Difficulty understanding accented speech under adverse listening conditions has been documented for individuals with typical hearing, for those with hearing loss and for English-language learners [48,49,50,51].

3.3. Access to Speechreading Cues

Speechreading can be considered to be the use of visual clues from a speaker’s lip and facial movements, gestures, posture and body language to determine or infer meaning. All listeners, with or without hearing loss, use speechreading cues to a greater or lesser degree. However, individuals rely more on speechreading in difficult listening environments, and deaf children are often in situations where they cannot access speechreading cues (e.g., infants in car seats, school-aged children on a sports field or postsecondary students in a 300-seat lecture hall). When the acoustic signal is degraded by distance, noise and reverberation, listeners engage more explicit cognitive processes and strategies such as working memory and inferencing [52]. Adding visual cues via speechreading fills in some of the missing pieces in the message, providing more clarity and reducing listening and cognitive effort [53,54]. Interest in the effects of reduced access to speechreading cues has increased in recent years due to the widespread wearing of face masks during the COVID-19 pandemic. All masks degrade speech to a greater or lesser extent and interfere with speech perception for individuals with typical hearing and with hearing loss [55,56,57].
Less is known about the extent to which young deaf children can utilize speechreading skills to support communication, and there are limited assessment instruments available. However, research has shown that they demonstrate better speech perception with speechreading than without it, particularly under poor listening conditions [58,59,60,61].

4. Listener Factors

Age

Listening and learning in a classroom requires a wide range of linguistic and cognitive skills, all of which are developmental in nature. Language competency, working memory, executive function, processing speed, metacognition—these are all skills that develop with age. The ability to process auditory information is also developmental, relying on neurological maturation of the auditory pathways in the brain. This maturation begins at birth and continues until early adulthood, when skills approach adult levels. It includes skills such as auditory closure, selective attention, speech discrimination in noise, sequencing auditory information and identifying subtle acoustic information [62,63,64,65,66,67,68].

5. Purpose of Access Technologies

In summary, adults with typical hearing have difficulty understanding speech at a distance and in noisy or reverberant environments. They have difficulty if the message content is unfamiliar or complex, if the speaker has heavily accented speech or if they are tired. These difficulties are multiplied for adults with hearing loss, and also for children with typical hearing, whose auditory processing systems are neurologically immature. The difficulties are compounded further for children with hearing loss, for whom all of these factors come into play: age, immature auditory processing systems, hearing loss, often incomplete language foundations and a lack of metacognitive strategies; they also find themselves in poor listening environments over which they have no control. Both children and adults expend significant amounts of energy on speech perception, cognitive processing and memory in complex listening environments, and this energy expenditure is far higher for individuals with hearing loss [69,70,71,72,73,74,75]. Reducing listening effort and fatigue is a byproduct of improving access; if listeners can hear clearly and without strain, listening effort and fatigue are automatically reduced. If a listener can identify that the acoustical environment is poor, they might be able to implement strategies for amelioration. However, deaf children often have few skills to identify or anticipate difficult communication situations, and little awareness of communication breakdowns when they occur [69]. It is fair to say, then, that the listening environments of deaf children from infancy to postsecondary education provide more than sufficient rationale to consider the use of access technology.
The purpose of access technologies is to improve communication for the individual with hearing loss and ensure that they are connected to the world using two strategies. The first strategy is to maintain the integrity of auditory information received at the hearing technology microphone to the greatest extent possible. The second strategy is to supplement missing or distorted auditory information with visual information.
Different technologies address these access goals in different ways. Figure 1 provides an overview of the technologies described in subsequent sections.
Auditory technologies ensure that the signal received at the hearing aid, cochlear implant or bone-anchored hearing device microphone is as clear and intact as possible, with minimal distortion from distance, noise and reverberation. They typically involve using an external microphone close to the sound source (ideally within 4 to 6 inches) so that the integrity of the auditory signal is maintained. Since the signal travels only a short distance, the possibility of the signal being smeared or distorted by noise or reverberation is greatly reduced. Visual technologies, on the other hand, convert the speech signal to written text or provide visual indicators of auditory information (such as flashing light fire alarms, or baby monitors that provide a visual signal to the caregiver when sound from the baby is detected).

6. Auditory Technologies

There have been many terms used for technology which uses one or more microphones worn by a speaker to transmit sound wirelessly to the deaf child, typically via the child’s hearing technology. These systems include frequency modulation (FM) systems, digital modulation (DM) systems, hearing assistance technology (HAT) and remote microphone systems (RMS). This article will use the term remote microphone system (RMS) to describe any system which uses wireless transmission from a speaker-worn transmitter to enhance speech clarity.

6.1. Remote Microphone Systems for School

Access to a Primary Speaker

The benefits of RMS at school, with an external microphone worn by the classroom teacher, are undisputed. The external microphone (generally referred to as a transmitter) sends sound wirelessly to a receiver (usually interfaced with the student’s hearing technology), resulting in the student being able to hear the teacher as if they were speaking 4 to 6 inches away from the student’s ears at all times. The location of the external microphone reduces the effects of distance and, to a lesser extent, of noise and reverberation (since the speech signal is sent more directly to the student). This means that wherever the student and the teacher are located in the classroom, the student has better access to the speech signal. The technology has been available since the 1960s, and a wealth of research demonstrates the ability of these systems to improve speech perception and access to auditory information in the classroom, regardless of degree of hearing loss, student age or type of hearing technology evaluated [70,71,76,77,78,79,80,81,82,83,84,85]. In fact, provision of RMS is one of the most common accommodations reported to be included on student individual education plans [84,85,86,87].

7. Access to Peer Input

Remote microphone systems help the deaf child hear a primary speaker clearly (in most cases, the classroom teacher). However, classrooms represent dynamic auditory environments, and there are other sources of important information in a classroom, including comments and questions from peers and classroom audio. A great deal of learning in classrooms happens through dynamic interactions among all members of the classroom community, and a single microphone worn by a classroom teacher will not capture all of this rich information. Group discussion is often a challenge for deaf students because of its fast pace and the difficulty of identifying, and then attending to, the current speaker. Many remote microphone systems also allow for the addition of passaround microphones which can be used by all students in the classroom. For small group work, there are also options for tabletop microphones which can be placed in the middle of a table to pick up the voices of students seated around it. One study of older students and adults evaluated both options and found significant improvements in speech understanding with their use, with speech perception improvements ranging from 14 to 23 percentage points for tabletop microphones, and 27 to 47 percentage points for passaround microphones [39]. Teachers have also reported that the use of passaround microphones had a positive impact on student engagement and student willingness to participate [88].

8. Access to Classroom Audio and Online Learning

Technology-enhanced learning and the use of audiovisual teaching and learning materials are important parts of today’s classrooms [89], so it is important to ensure that deaf students have access to all audiovisual materials used by the teacher. Instructional materials which incorporate audio and video are very commonly used in classrooms; however, deaf students may not be able to hear the audio clearly due to distance, noise and reverberation. Visual cues (such as speechreading) may also be missing from audiovisual materials. All remote microphone systems offer the ability to connect their microphones to audio sources (e.g., a laptop, tablet, cellphone, listening center or interactive whiteboard) either wirelessly or via an audio cable.
The COVID-19 pandemic created a paradigm shift in education, away from in-class learning towards online learning. Online learning activities can include a variety of different ways to access information, many of which create a host of potential barriers for deaf students [89,90,91,92,93,94,95]. Online learning includes extensive use of virtual synchronous classes as well as instructor-created audio and multimedia materials sourced from the Internet. Noise levels and audio and video quality can all be highly inconsistent during a synchronous call depending on participants’ streaming capability and their camera and microphone settings. In a synchronous online class, participants may turn off their video, either because they do not feel comfortable with video on or because bandwidth becomes a problem. If many people on a conference call are using video, audio and video quality can deteriorate as time goes on. Access to visual cues via speechreading can range from excellent (with good video quality and the speaker centered in the window) to poor (when speakers are backlit or only part of their head shows) to nonexistent (when participants turn off their cameras altogether). Prerecorded materials and multimedia materials sourced from the Internet vary widely in audio quality and in the provision of speechreading cues, sometimes forcing students to rely entirely on the audio signal. It is crucial for deaf students to have the clearest possible auditory signal from their online learning devices, clarity which can be provided by the use of audio connectivity to RMS.

8.1. Remote Microphone Systems at Home and in the Community

It is generally more common for students to have RMS at school (where they are provided by the educational system) than at home [96]. There are likely several reasons for this, such as cost (including concerns regarding loss or damage) and lack of awareness of the technology by families [96,97]. RMS recommended for school use vary somewhat in technology characteristics, as they typically have more features and options to provide the flexibility needed to manage a wide range of student needs in a wide range of classrooms. However, most hearing aid, cochlear implant and bone-anchored device manufacturers offer simplified RMS for home use, typically consisting of a single microphone worn by a speaker which broadcasts to the student’s hearing device; these systems are simpler to use and lower in cost.
There is less research on the use of RMS at home than at school. However, families have reported a number of advantages for home use, including better hearing and understanding by their child, safety in road traffic, less listening fatigue and perceived improvements in vocabulary acquisition [96,97,98]. Contexts which may offer particular advantages for the use of RMS at home and in the community are described here.

8.2. Sports and Extracurricular Activities

Elementary and secondary school students often participate in sports and extracurricular activities both at school and in the community. These represent very dynamic listening environments, with both speech and other auditory information (e.g., referee whistles) coming from a variety of different sources, and deaf children can face a number of obstacles to full participation [99]. It is impossible to manage access to all of the sources of information during sports; however, it is possible to improve access to the one adult common to most sporting activities: the coach. Use of a remote microphone system can provide access to instructions during practice. Use of an RMS during games or competitions (for example, during a time-out in basketball or football) may need to be explained and sometimes negotiated, however, as there can be a perception among referees, judges or other coaches and competitors that the deaf student is receiving an unfair advantage. In fact, the remote microphone system serves to level the playing field for the deaf student as much as possible.

8.3. Experiential Learning

Experiential learning has become an increasing focus of postsecondary institutions, whether in the form of formal professional or paid practical internships and apprenticeships, or more informal community volunteering, mentorship relationships and opportunities to participate in courses conducted outside of the traditional classroom. Experiential learning can be highly successful for students with disabilities, providing supported opportunities to experience real-world employment situations, to practice and integrate skills learned in the classroom and to develop networks in industry and organizations which may help in the job search after graduation. For deaf students, however, experiential learning also involves significant communication challenges when interacting with employers, fellow employees and potentially customers or members of the public under difficult or stressful communication situations (e.g., using the phone or working in a noisy office or restaurant). Experiential learning thus represents a context where use of an RMS could improve communication under challenging conditions.

8.4. Infants and Toddlers

In contrast to older children, adolescents and adults, the communication partner for infants and toddlers is very often in close proximity to the child. In a one-to-one interaction between an infant and a caregiver, distance, noise and reverberation are less problematic, and the integrity of the speech signal is generally maintained. However, children also learn language incidentally, through immersion in the language happening all around them. As noted previously, hearing technology microphones pick up sound very well in close proximity, but sounds at a distance are not picked up efficiently or clearly, resulting in less access to language compared to hearing children [96,97,98].
In addition, infants and toddlers still spend a significant part of their day in situations where distance, noise and reverberation do come into play and often where there is the added problem of reduced visual cues [96,97]. These include riding in a front-facing stroller, riding in the back seat of a car (particularly problematic for infants in rear facing car seats), riding in a sling across the chest or a backpack on an adult’s back. In these cases, the infant or toddler is often completely cut off from communication with the adult, reducing opportunities for language learning and potentially increasing emotional distress for children. For an infant in a rear-facing car seat, for example, they can neither see nor hear the parent, leaving them, from their perspective, completely alone.
Remote microphone systems have been suggested for use with infants and toddlers since their inception in the 1960s. There is theoretical rationale for the wider use of RMS to support language development in infants and toddlers with hearing loss [97,98,99,100,101]. The development of syntactic and pragmatic language skills may rely more heavily on a clear signal to access subtle acoustic characteristics in language input and conversational interactions than does vocabulary development. Inconsistent access to spoken language provided by hearing technologies alone may have a significant impact on language development, with more consistent access through RMS use potentially leading to better language outcomes. In the early years of remote microphone technology, uptake was limited and often confined to children with severe to profound hearing loss, with parents reporting that the technology could be expensive and complicated to use [97,98,100]. However, with improvements in technology, a number of studies have reported positive outcomes for infants, toddlers and preschoolers with the use of RMS at home [96,97,98,99,100,101,102,103,104,105,106,107,108,109].
Recent research using Language Environmental Analysis (LENA) recorders indicated that over the course of a typical day, preschoolers potentially have access to 42% more words per day when a remote microphone system is used, and that child-directed speech from a distance was significantly more accessible [102,103,104,105]. Analysis of the types of caregiver utterances produced with and without an RMS indicated that caregivers produced an average of 67% fewer verbal clarifying behaviors when the RMS was in use, suggesting that there was less need to repeat, rephrase or clarify for children because of better child responsiveness and understanding. Parents were also more likely to talk to their children from a distance (for example, across a playground) because they knew that the child could hear them with the RMS. Curran et al. [107] found that deaf children whose families had used remote microphone systems in early childhood had significantly better discourse skills at age 5 than a matched group of deaf children who had not used RMS and concluded that RMS use may have a positive impact on higher-level language skills. Benitez-Barrera et al. [102,103] suggested that because parents talked more to their children when using the RMS, the system may provide a greater frequency of communicative interactions, potentially enhancing discourse skills. Walker et al. [96] found that the vast majority of parents (95%) felt that their child benefitted from the personal RMS. While mothers were the primary reported users of the RMS, some families also reported that grandparents, babysitters and sports coaches used the system.

8.5. Access to Audio at Home

These days, deaf children commonly have access to music and video on television, parents’ cellphones, tablets, car video systems, etc. During a long car ride, the ability to provide music or a video to a small child can be a lifesaver for caregivers. However, it is only effective if the audio signal is sufficiently clear to be meaningful to the infant, and turning up the volume to a level above the engine noise of the car is generally not well-received by others in the car. As noted previously, all RMS have the ability to easily connect to audio from consumer devices.

9. Wireless Connectivity for Audio

In 2000, Beecher [110] proposed a “vision for the future”, a “concept” hearing aid which incorporated Bluetooth technology. That future is here; most hearing aids, cochlear implants and bone-anchored hearing devices now include the ability to connect to audio (for example, in consumer electronic devices) via Bluetooth. This may be a direct Bluetooth connection to a consumer device such as a cellphone, laptop, gaming system or tablet; it might require an interface (typically called a “streamer”); or it might involve an app on a mobile device. Wireless connectivity allows the user to obtain a clear auditory signal from the external device directly into their own hearing device, reducing some of the effects of distance, noise, reverberation, speaker variables and reduced visual cues. Bluetooth streaming for hearing technologies was originally intended for the adult market, so there is limited research on children, who tend to use RMS developed for school use. Bluetooth streaming to hearing technologies has a few features which make it potentially less desirable than school RMS, including an audio delay which is sometimes discernible to the listener and decreased battery life for the hearing technology [111]. However, hearing care professionals should always keep in mind options for Bluetooth streaming for home devices such as cellphones or gaming systems, to ensure that families are made aware of all the ways that access to sound can be improved.

10. Visual Access Technologies

Visual access technologies support communication by supplementing or replacing auditory information with visual information. Since many of these visual technologies use text, they are most appropriate for children who can read. It should be noted that part of the process of evaluating access technology for an individual student should always be an assessment of whether the student is able to use the technology effectively. Captioning is access through reading, so the student must demonstrate not only adequate reading comprehension but also good reading fluency. Although captions necessarily lag slightly behind spoken language, they are ideally presented at a rate commensurate with the pace of the speaker’s voice, requiring the student to read not only accurately but quickly.

10.1. Captioning

Captioning is not a new technology, but it has evolved significantly over the years. The first captioning technologies provided captions only for prerecorded media such as video or television, and they required special equipment (closed captioning). With advancements in technology, provision of captions for live television and then for live events such as conference presentations became available, although human transcribers were still required [97]. The development of voice recognition software for widespread use in consumer electronics (such as Siri or Alexa) has helped to create a new era in the availability of captioning for everyone, not only for deaf individuals. There is a large body of research indicating that captioning improves comprehension for deaf students, students with learning disabilities and English-language learners (see [112,113] for reviews), yet the provision of captioning in education continues to be primarily on a case-by-case basis [114]. This section will discuss the differences between types of captions, as this impacts what is feasible to provide in a classroom as well as benefits and obstacles to using captions.

11. Communication Access Real-Time Translation (CART)

Communication access real-time translation (CART) captioning uses a human captionist who is present during class (either in person or remotely) and produces an exact transcript of everything said in the classroom in real time. CART captioning has been available for decades and has proven to be an effective accommodation for adults in the workplace and in the community. Providing CART in a classroom poses technical challenges, however. Because CART requires specialized hardware, software and skills, it is an expensive accommodation in itself, and it also requires significant administrative resources to ensure that the service is available for each student, for each class. Remote captioning, where the captionist is located offsite but listens in through a speakerphone, Zoom, etc., reduces some of the cost, but a human captionist is still required. CART requires a very good microphone in the classroom so that the captionist can hear clearly enough to caption accurately. Poor audio quality, signal dropping, background noise or speakers moving away from the microphone will compromise the captionist’s ability to hear and transcribe accurately, and, as discussed previously, classrooms rarely represent good listening environments. Educators need to be trained on how to use the audio setup and on principles of best practice (for example, the timing of breaks). For these reasons, this effective accommodation has not been routinely available for most deaf students, even at the postsecondary level [115,116,117].

12. Automated Speech-to-Text Captioning

A more promising technology for provision of captioning in classrooms is automated speech-to-text captioning, which uses voice recognition software and artificial intelligence to transcribe spoken language. In the past, low accuracy rates due to poor audio quality, software limitations and the need to train the software to the speaker’s voice generally precluded widespread use of automated speech-to-text captioning [118,119]. However, since the COVID-19 pandemic, automated speech-to-text captioning is increasingly available through platforms such as Zoom, Google Meet or Microsoft Teams, although captioning accuracy for live classes with discussion is highly variable. Millett [94] found that a comparison of captioning accuracy across platforms showed small differences in accuracy, but also some universal trends. Accuracy was best for prerecorded course materials and, during live classes, for instructor speech. Accuracy was poorest during interactive discussions and was poorer for student utterances than for instructor utterances. Inaccurate and distracting punctuation was identified as a significant barrier to comprehension across all platforms.
Many of the same factors that impede listening comprehension for individuals are also at play for automated speech-to-text software. These include distance, noise and reverberation, which distort the auditory signal. The accuracy of automated speech-to-text software is also impacted by speaker variables such as rate and accent. The use of automated speech-to-text captioning for deaf students became quite common in online learning during the pandemic. However, implementing automated captioning in a classroom in real time poses significant challenges, although manufacturers have begun marketing software for this application, often in the absence of published accuracy data. When considering implementing this technology in the classroom, educators must be cognizant of the acoustic characteristics of the classroom, the accuracy of the software (which is at least partly dependent on the acoustics of the classroom), the reading skills of the student, and the student’s ability to recognize, and compensate for, captioning errors. Simply turning on an automated captioning feature does not mean that the student’s accommodation needs have been met.

13. Computerized Notetaking

Notetaking is always a challenge for deaf students. Taking notes is a complex task, requiring a deaf student to forego all visual cues (such as speechreading, facial expression, body language, references to visual materials, and/or interpreting) while needing to accurately hear and remember spoken language which is likely to be distorted by distance, noise and reverberation. Siegel [47] noted that “When taking notes while listening to an L2, students face additional challenges of first comprehending the lecture input and then integrating it into a notetaking cycle that draws extensively on both cognitive and physical attributes operating under continuous time pressure. The listener must first comprehend the input, separate important from extraneous information, decide when, where and how to record the information, and finally write or type (p. 1)”. This is as true for deaf students as for English-language learners. While the term “computerized notetaking” is commonly used, essentially it simply refers to a notetaker who uses a computer to input notes during class (rather than handwriting them). This sometimes refers to a notetaker hired by the institution but, more often, to another student in the classroom sharing their notes with the deaf student. Access to a notetaker or a classmate’s notes is an extremely common accommodation offered in secondary and postsecondary education [116,117].
The appeal of being able to combine automated captioning with notetaking is obvious in terms of time and resource efficiency. However, it is important to understand the difference between transcription and notetaking. For example, a full verbatim transcript of a 1 h class can consist of dozens of pages and still requires the student to take their own notes from the transcript. Captioning (whether provided by a real-time captionist or software) provides accessibility in the moment; notetaking supports learning after class. Best practice for accommodations would include access for all learning activities in which students are expected to participate, which may require more than one type of support.
Recent advancements in speech-to-text technology have resulted in the development of software which has some ability not only to transcribe speaker utterances, but to organize the information into a format which more closely resembles notes than verbatim transcript. An example is the platform Otter Pilot, which is reported to record audio, write notes, capture slides and generate summaries, thereby combining the advantages of automated speech-to-text transcription and notetaking. This technology has the potential to offer true computerized notetaking but needs to be researched in addition to being marketed.

14. C-Print

An example of access technology which in many ways combines elements of real-time captioning with notetaking is the C-Print system, first developed at the National Technical Institute for the Deaf in Rochester, New York. A trained notetaker uses computerized abbreviations and notetaking strategies to produce a text display of spoken information which appears on the deaf student’s computer or mobile device. C-Print Pro, an upgrade to the original software, also provides two-way communication between the notetaker and the student, the ability for students to add their own notes to those provided by the notetaker, and the option to edit notes after class. C-Print has been demonstrated to effectively support student learning and accessibility [119,120,121]. Because there are both hardware and software requirements, as well as specialized training for the notetaker, C-Print is generally implemented as an institutional solution rather than a technology that students could provide for themselves.

15. Alerting Systems

Alerting systems for safety (such as fire or evacuation alarms) often use auditory signals, potentially creating issues for deaf adults and children who are unable to hear them. In many jurisdictions, disability legislation requires the installation of visual alerting systems (such as flashing lights) in public spaces, and this legislation should cover school buildings as well. However, alarms may be installed only in “public” areas (such as a main hallway in a school); if a student is, for example, working in a small room in the library with the door shut, there is likely no fire alarm in the library itself, and the volume of the hallway alarm is greatly reduced in the student’s workspace. A variety of technologies are widely available, including flashing lights or text alerts (for older students), but alerting access needs to be recognized as a school issue, not only a public-space issue.
Understanding intercom announcements is extremely difficult for deaf students as the audio quality is often poor, and visual cues are completely missing. Deaf students often have little access to announcements about school life, and this is something that is often overlooked by school staff. A simple, no-tech strategy is for the teacher to write a summary of the announcements on the blackboard. There are several manufacturers of school intercom systems which offer the option of a visual display which can be mounted in hallways or classrooms, to provide a textual display of school announcements. Such systems are more expensive than regular audio intercom systems; however, if we view universal design for accessibility as an important goal, school administrators should consider this issue.

16. Evaluating Benefits of Access Technologies

Sometimes the choice of access technology for an individual student is clear; in other cases, there are several appropriate options, or more than one technology is needed. It is important to evaluate benefit from technology, and, in fact, in some school jurisdictions, formal documentation of benefit is required for funding purposes. Lewis et al. [78] proposed several strategies for verification and validation of the system which remain valid today. The first is electroacoustic verification of the system by an audiologist. This ensures that the RMS is providing appropriate amplification and acoustic transparency when interfaced with the student’s hearing device. The second strategy is direct speech perception testing under a variety of listening situations with and without the RMS; this is best performed by an educational audiologist or teacher of the deaf who has specialized knowledge of the speech perception difficulties of deaf students. This direct assessment provides valuable information about the need for the RMS. It can also serve as an important counselling tool for students themselves and their classroom teachers, as a quantitative measurement of the difference in the student’s ability to understand spoken language with and without the RMS.
The third strategy is the use of behavioral measures (observation in the classroom, teacher/student/parent checklists, etc.). Published checklists and observational tools are readily accessible online through websites such as Success for Kids with Hearing Loss (successforkidswithhearingloss.com, accessed on 29 February 2023) and The Online Itinerant (theonlineitinerant.com, accessed on 30 February 2023). Checklists for teachers and students can be particularly useful since they focus the respondent’s attention specifically on listening, attention and listening effort. Implementation of an RMS can sometimes result in dramatic changes in behavior, participation and academic learning that are readily identifiable. However, sometimes the effects of an RMS can be subtler, for example, faster response times to questions posed by the classroom teacher, less fatigue at the end of the day or a greater willingness to participate in discussions [122,123,124,125,126]. A listening checklist can focus the respondent’s attention on aspects of listening and learning that they had not considered previously.

17. Challenges in the Implementation of Access Technologies

Implementing access technologies at home and at school requires a number of pieces of the puzzle to come together. Some are relatively straightforward (such as purchasing the system); others require specialized knowledge for effective fitting of the system; others require teaching everyone involved how to use the system and identify when it is not working; and others (perhaps the most challenging) relate to buy-in from students and teachers. Unfortunately, despite the many advantages of both auditory and visual access technologies, many students do not have systems, have systems but do not use them effectively or refuse to use them at all. Educators can do a great deal to improve this situation.

17.1. Cost and Availability

At school, there are typically processes for the recommendation and purchase of assistive technology by teachers of the deaf and/or educational audiologists. The details of funding for school RMS vary by jurisdiction, but RMS are often available at school at no cost to parents. Concerns regarding cost are seen more for home use (for children of all ages), for postsecondary education and in developing countries, where funding sources can be scarcer [127,128,129]. Advancements in technology are reducing this obstacle, however: access technologies such as RMS and wireless connectivity are increasingly integrated into the hearing device itself, reducing the need for extra equipment which must be purchased at additional cost.

17.2. Insufficient Education on Use, Maintenance and Troubleshooting

The effective use of access technologies requires both specialized knowledge on the part of the individuals setting up the systems and training others in their use (typically educational audiologists and teachers of the deaf) and commitment on the part of the users (teachers/coaches/caregivers and students). At school, it is common for deaf students to be seen by teachers of the deaf, who are usually well-versed in both the rationale for access technologies and their implementation. Teachers of the deaf are crucial supports for the use of access technologies, ensuring not only that everyone involved understands the importance of use but also that they feel comfortable using the equipment and can identify when it is malfunctioning. Having the involvement of teachers of the deaf means that students and classroom teachers do not need to be experts in the technical aspects of the system or in how to troubleshoot and repair problems; help is a phone call or email away. Support for technology use is less consistent in settings outside of school, such as preschool programs, where teachers may struggle with use of the equipment [122].
Direct support for the use of access technologies at home is also problematic. While clinical audiologists and/or hearing instrument dispensers will certainly explain the use of the technology carefully when it is purchased, ongoing support when a parent has a question or suspects the equipment is not working is not necessarily at one’s fingertips. In addition, parents of deaf children already carry additional responsibilities in terms of managing appointments, keeping hearing technologies working, supporting language development, etc. Adding more technology is a significant ask.
Most hearing technologies now contain data logging software, which allows the clinical audiologist to ascertain precisely how many hours per day the hearing device is being used and under which conditions (e.g., in noise, in conjunction with a remote microphone system, etc.). This can be useful to identify patterns of device use, information that could be used to support students, parents and teachers. For example, Gustafson et al. [123] found that in their sample of 13 deaf children, average time using a hearing device was approximately 6 h per day. If these 6 h correspond to school hours, this is a valuable opportunity to counsel parents about consistent device use at home. Similarly, if data logging indicated that a remote microphone system was not being used, although one had been purchased for the student, this is an opportunity to educate and support school staff (and the student) about the importance of using the system.

17.3. Willingness and Ability to Use Technology

Unwillingness to access and use technology by deaf students of all ages has been reported often, even when students articulate its usefulness [117,124,125,126]. Compliance with using RMS at school has been reported to be higher in elementary grades than in high school, a difference attributed largely to social factors and the desire not to look different from peers [124,125,126]. Esturaro et al. [125] found that in their sample of 120 students with RMS, only slightly more than half (54%) were using it. Of interest is the fact that of the students who were not using their RMS, 22% cited “involuntary” reasons (i.e., the students would have worn the system but for circumstances out of their control). These involuntary reasons included the hearing technology or RMS being lost, the RMS being broken, the hearing device being incompatible with the RMS and the teacher not wanting to use the system. This is of concern since better technical and educational support could address many, if not all, of these issues. There were a variety of reasons for “voluntary” disuse of the system, some of which were understandable (e.g., student preference), but others which could be ameliorated with better support and education (e.g., the student did not understand the benefit of the system). Many deaf teenagers are very interested in wireless technology for cellphones, gaming systems and other electronic devices but often are not well-informed about the possibilities available to them [126].

18. The Future of Access Technologies: Universal Design

Ideally, access technologies would be available to everyone, at one’s fingertips, at little or no cost, in all situations where it is needed in a universal design model [130]. The COVID-19 pandemic highlighted issues in accessibility for not only deaf children [90,91] and children with disabilities in general [131,132,133] but also for adults and children who suddenly experienced heretofore unencountered communication barriers during online and in-person interactions. Exciting new possibilities for more universal accessibility are on the horizon.

18.1. Bluetooth LE Audio with Auracast

Bluetooth technology is not new; in fact, Bluetooth availability in hearing aids, cochlear implants and bone-anchored devices is not even new. As described previously, individuals with hearing loss are currently able to connect their hearing devices to their cellphones, laptops and tablets to stream audio directly. However, an innovation in Bluetooth transmission, Bluetooth LE Audio with Auracast, promises to significantly expand accessibility for individuals with hearing loss. Bluetooth LE Audio with Auracast is a variation of Bluetooth streaming which functionally works in a similar way to current public WiFi networks. It requires two parts: the deaf individual must have a hearing device with Auracast capabilities, and the public space (e.g., movie theaters, conference centers, places of worship, schools, etc.) must also have Auracast incorporated into its audio system. Public spaces will stream their audio via Auracast, and consumers will be able to connect to a particular “broadcast” in much the same way we currently choose a WiFi network or a Bluetooth device. Kaufmann et al. [134] note that Auracast will allow automatic connectivity in public spaces without the need to attach something to a hearing device or wear an additional device; likely an app will be used as an interface. It is not entirely clear if, or how, this technology might be used with children in schools, but the one constant in technology is that it is always changing. As is always the case in education, changes in access technology will require educators and parents to sort out new problems (e.g., a kindergarten student is unlikely to possess the technical or metacognitive skills to manage connecting to Auracast). However, this has been the story of technology in deaf education; as technology changes, students, educators and parents will re-learn, adjust and, ultimately, benefit from better accessibility.

18.2. Captioning

Voice recognition software has been in development for many years; however, it was not until it was adopted by companies such as Apple for Siri that rapid improvements began to be seen. An interesting outcome of the COVID-19 pandemic has been that many individuals without hearing loss have experienced difficulty with communication in online environments (e.g., Zoom meetings), and automated captioning technologies have begun to receive more attention from the general public [135,136,137,138,139]. Having companies such as Apple, Microsoft, Google and others focus their research and development efforts on voice recognition technology and its applications for captioning has resulted in exponential improvements in availability and accuracy, and this is expected to continue.
Future directions for captioning include continued improvements in accuracy using crowd-sourced insights into captioning as well as linguistic prediction software. Attention is being paid to situations where captioning is needed but effective technologies have not yet been developed, such as during student small-group work or other activities where there are multiple speakers [136,137]. This also includes creating interfaces that make it easy for content creators to add captions to their own work, as knowledge and skills in this area are often lacking [138]. There is a new focus on including deaf individuals in the research and design stages of technology development, rather than simply presenting consumers with a finished product [115,137,139]. Virtual reality technology is now being investigated as a different method of providing captions in a real-time communication interaction with others, via SpeechBubbles provided over the heads of speakers [140]. Practical applications of this concept will require time and research to implement, but it is a clear example of the kind of out-of-the-box thinking that has not been common in the captioning world.
Access technologies have come a long way since the 1960s and 1970s. Research and experience have demonstrated that access technologies are crucial in providing support not only for adults but for students in preschool, elementary, secondary and postsecondary education. Use of RMS provides a greater quantity and quality of spoken language throughout the day, particularly in situations in which the child might otherwise have little to no access, as well as lessening communication effort and listening fatigue. While access solutions tailored to individual students will always be necessary, our ultimate goal should be recognizing the importance of universal design for access to public spaces such as schools and community spaces to ensure that individuals with hearing loss live in an equitable and inclusive world.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

  24. Gheller, F.; Lovo, E.; Arsie, A.; Bovo, R. Classroom acoustics: Listening problems in children. Build. Acoust. 2020, 27, 47–59. [Google Scholar] [CrossRef]
  25. Lind, S. The evolution of standard S12.60 acoustical performance criteria, design requirements, and guidelines for schools. J. Acoust. Soc. Am. 2020, 148, 2705. [Google Scholar] [CrossRef]
  26. Nelson, P.B.; Blaeser, S.B. Classroom Acoustics: What Possibly Could Be New? ASHA Lead. 2010, 15, 16–19. [Google Scholar] [CrossRef]
  27. Wang, L.M.; Brill, L.C. Speech and noise levels measured in occupied K–12 classrooms. J. Acoust. Soc. Am. 2021, 150, 864–877. [Google Scholar] [CrossRef]
  28. Burlingame, E.P. Classroom Acoustics in the Postsecondary Setting: A Case for Universal Design. Ph.D. Thesis, State University of New York, Buffalo, NY, USA, 2018. [Google Scholar]
  29. Elmehdi, H.; Alzoubi, H.; Lohani, S.H.R. Acoustic quality of university classrooms: A subjective evaluation of the acoustic comfort and conditions at the University of Sharjah classrooms. In Proceedings of the International Conference on Acoustics, Aachen, Germany, 9–13 September 2019; pp. 4130–4137. [Google Scholar]
  30. Heuij, K.V.D.; Goverts, T.; Neijenhuis, K.; Coene, M. Challenging listening environments in higher education: An analysis of academic classroom acoustics. J. Appl. Res. High. Educ. 2021, 13, 1213–1226. [Google Scholar] [CrossRef]
  31. Benítez-Barrera, C.R.; Grantham, D.W.; Hornsby, B.W. The Challenge of Listening at Home: Speech and Noise Levels in Homes of Young Children with Hearing Loss. Ear Heart 2020, 41, 1575–1585. [Google Scholar] [CrossRef] [PubMed]
  32. Pang, Z.; Becerik-Gerber, B.; Hoque, S.; O’Neill, Z.; Pedrielli, G.; Wen, J.; Wu, T. How Work From Home Has Affected the Occupant’s Well-Being in the Residential Built Environment: An International Survey Amid the COVID-19 Pandemic. ASME J. Eng. Sustain. Build. Cities 2021, 2, 041003. [Google Scholar] [CrossRef]
  33. Picheny, M.A.; Durlach, N.I.; Braida, L.D. Speaking clearly for the hard of hearing I: Intelligibility differences between clear and conversational speech. J. Speech Lang. Heart Res. 1985, 28, 96–103. [Google Scholar] [CrossRef]
  34. Picheny, M.A.; Durlach, N.I.; Braida, L.D. Speaking clearly for the hard of hearing II: Acoustic characteristics of clear and conversational speech. J. Speech Lang. Heart Res. 1986, 29, 434–446. [Google Scholar] [CrossRef]
  35. Calandruccio, L.; Porter, H.L.; Leibold, L.J.; Buss, E. The Clear-Speech Benefit for School-Age Children: Speech-in-Noise and Speech-in-Speech Recognition. J. Speech Lang. Heart Res. 2020, 63, 4265–4276. [Google Scholar] [CrossRef]
  36. Haake, M.; Hansson, K.; Gulz, A.; Schötz, S.; Sahlén, B. The slower the better? Does the speaker’s speech rate influence children’s performance on a language comprehension test? Int. J. Speech-Lang. Pathol. 2014, 16, 181–190. [Google Scholar] [CrossRef] [Green Version]
  37. Meemann, K.; Smiljanić, R. Intelligibility of Noise-Adapted and Clear Speech in Energetic and Informational Maskers for Native and Nonnative Listeners. J. Speech Lang. Heart Res. 2022, 65, 1263–1281. [Google Scholar] [CrossRef]
  38. Van Engen, K.J.; Phelps, J.E.B.; Smiljanic, R.; Chandrasekaran, B. Enhancing Speech Intelligibility: Interactions Among Context, Modality, Speech Style, and Masker. J. Speech Lang. Heart Res. 2014, 57, 1908–1918. [Google Scholar] [CrossRef]
  39. Miller, S.; Wolfe, J.; Neumann, S.; Schafer, E.C.; Galster, J.; Agrawal, S. Remote Microphone Systems for Cochlear Implant Recipients in Small Group Settings. J. Am. Acad. Audiol. 2022, 33, 142–148. [Google Scholar] [CrossRef]
  40. Syrdal, A. Acoustic variability in spontaneous conversational speech of American English talkers. In Proceedings of the Fourth International Conference on Spoken Language Processing, ICSLP’96 1, Philadelphia, PA, USA, 6 August 2002; pp. 438–441. [Google Scholar] [CrossRef] [Green Version]
  41. Alain, C.; Du, Y.; Bernstein, L.J.; Barten, T.; Banai, K. Listening under difficult conditions: An activation likelihood estimation meta-analysis. Hum. Brain Mapp. 2018, 39, 2695–2709. [Google Scholar] [CrossRef] [Green Version]
  42. Pichora-Fuller, M.K.; Smith, S.L. Effects of age, hearing loss, and linguistic complexity on listening effort as measured by working memory span. J. Acoust. Soc. Am. 2015, 137, 2235. [Google Scholar] [CrossRef]
  43. Pisoni, D.B.; Geers, A.E. Working Memory in Deaf Children with Cochlear Implants: Correlations between Digit Span and Measures of Spoken Language Processing. Ann. Otol. Rhinol. Laryngol. 2000, 109, 92–93. [Google Scholar] [CrossRef]
  44. Mikic, B.; Miric, D.; Nikolic-Mikic, M.; Ostojic, S.; Asanovic, M. Age at implantation and auditory memory in cochlear implanted children. Cochlea-Implant. Int. 2014, 15, S33–S35. [Google Scholar] [CrossRef]
  45. Conway, C.M.; Pisoni, D.B.; Anaya, E.M.; Karpicke, J.; Henning, S.C. Implicit sequence learning in deaf children with cochlear implants. Dev. Sci. 2011, 14, 69–82. [Google Scholar] [CrossRef] [Green Version]
  46. Conway, C.M.; Pisoni, D.B.; Kronenberger, W.G. The importance of sound for cognitive sequencing abilities: The auditory scaffolding hypothesis. Curr. Dir. Psychol. Sci. 2009, 18, 275–279. [Google Scholar] [CrossRef] [Green Version]
  47. Siegel, J. Factors affecting notetaking performance. Int. J. List. 2022, 1–13. [Google Scholar] [CrossRef]
  48. Lecumberri, M.L.G.; Cooke, M.; Cutler, A. Non-native speech perception in adverse conditions: A review. Speech Commun. 2010, 52, 864–886. [Google Scholar] [CrossRef] [Green Version]
  49. Peng, Z.E.; Wang, L.M. Listening Effort by Native and Nonnative Listeners Due to Noise, Reverberation, and Talker Foreign Accent during English Speech Perception. J. Speech Lang. Heart Res. 2019, 62, 1068–1081. [Google Scholar] [CrossRef]
  50. Van Engen, K.J.; Peelle, J.E. Listening effort and accented speech. Front. Hum. Neurosci. 2014, 8, 577. [Google Scholar] [CrossRef] [Green Version]
  51. Gordon-Salant, S.; Yeni-Komshian, G.H.; Fitzgibbons, P.J. Recognition of accented English in quiet and noise by younger and older listeners. J. Acoust. Soc. Am. 2010, 128, 3152–3160. [Google Scholar] [CrossRef]
  52. Pichora-Fuller, M.K. Working memory and speechreading. In Speechreading by Humans and Machines: Models, Systems, and Applications; Springer Berlin Heidelberg: Berlin, Heidelberg, 1996; pp. 257–274. [Google Scholar]
  53. Moradi, S.; Lidestam, B.; Danielsson, H.; Ng, E.H.N.; Rönnberg, J. Visual Cues Contribute Differentially to Audiovisual Perception of Consonants and Vowels in Improving Recognition and Reducing Cognitive Demands in Listeners with Hearing Impairment Using Hearing Aids. J. Speech Lang. Heart Res. 2017, 60, 2687–2703. [Google Scholar] [CrossRef] [Green Version]
  54. Gagne, J.P.; Besser, J.; Lemke, U. Behavioral assessment of listening effort using a dual-task paradigm: A review. Trends Hear. 2017, 21, 2331216516687287. [Google Scholar] [CrossRef] [Green Version]
  55. Homans, N.C.; Vroegop, J.L. The impact of face masks on the communication of adults with hearing loss during COVID-19 in a clinical setting. Int. J. Audiol. 2022, 61, 365–370. [Google Scholar] [CrossRef]
  56. Rahne, T.; Fröhlich, L.; Plontke, S.; Wagner, L. Influence of surgical and N95 face masks on speech perception and listening effort in noise. PLoS ONE 2021, 16, e0253874. [Google Scholar] [CrossRef]
  57. Tofanelli, M.; Capriotti, V.; Gatto, A.; Boscolo-Rizzo, P.; Rizzo, S.; Tirelli, G. COVID-19 and Deafness: Impact of Face Masks on Speech Perception. J. Am. Acad. Audiol. 2022, 33, 098–104. [Google Scholar] [CrossRef]
  58. Kyle, F.E.; Campbell, R.; Mohammed, T.; Coleman, M.; MacSweeney, M. Speechreading Development in Deaf and Hearing Children: Introducing the Test of Child Speechreading. J. Speech Lang. Heart Res. 2013, 56, 416–426. [Google Scholar] [CrossRef] [Green Version]
  59. Kyle, F.E.; Campbell, R.; MacSweeney, M. The relative contributions of speechreading and vocabulary to deaf and hearing children’s reading ability. Res. Dev. Disabil. 2016, 48, 13–24. [Google Scholar] [CrossRef] [Green Version]
  60. Rogers, C.L.; Lister, J.J.; Febo, D.M.; Besing, J.M.; Abrams, H.B. Effects of bilingualism, noise, and reverberation on speech perception by listeners with normal hearing. Appl. Psycholinguist. 2006, 27, 465–485. [Google Scholar] [CrossRef]
  61. Lalonde, K.; McCreery, R.W. Audiovisual Enhancement of Speech Perception in Noise by School-Age Children Who Are Hard of Hearing. Ear Heart 2020, 41, 705–719. [Google Scholar] [CrossRef]
  62. Moore, D.R.; Cowan, J.A.; Riley, A.; Edmondson-Jones, A.M.; Ferguson, M.A. Development of Auditory Processing in 6- to 11-Yr-Old Children. Ear Heart 2011, 32, 269–285. [Google Scholar] [CrossRef]
  63. Dawes, P.; Bishop, D.V.M. Maturation of Visual and Auditory Temporal Processing in School-Aged Children. J. Speech Lang. Heart Res. 2008, 51, 1002–1015. [Google Scholar] [CrossRef]
  64. Krizman, J.; Tierney, A.; Fitzroy, A.B.; Skoe, E.; Amar, J.; Kraus, N. Continued maturation of auditory brainstem function during adolescence: A longitudinal approach. Clin. Neurophysiol. 2015, 126, 2348–2355. [Google Scholar] [CrossRef] [Green Version]
  65. Moore, D.R.; Rosen, S.; Bamiou, D.-E.; Campbell, N.G.; Sirimanna, T. Evolving concepts of developmental auditory processing disorder (APD): A British Society of Audiology APD Special Interest Group ‘white paper’. Int. J. Audiol. 2013, 52, 3–13. [Google Scholar] [CrossRef]
  66. Leibold, L.J. Speech Perception in Complex Acoustic Environments: Developmental Effects. J. Speech Lang. Heart Res. 2017, 60, 3001–3008. [Google Scholar] [CrossRef] [Green Version]
  67. Neves, I.F.; Schochat, E. Auditory processing maturation in children with and without learning difficulties. Pró-Fono Rev. Atualização Científica 2005, 17, 311–320. [Google Scholar] [CrossRef] [Green Version]
  68. Ponton, C.W.; Eggermont, J.J.; Kwong, B.; Don, M. Maturation of human central auditory system activity: Evidence from multi-channel evoked potentials. Clin. Neurophysiol. 2000, 111, 220–236. [Google Scholar] [CrossRef]
  69. Squires, B.; Bird, E.K.-R. Self-Reported Listening Abilities in Educational Settings of Typically Hearing Children and Those Who Are Deaf/Hard-of-Hearing. Commun. Disord. Q. 2023, 44, 107–116. [Google Scholar] [CrossRef]
  70. Oosthuizen, I.; Picou, E.M.; Pottas, L.; Myburgh, H.C.; Swanepoel, D.W. Listening Effort in School-Aged Children with Limited Useable Hearing Unilaterally: Examining the Effects of a Personal, Digital Remote Microphone System and a Contralateral Routing of Signal System. Trends Heart 2021, 25, 2331216520984700. [Google Scholar] [CrossRef]
  71. Gabova, K.; Meier, Z.; Tavel, P. Parents’ experiences of remote microphone systems for children with hearing loss. Disabil. Rehabil. Assist. Technol. 2022, 1–10. [Google Scholar] [CrossRef]
  72. Davis, H.; Schlundt, D.; Bonnet, K.; Camarata, S.; Hornsby, B.; Bess, F.H. Listening-Related Fatigue in Children with Hearing Loss: Perspectives of Children, Parents, and School Professionals. Am. J. Audiol. 2021, 30, 929–940. [Google Scholar] [CrossRef]
  73. McGarrigle, R.; Gustafson, S.J.; Hornsby, B.W.Y.; Bess, F.H. Behavioral Measures of Listening Effort in School-Age Children: Examining the Effects of Signal-to-Noise Ratio, Hearing Loss, and Amplification. Ear Heart 2019, 40, 381–392. [Google Scholar] [CrossRef]
  74. Peelle, J.E. Listening Effort: How the Cognitive Consequences of Acoustic Challenge Are Reflected in Brain and Behavior. Ear Heart 2018, 39, 204–214. [Google Scholar] [CrossRef]
  75. Winn, M.B.; Teece, K.H. Listening Effort Is Not the Same as Speech Intelligibility Score. Trends Heart 2021, 25, 23312165211027688. [Google Scholar] [CrossRef]
  76. Chen, J.; Wang, Z.; Dong, R.; Fu, X.; Wang, Y.; Wang, S. Effects of Wireless Remote Microphone on Speech Recognition in Noise for Hearing Aid Users in China. Front. Neurosci. 2021, 15, 643205. [Google Scholar] [CrossRef]
  77. Fitzpatrick, E.M.; Fournier, P.; Séguin, C.; Armstrong, S.; Chénier, J.; Schramm, D. Users’ perspectives on the benefits of FM systems with cochlear implants. Int. J. Audiol. 2010, 49, 44–53. [Google Scholar] [CrossRef]
  78. Lewis, D.E.; Feigin, J.A.; Karasek, A.E.; Stelmachowicz, P.G. Evaluation and Assessment of FM Systems. Ear Heart 1991, 12, 268–280. [Google Scholar] [CrossRef]
  79. Lewis, D.; Spratford, M.; Stecker, G.C.; McCreery, R.W. Remote-Microphone Benefit in Noise and Reverberation for Children Who are Hard of Hearing. J. Am. Acad. Audiol. 2022, 21, 642–653. [Google Scholar] [CrossRef]
  80. Madell, J. FM Systems as Primary Amplification for Children with Profound Hearing Loss. Ear Heart 1992, 13, 102–107. [Google Scholar] [CrossRef]
  81. Snapp, H.; Morgenstein, K.; Sanchez, C.; Coto, J.; Cejas, I. Comparisons of performance in pediatric bone conduction implant recipients using remote microphone technology. Int. J. Pediatr. Otorhinolaryngol. 2020, 139, 110444. [Google Scholar] [CrossRef]
  82. Thibodeau, L.M. Between the Listener and the Talker: Connectivity Options. Semin. Heart 2020, 41, 247–253. [Google Scholar] [CrossRef]
  83. Zanin, J.; Rance, G. Functional hearing in the classroom: Assistive listening devices for students with hearing impairment in a mainstream school setting. Int. J. Audiol. 2016, 55, 723–729. [Google Scholar] [CrossRef]
  84. Martin, A.; Cox, J. Using Technology to Enhance Learning for Students Who Are Deaf/Hard of Hearing. In Using Technology to Enhance Special Education (Advances in Special Education); Bakken, J.P., Obiakor, F.E., Eds.; Emerald Publishing Limited: Bingley, UK, 2023; Volume 37, pp. 71–86. [Google Scholar] [CrossRef]
  85. Sassano, C. Comparison of Classroom Accommodations for Students Who Are Deaf/Hard-of-Hearing. 2016. Available online: http://purl.flvc.org/fsu/fd/FSU_libsubv1_scholarship_submission_1461183310 (accessed on 22 December 2022).
  86. Berndsen, M.; Luckner, J. Supporting Students Who Are Deaf or Hard of Hearing in General Education Classrooms. Commun. Disord. Q. 2012, 33, 111–118. [Google Scholar] [CrossRef]
  87. Fry, A.C. Survey of Personal FM Systems in the Classroom: Consistency of Use and Teacher Attitudes. 2014. Available online: https://kb.osu.edu/handle/1811/61601 (accessed on 5 January 2023).
  88. Millett, P. The role of sound field amplification for English Language Learners. J. Educ. Pediatr. (Re) Habilit. Audiol. 2018, 35. [Google Scholar]
  89. Nicolaou, C.; Matsiola, M.; Kalliris, G. Technology-Enhanced Learning and Teaching Methodologies through Audiovisual Media. Educ. Sci. 2019, 9, 196. [Google Scholar] [CrossRef] [Green Version]
  90. Aljedaani, W.; Krasniqi, R.; Aljedaani, S.; Mkaouer, M.W.; Ludi, S.; Al-Raddah, K. If online learning works for you, what about deaf students? Emerging challenges of online learning for deaf and hearing-impaired students during COVID-19: A literature review. Univers. Access Inf. Soc. 2022, 22, 1027–1046. [Google Scholar] [CrossRef]
  91. Johnson, C.D. Remote Learning for Children with Auditory Access Needs: What We Have Learned during COVID-19. Semin. Heart 2020, 41, 302–308. [Google Scholar] [CrossRef]
  92. Millett, P. Accommodating students with hearing loss in a teacher of the deaf/hard of hearing education program. J. Educ. Audiol. 2014, 15, 84–90. [Google Scholar]
  93. Millett, P. Improving accessibility with captioning: An overview of the current state of technology. Can. Audiol. 2019, 6, 1–5. [Google Scholar]
  94. Millett, P. Accuracy of Speech-to-Text Captioning for Students Who are Deaf or Hard of Hearing. J. Educ. Pediatr. (Re) Habilit. Audiol. 2021, 25, 1–13. [Google Scholar]
  95. Millett, P.; Mayer, C. Integrating onsite and online learning in a teacher of the deaf and hard of hearing education program. J. Online Learn. Teach. 2010, 6, 1–10. [Google Scholar]
  96. Walker, E.A.; Curran, M.; Spratford, M.; Roush, P. Remote microphone systems for preschool-age children who are hard of hearing: Access and utilization. Int. J. Audiol. 2019, 58, 200–207. [Google Scholar] [CrossRef] [PubMed]
  97. Gabbard, S.A. The use of FM technology for infants and young children. In Proceedings of the International Phonak conference: Achieving Clear Communication Employing Sound Solutions, Chicago; Fabry, D., Johnson, C.D., Eds.; Phonak: Geneva, Switzerland, 2004; pp. 93–99. [Google Scholar]
  98. Sexton, J.; Madell, J. Auditory Access for Infants and Toddlers Utilizing Personal FM Technology. Perspect. Heart Heart Disord. Child. 2008, 18, 58–62. [Google Scholar] [CrossRef]
  99. Pennington, C.G.; Costine, J.; Dunbar, M.; Jennings, R. Deafness and hard of hearing: Adapting sport and physical activity. In Proceedings of the 9th International Conference on Sport Sciences Research and Technology Support, Online, 28–29 October 2021. [Google Scholar]
  100. Moeller, M.P.; Tomblin, J.B. An Introduction to the Outcomes of Children with Hearing Loss Study. Ear Heart 2015, 36, 4S–13S. [Google Scholar] [CrossRef] [Green Version]
  101. Moeller, M.P.; Tomblin, J.B. Epilogue. Ear Heart 2015, 36, 92S–98S. [Google Scholar] [CrossRef] [Green Version]
  102. Thibodeau, L.M.; Schafer, E. Issues to Consider Regarding Use of FM Systems with Infants with Hearing Loss. Perspect. Heart Conserv. Occup. Audiol. 2002, 8, 18–21. [Google Scholar] [CrossRef]
  103. Benítez-Barrera, C.R.; Angley, G.P.; Tharpe, A.M. Remote Microphone System Use at Home: Impact on Caregiver Talk. J. Speech Lang. Heart Res. 2018, 61, 399–409. [Google Scholar] [CrossRef] [PubMed]
  104. Benítez-Barrera, C.R.; Thompson, E.C.; Angley, G.P.; Woynaroski, T.; Tharpe, A.M. Remote Microphone System Use at Home: Impact on Child-Directed Speech. J. Speech Lang. Heart Res. 2019, 62, 2002–2008. [Google Scholar] [CrossRef]
  105. Thompson, E.C.; Benítez-Barrera, C.R.; Tharpe, A.M. Home use of remote mi-crophone systems by children with hearing loss. Hear. J. 2020, 73, 34–35. [Google Scholar] [CrossRef]
  106. Thompson, E.C.; Benítez-Barrera, C.R.; Angley, G.P.; Woynaroski, T.; Tharpe, A.M. Remote Microphone System Use in the Homes of Children with Hearing Loss: Impact on Caregiver Communication and Child Vocalizations. J. Speech Lang. Heart Res. 2020, 63, 633–642. [Google Scholar] [CrossRef] [PubMed]
  107. McCracken, W.; Mulla, I. Frequency Modulation for Preschoolers with Hearing Loss. Semin. Heart 2014, 35, 206–216. [Google Scholar] [CrossRef]
  108. Curran, M.; Walker, E.A.; Roush, P.; Spratford, M. Using Propensity Score Matching to Address Clinical Questions: The Impact of Remote Microphone Systems on Language Outcomes in Children Who Are Hard of Hearing. J. Speech Lang. Heart Res. 2019, 62, 564–576. [Google Scholar] [CrossRef]
  109. Moeller, M.P.; Donaghy, K.F.; Beauchaine, K.L.; Lewis, D.E.; Stelmachowicz, P.G. Longitudinal Study of FM System Use in Nonacademic Settings: Effects on Language Development. Ear Heart 1996, 17, 28–41. [Google Scholar] [CrossRef]
  110. Beecher, F. A vision of the future: A “concept hearing aid” with Bluetooth wireless technology. Heart J. 2000, 53, 40–44. [Google Scholar] [CrossRef]
  111. Stone, M.; Dillon, H.; Chilton, H.; Glyde, H.; Mander, J.; Lough, M.; Wilbraham, K. To Generate Evidence on the Effectiveness of Wireless Streaming Technologies for Deaf Children, Compared to Radio Aids; Report for the National Deaf Children’s Society: London, UK, 2022. [Google Scholar]
  112. Gernsbacher, M.A. Video Captions Benefit Everyone. Policy Insights Behav. Brain Sci. 2015, 2, 195–202. [Google Scholar] [CrossRef] [Green Version]
  113. Perez, M.M.; Noortgate, W.V.D.; Desmet, P. Captioned video for L2 listening and vocabulary learning: A meta-analysis. System 2013, 41, 720–739. [Google Scholar] [CrossRef]
  114. Kent, M.; Ellis, K.; Latter, N.; Peaty, G. The Case for Captioned Lectures in Australian Higher Education. Techtrends 2018, 62, 158–165. [Google Scholar] [CrossRef]
  115. Kawas, S.; Karalis, G.; Wen, T.; Ladner, R.E. Improving real-time captioning experiences for deaf and hard of hearing students. In Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility, Reno, NV, USA, 23–26 October 2016; pp. 15–23. [Google Scholar]
  116. Cawthon, S.W.; Leppo, R.; Ge, J.J.; Bond, M. Accommodations Use Patterns in High School and Postsecondary Settings for Students Who Are d/Deaf or Hard of Hearing. Am. Ann. Deaf. 2015, 160, 9–23. [Google Scholar] [CrossRef] [PubMed]
  117. Powell, D.; Hyde, M.; Punch, R. Inclusion in Postsecondary Institutions with Small Numbers of Deaf and Hard-of-Hearing Students: Highlights and Challenges. J. Deaf. Stud. Deaf. Educ. 2014, 19, 126–140. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  118. Ranchal, R.; Taber-Doughty, T.; Guo, Y.; Bain, K.; Martin, H.; Robinson, J.P.; Duerstock, B.S. Using speech recognition for real-time captioning and lecture transcription in the classroom. IEEE Trans. Learn. Technol. 2013, 6, 299–311. [Google Scholar] [CrossRef]
  119. Elliot, L.; Foster, S.; Stinson, M. Student Study Habits Using Notes from a Speech-to-Text Support Service. Except. Child. 2002, 69, 25–40. [Google Scholar] [CrossRef]
  120. Elliot, L.; Stinson, M.; Francis, P. C-Print Tablet PC support for deaf and hard of hearing students. In Proceedings of the ICERI2009 Proceedings, Madrid, Spain, 16–18 November 2009; pp. 2454–2473. [Google Scholar]
  121. Elliot, L.B.; Stinson, M.S.; McKee, B.G.; Everhart, V.S.; Francis, P.J. College Students’ Perceptions of the C-Print Speech-to-Text Transcription System. J. Deaf. Stud. Deaf. Educ. 2001, 6, 285–298. [Google Scholar] [CrossRef] [Green Version]
  122. Harish, K. New Zealand Early Childhood Education Teachers’ Knowledge and Experience of Supporting Hard of Hearing or Deaf Children. Master’s Thesis, University of Canterbury, Christchurch, New Zealand, 2022. [Google Scholar]
  123. Gustafson, S.J.; Ricketts, T.A.; Tharpe, A.M. Hearing Technology Use and Management in School-Age Children: Reports from Data Logs, Parents, and Teachers. J. Am. Acad. Audiol. 2017, 28, 883–892. [Google Scholar] [CrossRef]
  124. Barker, R.E. Teacher and Student Experiences of Remote Microphone Systems. Master’s Thesis, University of Canterbury, Christchurch, New Zealand, 2020. Available online: https://ir.canterbury.ac.nz/handle/10092/100086 (accessed on 15 March 2023).
  125. Esturaro, G.T.; Youssef, B.C.; Ficker, L.B.; Deperon, T.M.; Mendes, B.d.C.A.; Novaes, B.C.d.A.C. Adesão ao uso do Sistema de Microfone Remoto em estudantes com deficiência auditiva usuários de dispositivos auditivos. Codas 2022, 34. [Google Scholar] [CrossRef]
  126. Groth, J. Exploring teenagers’ access and use of assistive hearing technology. ENT Audiol. News 2017, 25. [Google Scholar]
  127. McPherson, B. Hearing assistive technologies in developing countries: Background, achievements and challenges. Disabil. Rehabil. Assist. Technol. 2014, 9, 360–364. [Google Scholar] [CrossRef] [PubMed]
  128. Hersh, M.; Mouroutsou, S. Learning technology and disability—Overcoming barriers to inclusion: Evidence from a multicountry study. Br. J. Educ. Technol. 2019, 50, 3329–3344. [Google Scholar] [CrossRef]
  129. Schafer, E.C.; Dunn, A.; Lavi, A. Educational Challenges during the Pandemic for Students Who Have Hearing Loss. Lang. Speech Heart Serv. Sch. 2021, 52, 889–898. [Google Scholar] [CrossRef] [PubMed]
  130. Taylor, K.; Neild, R.; Fitzpatrick, M. Universal Design for Learning: Promoting Access in Early Childhood Education for Deaf and Hard of Hearing Children. Perspect. Early Child. Psychol. Educ. 2023, 5, 4. [Google Scholar] [CrossRef]
  131. Houtrow, A.; Harris, D.; Molinero, A.; Levin-Decanini, T.; Robichaud, C. Children with disabilities in the United States and the COVID-19 pandemic. J. Pediatr. Rehabil. Med. 2020, 13, 415–424. [Google Scholar] [CrossRef]
  132. Kim, J.Y.; Fienup, D.M. Increasing Access to Online Learning for Students with Disabilities during the COVID-19 Pandemic. J. Spéc. Educ. 2022, 55, 213–221. [Google Scholar] [CrossRef]
  133. Taggart, L.; Mulhall, P.; Kelly, R.; Trip, H.; Sullivan, B.; Wallén, E.F. Preventing, mitigating, and managing future pandemics for people with an intellectual and developmental disability-Learnings from COVID-19: A scoping review. J. Policy Pract. Intellect. Disabil. 2022, 19, 4–34. [Google Scholar] [CrossRef]
  134. Kaufmann, T.B.; Foroogozar, M.; Liss, J.; Berisha, V. Requirements for mass adoption of assistive listening technology by the general public. arXiv 2023, arXiv:2303.02523. [Google Scholar]
  135. Fink, M.; Butler, J.; Stremlau, T.; Kerschbaum, S.L.; Brueggemann, B.J. Honoring access needs at academic conferences through Computer Assisted Real-time Captioning (CART) and sign language interpreting. Coll. Compos. Commun. 2020, 72, 103–106. [Google Scholar]
  136. Morris, K.K.; Frechette, C.; Dukes, L.; Stowell, N.; Topping, N.E.; Brodosi, D. Closed captioning matters: Examining the value of closed captions for all students. J. Postsecond. Educ. Disabil. 2016, 29, 231–238. [Google Scholar]
  137. McDonnell, E.J.; Liu, P.; Goodman, S.M.; Kushalnagar, R.; Froehlich, J.E.; Findlater, L. Social, Environmental, and Technical: Factors at Play in the Current Use and Future Design of Small-Group Captioning. Proc. ACM Hum.-Comput. Interact. 2021, 5, 1–25. [Google Scholar] [CrossRef]
  138. Li, F.M.; Lu, C.; Lu, Z.; Carrington, P.; Truong, K.N. An Exploration of Captioning Practices and Challenges of Individual Content Creators on YouTube for People with Hearing Impairments. Proc. ACM Hum.-Comput. Interact. 2022, 6, 1–26. [Google Scholar] [CrossRef]
  139. McDonnell, E.J. Understanding Social and Environmental Factors to Enable Collective Access Approaches to the Design of Captioning Technology. ACM SIGACCESS Access. Comput. 2023, 135, 1. [Google Scholar] [CrossRef]
  140. Peng, Y.-H.; Hsi, M.-W.; Taele, P.; Lin, T.-Y.; Lai, P.-E.; Hsu, L.; Chen, T.-C.; Wu, T.-Y.; Chen, Y.-A.; Tang, H.-H.; et al. SpeechBubbles: Enhancing Captioning Experiences for Deaf and Hard-of-Hearing People in Group Conversations. In Proceedings of the CHI’18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–10. [Google Scholar] [CrossRef]
Figure 1. Technologies supporting access to auditory and visual information.

Millett, P. The Connected Life: Using Access Technology at Home, at School and in the Community. Educ. Sci. 2023, 13, 761. https://doi.org/10.3390/educsci13080761

