Article

What to Measure? Development of a Core Outcome Set to Assess Remote Technologies for Cochlear Implant Users

1 School of Medicine, University of Western Australia, Crawley, WA 6009, Australia
2 Curtin School of Allied Health, Curtin University, Bentley, WA 6102, Australia
3 Incept Labs, Sydney, NSW 2009, Australia
4 School of Health and Rehabilitation Sciences, The University of Queensland, Brisbane, QLD 4072, Australia
5 Department of Community and Clinical Health, La Trobe University, Bundoora, VIC 3086, Australia
6 Sydney School of Health Sciences, University of Sydney, Sydney, NSW 2006, Australia
7 Curtin Enable Institute, Curtin University, Bentley, WA 6102, Australia
* Authors to whom correspondence should be addressed.
J. Clin. Med. 2025, 14(21), 7697; https://doi.org/10.3390/jcm14217697
Submission received: 25 August 2025 / Revised: 15 October 2025 / Accepted: 15 October 2025 / Published: 30 October 2025
(This article belongs to the Special Issue The Challenges and Prospects in Cochlear Implantation)

Abstract

Background/Objectives: Remote cochlear implant (CI) services have proved feasible in clinical studies, but their implementation in routine clinical practice remains limited. Effective implementation requires demonstration of outcomes at least equivalent to those of in-person care. The use of outcome measures (e.g., specific tools such as speech tests or surveys) that are relevant and sensitive to both modes of service facilitates evidence-based provision of CI services. Following our previous study, which developed a core outcome domain set (CODS) (i.e., a set of CI outcome areas important to measure), this study aimed to (1) review current awareness and use of outcome measures implemented clinically, in person or remotely; and (2) provide recommendations for a pragmatic core outcome set (COS) to assess remote technologies for CI users. Methods: Expert Australian and New Zealand clinical CI professionals (n = 20) completed an online survey regarding their use of, and familiarity with, pre-identified outcome measures mapping to the previously identified CODS. Respondents rated the outcome measures' usefulness, ease of use, trustworthiness, and likelihood of future use if recommended. Stakeholder workshops (clinicians, n = 3; CI users, n = 4) finalised recommendations. Results: Four of the six most regularly used and familiar measures were speech perception tests: BKB-A sentences, CNC words, CUNY sentences, and AB words. The long- and short-form Speech, Spatial, and Qualities of Hearing Scales (SSQ/SSQ-12) were the top-ranked patient-reported outcome measures (PROMs). These outcome measures were also perceived as the most trustworthy, easy to use, and likely to be used if recommended. Conclusions: We recommend a pragmatic COS, relevant to both remote and in-person delivery of CI services, including recommendations for measuring service, clinician-measured, and patient-reported outcomes, and for how these might be developed in future.

1. Introduction

Opportunities to access cochlear implant (CI) care via remote technologies, rather than the traditional method of accessing care through in-person appointments at specialised clinics, have rapidly expanded in recent years [1,2,3,4,5,6]. Various studies have shown that both synchronous and asynchronous options for remote CI services, such as intraoperative CI telemetry, implant programming, electrode-specific measures, and post-operative assessment of speech recognition, management, and review, are possible and feasible [5,6,7,8,9]. The use of telehealth for CI service provision has the potential to significantly improve the efficiency, effectiveness, and equity of care for CI users, in a personalised manner. However, it is vital that its implementation is well-considered [1,10,11].
Common barriers to the integration of remote care into hearing services, for either CIs or hearing aids, include a lack of adequate funding frameworks; poor integration with current clinical practices; mistrust of the accuracy and quality of remote care services, measures, and outcomes; and audiologists’ lack of confidence in their clients’ ability to use the remote technology [1,2,12,13,14]. Nevertheless, most studies show that a hybrid system, in which both remote and in-person care are provided, is the service delivery method preferred by both clients and audiologists [9,13,14,15].
In order to implement remote care effectively into clinical practice, it is essential to demonstrate that remote service provision provides equivalent, if not superior, care compared to the current standard of in-person clinical care [16]. This is necessary for regulatory purposes, as well as to ensure that CI service providers and users have sufficient trust and confidence in the outcomes of the remote service to consider using it.
Within audiology, a vast number of outcome measures exist for measuring the effectiveness of CIs and hearing aids [17,18,19]. Danermark et al. [20] suggested a concise set of outcome measures for the assessment of hearing in general, and Allen et al. [21] identified a core outcome domain set (i.e., a set of patient outcome areas that should be assessed, but not specific tools for assessment) for hearing rehabilitation, primarily for hearing aids. These sets of measures and domains, however, do not address some of the specific auditory issues associated with severe–profound hearing loss or the technical issues associated with the use of CIs. More specific to CIs, Andries et al. [22] recommended a CI-specific outcome assessment protocol for adult CI users, selected by an expert group of CI professionals based on the WHO International Classification of Functioning, Disability and Health (ICF) framework. Assessing CI outcomes such as speech perception is particularly problematic in remote care, given the difficulty of determining presentation levels and establishing standardised test environments (e.g., a sound-treated booth) compared to in-person clinical measures. Currently, no set of CI-specific outcome measures exists for use in combination with remote technology.
The use of relevant and sensitive outcome measures to evaluate CI services delivered via remote technologies is vital to facilitate the provision of evidence-based healthcare services, allowing stakeholders to make informed decisions about how to best care for their patients. The current approach to audiological outcome measures is essentially non-standardised [23], both for in-person and remote services, making it difficult to compare and integrate results across different studies and services, for example, in systematic reviews with meta-analyses.
To address these issues, over the last decade there has been an increase in the development and use of core outcome sets (COSs) [24]. A COS is an agreed, standardised set of outcomes that should be measured and reported as a minimum dataset for a specific condition [25], ideally with input from end-users, including patients, clinicians, industry, and other key stakeholders. Outcome measures (specific measurement tools) are identified as part of pre-specified outcome domains (i.e., CI outcome areas deemed important to measure by CI users and CI professionals: what should be measured, not the specific measurement tools). A core outcome domain set (CODS) has been defined for hearing aids, with separate, and significant, input from both patient and hearing care professional stakeholder groups [21], based on best practice guidelines [26]; however, nothing similar exists for CIs. Traditionally, CI outcomes have focused on the domains of speech perception and CI uptake, although there is growing evidence that patient-reported outcome measures (PROMs) offer a more functional, real-world outcome, tapping into mechanisms of benefit different from those captured by speech perception outcomes [27,28]. More recently, there has been a growing number of PROMs specific to cochlear implants [29,30], although the extent of their use in clinical practice is unclear.
With the increase in use of remote technologies, there is a need to consider the meaningful domains that are specific to these technologies, which may also be relevant to in-person services. For example, empowerment has recently emerged as a feature of remote technologies [31,32], but it likely also applies to in-person services. Furthermore, there are other considerations specific to the service delivery of remote technologies that are often identified as benefits to both patients and services, such as reduced time, convenience, and costs [1,2,12,14].
An essential tenet of this research is to be able to demonstrate the equivalence of remote care services, either as a stand-alone or hybrid model of care, to the ‘gold-standard’ in-person clinical model of care. Thus, the overall objective of this research was to develop a COS to evaluate remote technologies delivered within CI services, to maximise the potential benefits of this model of care. Feedback was sought from relevant parties (e.g., CI users and their families, service providers including management and clinicians, CI manufacturers, and CI advocacy groups) to ensure a broad range of perspectives was considered. Outcome domains encompassing outcome measures (measurement tools) specific to the delivery of remote technologies, in terms of both (i) patient outcomes (i.e., benefits of remote technologies for CI patients) and (ii) service delivery, were included, ensuring the outcomes can be easily integrated into clinical care. This paper reports on the final phase of a broader three-phase study (see Figure 1), which used the COS development roadmap described by Hall et al. (2015) [26] as the theoretical underpinning. The current study followed Stage 2 (identify and agree on outcome measures) of the Hall et al. (2015) [26] roadmap, in which outcome measures were systematically appraised based on COSMIN principles. The study was registered on the COMET (Core Outcome Measures in Effectiveness Trials) website: https://www.comet-initiative.org/Studies/Details/2586 (accessed on 1 October 2022).
Phase one of the study included a systematic review of outcome measures identified in studies documenting the use of remote services for the provision of CI and hearing-aid care [28]. A total of 250 different outcome measures were identified, with CI studies revealing significantly more outcomes in the ear and labyrinth domains, compared to hearing-aid studies (43% vs. 10%), and hearing-aid studies revealing significantly more outcomes in the cognitive (28% vs. 5%) and emotional (35% vs. 10%) domains than CI studies.
Phase two involved a combination of stakeholder workshops with CI users and their significant others, CI professionals, and hearing advocates, followed by a series of three parallel e-Delphi reviews conducted separately for 74 CI professionals and 114 CI users across Australia, the UK, and the USA. This utilised a methodology described by Allen et al. [33] outlining the development of a CODS for adult CI outcome domains. This phase aimed to identify, by consensus, the most important outcome domains based on stakeholder input [33]. The Delphi review assessed 58 domains across three supra-domains: Service, Clinical (assessment-based), and Patient (self-report). The top three domains, for which a consensus of ≥80% was achieved within each supra-domain for both groups (i.e., CI users and CI professionals), are shown in Table 1. Agreement was good for the Service supra-domain; however, consensus was poorer for the Clinical supra-domain, and there was no between-group agreement for the Patient supra-domain. Many domains ranked highly by CI users were ranked as far less important by professionals.
The aim of the final phase of the study reported in this paper, Phase three, was to identify a COS to evaluate remote technologies delivered within CI services based on the previously defined CODS. Due to the substantial mismatch in outcome domains for both the Clinical and Patient supra-domains between CI users and CI professionals noted in Phase two, Allen et al. [33] recommended inclusion of domains ranked most highly by CI professionals for the Clinical supra-domain, and by CI users for the Patient supra-domain in an interim, pragmatic COS. This would facilitate an easy transition into a robust, pragmatic, and clinically acceptable COS utilising clinical measures used regularly and trusted by CI programmes across Australia and New Zealand.

2. Materials and Methods

Outcome measures that mapped onto the CODS were selected from those identified in Phase one, and outcome measures identified by the research team as commonly used in clinical care in Australia and New Zealand were included in Phase three of the study. The list of identified outcome measures was appraised based on the Hall et al. (2015) [26] roadmap; content validity and developmental methodology were used to determine which outcome measures were included in Phase three. The final list of included outcome measures, consisting of 43 patient-reported outcome measures (PROMs) and 10 speech perception outcome measures (speech perception tests) (see Supplementary Tables S1 and S2), was presented to experienced CI clinicians from Australia and New Zealand in a single-round online survey. Each clinician was asked to rate their use of each measure and, if used, its usefulness, trustworthiness, ease of use, and likelihood of future use if it were recommended to them.

2.1. Single-Round Online Survey

Expert CI clinicians (n = 57) involved in the provision of CI services at large adult CI services in Australia and New Zealand were identified by authors CS and IB from professional contacts and invited to participate via email. Clinicians were excluded if they had less than 12 months of experience in adult CI service provision or did not work with adult CI recipients. Individuals who agreed to participate completed a single-round online survey administered using Qualtrics software (Qualtrics, Provo, UT, USA; https://www.qualtrics.com/, accessed 1 November 2023). The online survey consisted of questions about the use of 10 speech perception tests and 43 patient-reported outcome measures (PROMs), rated on a 4-point categorical scale (never heard of, never used, occasionally used, regularly used). A short description was provided for each PROM listed. For example, “The Social Participation Restrictions Questionnaire (SParQ) is a hearing-specific, patient-reported outcome measure that was originally developed through consultation with adults with hearing loss, clinicians, and researchers. It has 19 items, each assessed on an 11-point scale. Responses are averaged to form two subscales: Social Behaviours and Social Perceptions”. A copy of the example survey is available in the Supplementary Materials.
Eligibility criteria for participants were as follows: recent or current CI clinicians from a range of large CI clinics and research institutes, identified by the research team as having extensive knowledge of current CI clinical practices and/or of currently available adult-focused CI outcome measures used in Australia and New Zealand. Participants included experienced clinicians who had participated in Phase two [33].
When participants indicated that they had regularly, or occasionally, used a measure, they were asked to provide a rating, using a 5-point Likert scale, ranging from “strongly disagree” to “strongly agree” for each of the following statements:
  • This measure is easy to use in clinical practice (ease of use);
  • This measure gives results that are trustworthy/believable;
  • This measure gives results that are useful in clinical practice;
  • I would use this measure in clinical practice if it were recommended to me.
Respondents were also asked about other clinical outcome measures they used as part of their standard protocol, their usual approach to testing asymmetrical hearing losses, and the factors that they considered when choosing a speech test to ensure that no outcome measures were missed.
Statistical analysis was conducted in Python (v3.11.0) [34], using pandas (v2.2.1) [35], numpy (v1.23.5) [36], scipy (v1.11.4) [37], and scikit-learn (v1.4.2) [38]. Graphics were generated using matplotlib (v3.8.4) [39]. Descriptive statistics were calculated for demographic variables and survey responses.
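As an illustration of the descriptive analysis described above, the sketch below tabulates hypothetical familiarity responses with pandas; the column names and data are invented for demonstration and are not the study's actual dataset.

```python
import pandas as pd

# The 4-point categorical familiarity scale described in the Methods.
FAMILIARITY_LEVELS = ["never heard of", "never used", "occasionally used", "regularly used"]

# Hypothetical responses: one row per clinician per outcome measure.
responses = pd.DataFrame({
    "clinician_id": [1, 2, 3, 4, 5],
    "measure": ["CNC words"] * 5,
    "familiarity": ["regularly used", "regularly used", "occasionally used",
                    "never used", "regularly used"],
})
# Encode as an ordered categorical so all four levels appear in summaries.
responses["familiarity"] = pd.Categorical(
    responses["familiarity"], categories=FAMILIARITY_LEVELS, ordered=True
)

# Descriptive summary: percentage of respondents at each familiarity level.
summary = (
    responses.groupby("measure")["familiarity"]
    .value_counts(normalize=True)
    .mul(100)
    .round(1)
)
print(summary)
```

This yields, per measure, the percentage of clinicians at each familiarity level (the form of the results reported in Table 3).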

2.2. Online Final Recommendation Workshops

Two Final Recommendation workshops, each approximately 90 min in duration, were held online via Microsoft Teams videoconferencing software (version 1.6.00.29964; Microsoft, Redmond, WA, USA) with CI users and CI professionals to finalise key recommendations for the interim pragmatic COS. An outline presentation of the study, summarising the key results from the CODS and the current study, was provided: one version for professionals and a layperson version for CI users. A semi-structured interview guide (see final workshop agenda in Supplementary Materials) was developed. Questions included the following:
  • What domains should be included in future iterations of the COS?
  • (CI professionals only) Which outcome measures or subdomains should be recommended as a minimum standard?
  • How should we prioritise outcome measures within each subdomain?
Participants consisted of two groups: (1) Adult CI users and (2) CI professionals. CI professionals were required to have at least 12 months of experience providing CI services to adult CI users. Adult CI users were required to be ≥18 years of age with at least 6 months experience using a CI and sufficient self-reported English proficiency to participate in the workshop. Individuals with self-reported disability, other than hearing loss, that precluded full participation in the workshop were excluded. Potential participants were invited, via email, from the list of individuals who had participated in Phase two of the study and agreed to participate in Phase three.

3. Results

3.1. Single-Round Online Survey

Twenty CI clinicians from Australia (n = 18) and New Zealand (n = 2) participated, 17 of whom responded to both the PROM and the speech perception test familiarity survey questions. Participants’ clinical and research experience is detailed in Table 2. Home and work postcodes for Australian participants were mapped to the Index of Relative Social Advantage and Disadvantage (IRSAD) decile, with all Australian participants living in the top 30% of postcodes and working in the top 50% of postcodes, suggesting that participants skewed toward relative social advantage. For participants from New Zealand, postcodes were mapped using the New Zealand Index of Deprivation for 2023 [40], with one participant living in the top 30% of postcodes but working in an inner-city location in the bottom 30% of postcodes, and the other participant living in the bottom 30% of postcodes but working in the top 50%.
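The postcode-to-decile mapping described above can be sketched as a simple table join; both tables below are illustrative stand-ins (actual IRSAD deciles per postcode are published by the Australian Bureau of Statistics), assuming deciles ≥8 correspond to the top 30% of postcodes by relative advantage.

```python
import pandas as pd

# Hypothetical decile lookup -- invented values, not real ABS data.
irsad = pd.DataFrame({
    "postcode": ["6009", "6102", "2009", "4072"],
    "irsad_decile": [10, 7, 9, 8],
})
participants = pd.DataFrame({
    "participant_id": [1, 2, 3],
    "home_postcode": ["6009", "2009", "6102"],
})

# Left-join each participant's home postcode onto the decile lookup.
mapped = participants.merge(
    irsad, left_on="home_postcode", right_on="postcode", how="left"
)
# Deciles 8-10 = top 30% of postcodes (10 = most advantaged).
mapped["top_30_pct"] = mapped["irsad_decile"] >= 8
print(mapped[["participant_id", "home_postcode", "irsad_decile", "top_30_pct"]])
```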

3.1.1. Familiarity Ratings (Speech Perception Outcome Measures and PROMs)

A summary of familiarity ratings is shown in Table 3. Participants were most familiar with speech perception outcome measures. Four speech perception outcome measures (BKB-A, CUNY sentences, CNC words, and AB words) fell within the top five most-used outcome measures and were regularly used by 50% of participants. The DIN/DTT test, BKB-SIN, and QuickSIN were occasionally used by >50% of participants. The remaining three speech perception tests (HINT, Austin, and AzBio) were rated as “never heard of” or “never used” by 59%, 53%, and 76% of participants, respectively.
Familiarity with PROMs was lower than for speech perception outcome measures. The SSQ (90% of participants) and the SSQ-12 (75% of participants) were the most used PROMs; participants used the SSQ-12 (70%) slightly more regularly than the SSQ (65%). The Glasgow Hearing Aid Benefit Profile (GHABP), Hearing Handicap Inventory for the Elderly (HHIE), and Abbreviated Profile of Hearing Aid Benefit (APHAB) were occasionally or regularly used by at least half the participants. No participant regularly used the short or revised version of the HHIE. The Nijmegen Cochlear Implant Questionnaire [53] (NCIQ) and the Cochlear Implant Quality of Life Questionnaire (CIQoL Profile and CIQoL-Global) [30], which have been recommended by the Adult Hearing Standards of Care: Living Guidelines [88], were not commonly used. The NCIQ was used regularly by only 5% of participants and occasionally by 45%. The CIQoL was used only occasionally, by 35% (Global version, 35 items) and 20% (Profile version, 10 items) of participants.

3.1.2. Ease of Use, Trustworthiness, Usefulness, and Likely Recommendation to Use Ratings

Ratings were provided for the 10 speech perception outcome measures (Figure 2) and 21 PROMs (Figure 3) that had been used by ≥3 participants. The CIQoL Profile ratings were also included, although it had only been used by two participants. Outcome measures with which participants were more familiar were, in general, considered easier to use (τB = 0.383, p < 0.001), more trustworthy (τB = 0.323, p < 0.001), and as providing more useful results (τB = 0.300, p < 0.001). Participants also reported that they would be more likely to use them in practice if they were recommended (τB = 0.406, p < 0.001) (Figure 3). Ratings were generally high, with very few respondents disagreeing with any of the statements.
The correlation between the ratings of ease of use, trustworthiness, clinical usefulness, and willingness to use outcome measures was assessed using univariate and bivariate kernel density plots, and correlations between all rating scales were high (see Supplementary Figure S1).
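For readers reproducing this analysis, Kendall's τB can be computed with `scipy.stats.kendalltau`, which applies the tie-corrected tau-b variant by default (appropriate for ordinal Likert data with many tied ranks). The paired ratings below are invented for illustration and are not the study's data.

```python
from scipy.stats import kendalltau

# Hypothetical paired ratings (5-point Likert, coded 1-5) for
# familiarity vs. ease-of-use, mirroring the tau-B analysis above.
familiarity = [4, 4, 3, 2, 4, 3, 1, 2, 4, 3]
ease_of_use = [5, 4, 4, 2, 4, 3, 2, 2, 5, 3]

# kendalltau returns the tau-b statistic and a two-sided p-value.
tau, p_value = kendalltau(familiarity, ease_of_use)
print(f"tau-b = {tau:.3f}, p = {p_value:.4f}")
```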

3.1.3. Free Text Responses

Participants suggested several additional PROMs not listed in the survey (Supplementary Table S2), including the Australian Quality of Life Scale (AQoL; n = 4), the Strengths and Difficulties Questionnaire (n = 3), and the Listening Effort Questionnaire (LEQ-CI; n = 3). The most recommended physiological test was Neural Response/Auditory Response Telemetry (n = 3). The Ling sounds speech-sound identification test was also suggested (n = 3).
The most important factors to consider when choosing a speech perception test (see Supplementary Table S3) were as follows: availability of tests in the CI user’s primary language (n = 11), the CI user’s accent (n = 7), cognitive appropriateness (n = 4), speed of delivery (n = 2), and measure length (n = 2).

3.2. Final Recommendation Workshops

Transcriptions of the final workshops were reviewed and summarised by authors CS and MF, then reviewed by all other authors. Key findings from the workshops are shown in Table 4.
Clinicians felt that future outcome measures should include domains such as cognition, listening effort, listening fatigue, empowerment, social connectedness, relationships, and fatigue. There was a general consensus across both groups that more holistic measures of CI outcomes are needed to provide a more comprehensive understanding of the communication difficulties of CI users and their real-life impact. A combination of different types of outcome measures, and an understanding of their interactions, was considered crucial for advancing the field and improving clinical outcomes. However, with the emergence of newly developed outcome measures, interpretation of results may be challenging due to a lack of clinician familiarity. Thus, when considering the introduction of new outcome measures, it is vital to include training as part of implementation, to raise awareness and familiarity and to ensure trust in the data obtained with the measure.
CI users felt that interaction with the clinician in some aspect of the remote service was important to feel engaged in the process. They recommended that remote services be well-considered and designed to ensure a seamless process for all aspects of the service, from enrolment, validation of enrolment, and login to completion of the remote checks, appointments, payment, etc. Whilst benefit was seen in the ability to adjust CI settings remotely, CI users reported it was essential that there was a built-in fail-safe or reboot option at the CI user’s end if the service failed midway for some reason.

4. Discussion

Whilst remote care has been shown to be a feasible option for CI service provision, the uptake and sustained use of such services have remained low. In order to compare remote and in-person services effectively, as well as to address concerns about accessibility, usability of services, and accuracy of results, it is essential that the same set of outcome measures, sensitive and meaningful to both CI users and CI professionals, be used across clinics, modes of service provision, and clinical trials/studies.
While the COS recommended by Andries et al. [22] included CI-specific outcome measures, the measures and domains selected for inclusion were chosen not with the input of CI users but by a core group of CI experts. While well-known, commonly used instruments and assessment methods were identified, several of the PROMs selected were not designed according to current recommended best practice, e.g., using consumer input, considering risks of bias, or following evidence-based criteria for good psychometric measurement properties [89]; nor were they CI-specific. Finally, the outcome measures recommended did not specifically consider implementation within a remote care service and the issues associated with this. The COS [22] included a large number of outcome measures: PROMs (Work Rehabilitation Questionnaire [WORQ; 59 items], Abbreviated Profile of Hearing Aid Benefit [APHAB; 24 items], Audio Processor Satisfaction Questionnaire [APSQ; 15 items], Speech, Spatial, and Qualities of Hearing Questionnaire [SSQ-12; 12 items], and Hearing Implant Sound Quality Index [HISQUI19; 19 items]) and audiometric outcome measures (aided pure-tone audiometry, speech perception [monosyllabic words in quiet, sentences in noise], and sound localisation).
Several of the domains identified as most important by CI users and CI clinicians in the earlier stages of our study [33] were not included in the above-mentioned COS; of particular note are device integrity and status, device use, and hearing-related quality of life. Understandably, the remote service domains of reliability, accessibility, and ease of use were not included. In Phase two, a complete lack of consensus between CI users and CI professionals was observed in the most important domains for self-report “Patient” outcome measures, and there was only limited consensus for objective “Clinical” outcome measures. Thus, the outcomes recommended in the COS by Andries et al. [22] may not be important to CI users, given the lack of CI user input. Further, our final recommendation workshop revealed a consensus between CI users and CI clinicians for a minimalist approach to the number of core outcome measures. One must consider the potential for overburdening CI users and clinicians with multiple outcome measures, particularly PROMs with large numbers of items, some of which ask similar questions. A large number of time-consuming outcome measures may result in poor completion compliance and inefficient or limited clinical use of completed outcome measures. Outcome measures completed or received immediately prior to, or during, an appointment, particularly PROMs, may be difficult and time-consuming for the clinician to analyse appropriately during the appointment.
Although the original aim of this study was to develop a COS for CI users utilising remote technology in Australia and New Zealand, this has proved difficult for several reasons:
  • Lack of consensus between CI users and CI professionals on the most important domains for the patient supra-domain [33] means that the implementation of a concise COS is problematic if one is to measure the most important domains within each supra-domain.
  • Lack of well-designed and/or well-validated outcome measures for some of the domains rated as most important to assess. Rigorous development and assessment of novel outcome measures is, therefore, required.
  • Current clinical practice trends in Australia and New Zealand, observed in our online survey of CI clinicians, indicate that CI services rely predominantly on a relatively small pool of specific speech perception outcome measures as the primary measure of CI outcomes. Clinicians are far less familiar with most PROMs that align with the CODS.
  • Several of the outcome measures identified and explored in the outcomes survey have proprietary test materials, technical requirements, and licencing costs associated with them.
These issues present several problems when considering the widespread implementation of a COS for remote technology into well-established CI clinics. Large CI clinics often perform retrospective analysis of CI outcomes over time as an important indicator of the success of both individual CI users and CI clinics as a whole [90]. Thus, implementation of a brand-new set of outcome measures must consider the impact on the ability to compare outcomes over time. It may be necessary to align the outcomes of new outcome measures with pre-existing outcome measures for a period of time in order to retain the ability to compare outcomes over time. Significant support, resources, and training regarding outcome measures that are new to the field, or simply new to the clinic, will be required.
Clinicians must have confidence and trust in the recommended outcome measures, both in the methods of data collection and in the interpretation of results, as evidenced by the high correlations between the ease-of-use, trustworthiness, usefulness, and recommendation ratings provided by participants. Familiarity with an outcome measure engenders an understanding of effective methods of use and interpretation; in turn, outcome measures that are inherently easier to apply and interpret are more likely to become part of existing clinical practice [91,92,93]. Outcome measures that provide trustworthy results are also more likely to be considered useful in clinical practice. Understanding which of these four factors of clinician experience, if any, are primarily responsible for positive clinician experience and uptake is essential to support implementation efforts. This is particularly salient given our findings in relation to some more commonly used surveys, such as the CIQoL and the NCIQ, both of which were recommended outcome measures in the recently drafted Adult Hearing Standards of Care: Living Guidelines [88]. Our study revealed poor ratings for ease of use, likelihood of use if recommended, and, to a lesser extent, usefulness for the NCIQ, indicating that this measure is unlikely to be readily adopted into CI clinical practice in Australia or New Zealand. Furthermore, the NCIQ contains 60 items and so fails to meet the recommendations of our CI users for shorter, more concise outcome measures. The CIQoL, whilst receiving relatively good ratings in all four categories, had only been used by a maximum of four clinicians; thus, training for its implementation is required.
In light of these considerations, it appears most appropriate to recommend a pragmatic, interim COS for remote technologies for CI users in order to facilitate uptake into current clinical practice, recognising that CI outcomes, the important outcome domains, and the outcome measures with which to assess them are constantly evolving [88]. Furthermore, as Australia and New Zealand were the focus of this study, it was decided to limit outcome measures in the first instance to those commonly used in English-speaking countries, to further facilitate compliance with the use of the COS, as commonly used outcome measures differ substantially across countries. There was a strong focus in the workshops on the need to ensure that implementation of any new set of outcome measures did not overburden either CI users or CI clinicians. Whilst there was a push to utilise more meaningful, “real-life” outcome measures, this was not to be at the expense of additional time and effort for key stakeholders; in fact, the preference was for a reduction in the time allocated to the assessment of outcomes. Similar findings have been noted in other allied health fields [94]. However, a reduction in the length of speech test lists, or in the number of questions in surveys, should not come at the expense of psychometric properties such as test–retest reliability and validity. Any recommended outcome measures must have, and retain, good psychometric properties to ensure their usefulness.

4.1. Recommended Interim, Pragmatic COS

4.1.1. Service Outcomes

A single Likert item was assigned for reliability, usability, and acceptability, with an option for free text, as recommended in stakeholder workshops.
The wording of the scale needs to be further defined, but a regularly mentioned example within stakeholder workshops was a five-point agreement scale (e.g., “The remote technology was reliable”: strongly agree to strongly disagree).

4.1.2. Clinically Measured Outcomes

The recommended clinically measured outcomes are the CNC (or similar CVC) word test with optional digit triplet testing (DTT), device integrity/system check, device use, and adverse events.
Recommended outcome measures are as follows:
An adaptive speech test, presented in noise but which could also be completed in quiet depending on the CI user’s speech perception ability, is recommended. Of the four most regularly used tests identified by CI professionals (BKB-A, CUNY, CNC, and AB), the CNC test was included, as it is a current requirement for CI candidacy determination. It is, however, noted that work is required to ensure appropriate integration of a “remote” version of the CNC word lists into the remote clinical workflow.
The DTT is a speech-in-noise test that is often delivered remotely and thus has the appropriate underpinning architecture for delivery via remote technology systems. Other advantages of this test include that it is typically delivered adaptively, its digitised stimuli are simple and readily translatable into other languages, it can be delivered without calibration equipment, and it has been widely used and validated internationally.
Device integrity and status, device use, and adverse events were the other three most highly rated clinical tests, in addition to speech perception testing. Both groups felt it was vital to confirm that both the internal and external components of the CI are functioning appropriately (device integrity), so that the other outcome measures are not confounded by device malfunction. Device use, via datalogging, has been included in the COS to ensure that limited CI outcomes are not the result of limited CI use. We acknowledge, however, that there are known discrepancies between reported use and logged data, which could be due to either technological errors or the user’s decision not to report limited use.
Any concerns about device usage should be discussed with the CI user in a supportive and caring manner.
The following outcome measures were excluded:
The BKB-A test (quiet and fixed S:N) was excluded because of its fixed-level presentation of sentences and because it was originally developed for a low (kindergarten-age) literacy level; the somewhat child-like sentences, typically presented in quiet, result in ceiling effects. Significant ceiling effects were reported by Gifford et al. [95] when assessing adult CI users with the HINT, which consists of selected BKB/A sentences presented in quiet and at fixed levels of noise.
The CUNY sentence test was excluded due to ceiling effects relating to the high level of predictability of sentences seen amongst post-lingually deafened adult CI users [96] and the relatively high language knowledge/literacy level required, in addition to the potential influence of auditory memory on outcomes.
The AB word test [43] was excluded due to the limited number (15) of short (10-word) lists available and the widespread use of this test material throughout hearing clinics in Australia, which could lead to practice effects [97], in addition to the lack of evidence-based empirical data on its use. Furthermore, the Australian-accent version of the test materials is no longer available for purchase from the National Acoustic Laboratories.
BKB-SIN and Quick SIN were excluded because of the lack of accessibility to recordings in an Australian accent and the inclusion of words not routinely used in Australian English, as per final workshop recommendations from both CI users and CI clinicians.
Speech sound identification/discrimination assessment (e.g., the LING test), whilst rated highly by CI users, was perceived by CI professionals as a more diagnostic measure to indicate specific hearing difficulties, rather than an overall measure of CI outcome, and thus was not included in the current interim COS.

4.1.3. Patient-Reported Outcome Measures (PROMs)

Given the discrepancy in domain importance between CI users and CI professionals in this supra-domain, preference was given to CI users, based on feedback provided in Phase two suggesting that CI users’ everyday life experiences should be prioritised.
Recommended outcome measures are as follows:
The SSQ, or its short form the SSQ-12, is recommended as an interim PROM because it is the most regularly used, easiest to use, most trustworthy, and most likely to be used if recommended, whilst other PROMs gain clinical acceptance and implementation. CI professionals also felt that, in the context of remote services, given the potential impacts of variability in the home test environment at each test point, a self-reported measure of hearing ability was necessary to corroborate the behavioural test measure. The SSQ-12 has been recommended in the ANZ adaptation of the Adult Standards of Hearing Care; Living Guidelines [98]. However, it should be noted that it does not address the most important domains of hearing-related quality of life, satisfaction, and wellbeing; thus, although it is commonly used, this alone is not an appropriate criterion for its long-term inclusion in future COSs.
Hearing-related quality of life, satisfaction, and wellbeing are outcome domains for consideration. Based on these domains, the CIQoL (Cochlear Implant Quality of Life) [30] would be a suitable PROM, augmented by a satisfaction measure. The CIQoL is a well-validated outcome measure developed with stakeholder input, using modern psychometric Item Response Theory analysis. The CIQoL Profile (35 items) has sub-domains: hearing, communication, social relationships, emotional wellbeing, independence and daily life, device satisfaction and use, cognitive and mental engagement, and perception of self and identity. It is not a unidimensional measure (i.e., quality of life), but the authors suggest that the broader sub-domains reflect quality of life. Alternatively, the Living with Cochlear Implants (LivCI) [99], a recently developed CI-specific, 22-item PROM, could be considered; it includes four sub-domains addressing psychosocial and wellbeing, participation (i.e., HRQoL), aesthetics and visibility (a primary driver of satisfaction), and stigma. Like the CIQoL, the LivCI has been developed according to COSMIN best practice principles, including extensive stakeholder (e.g., CI professionals and CI users) content evaluation and contemporary Rasch analysis alongside Classical Test Theory analyses to ensure high-quality, independent items. Either or both of these outcome measures, which address several of the most important “Patient-reported” outcome domains identified as part of our CODS, should be included alongside the SSQ-12 following appropriate training and implementation into the clinical setting, and are potential candidates to replace the SSQ in the future. Furthermore, both PROMs are recommended for use in the ANZ adaptation of the Adult Standards of Hearing Care; Living Guidelines [98].
In the absence of an appropriate PROM for CI user satisfaction with devices, it was suggested that, as for Service outcome measures, a Likert single-item measure could be used in the interim.
Of the numerous PROMs assessed in this study, many were not familiar to, or trusted by, experienced CI clinicians. Furthermore, most (see Supplementary Table S1) were not designed specifically for use with people who were users, or potential users, of cochlear implants. Some, although designed specifically for CI users/candidates, had not been developed according to best practice principles, such as inclusion of CI user input during item selection, unidimensionality of items, or appropriate scale sensitivity. Finally, an important consideration raised in the recommendation workshops was the need to ensure that CI users and CI clinicians were not overburdened with surveys containing excessively large numbers of items that ask similar questions in several different ways, making them prone to poor or invalid completion and difficult to analyse and utilise effectively within a clinical setting. Broader stakeholder involvement is recommended for the development of the final version of the COS.

4.1.4. Future Considerations for Validation, Revision, and Expansion of the COS

Other domains that are emerging as important for remote technologies within audiology [21], but not widely considered in the CI field, such as empowerment, listening effort, and auditory fatigue, should also be considered for a future COS. There are a number of well-developed CI- or hearing-specific PROMs which address such domains. Additionally, an assessment of digital literacy, whilst not an outcome measure per se, should also be considered before CI users begin using remote technologies.
A noted limitation of our study is the limited generalizability of the current version of the COS, developed for the Australian/New Zealand context, to international settings. Given differences in the language requirements of test materials, in addition to differing CI candidacy criteria, we believe it is at present particularly difficult to create a generalised COS for international use; this is particularly evident with the recent publication of ANZ-specific guidelines for adult CI management [98]. To overcome this, adaptation and validation of COS-recommended outcome measures into multiple languages are required. That said, the results of this study can inform CI COS development internationally, for example, by highlighting the importance of standardising outcome measures across both in-person and remote assessments, the preference for shorter assessments, and the likelihood of different outcome prioritisation between CI users and professionals.
It is noted that, despite the increasing availability of COSs across the medical and allied health landscape, their use is often limited. A number of barriers to the uptake of COSs have been reported, including a lack of awareness of the COS, poor integration of outcome tools into clinical workflows, and limited incentive for adherence [100,101]. Education is therefore a vital component of uptake of the COS across CI centres, as is the use of early adopters of the COS to “champion” its use within the clinical setting. Development of templates or tools that integrate COS measures, particularly electronically, will also greatly enhance its use [101]. Finally, it is recommended that the COS measures be utilised for voluntary audit of CI outcomes across clinics, allowing clinics to benchmark outcomes against each other, and incentivising the use of the COS.
It is strongly recommended that ongoing monitoring of the clinical practices and opinions of CI clinicians is carried out every five years to ensure the advancement of clinical practice. This may be possible through professional groups, such as Audiology Australia. Adoption of future COS iterations into national guidelines, such as the recently published ANZ National Living Guidelines, will also greatly enhance awareness of the COS. A part of this process would be to update the interim COS recommended here over time, as well as to guide the development of policy and ongoing implementation, training, and de-implementation within clinical practice.

5. Conclusions

Development of a core outcome set (COS) to assess remote technologies used by CI users is vital, given the increase in the use of remote technologies for CI care. It is important that such a COS is relevant across both remote and in-clinic services to enable comparison and seamless integration of the two modes of service. It must also incorporate meaningful, useful outcome measures for CI users, their families, and CI clinicians alike, using well-designed, trusted outcome measures that can be incorporated into clinical practice without unnecessarily overburdening staff, CI users, or their families. We present a pragmatic, interim COS for use in hybrid clinical practice, noting that ongoing monitoring of meaningful future outcomes and clinical practices may result in adaptations to the recommended COS in the future.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/jcm14217697/s1: Figure S1: Bivariate kernel density plots of the correlations between outcome measure ratings; Table S1: PROM description and development information; Table S2: Additional outcome measures used in clinical practice in Australia and New Zealand; Table S3: Factors identified as important to consider when choosing a speech test; File S1: Example survey; File S2: Final Workshop Agenda.

Author Contributions

C.S.: Project conception and design, administrative support, collection and assembly of data, data interpretation, manuscript writing and review, funding acquisition; D.A.: project design, administrative support, collection and assembly of data, data analysis and interpretation, manuscript review; E.L.: project design, administrative support, data interpretation, manuscript review; I.B.: project design, administrative support, data interpretation, manuscript review; M.F.: project conception and design, data interpretation, substantial manuscript review, funding acquisition, supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by an Investigator-Initiated Grant from Cochlear Australia Pacific Ltd. (protocol no.: APAC AU IIR 220001/CTR-17855).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Human Ethics Office of Research at the University of Western Australia (Re: 2022/ET001007, date of approval: 9 January 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the participant(s) to publish this paper.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to privacy and ethical restrictions enforced on research relating to human participants by the University of Western Australia.

Acknowledgments

We would like to thank Helen Cullington and Rene Gifford for their valuable insights and support of this study.

Conflicts of Interest

The employing organisation of MF received funding from Cochlear Ltd. for this study. The remaining authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

CI: Cochlear Implant
CODS: Core Outcome Domain Set
COS: Core Outcome Set
PROM: Patient-Reported Outcome Measure
WHO: World Health Organisation
ICF: International Classification of Functioning, Disability, and Health framework
COMET: Core Outcome Measures in Effectiveness Trials

References

  1. Ferguson, M.A.; Eikelboom, R.H.; Sucher, C.M.; Maidment, D.W.; Bennett, R.J. Remote Technologies to Enhance Service Delivery for Adults: Clinical Research Perspectives. Semin. Hear. 2023, 44, 328–350. [Google Scholar] [CrossRef]
  2. Kim, J.; Jeon, S.; Kim, D.; Shin, Y. A Review of Contemporary Teleaudiology: Literature Review, Technology, and Considerations for Practicing. J. Audiol. Otol. 2021, 25, 1–7. [Google Scholar] [CrossRef]
  3. Luryi, A.L.; Tower, J.I.; Preston, J.; Burkland, A.; Trueheart, C.E.; Hildrew, D.M. Cochlear Implant Mapping Through Telemedicine-A Feasibility Study. Otol. Neurotol. 2020, 41, e330–e333. [Google Scholar] [CrossRef]
  4. Maruthurkkara, S.; Allen, A.; Cullington, H.; Muff, J.; Arora, K.; Johnson, S. Remote check test battery for cochlear implant recipients: Proof of concept study. Int. J. Audiol. 2022, 61, 443–452. [Google Scholar] [CrossRef]
  5. Schepers, K.; Steinhoff, H.-J.; Ebenhoch, H.; Böck, K.; Bauer, K.; Rupprecht, L.; Möltner, A.; Morettini, S.; Hagen, R. Remote programming of cochlear implants in users of all ages. Acta Otolaryngol. 2019, 139, 251–257. [Google Scholar] [CrossRef] [PubMed]
  6. Cullington, H.; Kitterick, P.; Weal, M.; Margol-Gromada, M. Feasibility of personalised remote long-term follow-up of people with cochlear implants: A randomised controlled trial. BMJ Open 2018, 8, e019640. [Google Scholar] [CrossRef] [PubMed]
  7. Maruthurkkara, S.; Case, S.; Rottier, R. Evaluation of Remote Check: A Clinical Tool for Asynchronous Monitoring and Triage of Cochlear Implant Recipients. Ear Hear. 2022, 43, 495–506. [Google Scholar] [CrossRef] [PubMed]
  8. Philips, B.; Smits, C.; Govaerts, P.J.; Doorn, I.; Vanpoucke, F. Empowering Senior Cochlear Implant Users at Home via a Tablet Computer Application. Am. J. Audiol. 2018, 27, 417–430. [Google Scholar] [CrossRef]
  9. Carner, M.; Bianconi, L.; Fulco, G.; Confuorto, G.; Soloperto, D.; Molteni, G.; Sacchetto, L. Personal experience with the remote check telehealth in cochlear implant users: From COVID-19 emergency to routine service. Eur. Arch. Otorhinolaryngol. 2023, 280, 5293–5298. [Google Scholar] [CrossRef]
  10. Nassiri, A.M.; Saoji, A.A.; DeJong, M.D.; Tombers, N.M.; Driscoll, C.L.W.; Neff, B.A.; Haynes, D.S.; Carlson, M.L. Implementation Strategy for Highly-Coordinated Cochlear Implant Care With Remote Programming: The Complete Cochlear Implant Care Model. Otol. Neurotol. 2022, 43, e916–e923. [Google Scholar] [CrossRef]
  11. Nittari, G.; Savva, D.; Tomassoni, D.; Tayebati, S.K.; Amenta, F. Telemedicine in the COVID-19 Era: A Narrative Review Based on Current Evidence. Int. J. Environ. Res. Public Health 2022, 19, 5101. [Google Scholar] [CrossRef]
  12. Chong-White, N.; Incerti, P.; Poulos, M.; Tagudin, J. Exploring teleaudiology adoption, perceptions and challenges among audiologists before and during the COVID-19 pandemic. BMC Digit. Health 2023, 1, 24. [Google Scholar] [CrossRef]
  13. Lilies, A.; Darnton, P.; Kryl, D.; Sibley, A.; Chandler, J.; Barton, S.; Benson, T.; Robertson, A. Independent Evaluation of CHOICE 2021; Darnton, P., Ed.; Health Innovation Wessex: Southampton, UK, 2021; pp. 1–53. [Google Scholar]
  14. Sucher, C.; Norman, R.; Chaffey, E.; Bennett, R.; Ferguson, M. Patient preferences for Remote cochlear implant management: A discrete choice experiment. PLoS ONE 2025, 20, e0320421. [Google Scholar] [CrossRef] [PubMed]
  15. Barreira-Nielsen, C.S.C.; Campos, L.S. Implementation of the hybrid teleaudiology model: Acceptance, feasibility and satisfaction in a cochlear implant program. Audiol. Commun. Res. 2022, 27, e2538. [Google Scholar] [CrossRef]
  16. Department of Health, Victoria (Ed.) Virtual Care Operational Framework; Department of Health, Victoria: Melbourne, Australia, 2023.
  17. Granberg, S.; Dahlström, J.; Möller, C.; Kähäri, K.; Danermark, B. The ICF Core Sets for hearing loss-researcher perspective. Part I: Systematic review of outcome measures identified in audiological research. Int. J. Audiol. 2014, 53, 65–76. [Google Scholar] [CrossRef] [PubMed]
  18. Akeroyd, M.A.; Wright-Whyte, K.; Holman, J.A.; Whitmer, W.M. A comprehensive survey of hearing questionnaires: How many are there, what do they measure, and how have they been validated? Trials 2015, 16, P26. [Google Scholar] [CrossRef]
  19. Neal, K.; McMahon, C.M.; Hughes, S.E.; Boisvert, I. Listening-based communication ability in adults with hearing loss: A scoping review of existing measures. Front. Psychol. 2022, 13, 786347. [Google Scholar] [CrossRef]
  20. Danermark, B.; Granberg, S.; Kramer, S.E.; Selb, M.; Möller, C. The creation of a comprehensive and a brief core set for hearing loss using the international classification of functioning, disability and health. Am. J. Audiol. 2013, 22, 323–328. [Google Scholar] [CrossRef]
  21. Allen, D.; Hickson, L.; Ferguson, M. Defining a Patient-Centred Core Outcome Domain Set for the Assessment of Hearing Rehabilitation With Clients and Professionals. Front. Neurosci. 2022, 16, 787607. [Google Scholar] [CrossRef]
  22. Andries, E.; Lorens, A.; Skarżyński, P.H.; Skarżyński, H.; Calvino, M.; Gavilán, J.; Lassaletta, L.; Tavora-Vieira, D.; Acharya, A.; Kurz, A.; et al. Implementation of the international classification of functioning, disability and health model in cochlear implant recipients: A multi-center prospective follow-up cohort study. Front. Audiol. Otol. 2023, 1, 1257504. [Google Scholar] [CrossRef]
  23. Boisvert, I.; Ferguson, M.; van Wieringen, A.; Ricketts, T.A. Editorial: Outcome Measures to Assess the Benefit of Interventions for Adults With Hearing Loss: From Research to Clinical Application. Front. Neurosci. 2022, 16, 955189. [Google Scholar] [CrossRef]
  24. Clarke, M.; Williamson, P.R. Core outcome sets and systematic reviews. Syst. Rev. 2016, 5, 11. [Google Scholar] [CrossRef]
  25. COMET. Core Outcome Measures in Effectiveness Trials. 2022. Available online: https://www.comet-initiative.org/ (accessed on 1 October 2022).
  26. Hall, D.A.; Haider, H.; Kikidis, D.; Mielczarek, M.; Mazurek, B.; Szczepek, A.J.; Cederroth, C.R. Toward a global consensus on outcome measures for clinical trials in tinnitus: Report from the first international meeting of the COMiT Initiative, November 14, 2014, Amsterdam, The Netherlands. Trends Hear. 2015, 19, 2331216515580272. [Google Scholar] [CrossRef] [PubMed]
  27. Dietz, A.; Heinrich, A.; Törmäkangas, T.; Iso-Mustajärvi, M.; Miettinen, P.; Willberg, T.; Linder, P.H. The effectiveness of cochlear implantation on performance-based and patient-reported outcome measures in Finnish recipients. Front. Neurosci. 2022, 16, 786939. [Google Scholar] [CrossRef] [PubMed]
  28. Laird, E.; Sucher, C.; Nakano, K.; Ferguson, M. Systematic review of patient and service outcome measures of remote digital technologies for cochlear implant and hearing aid users. Front. Audiol. Otol. 2024, 2, 1403814. [Google Scholar] [CrossRef]
  29. Hughes, S.E.; Watkins, A.; Rapport, F.; Boisvert, I.; McMahon, C.M.; Hutchings, H.A. Rasch analysis of the listening effort Questionnaire—Cochlear implant. Ear Hear. 2021, 42, 1699–1711. [Google Scholar] [CrossRef]
  30. McRackan, T.R.; Hand, B.N.; Cochlear Implant Quality of Life Development Consortium; Velozo, C.A.; Dubno, J.R. Cochlear Implant Quality of Life (CIQoL): Development of a profile instrument (CIQOL-35 Profile) and a global measure (CIQOL-10 Global). J. Speech Lang. Hear. Res. 2019, 62, 3554–3563. [Google Scholar] [CrossRef]
  31. Maidment, D.; Heyes, R.; Gomez, R.; Coulson, N.S.; Wharrad, H.; Ferguson, M.A. Evaluating a theoretically informed and co-created mHealth educational intervention for first-time hearing aid users: A qualitative interview study. J. Med. Internet Res. 2020, 8, e17193. [Google Scholar]
  32. Gomez, R.; Habib, A.; Maidment, D.W.; Ferguson, M.A. Smartphone-Connected Hearing Aids Enable and Empower Self-Management of Hearing Loss: A Qualitative Interview Study Underpinned by the Behavior Change Wheel. Ear Hear. 2022, 43, 921–932. [Google Scholar] [CrossRef]
  33. Sucher, C.; Laird, E.; Allen, D.; Boisvert, I.; Ferguson, M. Development of a Core Outcome Set to evaluate Remote Technologies for Cochlear Implant Users. In Proceedings of the World Congress of Audiology, Paris, France, 19–22 September 2024. [Google Scholar]
  34. The Python Software Foundation. The Python Language Reference; The Python Software Foundation: Beaverton, OR, USA, 2001. [Google Scholar]
  35. McKinney, W. Data Structures for Statistical Computing in Python. In Proceedings of the 9th Python in Science Conference, Austin, TX, USA, 28 June–3 July 2010. [Google Scholar]
  36. Harris, C.R.; Millman, K.J.; van der Walt, S.J.; Gommers, R.; Virtanen, P.; Cournapeau, D.; Wieser, E.; Taylor, J.; Berg, S.; Smith, N.J.; et al. Array Programming with NumPy. Nature 2020, 585, 357–362. [Google Scholar] [CrossRef]
  37. Virtanen, P.; Gommers, R.; Oliphant, T.E.; Haberland, M.; Reddy, T.; Cournapeau, D.; Burovski, E.; Peterson, P.; Weckesser, W.; Bright, J.; et al. SciPy 1.0: Fundamental algorithms for scientific computing in Python. Nat. Methods 2020, 17, 261–272. [Google Scholar] [CrossRef]
  38. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  39. The Matplotlib Development Team. Matplotlib: Visualisation with Python; Zenodo, Ed.; CERN: Geneva, Switzerland, 2025. [Google Scholar]
  40. Atkinson, J.; Salmond, C.; Crampton, P.; Viggers, H.; Lacey, K. NZDep2023 Index of Socioeconomic Deprivation: Research Report; Department of Public Health, University of Otago: Wellington, New Zealand, 2024. [Google Scholar]
  41. Gatehouse, S.; Noble, W. The Speech, Spatial and Qualities of Hearing Scale (SSQ). Int. J. Audiol. 2004, 43, 85–99. [Google Scholar] [CrossRef] [PubMed]
  42. Bench, J.; Doyle, J. The BKB/A (Bamford-Kowal-Bench/Australian Version) Sentence Lists for Hearing-Impaired Children; La Trobe University: Victoria, Australia, 1979. [Google Scholar]
  43. Boothroyd, A.; Hanin, L.; Hnath, T. A sentence test of speech perception: Reliability, set equivalence, and short term learning. In CUNY Academic Works; City University of New York: New York, NY, USA, 1985. [Google Scholar]
  44. Peterson, G.E.; Lehiste, I. Revised CNC lists for auditory tests. J. Speech Hear. Disord. 1962, 27, 62–70. [Google Scholar] [CrossRef] [PubMed]
  45. Boothroyd, A. Developments in Speech Audiometry. Br. J. Sound 1968, 2, 3–10. [Google Scholar]
  46. Noble, W.; Jensen, N.S.; Naylor, G.; Bhullar, N.; Akeroyd, M.A. A short form of the Speech, Spatial and Qualities of Hearing scale suitable for clinical use: The SSQ12. Int. J. Audiol. 2013, 52, 409–412. [Google Scholar] [CrossRef]
  47. Ventry, I.M.; Weinstein, B.E. The hearing handicap inventory for the elderly: A new tool. Ear Hear. 1982, 3, 128–134. [Google Scholar] [CrossRef]
  48. Killion, M.C.; Niquette, P.A.; Gudmundsen, G.I.; Revit, L.J.; Banerjee, S. Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal-hearing and hearing-impaired listeners. J. Acoust. Soc. Am. 2004, 116, 2395–2405. [Google Scholar] [CrossRef]
  49. Etymotic Research. Etymotic BKB-SIN Speech-in-Noise Test User Manual; Interacoustics: Middelfart, Denmark, 2005; pp. 1–27. [Google Scholar]
  50. Smits, C.; Goverts, S.T.; Festen, J.M. The digits-in-noise test: Assessing auditory speech recognition abilities in noise. J. Acoust. Soc. Am. 2013, 133, 1693–1706. [Google Scholar] [CrossRef]
  51. Gatehouse, S. A self-report outcome measure for the evaluation of hearing aid fittings and services. Health Bull. 1999, 57, 424–436. [Google Scholar]
  52. Cox, R.M.; Alexander, G.C. The abbreviated profile of hearing aid benefit. Ear Hear. 1995, 16, 176–186. [Google Scholar] [CrossRef]
  53. Hinderink, J.B.; Krabbe, P.F.; Van Den Broek, P. Development and application of a health-related quality-of-life instrument for adults with cochlear implants: The Nijmegen cochlear implant questionnaire. Otolaryngol. Head Neck Surg. 2000, 123, 756–765. [Google Scholar] [CrossRef]
  54. King, N.; Nahm, E.A.; Liberatos, P.; Shi, Q.; Kim, A.H. A new comprehensive cochlear implant questionnaire for measuring quality of life after sequential bilateral cochlear implantation. Otol. Neurotol. 2014, 35, 407–413. [Google Scholar] [CrossRef]
  55. Spitzer, R.L.; Kroenke, K.; Williams, J.B.W.; Löwe, B. A brief measure for assessing generalized anxiety disorder: The GAD-7. Arch. Intern. Med. 2006, 166, 1092–1097. [Google Scholar] [CrossRef] [PubMed]
  56. Nilsson, M.; Soli, S.D.; Sullivan, J.A. Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. J. Acoust. Soc. Am. 1994, 95, 1085–1099. [Google Scholar] [CrossRef] [PubMed]
  57. Cassarly, C.; Matthews, L.J.; Simpson, A.N.; Dubno, J.R. The Revised Hearing Handicap Inventory and Screening Tool Based on Psychometric Reevaluation of the Hearing Handicap Inventories for the Elderly and Adults. Ear Hear. 2020, 41, 95–105. [Google Scholar] [CrossRef] [PubMed]
  58. Dawson, P.W.; Hersbach, A.A.; Swanson, B.A. An adaptive Australian Sentence Test in Noise (AuSTIN). Ear Hear. 2013, 34, 592–600. [Google Scholar] [CrossRef]
  59. Spahr, A.J.; Dorman, M.F. Performance of subjects fit with the Advanced Bionics CII and Nucleus 3G cochlear implant devices. Arch. Otolaryngol. Head Neck Surg. 2004, 130, 624–628. [Google Scholar] [CrossRef]
  60. Cox, R.; Hyde, M.; Gatehouse, S.; Noble, W.; Dillon, H.; Bentler, R.; Stephens, D.; Arlinger, S.; Beck, L.; Wilkerson, D.; et al. Optimal outcome measures, research priorities, and international cooperation. Ear Hear. 2000, 21, 106S–115S. [Google Scholar] [CrossRef]
  61. Dillon, H.; Birtles, G.; Lovegrove, R. Measuring the Outcomes of a National Rehabilitation Program: Normative Data for the Client Oriented Scale of Improvement (COSI) and the Hearing Aid User’s Questionnaire (HAUQ). J. Am. Acad. Audiol. 1999, 10, 67–79. [Google Scholar] [CrossRef]
  62. Amann, E.; Anderson, I. Development and validation of a questionnaire for hearing implant users to self-assess their auditory abilities in everyday communication situations: The Hearing Implant Sound Quality Index (HISQUI19). Acta Otolaryngol. 2014, 134, 915–923. [Google Scholar] [CrossRef] [PubMed]
  63. The ida Institute. ida Institute Motivational Tools: The Line. Available online: https://idainstitute.com/tools/motivation-tools (accessed on 1 October 2022).
  64. Hawthorne, G.; Hogan, A. Measuring disability-specific patient benefit in cochlear implant programs: Developing a short form of the Glasgow Health Status Inventory, the Hearing Participation Scale. Int. J. Audiol. 2002, 41, 535–544. [Google Scholar] [CrossRef] [PubMed]
  65. McBride, W.S.; Mulrow, C.D.; Aguilar, C.; Tuley, M.R. Methods for screening for hearing loss in older adults. Am. J. Med. Sci. 1994, 307, 40–42. [Google Scholar] [CrossRef] [PubMed]
  66. Yesavage, J.A.; Brink, T.L.; Rose, T.L.; Lum, O.; Huang, V.; Adey, M.; Leirer, V.O. Development and validation of a geriatric depression screening scale: A preliminary report. J. Psychiatr. Res. 1982, 17, 37–49. [Google Scholar] [CrossRef]
  67. Luetje, C.M.; Brackman, D.; Balkany, T.J.; Maw, J.; Baker, R.S.; Kelsall, D.; Backous, D.; Miyamoto, R.; Parisier, S.; Arts, A. Phase III clinical trial results with the Vibrant Soundbridge implantable middle ear hearing device: A prospective controlled multicenter study. Otolaryngol. Head Neck Surg. 2002, 126, 97–107. [Google Scholar] [CrossRef]
  68. Beck, A.T.; Ward, C.H.; Mendelson, M.; Mock, J.; Erbaugh, J. An inventory for measuring depression. Arch. Gen. Psychiatry 1961, 4, 561–571. [Google Scholar] [CrossRef]
  69. Antony, M.M.; Bieling, P.J.; Cox, B.J.; Enns, M.W.; Swinson, R.P. Psychometric properties of the 42-item and 21-item versions of the Depression Anxiety Stress Scales in clinical groups and a community sample. Psychol. Assess. 1998, 10, 176–181. [Google Scholar] [CrossRef]
  70. Zigmond, A.S.; Snaith, R.P. The hospital anxiety and depression scale. Acta Psychiatr. Scand. 1983, 67, 361–370. [Google Scholar] [CrossRef]
  71. Kompis, M.; Pfiffner, F.; Krebs, M.; Caversaccio, M.D. Factors influencing the decision for Baha in unilateral deafness: The Bern benefit in single-sided deafness questionnaire. Adv. Otorhinolaryngol. 2011, 71, 103–111. [Google Scholar]
  72. Topp, C.W.; Østergaard, S.D.; Søndergaard, S.; Bech, P. The WHO-5 Well-Being Index: A systematic review of the literature. Psychother. Psychosom. 2015, 84, 167–176. [Google Scholar] [CrossRef]
  73. Cox, R.M.; Alexander, G.C. Expectations about hearing aids and their relationship to fitting outcome. J. Am. Acad. Audiol. 2000, 11, 368–382. [Google Scholar] [CrossRef]
  74. Billinger-Finke, M.; Bräcker, T.; Weber, A.; Amann, E.; Anderson, I.; Batsoulis, C. Development and validation of the audio processor satisfaction questionnaire (APSQ) for hearing implant users. Int. J. Audiol. 2020, 59, 392–397. [Google Scholar] [CrossRef] [PubMed]
  75. De Jong Gierveld, J.; Kamphuis, F. The Development of a Rasch-Type Loneliness Scale. Appl. Psychol. Meas. 1985, 9, 289–299. [Google Scholar] [CrossRef]
  76. De Jong Gierveld, J.; Van Tilburg, T. A 6-Item Scale for Overall, Emotional, and Social Loneliness: Confirmatory Tests on Survey Data. Res. Aging 2006, 28, 582–598. [Google Scholar] [CrossRef]
  77. Terluin, B.; van Marwijk, H.W.; Adèr, H.J.; de Vet, H.C.; Penninx, B.W.; Hermens, M.L.; van Boeijen, C.A.; van Balkom, A.J.; van der Klink, J.J.; Stalman, W.A.B. The Four-Dimensional Symptom Questionnaire (4DSQ): A validation study of a multidimensional self-report questionnaire to assess distress, depression, anxiety and somatization. BMC Psychiatry 2006, 6, 34. [Google Scholar] [CrossRef]
  78. Diener, E.; Emmons, R.A.; Larsen, R.J.; Griffin, S. The Satisfaction with Life Scale. J. Personal. Assess. 1985, 49, 71–75. [Google Scholar] [CrossRef]
  79. Russell, D.; Peplau, L.A.; Cutrona, C.E. The revised UCLA Loneliness Scale: Concurrent and discriminant validity evidence. J. Personal. Soc. Psychol. 1980, 39, 472–480. [Google Scholar] [CrossRef]
  80. Davies, A.R.; Ware, J.E., Jr. GHAA’s Consumer Satisfaction Survey and User’s Manual, 2nd ed.; Group Health Association of America: Washington, DC, USA, 1991. [Google Scholar]
  81. McConnaughy, E.A.; Prochaska, J.O.; Velicer, W.F. Stages of change in psychotherapy: Measurement and sample profiles. Psychother. Theory Res. Pract. 1983, 20, 368–375. [Google Scholar] [CrossRef]
  82. Levenstein, S.; Prantera, C.; Varvo, V.; Scribano, M.; Berto, E.; Luzi, C.; Andreoli, A. Development of the Perceived Stress Questionnaire: A new tool for psychosomatic research. J. Psychosom. Res. 1993, 37, 19–32. [Google Scholar] [CrossRef]
  83. Heffernan, E.; Coulson, N.S.; Ferguson, M.A. Development of the Social Participation Restrictions Questionnaire (SPaRQ) through consultation with adults with hearing loss, researchers, and clinicians: A content evaluation study. Int. J. Audiol. 2018, 57, 791–799. [Google Scholar] [CrossRef]
  84. Sansoni, J.; Hawthorne, G.; Fleming, G.; Owen, E.; Marosszeky, N. Technical Manual and Instructions for the Revised Incontinence and Patient Satisfaction Tools; Centre for Health Service Development, University of Wollongong: Wollongong, Australia, 2011. [Google Scholar]
  85. Cox, R.M.; Alexander, G.C. Measuring Satisfaction with Amplification in Daily Life: The SADL scale. Ear Hear. 1999, 20, 306–320. [Google Scholar] [CrossRef] [PubMed]
  86. Reichheld, F.F. The One Number You Need to Grow, in Harvard Business Review; Harvard Business School Publishing: Brighton, MA, USA, 2003. [Google Scholar]
  87. Heffernan, E.; Habib, A.; Ferguson, M. Evaluation of the psychometric properties of the social isolation measure (SIM) in adults with hearing loss. Int. J. Audiol. 2019, 58, 45–52. [Google Scholar] [CrossRef] [PubMed]
  88. CI Task Force. Adult Hearing Standards of Care; Living Guidelines. 2022. Available online: https://adulthearing.com/standards-of-care/ (accessed on 24 March 2025).
  89. Mokkink, L.B.; Elsman, E.B.M.; Terwee, C.B. COSMIN guideline for systematic reviews of patient-reported outcome measures version 2.0. Qual. Life Res. 2024, 33, 2929–2939. [Google Scholar] [CrossRef] [PubMed]
  90. Boisvert, I.; Reis, M.; Au, A.; Cowan, R.; Dowell, R.C. Cochlear implantation outcomes in adults: A scoping review. PLoS ONE 2020, 15, e0232421. [Google Scholar] [CrossRef]
  91. Duncan, E.A.; Murray, J. The barriers and facilitators to routine outcome measurement by allied health professionals in practice: A systematic review. BMC Health Serv. Res. 2012, 12, 96. [Google Scholar] [CrossRef]
  92. Hatfield, D.R.; Ogles, B.M. Why some clinicians use outcome measures and others do not. Adm. Policy Ment. Health 2007, 34, 283–291. [Google Scholar] [CrossRef]
  93. O’Connor, B.; Kerr, C.; Shields, N.; Imms, C. Understanding allied health practitioners’ use of evidence-based assessments for children with cerebral palsy: A mixed methods study. Disabil. Rehabil. 2019, 41, 53–65. [Google Scholar] [CrossRef]
  94. Aiyegbusi, O.L.; Rivera, S.C.; Roydhouse, J.; Kamudoni, P.; Alder, Y.; Anderson, N.; Baldwin, R.M.; Bhatnagar, V.; Black, J.; Bottomley, A.; et al. Recommendations to address respondent burden associated with patient-reported outcome assessment. Nat. Med. 2024, 30, 650–659. [Google Scholar] [CrossRef]
  95. Gifford, R.H.; Dorman, M.F. Bimodal Hearing or Bilateral Cochlear Implants? Ask the Patient. Ear Hear. 2019, 40, 501–516. [Google Scholar] [CrossRef]
  96. Ebrahimi-Madiseh, A.; Eikelboom, R.H.; Jayakody, D.M.; Atlas, M.D. Speech perception scores in cochlear implant recipients: An analysis of ceiling effects in the CUNY sentence test (Quiet) in post-lingually deafened cochlear implant recipients. Cochlear Implants Int. 2016, 17, 75–80. [Google Scholar] [CrossRef]
  97. Myles, A.J. The clinical use of Arthur Boothroyd (AB) word lists in Australia: Exploring evidence-based practice. Int. J. Audiol. 2017, 56, 870–875. [Google Scholar] [CrossRef]
  98. ANZ Hearing Health Collaborative. ANZ Hearing Health Collaborative (ANZ HHC): Living Guidelines for Cochlear Implant (CI) Referral, CI Evaluation and Candidacy, and CI Outcome Evaluation in Adults 2025. Available online: https://app.magicapp.org/#/guideline/10161 (accessed on 10 August 2025).
  99. Sucher, C.; Laird, E.; Hughes, S.; Elks, B.; Ferguson, M. Development of the LivCI: A patient-reported outcome measure of personal factors that affect quality of life, use and acceptance of cochlear implants. In Proceedings of the World Congress of Audiology, Paris, France, 19–22 September 2024. [Google Scholar]
  100. Bellucci, C.; Hughes, K.; Toomey, E.; Williamson, P.R.; Matvienko-Sikar, K. A survey of knowledge, perceptions and use of core outcome sets among clinical trialists. Trials 2021, 22, 937. [Google Scholar] [CrossRef]
  101. Blum, E.; Dy, C.J. Why Core Outcome Sets Do Not Get Used—And How Dissemination and Implementation Science can Help. J. Surg. Res. 2025, 314, 415–420. [Google Scholar] [CrossRef]
Figure 1. The three phases of the study [28,33].
Figure 2. Speech perception outcome measures. Ratings for ease of use (E), trustworthiness (T), usefulness (U), and likelihood of future recommendation for use (R). Number of participants who had used each measure, either occasionally or regularly, and thus provided ratings, is shown in parentheses after the named outcome measure. For abbreviations, see Table 2.
Figure 3. Patient-reported outcome measure (PROM) ratings for ease of use (E), trustworthiness (T), usefulness (U), and likelihood of future recommendation for use (R). Number of participants who had used each measure, either occasionally or regularly, and thus provided ratings, is shown in parentheses after the named outcome measure.
Table 1. Core outcome domains (CI outcome areas, or “what” to measure) identified in Phase two [33]. * Equal rating of second for both domains by CI users. HL: hearing loss; CI: cochlear implant.

| Priority | Service (CI Users) | Service (CI Professionals) | Clinical (CI Users) | Clinical (CI Professionals) | Patient (CI Users) | Patient (CI Professionals) |
|---|---|---|---|---|---|---|
| First | Reliability of remote technology | Usability of remote technology | Speech recognition in noise | Device integrity and status | Participation restriction due to HL | Expectations of hearing health outcomes |
| Second | Usability of the remote technology | Accessibility of the remote service (for CI user) | Speech recognition in quiet | Speech discrimination | Hearing-related quality of life and satisfaction with CI * | Motivation and readiness to act on hearing difficulties |
| Third | Accessibility of the remote service (for CI user) | Reliability of remote technology | Speech discrimination | Device use | Mental health and wellbeing | Acceptability and tolerability of the CI (for CI user) |
Table 2. Clinical and research experience of CI clinicians who participated in the survey.

| | Number of Participants (%) | Median (Years) | Range (Years) |
|---|---|---|---|
| Duration of clinical audiology experience | 20 (100%) | 20.0 | 7–41 |
| Duration of CI-specific clinical audiology experience | 20 (100%) | 19.0 | 4–40 |
| Experience in audiology-focused research | 12 (60%) | 7.5 | 0–40 |
| Experience in CI-specific research | 14 (70%) | 12 | 0–40 |
Table 3. Familiarity ratings for patient-reported outcome measures (PROMs) and clinical outcome measures, ordered by median response. Numbers under ratings represent the number of participants providing each rating for each measure.

| Clinical Measure/PROM | Never Heard | Never Used | Occasionally Used | Regularly Used | Median Response |
|---|---|---|---|---|---|
| Speech and Spatial Qualities Scale (SSQ) [41] | 0 | 2 | 5 | 13 | Regularly Used |
| Bamford–Kowal–Bench Sentence Test, Australian Version (BKB/A) [42] | 0 | 1 | 6 | 10 | Regularly Used |
| City University of New York Sentence Test (CUNY©) [43] | 0 | 2 | 2 | 13 | Regularly Used |
| Consonant–Nucleus–Consonant Words (CNC Words) [44] | 0 | 1 | 0 | 16 | Regularly Used |
| Arthur Boothroyd Words (AB Words) [45] | 0 | 1 | 1 | 15 | Regularly Used |
| Short-Form Speech and Spatial Qualities Scale (SSQ-12) [46] | 4 | 1 | 1 | 14 | Regularly Used |
| Hearing Handicap Inventory for the Elderly (HHIE) [47] | 2 | 6 | 10 | 2 | Occasionally Used |
| Quick Speech In Noise Test (QuickSIN™) [48] | 2 | 4 | 9 | 2 | Occasionally Used |
| Bamford–Kowal–Bench Sentences In Noise Test (BKB-SIN™) [49] | 1 | 5 | 4 | 7 | Occasionally Used |
| Digits-In-Noise/Digit Triplet Test (DIN/DTT) [50] | 2 | 3 | 7 | 5 | Occasionally Used |
| Glasgow Hearing Aid Benefit Profile (GHABP) [51] | 0 | 9 | 8 | 3 | Occasionally Used |
| Abbreviated Profile of Hearing Aid Benefit (APHAB) [52] | 2 | 7 | 4 | 7 | Occasionally Used |
| Nijmegen Cochlear Implant Questionnaire (NCIQ) [53] | 4 | 6 | 9 | 1 | Never Used |
| Comprehensive Cochlear Implant Questionnaire (CCIQ) [54] | 8 | 5 | 7 | 0 | Never Used |
| General Anxiety Disorder-7 (GAD-7) [55] | 7 | 13 | 0 | 0 | Never Used |
| Hearing In Noise Test (HINT) [56] | 2 | 8 | 6 | 1 | Never Used |
| Revised Hearing Handicap for the Elderly (RHHI) [57] | 7 | 12 | 1 | 0 | Never Used |
| Revised Hearing Handicap for the Elderly—Screening (RHHI-S) [57] | 8 | 11 | 1 | 0 | Never Used |
| Austin Sentence Test (Austin) [58] | 3 | 6 | 4 | 4 | Never Used |
| AzBio Sentence Test (AzBio) [59] | 1 | 12 | 3 | 1 | Never Used |
| International Outcomes Inventory—Cochlear Implants (IOI-CI) [60] | 2 | 11 | 4 | 3 | Never Used |
| Hearing Aid Users Questionnaire (HAUQ) [61] | 9 | 8 | 2 | 1 | Never Used |
| Cochlear Implant Quality of Life Questionnaire—Global (CIQoL-Global) [30] | 9 | 4 | 7 | 0 | Never Used |
| Hearing Implant Sound Quality Index (HISQUI19) [62] | 9 | 7 | 3 | 1 | Never Used |
| IDA Tool—The Line (The Line) [63] | 8 | 8 | 2 | 2 | Never Used |
| Hearing Participation Scale (HPS) [64] | 8 | 11 | 1 | 0 | Never Used |
| Hearing Handicap Inventory for the Elderly—Screening (HHIE-S) [65] | 4 | 9 | 5 | 2 | Never Used |
| Geriatric Depression Scale—Long (GDS-L) [66] | 9 | 11 | 0 | 0 | Never Used |
| Cochlear Implant Quality of Life Questionnaire—Profile (CIQoL-Profile) [30] | 9 | 7 | 4 | 0 | Never Used |
| Hearing Device Satisfaction Scale (HDSS) [67] | 8 | 11 | 1 | 0 | Never Used |
| Beck’s Depression Index (BDI) [68] | 9 | 10 | 1 | 0 | Never Used |
| Depression Anxiety Stress Scale (21 Item) (DASS-21) [69] | 8 | 10 | 2 | 0 | Never Used |
| Depression Anxiety Stress Scale (42 Item) (DASS-42) [69] | 7 | 11 | 2 | 0 | Never Used |
| Hospital Anxiety and Depression Scale (HADS) [70] | 9 | 5 | 5 | 1 | Never Used |
| Bern Benefit in Single-Sided Deafness (BBSS) [71] | 10 | 9 | 1 | 0 | Never Heard |
| WHO Wellbeing Index (WHO-S) [72] | 10 | 10 | 0 | 0 | Never Heard |
| Expected Consequences of Hearing Aid Ownership (ECHO) [73] | 13 | 7 | 0 | 0 | Never Heard |
| Audio Processor Satisfaction Questionnaire (APSQ) [74] | 12 | 7 | 1 | 0 | Never Heard |
| De Jong Gierveld Loneliness Scale (11 Item) (DJGLS-11) [75] | 15 | 5 | 0 | 0 | Never Heard |
| De Jong Gierveld Loneliness Scale (6 Item) (DJGLS-6) [76] | 15 | 5 | 0 | 0 | Never Heard |
| The Four-Dimensional Symptom Questionnaire (4DSQ) [77] | 16 | 4 | 0 | 0 | Never Heard |
| Satisfaction With Life Scale (SWLS) [78] | 16 | 4 | 0 | 0 | Never Heard |
| UCLA Loneliness Index (Revised) (UCLA) [79] | 16 | 4 | 0 | 0 | Never Heard |
| Visit-Specific Satisfaction Questionnaire (VSQ-9) [80] | 19 | 1 | 0 | 0 | Never Heard |
| University of Rhode Island Change Assessment adapted for hearing loss (URICA-HL) [81] | 13 | 7 | 0 | 0 | Never Heard |
| Perceived Stress Questionnaire (PSQ) [82] | 12 | 8 | 0 | 0 | Never Heard |
| Social Participation Restrictions Questionnaire (SPaRQ) [83] | 12 | 8 | 0 | 0 | Never Heard |
| Short Assessment of Patient Satisfaction (SAPS) [84] | 14 | 5 | 0 | 1 | Never Heard |
| Satisfaction with Amplification in Daily Life (SADL) [85] | 11 | 9 | 0 | 0 | Never Heard |
| Net Promoter Score (NPS) [86] | 12 | 6 | 2 | 0 | Never Heard |
| Social Isolation Measure (SIM) [87] | 14 | 6 | 0 | 0 | Never Heard |
Table 4. Key findings from the CI user and CI professional final recommendation workshops. * CALD = Culturally and Linguistically Diverse.

General
- CI Professionals: Assessment of all three supra-domains is important.
- CI Users: Assessment of all three supra-domains is important. Asynchronous remote assessments must consider the amount of time required for the user to complete, as completion can be burdensome on the user.

Service Supra-domain
- CI Professionals:
  - Preference for simplicity of measurement: 1-item outcome measures per domain; Likert/binary outcome scale; use of automated methods of data collection to reduce the burden of collection and completion.
  - Technology for remote services should be accessible to everyone: consider CALD * users; important to assess CI users’ ability to use the remote service, although the need to do so is likely to decrease in the future with increased familiarity with technology.
- CI Users:
  - Preference for simplicity of measurement: 1-item outcome measures per domain; Likert/star rating scale; option to expand on answers; outcomes should be assessed immediately after service use to ensure responses are contextual, and no more than 3–4 times a year.
  - Technology for remote services should be accessible to everyone: may depend on end-user connectivity; CI users should be able to complete remote care sessions independently if required; digital literacy is an important consideration.
Clinical Supra-domain
- CI Professionals:
  - System checks are essential: outcomes are dependent on working hardware.
  - Preference for a minimalist approach focusing on a few key outcome measures: recommendation to assess device use and speech perception in noise, as CI outcomes are dependent on CI use.
  - Speech perception tests: should be suitable for a range of hearing abilities and be “real-world” applicable (speech in noise; adaptive speech tests); need to keep abreast of tests in development, as they may be more appropriate (e.g., the ECO-SIN test); important to differentiate between diagnostic outcome measures (e.g., confusion matrices to identify which speech sounds are not perceived), which may only be required for some individuals at certain times, and functional outcome measures, which assess overall ability to follow speech in quiet and noisy conditions and are often used to track the general progress of both individuals and CI groups as a whole.
  - Historical testing: recognition that some outcome measures (e.g., speech in quiet) persist for historical reasons, enabling CI users and CI clinicians to compare outcomes over time and supporting retrospective outcomes research.
- CI Users:
  - System checks are essential: should be routine; could be performed once a month without the need for patient feedback; may catch issues quicker than client self-report; consider sending a status report to the client.
  - Preference for a minimalist approach focusing on a few key outcome measures: adaptive tests are often quicker; device use is important to measure, although it is ultimately up to the CI user to determine how much they wear their device; preference for datalogging rather than self-report.
  - Speech perception tests: should be “real-world” applicable.
  - Historical testing: important to be able to compare current and previously measured results to view progress over time.
  - Remote test environment: must be considered when implementing speech test outcome measures remotely; replication of the same test environment may not be possible, and if this does not matter, it should be communicated to the CI user.
Patient Supra-domain
- CI Professionals:
  - PROMs (patient-reported outcome measures): must be practical to implement and use in the clinic; need to consider length and ease of administration, and the use of a mix of broad and specific PROMs for a future COS.
- CI Users:
  - PROMs: need to be short and quick to complete (maximum 20–25 items); no mandatory free-text items, although free-text options should be available; should not include multiple items asking similar things; must use simple language; CI users must be made aware that PROMs need to be completed prior to the appointment.
  - Mental Health and Wellbeing: can both affect and be affected by hearing loss; assessment of this area is vital for the provision of holistic care and support; it may be difficult to distinguish hearing-loss-related mental health issues from those caused by other life stressors, so hearing-related mental health tools are vital.
  - Subjective hearing disability: PROMs measuring subjective hearing disability are not sensitive enough to pick up deterioration in performance; perception that a speech test reflecting “real-life” situations may be more accurate.
  - Satisfaction with CI: assessment of satisfaction is crucial because it reflects overall quality of life as well as the effectiveness of the CI.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Sucher, C.; Allen, D.; Laird, E.; Boisvert, I.; Ferguson, M. What to Measure? Development of a Core Outcome Set to Assess Remote Technologies for Cochlear Implant Users. J. Clin. Med. 2025, 14, 7697. https://doi.org/10.3390/jcm14217697