Review

Measuring Cognition and Cognitive Impairment in the Survey of Health, Ageing and Retirement in Europe (SHARE): A Scoping Review and Instrument Mapping Study

by
Mark R. O’Donovan
1,*,
Nicola Cornally
2 and
Rónán O’Caoimh
1,3
1
Health Research Board Clinical Research Facility, University College Cork, Mercy University Hospital, T12WE28 Cork City, Ireland
2
Catherine McAuley School of Nursing and Midwifery, University College Cork, T12AK54 Cork City, Ireland
3
Department of Geriatric Medicine, Mercy University Hospital, T12WE28 Cork City, Ireland
*
Author to whom correspondence should be addressed.
J. Ageing Longev. 2026, 6(1), 30; https://doi.org/10.3390/jal6010030
Submission received: 11 December 2025 / Revised: 4 February 2026 / Accepted: 4 March 2026 / Published: 12 March 2026

Abstract

The Survey of Health, Ageing and Retirement in Europe (SHARE) is a cross-national panel study including approximately 160,000 adults aged ≥50 years from 29 countries. While multiple cognitive subtests are available, the SHARE consortium does not currently recommend a standardised approach to cognitive screening. This scoping review and mapping study aimed to (1) assess how cognition is measured in SHARE publications, (2) identify whether any cognitive screening instruments (CSIs) are validated in the SHARE, and (3) explore the potential to replicate additional CSIs using cognitive measures available in recent waves that include an expanded battery of subtests. SHARE-related publications were identified by searching PubMed and a dedicated online registry of SHARE publications. Methodological details were extracted and quantitative counts were calculated. Among 234 SHARE publications, the most common choices were using single subtests (n = 94), CSIs (n = 56), and standardised scores (n = 50). Of 22 unique CSIs used in the SHARE, only the SHARE Cognitive Instrument and Langa–Weir Criteria were formally validated. Cognitive impairment was assessed in 36 studies, yet no validated, recognised definition of mild cognitive impairment (MCI) was found. Mapping other potential CSIs (n = 81) identified the 10-Point Cognitive Screener, Six-Item Screener and Mini-Cog as candidates for use across SHARE waves. Further research is needed to validate existing CSIs and to better operationalise MCI in the SHARE.

1. Introduction

The prevalence of cognitive impairment among older adults is expected to rise globally in the coming decades [1,2], with particularly marked increases expected in European countries [3]. The development and validation of brief, reliable cognitive screening instruments (CSIs) for use within longitudinal ageing studies and in community-based samples is important for advancing our understanding of cognitive ageing at both the individual and population level [4]. Short instruments can facilitate the study of the prevalence, risk factors, and progression of cognitive impairment across diverse populations [5]. They can also be used in clinical settings, such as primary care or general outpatient units, to opportunistically screen individuals at risk of mild cognitive impairment (MCI), a prodromal stage that identifies individuals at higher risk of developing dementia. Early identification may support timely interventions to help mitigate the impact of cognitive decline on daily functioning and quality of life [6,7]. Brief CSIs (i.e., those taking ≤20 min to complete) can also be used as diagnostic tests as part of a more detailed comprehensive cognitive and functional assessment [5] and to track decline in cognition over time.
The Survey of Health, Ageing and Retirement in Europe (SHARE) is a multi-country longitudinal study focused on middle-aged and older adults aged ≥ 50 years and includes questions assessing physical, psychological, cognitive and social factors [8]. Data collection began in 2004 and has (as of wave 9) spanned 29 countries including all continental European Union countries, Switzerland and Israel, which are included in the United Nations European Region. The SHARE uses standard questionnaires and in-person interviews based on the Health and Retirement Study (HRS) in the United States [9], which has inspired a network of sister studies globally [10,11]. The HRS network of studies has more recently started collecting detailed cognitive data through the Harmonized Cognitive Assessment Protocol (HCAP) project, acknowledging the need for more robust measures of cognitive impairment [12]. The first SHARE-HCAP study was conducted in five European Union countries in 2022 (data released 2025) [13]. While the SHARE consortium have made a probabilistic diagnosis of cognitive impairment within their HCAP study [13], no standardised measure of cognitive impairment has been recommended by the consortium for use in previous main (core) waves of the SHARE [14].
The core SHARE study includes a number of cognitive subtests across five cognitive domains: memory, language, orientation, visuospatial and executive function [14]. The number of subtests varies across waves, and the two most recent, waves 8 and 9, include 11 cognitive subtests such as delayed recall (recall of a list of 10 words) and verbal fluency (animal naming), amongst others. While these can be used alone or in combination to form proto-instruments to assess cognition, no recognised standardised CSI has been recommended by the SHARE consortium to measure global cognitive function [14]. Thus, there is a lack of guidance on how cognition should be assessed within the SHARE. Efforts to support comparability across waves and between cohort studies have focused predominantly on documenting the availability of individual cognitive subtests across studies [10,11,15,16], and to our knowledge, no study has examined which subtests or combination of subtests have been used in SHARE publications to measure cognition.
The SHARE, unlike some HRS-based surveys of ageing [11,14], did not originally include any complete established CSIs for global cognition, such as the Mini-Mental State Examination (MMSE) [17], the Montreal Cognitive Assessment (MoCA) [18], the Telephone Interview for Cognitive Status (TICS) [19] or the Community Screening Instrument for Dementia (CSI-D) [20]. However, given the plethora of short CSIs that have been proposed in the literature [21,22], there may be multiple brief CSIs that could be replicated in the SHARE using the subtests provided. To identify these, and how researchers have tackled this deficit to date, a systematic overview of the existing literature is required. Scoping reviews are designed to summarise broad topics or fields in order to clarify working definitions, identify research gaps and make recommendations for future research [23]. Hence, we conducted a scoping review and instrument mapping study to address three objectives aimed at providing more clarity and consistency regarding the measurement of cognition in the main waves of the SHARE:
(1)
To review how cognitive function and cognitive impairment have been assessed in previous SHARE publications including any details on what subtests were used, any cut-offs applied, the number of studies using the approach, and the longitudinal availability of each approach across study waves.
(2)
To identify which, if any, CSIs and measures of cognitive impairment have been validated internally or externally for use in the SHARE.
(3)
To explore which additional CSIs could be operationalised for use within the SHARE based on cognitive subtests (items) available and cross-referenced with previously published reviews of brief CSIs.

2. Materials and Methods

This study involved three steps: (1) a scoping review of published SHARE studies examining cognition; (2) a review of brief CSIs identified from the broader literature; and (3) an instrument mapping exercise to assess which CSIs could be operationalised using available SHARE subtests. This mixed-methods approach is outlined in Figure 1.
In the first step, the SHARE register and PubMed were searched to identify SHARE studies that assessed cognition and were published between 2003 and 25 July 2025. The SHARE online register (available at: https://share-eric.eu/publications accessed on 25 July 2025) documents studies that use SHARE data. PubMed was added as a second source after identifying that some studies were not present in the register at this time. Separately, in the second step, short (less than 30 min to administer) CSIs from the wider literature were identified by a search of existing narrative and systematic review papers examining brief CSIs. In the third step, an instrument mapping exercise was conducted, which reviewed all the CSIs identified in the previous step and what items (i.e., cognitive subtests) and cognitive domains these included. Their fidelity (the proportion of the original instrument that could be reproduced using the subtests available in the SHARE) was then assessed. The scoping review component of this study was conducted in accordance with the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) [24] (checklist provided in the appendix). A protocol was published on Open Science Framework (https://doi.org/10.17605/OSF.IO/NTPMF registered on 21 May 2025). A PRISMA flow diagram was produced to summarise the study selection and exclusion using the R package “PRISMA2020” [25,26].

2.1. Eligibility Criteria for the Scoping Review

This review included studies assessing cognition using SHARE data published since the study’s inception (2003). Eligible studies were limited to primary research papers assessing cognitive function or impairment using secondary data from the SHARE, including any primary study which proposed a unique short CSI for use in this cohort. Only studies that used an objective measure of cognition were included; those using only subjective or informant-rated measures of cognition, such as self-rated memory [27] or the Informant Questionnaire on Cognitive Decline in the Elderly [28], were excluded. Studies from any year and published in any language were considered. Publications in languages other than English were translated using Google Translate. Included studies were limited to full peer-reviewed articles. We excluded abstract publications, book chapters, protocols, reviews and SHARE working documents for several reasons, including concerns over methodological rigour and a focus on describing the SHARE study itself. A list of all the excluded studies is provided in the Supplementary Material Table S1.

2.2. Information Sources, Search Terms and Screening for the Scoping Review

The SHARE studies were identified in three ways: from an online register of SHARE studies, from a search of the PubMed database and from citation tracking of included studies. The search was updated on 25 July 2025. The SHARE study register (https://share-eric.eu/publications accessed on 25 July 2025) contains over 4000 submissions, and potential papers were identified by applying the following search terms without any filters: “cognitive”, “cognition”, “cognitive impairment”, “cognitive screening”, “cognitive assessment” and “cognitive screening instruments”. This register should include a complete list of SHARE studies, given it is required that all SHARE studies are reported to the SHARE consortium in the conditions for use [29]. To supplement this, PubMed was also searched using the following string: “Survey of Health Ageing and Retirement in Europe” AND (“cognitive” OR “cognition” OR “cognitive impairment” OR “cognitive screening” OR “cognitive assessment” OR “cognitive screening instruments”). Study citations were copied to an Excel spreadsheet and were screened by a single reviewer. Most ineligible study types (e.g., book chapters and working documents) could be identified from details in the SHARE database and could thus be excluded at the title screening stage. The SHARE database provided the study title, a citation/reference and the publication year. The rest of the studies were moved to full-text review.

2.3. Data Extraction (Charting) Process and Items

A data-charting form was jointly developed by two reviewers (MO’D and RO’C) to determine which variables to extract as part of the scoping review. This included descriptive details (study identifier, study title, a citation/reference, and publication year), screening details (study design), and cognitive methods (subtests used, CSI used, other details on methods and cut-off scores). One reviewer charted the data, and both reviewers discussed the results and continuously updated the data-charting form in an iterative process. Where necessary, methodological details were extracted from the articles’ Supplementary Material or cited papers. The range of scores could also be used to differentiate SHARE subtests, including serial 7s (scored 0–5) and numeracy (scored 1–5).

2.4. Instrument Mapping Study

The instrument mapping exercise was based on a search of two large reviews of short CSIs: a 2007 narrative review describing the psychometric characteristics of 39 CSIs [21] and a more recent systematic review published in 2019, which included 50 CSIs [22]. These were identified based on a limited search of Google Scholar using the term “brief cognitive screening instrument”. Hand searching added any additional CSIs known to the study authors that were not listed in either of these review papers. Details of the instruments, including their subtests and scoring structure, were extracted. Instruments were excluded if they were informant-generated, had an administration time of more than 20 min or included only a single subtest. For CSIs identified in the literature, their fidelity within the SHARE was assessed as the proportion of the total score which could be replicated using items available in the SHARE (including item substitution if very similar cognitive subtests were available).

2.5. Synthesis of Results

This study was designed to identify cognitive screening approaches and CSIs that are available or could be developed for use in the SHARE; hence, a quality appraisal of the included studies was not conducted. For the quantitative analysis, the number and proportion of studies using each approach were calculated from the extracted details using R version 4.4.2 [30].

2.6. Terminology and Available Subtests

General cognitive performance was divided into five domains to reflect those available in the SHARE, i.e., memory, language/fluency, orientation, executive functioning and visuospatial function, consistent with the HCAP, a cognitive sub-study replicated across HRS-related studies [31,32,33]. Memory was further subdivided into immediate and delayed memory [31]. There are 11 cognitive subtests available in the SHARE across waves 1–9, derived from different cognitive tests including the MMSE [17], TICS [19], CSI-D [20] and Addenbrooke’s Cognitive Examination III (ACE-III) [34]. These include the following cognitive subtests: 10-word immediate recall [registration], 10-word (delayed) recall, animal naming, time orientation (date, day, month, year), serial 7s, numeracy (4 items), counting backwards, copying a cube, copying an infinity loop, clock drawing, and object naming (3 items). These 11 items are not available across all waves of the SHARE, with an increasing number added over waves. For this review, the term “word registration” is used for the 10-word immediate recall test, which some studies have also termed verbal learning. These subtests and their availability across waves of the SHARE are summarised in Table 1.

3. Results

From a total of 442 reports found in the search of the SHARE register and PubMed (step 1), there were 234 studies eligible for inclusion. Of the 234 included studies, 146 (62%) came from the SHARE register, 77 (33%) from PubMed and 11 (5%) from citation tracking. Thirty-four full texts were excluded at screening for the following reasons: did not assess cognition (n = 25), did not use SHARE data (n = 4), abstract-only publication (n = 2), review article (n = 2), and trial protocol (n = 1). Four of the included studies were published in languages other than English, namely Hebrew (n = 1), Bulgarian (n = 1), German (n = 1), and Spanish (n = 1), and were translated using Google Translate. In the separate search (step 2), after removing duplicates (n = 10), informant-reported measures (n = 7) and screens based on single subtest items (n = 11), a total of 61 unique CSIs were identified [21,22]. The selection of studies is detailed in two PRISMA (2020) flow diagrams (Figure 2). Data extracted for the included studies are provided in the Supplementary Material Tables S2 and S3.

3.1. Approaches to Measuring Cognition in the SHARE Studies

Among the 234 SHARE studies, 140 (60%) applied one or more composite scores (i.e., multiple subtests combined) and 94 (40%) provided results for individual cognitive subtests only. A full breakdown of the methods utilised in the SHARE studies to assess cognition is provided in Figure 3.

3.2. Cognitive Subtests Used in the SHARE

The subtests most commonly applied in the SHARE studies are displayed below (Table 2) and are divided according to each of the categories in Figure 3. Only one study [38] included any of the newly available subtests from the latter waves of the SHARE, such as naming descriptions, the clock drawing test, and copying the cube or infinity loop, as these were only introduced at wave 8 (data released on 10 February 2022). Reflecting this, the most commonly used subtests overall were word registration, word recall and verbal fluency (each found in over 75% of studies). Other measures (orientation, serial 7s or numeracy tests) were each used in 26% of the studies or fewer. As noted, only one study utilised the recent subtests assessing visuospatial function.

3.3. Use of Individual Cognitive Subtests (n = 94)

A total of 94 studies presented results for individual cognitive subtests (items). Of these, only 8 (9%) used cut-offs to define a deficit on one or more of the included subtests. These cut-offs were obtained either from previous works (n = 5; three SHARE reports/papers, one non-SHARE study, one uncited) or from pre-determined proportions (n = 3), taking either the lowest half (based on the median), the lowest one third (or as close to this share as possible) or the 25th percentile. The number of subtests included ranged between one (19 studies, 20%) and five (9 studies, 10%). Amongst the 19 studies which only included a single subtest, the majority chose word recall (n = 10), followed by word registration (n = 4), verbal fluency (n = 3) or orientation (n = 2). One study used word registration scores to compare memory across both the SHARE and the China Health and Retirement Longitudinal Study [39]. Some studies (n = 6) which included multiple subtests presented standardised z-scores instead of raw scores to facilitate comparisons between subtest scores. Only one study included cognitive subtests as independent variables within statistical modelling (a Random Forest algorithm) [40].

3.4. Use of Cognitive Screening Instruments (n = 56)

In total, 56 SHARE studies included at least one CSI derived from multiple subtests. Most (n = 30) focused on memory, combining 10-word registration and delayed recall into a 20-point sum score (n = 27) or a 10-point mean score (n = 3). Only two of these studies applied cut-offs to define memory impairment: one used a threshold of 1.5 SDs below age-specific means [41], while another incorporated the recall score into an algorithm for probable dementia that also included activities of daily living and country-level dementia prevalence estimates [37]. The remaining studies (n = 26) combined subtests across domains to create 20 distinct CSIs (Table 3), of which only two were formally validated: the Langa–Weir criteria [42] and the SHARE Cognitive Instrument (SHARE-Cog) [38]. The Langa–Weir criteria include four subtests (10-word registration/recall, serial 7s and counting backwards) [42]; they have predominantly been used in the HRS in the United States and were validated using data from the Aging, Demographics, and Memory Study (ADAMS) sub-study, which included gold-standard clinical diagnoses [42]. However, these subtests are only available in the SHARE for participants aged ≥ 60 years in waves 8–9 and have not been validated in European samples. The SHARE-Cog combines registration, recall and verbal fluency, is available across all SHARE waves, and was validated internally in participants aged ≥ 65 years (using wave 8), showing excellent accuracy for cognitive impairment and good discrimination between MCI and subjective memory complaints [38]. Cognitive impairment was defined using additional wave-8 subtests and self-reported activities of daily living (ADL) and memory difficulties consistent with Petersen’s criteria [38,43]. To avoid incorporation bias with the items included in the SHARE-Cog, the definition of MCI was restricted to the remaining (non-amnestic) subtests available in wave 8 of the SHARE.
Five CSIs were used in more than one study, including a 125-point CSI [44,45,46,47], a 129-point CSI [48,49], a 25-point CSI [50,51], a 29-point CSI [52,53], and a modified 20-point DemTect [54,55]. The 125-point CSI was most common, summing registration, recall, verbal fluency and serial 7s [44,45,46,47]. Modified DemTect versions used in early waves [54,55,56] and a modified 26-point Langa–Weir including verbal fluency [57] had limited applicability, while all remaining CSIs (n = 11) were used in single studies only. None of the identified CSIs included all 5 cognitive domains.

3.5. Identification of Additional CSIs for the SHARE (n = 24)

Of the 61 other CSIs identified from the literature, 24 could be replicated using subtests available in the SHARE to at least 50% fidelity with the original. Of these, three had 100% fidelity (no missing items, though subtests could be substituted with very similar items if required), namely the 10-Point Cognitive Screener [70], Mini-Cog [71], and Six-Item Screener [72], although the 3-word recall subtest had to be replaced with the 10-word recall test available in the SHARE. Potentially available short CSIs that have not been included in the SHARE to date are presented in Table 4, along with the waves in which they could be generated, acknowledging that most (n = 18/24) were only partially available in waves 8 and 9 and were not available in any other waves of the SHARE, limiting their utility to measure cognition over time. CSIs based on the MMSE [17,73] or MoCA [18] would include all five cognitive domains available in the SHARE.

3.6. Use of Standardised Cognitive Scores (n = 50)

In total, 50 studies measured global cognition by statistically standardising each subtest and then combining them into a single composite score. The most common standardisation approach was to take the average of multiple z-scores (n = 38); other approaches included taking the sum of multiple z-scores (n = 5) or using them to calculate a T-score (n = 7). A z-score is a standardised score that has a mean of 0 and a standard deviation (SD) of 1 in the reference group; it is calculated by subtracting the reference mean from each raw value and then dividing by the reference SD [92]. For the SHARE studies, all composite z-scores were composed of between 3 and 5 subtests. T-scores are generated from z-scores and have a mean of 50 and an SD of 10 [93]. Seven SHARE studies generated a composite T-score to measure cognition, including a sum of the z-scores of 3–4 subtests which were standardised according to the mean and SD of those aged 50–54 years. T-scores have also been applied in the recent SHARE-HCAP study to define MCI and probable dementia [13,33].
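The z- and T-score transformations described above can be sketched as follows; the recall scores and reference mean/SD are illustrative values, not SHARE norms:

```python
import statistics

def z_scores(raw: list[float], ref_mean: float, ref_sd: float) -> list[float]:
    """Standardise raw subtest scores against a reference mean and SD."""
    return [(x - ref_mean) / ref_sd for x in raw]

def t_score(z: float) -> float:
    """Convert a z-score to a T-score (mean 50, SD 10)."""
    return 50 + 10 * z

# Illustrative example: recall scores referenced to a hypothetical
# norm group (e.g., adults aged 50-54) with mean 6 and SD 2.
recall = [7.0, 4.0, 10.0]
z = z_scores(recall, ref_mean=6.0, ref_sd=2.0)
print(z)                         # [0.5, -1.0, 2.0]
print([t_score(v) for v in z])   # [55.0, 40.0, 70.0]

# A composite is then the mean (or sum) of z-scores across subtests.
composite = statistics.mean(z)
```

A composite z-score for a participant would average the z-scores of their 3–5 subtests rather than, as here, three scores on one subtest; the arithmetic is identical.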

3.7. Use of Statistical Modelling (n = 17)

There were 17 studies that used statistical modelling (including regression-based approaches, machine learning algorithms, and dimensionality-reduction techniques) to measure global cognition. Most of these used either principal component analysis (n = 9) or an unsupervised machine learning approach based on principal components (n = 3). Principal component analysis is a linear dimensionality-reduction technique that maximizes the variance of the data projected onto the principal components [94]. In other words, it finds a smaller number of new variables (called “components”) that summarize the original data while retaining as much information (i.e., variance) as possible. Factor analysis was also applied including exploratory factor analysis [95] and longitudinal modelling approaches such as a cross-lagged model [96] and a second-order latent growth curve model [97]. Another study fitted a regression model using the HCAP classification to identify the probability of an individual having MCI or dementia using a combination of cognitive subtest scores and the presence (i.e., dementia) or absence (i.e., MCI) of ADL impairment [33], and found that the model had good validity compared with HCAP prevalence values. Finally, a study exploring methods for identifying probable dementia by Klee et al. [37] applied a number of different models and utilised the word recall components of the Langa–Weir (i.e., 20-point word recall), with the number of ADL limitations to define probable dementia. Their relatively simple “Langa-Weir” model adjusted to Organisation for Economic Co-operation and Development (OECD) data outperformed other, more sophisticated methods, including random forest modelling [37]. Random forest models are machine learning algorithms which combine values from multiple randomised decision trees into a single result [98].
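A minimal sketch of a principal-component composite of this kind, using randomly generated standardised subtest scores rather than SHARE data: the first principal component is the direction of maximum variance, and projecting each participant’s scores onto it yields a one-dimensional global-cognition score.

```python
import numpy as np

# Rows = participants, columns = standardised subtest scores (illustrative
# random data standing in for, e.g., recall, fluency, orientation, serial 7s).
rng = np.random.default_rng(0)
scores = rng.standard_normal((100, 4))

# PCA via the covariance matrix: eigenvectors of the covariance matrix are
# the principal components; the one with the largest eigenvalue captures
# the most variance.
centred = scores - scores.mean(axis=0)
cov = np.cov(centred, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
pc1 = eigvecs[:, -1]                     # component with the most variance
composite = centred @ pc1                # one composite score per participant

print(composite.shape)  # (100,)
```

By construction, the variance of this composite is at least as large as that of any single (centred) subtest, which is why the first component is used as the summary score.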

3.8. Using Cut-Offs to Measure Global Cognitive Impairment (n = 15)

There were 15 studies that defined cognitive impairment as the presence of impairments in multiple subtests (i.e., cognitive items) using cut-offs. Fixed threshold scores were used in two studies [99,100]. Two others used cut-offs based on the number of impaired subtests (i.e., ≥2 subtests below threshold [101] or three impaired subtests [102]). The multi-country ATHLOS Project used proportional cut-offs, with the lowest 25% of scores considered to represent cognitive impairment [16]. One study took account of the absolute number of impaired subtests from 0 to 4 [103]. The remaining nine studies all took cut-offs of 1.5 SDs below a mean score.
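Count-based definitions of this kind (e.g., impairment when two or more subtests fall below threshold) can be sketched as follows; the cut-off values are illustrative only, not those used by any single study:

```python
# Hypothetical subtest cut-offs (scores below these count as deficits);
# the values are illustrative, not endorsed thresholds.
CUT_OFFS = {"word_registration": 5, "word_recall": 4, "verbal_fluency": 15}

def impaired_subtests(scores: dict[str, float]) -> int:
    """Count subtests scoring below their cut-off."""
    return sum(scores[name] < cut for name, cut in CUT_OFFS.items()
               if name in scores)

def cognitively_impaired(scores: dict[str, float], min_deficits: int = 2) -> bool:
    """Classify impairment when >= min_deficits subtests fall below cut-off."""
    return impaired_subtests(scores) >= min_deficits

participant = {"word_registration": 4, "word_recall": 3, "verbal_fluency": 17}
print(impaired_subtests(participant))    # 2 deficits
print(cognitively_impaired(participant))  # True under the >=2 rule
```

The same structure accommodates the stricter three-subtest rule by raising `min_deficits`, or the graded 0–4 count by reporting `impaired_subtests` directly.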

3.9. Other Approaches (n = 5)

Other methods used to measure cognition were applied in five studies, including averaging subtest scores (n = 1) [104], the use of deficit accumulation indices (n = 3) [105,106,107], and meta-analysis (n = 1) [108]. For the study that used average raw scores, a sum of registration, recall, and verbal fluency was taken and divided by three [104]. The deficit accumulation approach [105,106,107], which is based on a frailty index, scores the level of risk using the proportion of deficits present [109]. This is an unusual approach in the context of cognitive assessment but could be considered to be measuring cognitive frailty as a construct. The meta-analysis approach combined Cohen’s d scores from six SHARE subtests (registration, recall, verbal fluency, orientation, serial 7s and numeracy) [108].
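The deficit accumulation (frailty-index style) score, the proportion of assessed deficits that are present, reduces to a single ratio; a minimal sketch, with an illustrative deficit list rather than the items used in the cited studies:

```python
def deficit_index(deficits: list[bool]) -> float:
    """Frailty-index style score: proportion of assessed deficits present."""
    if not deficits:
        raise ValueError("at least one deficit item is required")
    return sum(deficits) / len(deficits)

# Illustrative: 3 deficits present out of 10 items assessed -> index 0.3.
items = [True, False, True, False, False, True, False, False, False, False]
print(deficit_index(items))  # 0.3
```

The index is bounded between 0 (no deficits) and 1 (all assessed deficits present), so scores remain comparable even when studies assess different numbers of items.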

3.10. Use of Cut-Offs

The most frequently applied cut-offs for specific cognitive subtests were <5 words for registration (8/19 studies), <4 words for recall (9/18 studies), and <15 for verbal fluency (8/22 studies). Fewer studies were available for other subtests with the most frequent cut-offs being <4/4 (5/9 studies) for orientation, <4/5 (3/6 studies) for serial 7s and <2/5 (2/4 studies) for numeracy. While verbal fluency is scored from 0 to 100 in the SHARE, some studies rescaled it from 0 to 10 (divided by 10) or applied upper limits such as ≤45 words (n = 16 studies). Other approaches to cut-offs included taking scores <1.5 SDs below the mean, 25th percentile and the lowest 7% of scores (full results in the Supplementary Material Table S4).

3.11. Classifying Stages of Cognitive Impairment

The majority of studies only presented mean CSI scores representing cognitive functioning. As illustrated in Table 5, only 36 (26%) of the 140 composite measures of cognition divided participants into different cognitive groups (i.e., stages) based on their level of cognitive impairment, including a total of 46 different measures: 21 for probable dementia, 2 for cognitive impairment no dementia (CIND), 4 for MCI, 17 for CI (MCI or dementia) and 2 for amnestic impairment. For probable dementia, one paper compared 14 approaches including statistical models and CSI cut-offs with/without ADLs [37]. They recommended using a “Langa–Weir algorithm” which applied OECD-adjusted cut-offs to 20-word recall and country-specific cut-offs for the number of instrumental ADL (IADL) difficulties [37], which produced favourable results compared with the more sophisticated models assessed. Two other studies also used a cognitive cut-off score together with the presence of difficulties completing ADLs to identify dementia [38,110], which is consistent with the DSM-4/DSM-5 criteria for dementia and major neurocognitive disorders [111]. Another approach applied was a multivariate regression model fitted to HCAP data which was then applied to calculate probabilities of MCI and dementia for each participant in an earlier wave of the SHARE [33]. Finally, four studies [33,51,57,112] defined dementia using only cognitive cut-off scores. One of these, the Langa–Weir criteria, used a cut-off which has been validated in the Aging, Demographics, and Memory Study (ADAMS) in the United States [42], and another used a modification of this with the same cut-offs [57]. Two studies identified dementia using a cut-off 1.5 SD below a mean score [51,112], which is often considered more reflective of mild impairment [111].
The Langa–Weir criteria also applies a higher cut-off range to identify those with CIND, an approach that has also been validated with ADAMS data [42]. A second study applied these same cut-offs on a similar CSI [57]. For MCI there was one study which approximated Petersen’s criteria [43], taking a cut-off for cognitive impairment on a defined cognitive battery of subtests, the absence of ADL difficulties, presence of self-reported memory complaints and the absence of dementia [38]. A second study used the HCAP sample and a statistical model to calculate the probability of having MCI for each participant [33]. Four other studies [113,114,115,116] applied a cut-off of 1.5 SDs below a mean to identify MCI, without excluding dementia or examining ADLs. Almost half (11 of 23) of studies defining CI (MCI or dementia) used cut-offs based on SDs below mean scores with adjustments for key variables such as age and education. However, none of these studies used a normative sample which may lead to erroneous results.
Table 5. Definitions of cognitive categories and stages identified in the SHARE studies classifying participants with probable dementia, cognitive impairment no dementia (CIND), mild cognitive impairment (MCI), and cognitive impairment (CI) defined as MCI or dementia.
Probable dementia criteria (studies n = 7; criteria n = 21)

| Criteria | Description | Studies | Cognitive measure | Classification approach | Terminology used | Validation status | Notes |
|---|---|---|---|---|---|---|---|
| Langa–Weir criteria [33] | Langa–Weir (registration, recall, serial 7s, counting backwards) ≤ 6/27 | 1 | CSI cut-off | Selected cut-off | Probable dementia | Validated in United States | Best cut-off based on a gold-standard diagnosis in the ADAMS [33]. |
| Langa–Weir criteria (modified) [57] | CSI (registration, recall, serial 7s, verbal fluency) ≤ 6/26 | 1 | CSI cut-off | Selected cut-off | CI consistent with dementia | Not validated | Cut-off from Langa–Weir (swapped 2-point counting for 1-point fluency). |
| Modified DSM-5 [38] | Non-amnestic battery ≥ 2 SDs below mean by age/education (or self-reported dementia/memory disorder) with self-reported IADL difficulty (≥1 of telephone calls, taking medications, managing finances) | 1 | CSI cut-off | SDs below mean + ADLs | Dementia | Not validated | Cognitive battery did not include memory subtests. IADL difficulty for cognitively orientated tasks only, not necessarily loss of independence. Prevalence was low. |
| LW algorithms (×2) [37] | 20-point word recall at the 2.5th percentile (i.e., ≥2 SDs below mean). Applied with or without IADL difficulties, taking a cut-off of 1.5 interquartile ranges above Q3 (by country) | 1 | CSI cut-off | Percentile + ADLs | Probable dementia | Percentile LW algorithms with ADLs recommended | Cognitive battery only included memory subtests. IADL difficulty had different cut-offs by country, not loss of independence. Performed well in this study. |
| 1.5 SDs below adjusted mean [51] | 1.5 SDs below 25-point CSI (registration, recall, serial 7s) by age, education, sex, proxy status of interviews | 1 | CSI cut-off | SDs below mean | Probable dementia | Not validated | No normative sample used for memory. 1.5 SDs below a mean for MCI/CIND. |
| LW algorithms (×2) [37] | Used 20-point word recall cut-offs from equipercentile equating with country prevalence values in the OECD. Applied with or without IADL difficulties, taking a cut-off of 1.5 interquartile ranges above Q3 (by country) | 1 | CSI cut-off | Equipercentile | Probable dementia | Percentile LW algorithms with ADLs recommended | Cognitive battery only included memory subtests. IADL difficulty had different cut-offs by country, not loss of independence. |
| Cut-off/1.5 SDs below adjusted mean [112] | Memory (20-word recall) 1.5 SDs below the mean by age, or verbal fluency < 15 | 1 | Cut-offs (domains) | SDs below mean/cut-off | Dementia | Not validated | No normative sample used for memory. Fluency not adjusted for age/education. 1.5 SDs below a mean for MCI/CIND. |
| 1.5 SDs below mean [110] | Standardised score (registration, recall, fluency, orientation) 1.5 SDs below mean and BADL difficulty (bathing, eating, dressing, transferring, or walking) or self-reported dementia | 1 | Standardised score | SDs below mean + ADLs | Dementia | Not validated | Not adjusted for age/education. |
| Random forest (×3) [37] | Random forest fitted using three training dataset approaches (registration, recall, ADLs) | 1 | Statistical model | Machine learning | Probable dementia | Percentile LW algorithms with ADLs recommended | Complex method but the authors provide R code. |
| XGBoost classifier (×3) [37] | XGBoost classifier fitted using three training dataset approaches (registration, recall, ADLs) | 1 | Statistical model | Machine learning | Probable dementia | Percentile LW algorithms with ADLs recommended | Complex method but the authors provide R code. |
| Logistic regression model (×3) [37] | Logistic regression fitted to self-reported cases (registration, recall, ADLs) | 1 | Statistical model | Regression | Probable dementia | Percentile LW algorithms with ADLs recommended | Complex method but the authors provide R code. |
| Weighted logistic regression [37] | Weighted logistic regression fitted to self-reported cases (registration, recall, ADLs) | 1 | Statistical model | Regression | Probable dementia | Percentile LW algorithms with ADLs recommended | Complex method but the authors provide R code. |
| Predicted probabilities [33] (SHARE-HCAP) | A predictive model was fitted between wave 9 items (cognition, ADLs, etc.) and HCAP cognitive categories | 1 | Statistical model | Multivariate regression | Dementia | Categorical status not validated | Average of predicted probabilities matched HCAP prevalences very well. Strong association with education. |

CIND criteria (studies n = 2; criteria n = 2)

| Criteria | Description | Studies | Cognitive measure | Classification approach | Terminology used | Validation status | Notes |
|---|---|---|---|---|---|---|---|
| Langa–Weir criteria [33] | Langa–Weir (registration, recall, serial 7s, counting backwards) 7–11 out of 27 | 1 | CSI cut-off | Selected cut-off | CIND | Validated in United States | Best cut-off based on a gold-standard diagnosis in the ADAMS [33]. |
| Langa–Weir criteria (modified) [57] | CSI (registration, recall, serial 7s, verbal fluency) 7–11 out of 26 | 1 | CSI cut-off | Selected cut-off | Probable dementia | Not validated | Cut-off from Langa–Weir (swapped 2-point counting for 1-point fluency). |

MCI criteria (studies n = 6; criteria n = 4)

| Criteria | Description | Studies | Cognitive measure | Classification approach | Terminology used | Validation status | Notes |
|---|---|---|---|---|---|---|---|
| Modified Petersen's criteria [38] | Non-amnestic battery and self-reported questions on ADLs, memory and dementia used to define subjective memory complaints, MCI and dementia | 1 | CSI cut-off | SDs below mean | MCI | Not validated | MCI definition incomplete: missing amnestic MCI single domain. |
| Predicted probabilities [33] (SHARE-HCAP) | A predictive model was fitted between wave 9 items (cognition, ADLs, etc.) and HCAP cognitive categories | 1 | Statistical model | Multivariate regression | MCI | Categorical status not validated | Average of predicted probabilities matched HCAP prevalences very well. Strong association with education. |
| 1.5 SDs below adjusted mean score [114,115,116] | 1.5 SDs below the mean adjusted (age/education) standardised score (registration, recall, fluency) | 3 | Standardised score | SDs below mean | MCI | Not validated | No normative sample used. |
| 1.5 SDs below adjusted mean score [113] | 1.5 SDs below the mean adjusted (age/education) standardised score (registration, recall, fluency, serial 7s) | 1 | Standardised score | SDs below mean | MCI | Not validated | No normative sample used. |

CI criteria (studies n = 23; criteria n = 17)

| Criteria | Description | Studies | Cognitive measure | Classification approach | Terminology used | Validation status | Notes |
|---|---|---|---|---|---|---|---|
| Modified DemTect [54,55] | Modified DemTect (registration, recall, verbal fluency, orientation, numeracy); cut-off ≤ 14/20 | 2 | CSI cut-off | Selected cut-off | Poor cognitive functioning | Not validated | Not adjusted for age/education. |
| 1.5 SDs below adjusted mean [63] | 1.5 SDs below mean for 34-point CSI (registration, recall, fluency, orientation) by country of residence | 1 | CSI cut-off | SDs below mean | Cognitive impairment | Not validated | No normative sample used. Not adjusted for age/education. |
| 1 SD below adjusted mean [52] | 1 SD below mean for a 29-point CSI (registration, recall, orientation, serial 7s) by age | 1 | CSI cut-off | SDs below mean | Ageing-associated cognitive decline | Not validated | No normative sample used. |
| Cut-offs (≥1 domain) [99] | Orientation < 3/4 or numeracy < 2/5 | 1 | Cut-offs (domains) | ≥1/2 cut-offs | Limited cognitive function | Not validated | Not adjusted for age/education. |
| Cut-offs (2 domains) [101] (also counted number of impairments: 0, 1, 2) | Both memory (registration < 5 and/or recall < 4) and fluency (< 15) | 1 | Cut-offs (domains) | Multiple cut-offs | Cognitive impairment | Not validated | Not adjusted for age/education. Two domains typical of Alzheimer's. |
| 1.5 SDs below adjusted mean [117,118,119,120] | Both memory (registration and/or recall) and fluency 1.5 SDs below the mean by age | 4 | Cut-offs (domains) | SDs below mean | Cognitive disorder; cognitive impairment | Not validated | No normative sample used. |
| Cut-off/1.5 SDs below adjusted mean [121] | Memory (20-word recall) 1.5 SDs below the mean by age and country (1 SD 75 years), or verbal fluency < 15 | 1 | Cut-offs (domains) | SDs below mean/cut-off | Cognitive impairment | Not validated | No normative sample used for memory. Fluency not adjusted for age/education. |
| 25th percentile (1 subtest) [16] [ATHLOS Project] | One of the following tests in the lowest 25%/quartile (registration; recall; fluency) | 1 | Cut-offs (subtests) | 25th percentile | Low cognitive functioning | Not validated | Not adjusted for age/education. High cut-off for CI (false positives). |
| 1.5 SDs below adjusted mean (≥2 subtests) [122] | Two or more subtests (registration, recall, orientation) 1.5 SDs below mean by education | 1 | Cut-offs (subtests) | SDs below mean | Cognitive impairment | Not validated | No normative sample used. |
| 1.5 SDs below adjusted mean (≥2 subtests) [123] | Two or more subtests (registration, recall, fluency, orientation) 1.5 SDs below mean by education | 1 | Cut-offs (subtests) | SDs below mean | Cognitive impairment | Not validated | No normative sample used. |
| Multiple selected cut-offs (≥3 subtests) [102] | Three or more low scores (registration < 5/10, recall < 4/10, fluency < 15/100, orientation < 2/4, serial 7s < 2/5) | 1 | Cut-offs (subtests) | Multiple cut-offs | Cognitive impairment | Not validated | Not adjusted for age/education. |
| 1.5 SDs below adjusted mean [124] | 1.5 SDs below mean in at least one subtest (registration, recall, fluency, orientation) by age | 1 | Cut-offs (subtests) | SDs below mean | Cognitive condition | Not validated | No normative sample used. |
| 1.5 SDs below adjusted mean [125] | 1.5 SDs below the mean standardised score (registration, recall, fluency) by education | 1 | Standardised score | SDs below mean | Cognitive impairment | Not validated | No normative sample used. |
| ≤10th percentile [126] | ≤10th percentile on standardised z-score (registration, recall, fluency, orientation, numeracy) | 1 | Standardised score | Percentile | Cognitive impairment | Not validated | Not adjusted for age/education. |
| ≤10th percentile [127] | ≤10th percentile on standardised z-score (registration, recall, fluency, orientation, serial 7s) | 1 | Standardised score | Percentile | Impaired cognition | Not validated | Not adjusted for age/education. |
| ≤10th percentile by sex [128] | ≤10th percentile by sex on standardised t-score | 1 | Standardised score | Percentile | Poor cognitive function | Not validated | Not adjusted for age/education. |
| Machine learning with principal components [129,130,131] | Unsupervised machine learning classification (hierarchical clustering on principal components). Both used registration and recall, and one included orientation | 3 | Statistical model | Machine learning | High likelihood of dementia | — | Used R packages FactoMineR, NbClust and missMDA. |

Amnestic impairment criteria (studies n = 2; criteria n = 2)

| Criteria | Description | Studies | Cognitive measure | Classification approach | Terminology used | Validation status | Notes |
|---|---|---|---|---|---|---|---|
| Memory cut-offs [41] | 1.5 SDs below mean 20-point recall score by age | 1 | CSI cut-off | SDs below mean | Mild memory impairment | Not validated | But multiple papers have applied the same cut-offs of <5 and <4, respectively. |
| Memory cut-offs [100] | Registration < 5 or recall < 4 | 1 | Cut-offs (subtests) | ≥1/2 cut-offs | Memory impairment | Not validated | But multiple papers have applied the same cut-offs of <5 and <4, respectively. |
Note that while the total number of studies in the table sums to 40 (7 + 2 + 6 + 23 + 2), the total number of unique studies was 36, since four of the probable dementia studies also appear under CIND and in the first two rows of MCI.

4. Discussion

This scoping review and instrument mapping exercise is the first study to identify the different ways cognitive impairment has been measured in the SHARE, finding that a large number of heterogeneous approaches have been used. Through the identification and synthesis of 234 SHARE studies using objective cognitive measures, we provide a novel overview of the wide range of methods applied to assess cognition, including the use of individual subtests, composite scores, validated and non-validated CSIs, statistical modelling approaches, and classification using diagnostic cut-offs. In addition, by mapping existing brief CSIs from the literature to the cognitive subtests available within the SHARE, we offer important insights into which instruments may feasibly be reconstructed or adapted for use in this longitudinal European cohort.
A large number of studies reported the results of subtests individually and/or limited their analysis to mean scores. The most common composite approach was to use single subtests to assess cognition (n = 94), followed by the development of CSIs for use directly within the SHARE, of which 22 unique instruments were identified, albeit only five were used more than once and just two were found to be validated: the recently published SHARE-Cog and the Langa–Weir criteria. The Langa–Weir criteria were limited to a subsample of participants aged ≥60 years, while the SHARE-Cog, a bespoke three-item CSI incorporating the three subtests most commonly found across SHARE waves [38], was also available for those aged 50–59 years. This mirrors the HRS, where the TICS [19] was limited to those aged ≥60 years and the Langa–Weir tool was generated to include those aged 50–59 years [42]. Comparisons between the SHARE-Cog and the Langa–Weir tool are needed.
This study also assessed the feasibility of replicating established, published CSIs within this longitudinal ageing cohort by mapping the subtests and scoring methods available in the dataset against those reported in the literature, examining the possibility of reconstructing these instruments in the SHARE with high fidelity. It found that verbal fluency (animal naming), working memory (i.e., 10-word registration) and episodic memory (i.e., 10-word delayed recall) were the three most commonly used subtests, each used in over 75% of studies. This reflects the limited availability across earlier waves of individual subtests covering other domains, restricting the ability to develop replicable, widely used and validated short CSIs in the SHARE. The mapping exercise identified other potential CSIs that could be developed, though only a few had 100% fidelity with the original instruments. These included the 10-Point Cognitive Screener [70], the Six-Item Screener [72], and the Mini-Cog [71]; however, only the 10-Point Cognitive Screener is available across historical waves, and even then not fully, while the Mini-Cog was limited to waves 8 and 9 and was only partially available. To date, none of these have been formally validated in the SHARE, and more research is needed to assess this.
Other more widely validated instruments such as the MoCA (67% of the original MoCA's scoring items could be replicated using SHARE data) [18], the Qmci screen (67% fidelity with the original) [85] and the standardised MMSE (60% fidelity) [17,132] could be replicated in the SHARE, albeit with less than 100% fidelity.
As more subtests have been added across waves, including those examining visuospatial and executive function, the potential to develop more complex short CSIs targeting multiple cognitive domains and reflecting instruments used in clinical practice has emerged. Because earlier waves measured only a limited number of domains, primarily memory, the most common approach was to combine the 10-word registration and recall subtests into a 20-point score. The 10-word delayed recall question is also available in most other HRS-related studies [10,15], and the 20-point score has been used in other large longitudinal studies of ageing such as the China Health and Retirement Longitudinal Study [133] and the English Longitudinal Study of Ageing [133]. Given its widespread use, the 20-point recall score should be considered as a stand-alone subtest for future SHARE studies examining memory, as it is consistent and comparable within the SHARE and across other longitudinal studies of ageing. In addition, it has been shown that using 10-word recall (as opposed to a smaller list of words) improves diagnostic performance when clinically differentiating between probable MCI and dementia [38]. It should be noted, however, that the overreliance on memory-based measures in the SHARE limits the ability to detect non-amnestic profiles of impairment and, hence, non-AD-related MCI or prodromal dementia syndromes.
For global cognition (covering at least two cognitive domains), it was common to use verbal fluency (animal naming) in combination with word registration and recall. Verbal fluency has the greatest availability amongst possible non-amnestic tests in the SHARE (albeit limited to a small subsample in wave 7). It is also widely assessed across other studies [15], with the word recall tests and verbal fluency being selected by the Ageing Trajectories of Health-Longitudinal Opportunities and Synergies (ATHLOS) project, which created a harmonised dataset to compare cognitive function across nine population-based studies [16]. Several options are available for combining these three scores, one straightforward and validated approach being a weighted sum, as in the SHARE-Cog instrument [38]. Other studies have combined them using standardised (z-) scores or principal component analysis; however, these approaches may be more difficult to interpret. A z-score of the combined sum (such as the SHARE-Cog) could also be considered; based on the z-score formula [92], providing the reference mean and reference standard deviation (SD) would allow conversion back to raw scores.
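The round trip between raw composite scores and z-scores described above can be sketched as follows. This is a minimal illustration only: the variable names and the hypothetical 45-point composite scores are assumptions, not values from the SHARE dataset.

```python
# Sketch: standardising a composite score and converting a z-score back to
# the raw scale. Scores and reference values below are illustrative.
from statistics import mean, stdev


def zscore(raw: float, ref_mean: float, ref_sd: float) -> float:
    """Standardise a raw composite score against a reference mean/SD."""
    return (raw - ref_mean) / ref_sd


def raw_from_z(z: float, ref_mean: float, ref_sd: float) -> float:
    """Invert the z-score formula to recover the raw score."""
    return z * ref_sd + ref_mean


# Hypothetical 45-point composite scores (registration + recall + fluency)
scores = [28, 31, 25, 35, 22, 30]
m, s = mean(scores), stdev(scores)

z = zscore(scores[0], m, s)
# The round trip recovers the original raw score exactly
assert abs(raw_from_z(z, m, s) - scores[0]) < 1e-9
```

This is why reporting the reference mean and SD alongside a standardised score preserves interpretability: any reader can map a published z-score back onto the raw instrument scale.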
For identifying cognitive impairment and cognitive categories (stages), multiple approaches were proposed, and none have been well validated or used consistently across studies. A cut-off of 1.5 SDs below the sample mean is the most common approach to date; however, this remains an unvalidated approximation. In a normal distribution, 1.5 SDs below the mean corresponds approximately to the 7th percentile, so without isolating a normative subsample, as detailed in the HCAP studies [13,134], this approach will always select roughly the lowest 7% of scores (assuming the CSI follows a normal distribution) regardless of the actual prevalence of CI; the prevalence thus becomes statistically fixed. One study used a measure of ageing-associated cognitive decline defined as 1 SD below the mean score in each age group [52], which selects approximately the lowest 16% of participants in each group (i.e., it removes prevalence differences by age). Few studies mirrored "real-world" clinical practice, with only one using the well-established Petersen's criteria for MCI, upon which the 2013 DSM-5 criteria for minor neurocognitive disorder are based [43]. Only a small number of studies adjusted for recognised confounders such as age or education, a key consideration in clinical practice, although the utility of this in an epidemiological context is subject to some debate [135]. When adjusting for these variables, researchers should use an appropriate normative subsample, as detailed in the HCAP studies, to define age- and education-specific cut-offs [13,134], rather than the full sample; otherwise, any age/education prevalence differences are statistically removed (provided the CSI follows a normal distribution in each group). Further research is needed to compare and validate cognitive criteria in the SHARE.
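The statistical point above — that a 1.5 SD cut-off below the full-sample mean flags a fixed proportion regardless of true prevalence — can be demonstrated with a small simulation. The simulated cohort and score distribution are illustrative assumptions; only the standard library is used.

```python
# Sketch: under a normal distribution, a cut-off of 1.5 SDs below the
# full-sample mean flags a fixed ~6.7% of participants (Phi(-1.5) ~ 0.0668),
# regardless of the actual prevalence of impairment in the population.
import random
from statistics import mean, stdev

random.seed(1)

# Simulated cohort with normally distributed composite scores (arbitrary units)
scores = [random.gauss(30, 5) for _ in range(100_000)]

m, s = mean(scores), stdev(scores)
cutoff = m - 1.5 * s
flagged = sum(score < cutoff for score in scores) / len(scores)

# Roughly 7% fall below the cut-off by construction, whatever the true prevalence
assert 0.05 < flagged < 0.09
```

Changing the simulated mean or SD shifts the cut-off but not the flagged proportion, which is exactly why a normative subsample (rather than the full sample) is needed to anchor such thresholds.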
Further validation of cut-off scores is needed for both the 20-point word recall test and the SHARE-Cog.
Strengths of this study include the breadth of the review. We searched both the SHARE study register and PubMed and employed citation tracking to identify additional eligible studies. Applying comprehensive eligibility criteria, following the PRISMA-ScR reporting framework [24], and using a PRISMA 2020 diagram to clearly document the sources of studies [25,26] ensured methodological transparency and replicability. The inclusion of studies published in languages other than English further increases the comprehensiveness of the review, particularly for a multinational dataset such as the SHARE. The mapping of brief CSIs against SHARE subtests, conducted as a distinct second component of this study, enhances the utility of the findings by providing researchers with practical tools for standardising cognitive measures in future studies. In addition, the Supplementary Material (Table S2) documents all located studies for additional transparency. While the SHARE register provides a consistent and reliable source of SHARE studies, our findings illustrate that it is currently incomplete, with almost 40% of included studies coming from PubMed and citation tracking. Another important contribution of this study is the identification of 24 CSIs from the broader literature that could potentially be reconstructed within the SHARE with at least 50% fidelity, including three instruments with complete fidelity, albeit incomplete availability across past waves. These findings are of practical relevance, particularly in light of the growing demand for brief and reliable cognitive tools that can be used consistently across ageing studies. The mapping exercise also offers a foundation for future validation work using SHARE data, including internal validation of CSIs for different age groups or across countries with varying cultural and educational profiles.
Limitations include that this study focuses on cognition in the SHARE only, rather than the wider context of HRS-related studies illustrated on the Gateway to Global Aging Data [10]. In addition, gender differences and comorbid diseases were not assessed. Measures available in the SHARE should be harmonisable with most other HRS-family studies [15], although subtle differences in measurement can limit comparisons [39], and similar differences may arise across countries within the multinational SHARE cohort itself. CSIs were identified from published review papers rather than through a dedicated systematic search of the literature; while the two reviews used were comprehensive and cover the majority of commonly used short CSIs, some instruments that could have been replicated in the SHARE may have been missed. Another limitation is that the data extraction and charting process was conducted primarily by a single reviewer, which may have introduced subjectivity or error in classification despite iterative discussions and consensus with a second reviewer. While the use of automated tools and structured forms helped mitigate this, a dual screening and data extraction process would have further enhanced reliability. In addition, the mapping of CSIs to SHARE subtests, although systematic, involved some degree of judgement, particularly where substitutions were made for similar but not identical subtests. While this pragmatic approach mirrors real-world research applications, it introduces variability that may influence the psychometric equivalence of the adapted instruments. Future research will be needed to empirically evaluate the diagnostic performance and validity of these mapped instruments within the SHARE.
Similarly, because high structural fidelity does not automatically translate into psychometric or clinical validity in cohort studies such as the SHARE (i.e., structural and conceptual equivalence do not always translate into diagnostic equivalence), additional research in clinical studies is now needed to confirm that the reconstructed instruments theorised here are accurate, reliable and valid in clinical practice.
Finally, the review was not designed to assess the quality or risk of bias of the included studies, as is typical for scoping reviews. While this allowed for a broad overview of existing methods, it limits the conclusions that can be drawn about the robustness or validity of individual findings. However, this was not the primary aim of the study, and the value of this work lies in its descriptive synthesis and generation of a practical framework for measuring cognition in the SHARE.
We note that some of the newly available cognitive subtests in the SHARE, such as clock drawing and cube copying, were not represented in many of the included studies. This reflects the recency of their introduction (wave 8 onward) but also suggests that their potential for assessing visuospatial or executive function remains underutilised. As such, future studies should explore whether the inclusion of these subtests improves the diagnostic performance of CSIs or composite scores in the SHARE. Moreover, while we identified a wide range of statistical approaches to measuring global cognition (e.g., z-scores, principal component analysis, latent growth models), few of these were validated against external gold standards, limiting their interpretability in clinical or epidemiological contexts.
To assess memory, we recommend using a summary score that combines 10-word registration and recall, such as a 20-point scale. Common subtest cut-offs include fewer than 5 words for registration and fewer than 4 words for recall. However, further research is needed to verify these thresholds and to establish cut-offs for the combined score. It may also be worthwhile to explore whether a weighted sum could improve diagnostic accuracy. For assessing global cognitive function, the combination of 10-word registration, recall, and verbal fluency is widely available. One option is the 45-point SHARE-Cog instrument, but further validation of this CSI and its cut-offs is necessary.
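The recommended 20-point memory summary can be sketched as a small helper. The function name is illustrative; the <5 (registration) and <4 (recall) subtest cut-offs are those noted above, and no cut-off is applied to the combined score because none has yet been validated.

```python
# Sketch: a combined 20-point memory score from 10-word registration and
# 10-word delayed recall, flagging each subtest with its commonly used
# cut-off. No validated cut-off exists for the combined total, so none
# is applied here. Function name is hypothetical.
def memory_summary(registration: int, recall: int) -> dict:
    """Combine the two 10-point subtests into a 0-20 score and flag low subtests."""
    if not (0 <= registration <= 10 and 0 <= recall <= 10):
        raise ValueError("each subtest is scored 0-10")
    return {
        "total": registration + recall,        # 0-20 summary score
        "low_registration": registration < 5,  # commonly applied cut-off
        "low_recall": recall < 4,              # commonly applied cut-off
    }


result = memory_summary(6, 3)
assert result["total"] == 9
assert result["low_recall"] and not result["low_registration"]
```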
For identifying cognitive impairment, more research is needed to develop a method aligned with Petersen's criteria for MCI and the more recent DSM-5 criteria for minor neurocognitive disorder, which build on the principles outlined by Petersen. In the meantime, researchers could use either the Langa–Weir cut-offs or scores 1.5 SDs below the age- and education-adjusted mean. In recent epidemiological studies, the Langa–Weir algorithm for probable dementia has been proposed [37]. This method incorporates the 20-point word recall and limitations in ADLs, and adjusts recall cut-offs using external dementia statistics. It has shown better performance than some sophisticated machine learning models and is available for all major SHARE waves, including wave 7. However, it lacks measures of non-amnestic cognitive impairment and depends on the accuracy of external prevalence data. For MCI and dementia classification, a regression model based on HCAP data has also been proposed [33], which estimates the probability of MCI or dementia rather than assigning a categorical diagnosis. This holds promise for future waves of the SHARE, although the model may not be reliable for those aged <65 years, who were not included in HCAP.
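The Langa–Weir cut-offs mentioned above can be sketched as a simple classification over the 27-point composite (≤6 probable dementia; 7–11 CIND, as reported for the validated criteria, with the remaining 12–27 range treated as normal cognition). The function name is illustrative, and the criteria were validated in ADAMS for participants aged ≥60 years, so this is a sketch rather than a ready-made SHARE classifier.

```python
# Sketch of the Langa-Weir cut-offs on the 27-point composite:
# <=6 probable dementia, 7-11 CIND, 12-27 normal cognition.
# Function name is hypothetical; cut-offs were validated in the
# US ADAMS sample for ages >= 60, not (yet) in the SHARE.
def langa_weir_category(score_27: int) -> str:
    """Classify a 0-27 Langa-Weir composite score into cognitive categories."""
    if not 0 <= score_27 <= 27:
        raise ValueError("score must be between 0 and 27")
    if score_27 <= 6:
        return "probable dementia"
    if score_27 <= 11:
        return "CIND"
    return "normal"


assert langa_weir_category(5) == "probable dementia"
assert langa_weir_category(9) == "CIND"
assert langa_weir_category(20) == "normal"
```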
The present study focuses on identifying widely available CSIs, which may be used to identify cognitive impairment in an efficient and consistent manner. Case-finding and opportunistic cognitive screening of high-risk individuals, followed by more detailed neuropsychological assessment, will remain an important clinical approach. However, more recent approaches, such as combining neuropsychological assessment with plasma and cerebrospinal fluid neurodegenerative biomarkers and other objective biomarkers, like measures of intracortical connectivity with transcranial magnetic stimulation, show much promise for even earlier detection [136], and a clear validation framework has been proposed [137]. In light of widely applicable non-invasive biomarkers (such as those for frontotemporal dementia [138]), the role of these CSIs in clinical care may shift somewhat. Nevertheless, they will remain important epidemiological tools for monitoring changes in prevalence and incidence and may still be used to quantify cognitive decline over time as well as to help differentiate MCI subtypes [136].

5. Conclusions

This study provides the first review of cognitive measurement approaches, instruments and subtests within the SHARE, revealing a high degree of methodological heterogeneity and limited use of validated CSIs. Through a structured instrument mapping process, we identified several brief CSIs that can be reconstructed using SHARE subtests, with some achieving full fidelity. Multiple unique CSIs have been utilised in the SHARE to date with little validation or consistency, and a large number of potential cognitive screening approaches that could be used were identified. The inclusion of additional subtests in later waves and the development of the SHARE HCAP will also enable more comprehensive assessments. To support longitudinal analyses of cognitive change in the SHARE, and given their availability across waves, we recommend combining, at a minimum, the word registration and recall subtests with verbal fluency into a single instrument to provide a broad measure of cognition covering several cognitive domains. The recent internal validation of the SHARE-Cog affirms this and represents a significant step forward, but further research is needed to harmonise and standardise cognitive assessment in the SHARE. For cross-national comparisons, instruments with consistent item availability and harmonised scoring are essential, with the SHARE-Cog or similarly constructed composites offering a pragmatic balance between validity and comparability, while internally standardised or percentile-based cut-offs should be used cautiously. For prevalence estimation and classification of cognitive impairment or dementia, approaches with external validation are critical; currently, the Langa–Weir algorithm and HCAP-derived probabilistic models provide the strongest epidemiological grounding, particularly when cognitive performance is combined with functional impairment.
Aligning measurement choice with analytic purpose is therefore essential to improve validity, comparability, and interpretability of findings derived from SHARE data.
In conclusion, these findings offer an important resource for researchers and clinicians seeking to use SHARE data to investigate cognitive ageing and impairment and underscore the importance of consistency, transparency, and psychometric validation in the use of cognitive measures across international ageing studies.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jal6010030/s1, Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist; Table S1: List of studies excluded from the scoping review (n = 208); Table S2: List of included studies and their basic descriptive characteristics (n = 234); Table S3: Review of the fidelity of cognitive screening instruments located in previous literature reviews; Table S4: Cut-offs and modification of subtests applied in published SHARE studies.

Author Contributions

Conceptualization, M.R.O. and R.O.; methodology, M.R.O. and R.O.; software, M.R.O.; validation, M.R.O. and R.O.; formal analysis, M.R.O.; investigation, M.R.O.; resources, M.R.O.; data curation, M.R.O.; writing—original draft preparation, M.R.O.; writing—review and editing, M.R.O., R.O. and N.C.; visualization, M.R.O.; supervision, R.O. and N.C.; project administration, M.R.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical approval was waived because this study is a scoping review of published studies.

Informed Consent Statement

Patient consent was waived due to this study being a scoping review of published studies.

Data Availability Statement

A full list of the included and excluded studies is provided in the Supplementary Tables along with some descriptive details.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SHARE: Survey of Health, Ageing and Retirement in Europe
SHARE-Cog: SHARE Cognitive Instrument
CSI: Cognitive screening instrument
MCI: Mild cognitive impairment
HRS: Health and Retirement Study
HCAP: Harmonized Cognitive Assessment Protocol
MMSE: Mini Mental State Examination
MoCA: Montreal Cognitive Assessment
TICS: Telephone Interview for Cognitive Status
CSI-D: Community Screening Instrument for Dementia
PRISMA: Preferred Reporting Items for Systematic reviews and Meta-Analyses
MO’D: Mark O’Donovan (author)
RO’C: Rónán O’Caoimh (author)
ACE-III: Addenbrooke’s Cognitive Examination III
SD: Standard deviation
ADL: Activities of daily living
IADL: Instrumental activities of daily living
DSM: Diagnostic and Statistical Manual of Mental Disorders
ADAMS: Aging, Demographics, and Memory Study
OECD: Organisation for Economic Co-operation and Development
CI: Cognitive impairment
CIND: Cognitive impairment no dementia
NC: Nicola Cornally (author)

References

  1. Pais, R.; Ruano, L.; Carvalho, O.P.; Barros, H. Global Cognitive Impairment Prevalence and Incidence in Community Dwelling Older Adults—A Systematic Review. Geriatrics 2020, 5, 84. [Google Scholar] [CrossRef] [PubMed]
  2. Bai, W.; Chen, P.; Cai, H.; Zhang, Q.; Su, Z.; Cheung, T.; Jackson, T.; Sha, S.; Xiang, Y.-T. Worldwide Prevalence of Mild Cognitive Impairment among Community Dwellers Aged 50 Years and Older: A Meta-Analysis and Systematic Review of Epidemiology Studies. Age Ageing 2022, 51, afac173. [Google Scholar] [CrossRef] [PubMed]
  3. Nichols, E.; Steinmetz, J.D.; Vollset, S.E.; Fukutaki, K.; Chalek, J.; Abd-Allah, F.; Abdoli, A.; Abualhasan, A.; Abu-Gharbieh, E.; Akram, T.T.; et al. Estimation of the Global Prevalence of Dementia in 2019 and Forecasted Prevalence in 2050: An Analysis for the Global Burden of Disease Study 2019. Lancet Public Health 2022, 7, e105–e125. [Google Scholar] [CrossRef] [PubMed]
  4. Karimi, L.; Mahboub–Ahari, A.; Jahangiry, L.; Sadeghi-Bazargani, H.; Farahbakhsh, M. A Systematic Review and Meta-Analysis of Studies on Screening for Mild Cognitive Impairment in Primary Healthcare. BMC Psychiatry 2022, 22, 97. [Google Scholar] [CrossRef]
  5. Roebuck-Spencer, T.M.; Glen, T.; Puente, A.E.; Denney, R.L.; Ruff, R.M.; Hostetter, G.; Bianchini, K.J. Cognitive Screening Tests Versus Comprehensive Neuropsychological Test Batteries: A National Academy of Neuropsychology Education Paper. Arch. Clin. Neuropsychol. 2017, 32, 491–498. [Google Scholar] [CrossRef]
  6. Brown, J. The Use and Misuse of Short Cognitive Tests in the Diagnosis of Dementia. J. Neurol. Neurosurg. Psychiatry 2015, 86, 680–685. [Google Scholar] [CrossRef]
  7. Meijs, A.P.; Claassen, J.A.H.R.; Olde Rikkert, M.G.M.; Schalk, B.W.M.; Meulenbroek, O.; Kessels, R.P.C.; Melis, R.J.F. How Does Additional Diagnostic Testing Influence the Initial Diagnosis in Patients with Cognitive Complaints in a Memory Clinic Setting? Age Ageing 2015, 44, 72–77. [Google Scholar] [CrossRef]
  8. Börsch-Supan, A.; Brandt, M.; Hunkler, C.; Kneip, T.; Korbmacher, J.; Malter, F.; Schaan, B.; Stuck, S.; Zuber, S. Data Resource Profile: The Survey of Health, Ageing and Retirement in Europe (SHARE). Int. J. Epidemiol. 2013, 42, 992–1001. [Google Scholar] [CrossRef]
  9. Juster, F.T.; Suzman, R. An Overview of the Health and Retirement Study. J. Hum. Resour. 1995, 30, S7. [Google Scholar] [CrossRef]
  10. Lee, J.; Phillips, D.; Wilkens, J. Gateway to Global Aging Data Team. Gateway to Global Aging Data: Resources for Cross-National Comparisons of Family, Social Environment, and Healthy Aging. J. Gerontol. Ser. B 2021, 76, S5–S16. [Google Scholar] [CrossRef]
  11. De Looze, C.; Feeney, J.; Seeher, K.M.; Amuthavalli Thiyagarajan, J.; Diaz, T.; Kenny, R.A. Assessing Cognitive Function in Longitudinal Studies of Ageing Worldwide: Some Practical Considerations. Age Ageing 2023, 52, iv13–iv25. [Google Scholar] [CrossRef]
  12. Langa, K.M.; Ryan, L.H.; McCammon, R.J.; Jones, R.N.; Manly, J.J.; Levine, D.A.; Sonnega, A.; Farron, M.; Weir, D.R. The Health and Retirement Study Harmonized Cognitive Assessment Protocol Project: Study Design and Methods. Neuroepidemiology 2020, 54, 64–74. [Google Scholar] [CrossRef]
  13. Börsch-Supan, A.; Douhou, S.; Fernández, I.; Otero, M.C.; Tawiah, B.B. Release Note 1.0 to SHARE-HCAP Data; MEA Discussion Paper 02/2025; MEA-SHARE gGmbH: Munich, Germany, 2025. [Google Scholar]
  14. Gruber, S.; Wagner, M.; Batta, F. Scales and Multi-Item Indicators; SHARE Berlin Institute: Berlin, Germany, 2024; pp. 1–53. [Google Scholar]
  15. De Looze, C.; Feeney, J.; Kenny, R.A. The CANDID Initiative: Leveraging Cognitive Ageing Dementia Data from Around the World; The Irish Longitudinal Study on Ageing: Dublin, Ireland, 2021. [Google Scholar]
  16. Stefler, D.; Prina, M.; Wu, Y.-T.; Sánchez-Niubò, A.; Lu, W.; Haro, J.M.; Marmot, M.; Bobak, M. Socioeconomic Inequalities in Physical and Cognitive Functioning: Cross-Sectional Evidence from 37 Cohorts across 28 Countries in the ATHLOS Project. J. Epidemiol. Community Health 2021, 75, 980–986. [Google Scholar] [CrossRef]
  17. Folstein, M.F.; Folstein, S.E.; McHugh, P.R. “Mini-Mental State”. A Practical Method for Grading the Cognitive State of Patients for the Clinician. J. Psychiatr. Res. 1975, 12, 189–198. [Google Scholar] [CrossRef]
  18. Nasreddine, Z.S.; Phillips, N.A.; Bédirian, V.; Charbonneau, S.; Whitehead, V.; Collin, I.; Cummings, J.L.; Chertkow, H. The Montreal Cognitive Assessment, MoCA: A Brief Screening Tool for Mild Cognitive Impairment. J. Am. Geriatr. Soc. 2005, 53, 695–699. [Google Scholar] [CrossRef] [PubMed]
  19. Brandt, J.; Spencer, M.; Folstein, M. The Telephone Interview for Cognitive Status. Cogn. Behav. Neurol. 1988, 1, 111–117. [Google Scholar]
  20. Prince, M.; Acosta, D.; Ferri, C.P.; Guerra, M.; Huang, Y.; Jacob, K.S.; Llibre Rodriguez, J.J.; Salas, A.; Sosa, A.L.; Williams, J.D.; et al. A Brief Dementia Screener Suitable for Use by Non-specialists in Resource Poor Settings—The Cross-cultural Derivation and Validation of the Brief Community Screening Instrument for Dementia. Int. J. Geriat. Psychiatry 2011, 26, 899–907. [Google Scholar] [CrossRef] [PubMed]
  21. Cullen, B.; O’Neill, B.; Evans, J.J.; Coen, R.F.; Lawlor, B.A. A Review of Screening Tests for Cognitive Impairment. J. Neurol. Neurosurg. Psychiatry 2007, 78, 790–799. [Google Scholar] [CrossRef]
  22. De Roeck, E.E.; De Deyn, P.P.; Dierckx, E.; Engelborghs, S. Brief Cognitive Screening Instruments for Early Detection of Alzheimer’s Disease: A Systematic Review. Alzheimers Res. Ther. 2019, 11, 21. [Google Scholar] [CrossRef]
  23. Peters, M.D.J.; Godfrey, C.M.; Khalil, H.; McInerney, P.; Parker, D.; Soares, C.B. Guidance for Conducting Systematic Scoping Reviews. Int. J. Evid.-Based Healthc. 2015, 13, 141–146. [Google Scholar] [CrossRef]
  24. Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.J.; Horsley, T.; Weeks, L.; et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann. Intern Med. 2018, 169, 467–473. [Google Scholar] [CrossRef] [PubMed]
  25. Haddaway, N.R.; Page, M.J.; Pritchard, C.C.; McGuinness, L.A. PRISMA2020: An R Package and Shiny App for Producing PRISMA 2020-compliant Flow Diagrams, with Interactivity for Optimised Digital Transparency and Open Synthesis. Campbell Syst. Rev. 2022, 18, e1230. [Google Scholar] [CrossRef] [PubMed]
  26. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
  27. Colsher, P.L.; Wallace, R.B. Data Quality and Age: Health and Psychobehavioral Correlates of Item Nonresponse and Inconsistent Responses. J. Gerontol. 1989, 44, P45–P52. [Google Scholar] [CrossRef] [PubMed]
  28. Jorm, A.F. A Short Form of the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE): Development and Cross-Validation. Psychol. Med. 1994, 24, 145–153. [Google Scholar] [CrossRef]
  29. SHARE-ERIC Conditions of Use. Available online: https://share-eric.eu/data/data-access/conditions-of-use (accessed on 23 July 2025).
  30. R Core Team. R: A Language and Environment for Statistical Computing; R Core Team: Vienna, Austria, 2024. [Google Scholar]
  31. Gross, A.L.; Khobragade, P.Y.; Meijer, E.; Saxton, J.A. Measurement and Structure of Cognition in the Longitudinal Aging Study in India–Diagnostic Assessment of Dementia. J. Am. Geriatr. Soc. 2020, 68, S11–S19. [Google Scholar] [CrossRef]
  32. Jones, R.N.; Manly, J.J.; Langa, K.M.; Ryan, L.H.; Levine, D.A.; McCammon, R.; Weir, D. Factor Structure of the Harmonized Cognitive Assessment Protocol Neuropsychological Battery in the Health and Retirement Study. J. Int. Neuropsychol. Soc. 2024, 30, 47–55. [Google Scholar] [CrossRef]
  33. Börsch-Supan, A.; Douhou, S.; Otero, M.C.; Tawiah, B.B. Harmonized Prevalence Estimates of Dementia in Europe Vary Strongly with Childhood Education. Sci. Rep. 2025, 15, 14024. [Google Scholar] [CrossRef]
  34. Hsieh, S.; Schubert, S.; Hoon, C.; Mioshi, E.; Hodges, J.R. Validation of the Addenbrooke’s Cognitive Examination III in Frontotemporal Dementia and Alzheimer’s Disease. Dement. Geriatr. Cogn. Disord. 2013, 36, 242–250. [Google Scholar] [CrossRef]
  35. Henley, N.M. A Psychological Study of the Semantics of Animal Terms. J. Verbal Learn. Verbal Behav. 1969, 8, 176–184. [Google Scholar] [CrossRef]
  36. Formanek, T.; Kagstrom, A.; Winkler, P.; Cermakova, P. Differences in Cognitive Performance and Cognitive Decline across European Regions: A Population-Based Prospective Cohort Study. Eur. Psychiatr. 2019, 58, 80–86. [Google Scholar] [CrossRef]
  37. Klee, M.; Langa, K.M.; Leist, A.K. Performance of Probable Dementia Classification in a European Multi-Country Survey. Sci. Rep. 2024, 14, 6657. [Google Scholar] [CrossRef] [PubMed]
  38. O’Donovan, M.R.; Cornally, N.; O’Caoimh, R. Validation of a Harmonised, Three-Item Cognitive Screening Instrument for the Survey of Health, Ageing and Retirement in Europe (SHARE-Cog). IJERPH 2023, 20, 6869. [Google Scholar] [CrossRef]
  39. Cheng, M.; Sommet, N.; Jopp, D.S.; Spini, D. Evolution of the Income-Related Gap in Health with Old Age: Evidence from 20 Countries in European and Chinese Panel Datasets. Eur. J. Ageing 2023, 20, 33. [Google Scholar] [CrossRef] [PubMed]
  40. Meda, N.; Zammarrelli, J.; Sambataro, F.; De Leo, D. Late-Life Suicide: Machine Learning Predictors from a Large European Longitudinal Cohort. Front. Psychiatry 2024, 15, 1455247. [Google Scholar] [CrossRef]
  41. Zheng, H.; Jia, C. Gender Differences in the Association of Depression Trajectories with Executive and Memory Functions: Evidence from the Longitudinal Study of the Survey of Health, Ageing and Retirement in Europe (2004–2017). J. Psychiatr. Res. 2022, 149, 177–184. [Google Scholar] [CrossRef] [PubMed]
  42. Crimmins, E.M.; Kim, J.K.; Langa, K.M.; Weir, D.R. Assessment of Cognition Using Surveys and Neuropsychological Assessment: The Health and Retirement Study and the Aging, Demographics, and Memory Study. J. Gerontol. Ser. B Psychol. Sci. Soc. Sci. 2011, 66B, i162–i171. [Google Scholar] [CrossRef]
  43. Petersen, R.C. Mild Cognitive Impairment as a Diagnostic Entity. J. Intern. Med. 2004, 256, 183–194. [Google Scholar] [CrossRef]
  44. Tetzner, J.; Schuth, M. Anxiety in Late Adulthood: Associations with Gender, Education, and Physical and Cognitive Functioning. Psychol. Aging 2016, 31, 532–544. [Google Scholar] [CrossRef]
  45. Ren, Z.; Xu, Y.; Sun, J.; Han, Y.; An, L.; Liu, J. Chronic Diseases and Multimorbidity Patterns, Their Recent Onset, and Risk of New-Onset Parkinson’s Disease and Related Functional Degeneration in Older Adults: A Prospective Cohort Study. eClinicalMedicine 2023, 65, 102265. [Google Scholar] [CrossRef]
  46. Cui, M.; Wang, J.; Deng, M.; Meng, H.; Fan, Y.; Ku, C.; Wang, R.; Wu, B.; Dai, M.; Ping, Z. Longitudinal Relationship between Grip Strength and Cognitive Function in a European Population Older than 50 Years: A Cross-Lagged Panel Model. Arch. Gerontol. Geriatr. 2024, 122, 105396. [Google Scholar] [CrossRef] [PubMed]
  47. Kouraki, A.; Bast, T.; Ferguson, E.; Valdes, A.M. The Association of Socio-Economic and Psychological Factors with Limitations in Day-to-Day Activity over 7 Years in Newly Diagnosed Osteoarthritis Patients. Sci. Rep. 2022, 12, 943. [Google Scholar] [CrossRef]
  48. Keenan, K.; Grundy, E. Fertility History and Physical and Mental Health Changes in European Older Adults. Eur. J. Popul. 2019, 35, 459–485. [Google Scholar] [CrossRef] [PubMed]
  49. Ayalon, L.; Litwin, H. What Cognitive Functions Are Associated with Passive Suicidal Ideation? Findings from a National Sample of Community Dwelling Israelis. Int. J. Geriat. Psychiatry 2009, 24, 472–478. [Google Scholar] [CrossRef] [PubMed]
  50. Tan, X.; Lebedeva, A.; Åkerstedt, T.; Wang, H.-X. Sleep Mediates the Association Between Stress at Work and Incident Dementia: Study from the Survey of Health, Ageing and Retirement in Europe. J. Gerontol. Ser. A 2023, 78, 447–453. [Google Scholar] [CrossRef]
  51. Yu, J.; Wang, P.; Xie, S.; Amin, J.; Mueller, C.; Hou, X.; Chen, X.; Underwood, B.R.; Tang, S.; Chen, S. Prevalence and Progress of Underdiagnosis of Probable Dementia: A Repeated Cross-Sectional Study in 19 European Countries. BMC Med. 2025, 23, 395. [Google Scholar] [CrossRef]
  52. Wang, Y.; Wu, Z.; Duan, L.; Liu, S.; Chen, R.; Sun, T.; Wang, J.; Zhou, J.; Wang, H.; Huang, P. Digital Exclusion and Cognitive Impairment in Older People: Findings from Five Longitudinal Studies. BMC Geriatr. 2024, 24, 406. [Google Scholar] [CrossRef]
  53. Jin, Y.; Liang, J.; Hong, C.; Liang, R.; Luo, Y. Cardiometabolic Multimorbidity, Lifestyle Behaviours, and Cognitive Function: A Multicohort Study. Lancet Healthy Longev. 2023, 4, e265–e273. [Google Scholar] [CrossRef]
  54. Fritze, T.; Doblhammer, G.; Van Den Berg, G.J. Can Individual Conditions during Childhood Mediate or Moderate the Long-Term Cognitive Effects of Poor Economic Environments at Birth? Soc. Sci. Med. 2014, 119, 240–248. [Google Scholar] [CrossRef]
  55. Doblhammer, G.; Van Den Berg, G.J.; Fritze, T. Economic Conditions at the Time of Birth and Cognitive Abilities Late in Life: Evidence from Ten European Countries. PLoS ONE 2013, 8, e74915. [Google Scholar] [CrossRef]
  56. Ice, E.; Ang, S.; Greenberg, K.; Burgard, S. Women’s Work-Family Histories and Cognitive Performance in Later Life. Am. J. Epidemiol. 2020, 189, 922–930. [Google Scholar] [CrossRef]
  57. Morris, Z.A.; Zaidi, A.; McGarity, S. The Extra Costs Associated with a Cognitive Impairment: Estimates from 15 OECD Countries. Eur. J. Public Health 2021, 31, 647–652. [Google Scholar] [CrossRef] [PubMed]
  58. Huang, Y.; Chen, H.; Gao, M.; Lv, X.; Pang, T.; Rong, S.; Xu, X.; Yuan, C. Self- and Interviewer-Reported Cognitive Problems in Relation to Cognitive Decline and Dementia: Results from Two Prospective Studies. BMC Med. 2024, 22, 23. [Google Scholar] [CrossRef]
  59. Zhou, Y. Development over Time in Cognitive Function among European 55-69-Year-Olds from 2006 to 2015, and Differences of Region, Gender, and Education. CPoS 2022, 47, 119–142. [Google Scholar] [CrossRef]
  60. Bertogg, A.; Leist, A.K. Gendered Life Courses and Cognitive Functioning in Later Life: The Role of Context-Specific Gender Norms and Lifetime Employment. Eur. J. Ageing 2023, 20, 7. [Google Scholar] [CrossRef] [PubMed]
  61. Conde-Sala, J.L.; Garre-Olmo, J.; Calvó-Perxas, L.; Turró-Garriga, O.; Vilalta-Franch, J.; López-Pousa, S. CAUSES, Mortality Rates and Risk Factors of Death in Community-Dwelling Europeans Aged 50 Years and over: Results from the Survey of Health, Ageing and Retirement in Europe 2013–2015. Arch. Gerontol. Geriatr. 2020, 89, 104035. [Google Scholar] [CrossRef] [PubMed]
  62. Portellano-Ortiz, C.; Conde-Sala, J.L. Cognition and Its Association with the Factors of the EURO-D: Suffering and Motivation. Findings from SHARE Wave 6. Int. J. Geriat. Psychiatry 2018, 33, 1645–1653. [Google Scholar] [CrossRef]
  63. Lewis, N.A.; Yoneda, T.; Melis, R.J.F.; Mroczek, D.K.; Hofer, S.M.; Muniz-Terrera, G. Availability of Cognitive Resources in Early Life Predicts Transitions Between Cognitive States in Middle and Older Adults From Europe. Innov. Aging 2023, 7, igad124. [Google Scholar] [CrossRef]
  64. Khalaila, R.; Vitman-Schorr, A.; Cohn-Schwartz, E. A Prospective Association between Tooth Status and Cognitive Performance among Older Adults in Europe. Aging Ment. Health 2022, 26, 499–506. [Google Scholar] [CrossRef]
  65. Yan, B.; Gao, S.; Dai, M.; Gill, T.M.; Chen, X. Early-Life Circumstances and Cross-Country Disparities in Cognition Among Older Populations—China, the US, and the EU, 2008–2018. China CDC Wkly. 2022, 4, 1013–1018. [Google Scholar] [CrossRef]
  66. Wang, T.; Wu, Y.; Li, W.; Li, S.; Sun, Y.; Li, S.; Zhang, D.; Tan, Q. Weak Grip Strength and Cognition Predict Functional Limitation in Older Europeans. J. Am. Geriatr. Soc. 2019, 67, 93–99. [Google Scholar] [CrossRef] [PubMed]
  67. Gannon, B.; Banks, J.; Nazroo, J.; Munford, L. An Econometric Analysis of Cognitive Impairment and Healthcare Utilization in the Ageing Population. Appl. Econ. 2018, 50, 5454–5463. [Google Scholar] [CrossRef]
  68. Olivera, J.; Andreoli, F.; Leist, A.K.; Chauvel, L. Inequality in Old Age Cognition across the World. Econ. Hum. Biol. 2018, 29, 179–188. [Google Scholar] [CrossRef] [PubMed]
  69. Fawaz, Y.; Mira, P. Social Isolation, Health Dynamics, and Mortality: Evidence across 21 European Countries. J. Popul. Econ. 2023, 36, 2483–2518. [Google Scholar] [CrossRef]
  70. Apolinario, D.; Lichtenthaler, D.G.; Magaldi, R.M.; Soares, A.T.; Busse, A.L.; das Gracas Amaral, J.R.; Jacob-Filho, W.; Brucki, S.M.D. Using Temporal Orientation, Category Fluency, and Word Recall for Detecting Cognitive Impairment: The 10-Point Cognitive Screener (10-CS). Int. J. Geriatr. Psychiatry 2016, 31, 4–12. [Google Scholar] [CrossRef]
  71. Borson, S.; Scanlan, J.; Brush, M.; Vitaliano, P.; Dokmak, A. The Mini-Cog: A Cognitive ‘Vital Signs’ Measure for Dementia Screening in Multi-Lingual Elderly. Int. J. Geriat. Psychiatry 2000, 15, 1021–1027. [Google Scholar] [CrossRef]
  72. Callahan, C.M.; Unverzagt, F.W.; Hui, S.L.; Perkins, A.J.; Hendrie, H.C. Six-Item Screener to Identify Cognitive Impairment Among Potential Subjects for Clinical Research. Med. Care 2002, 40, 771–781. [Google Scholar] [CrossRef]
  73. Belle, S.H.; Mendelsohn, A.B.; Seaberg, E.C.; Ratcliff, G. A Brief Cognitive Screening Battery for Dementia in the Community. Neuroepidemiology 2000, 19, 43–50. [Google Scholar] [CrossRef]
  74. Molloy, D.W.; Standish, T.I.; Lewis, D.L. Screening for Mild Cognitive Impairment: Comparing the SMMSE and the ABCS. Can. J. Psychiatry 2005, 50, 52–58. [Google Scholar] [CrossRef]
  75. Malmstrom, T.K.; Voss, V.B.; Cruz-Oliver, D.M.; Cummings-Vaughn, L.A.; Tumosa, N.; Grossberg, G.T.; Morley, J.E. The Rapid Cognitive Screen (RCS): A Point-of-Care Screening for Dementia and Mild Cognitive Impairment. J. Nutr. Health Aging 2015, 19, 741–744. [Google Scholar] [CrossRef]
  76. Inoue, M.; Jinbo, D.; Nakamura, Y.; Taniguchi, M.; Urakami, K. Development and Evaluation of a Computerized Test Battery for Alzheimer’s Disease Screening in Community-Based Settings. Am. J. Alzheimers Dis. Other Demen. 2009, 24, 129–135. [Google Scholar] [CrossRef] [PubMed]
  77. Dougherty, J.H.; Cannon, R.L.; Nicholas, C.R.; Hall, L.; Hare, F.; Carr, E.; Dougherty, A.; Janowitz, J.; Arunthamakun, J. The Computerized Self Test (CST): An Interactive, Internet Accessible Cognitive Screening Test for Dementia. JAD 2010, 20, 185–195. [Google Scholar] [CrossRef] [PubMed]
  78. Larner, A.J. Short Montreal Cognitive Assessment: Validation and Reproducibility. J. Geriatr. Psychiatry Neurol. 2017, 30, 104–108. [Google Scholar] [CrossRef]
  79. Artero, S.; Ritchie, K. The Detection of Mild Cognitive Impairment in the General Practice Setting. Aging Ment. Health 2003, 7, 251–258. [Google Scholar] [CrossRef] [PubMed]
  80. Kalbe, E.; Kessler, J.; Calabrese, P.; Smith, R.; Passmore, A.P.; Brand, M.; Bullock, R. DemTect: A New, Sensitive Cognitive Screening Test to Support the Diagnosis of Mild Cognitive Impairment and Early Dementia. Int. J. Geriat. Psychiatry 2004, 19, 136–143. [Google Scholar] [CrossRef]
  81. Mendiondo, M.S.; Ashford, J.W.; Kryscio, R.J.; Schmitt, F.A. Designing a Brief Alzheimer Screen (BAS). JAD 2003, 5, 391–398. [Google Scholar] [CrossRef]
  82. Mahoney, R.; Johnston, K.; Katona, C.; Maxmin, K.; Livingston, G. The TE4D-Cog: A New Test for Detecting Early Dementia in English-Speaking Populations. Int. J. Geriat. Psychiatry 2005, 20, 1172–1179. [Google Scholar] [CrossRef]
  83. Srinivasan, S. The Concise Cognitive Test for Dementia Screening: Reliability and Effects of Demographic Variables as Compared to the Mini Mental State Examination. Neurol. India 2010, 58, 702. [Google Scholar] [CrossRef]
  84. Yu, K.; Zhang, S.; Wang, Q.; Wang, X.; Qin, Y.; Wang, J.; Li, C.; Wu, Y.; Wang, W.; Lin, H. Development of a Computerized Tool for the Chinese Version of the Montreal Cognitive Assessment for Screening Mild Cognitive Impairment. Int. Psychogeriatr. 2015, 27, 213–219. [Google Scholar] [CrossRef]
  85. O’Caoimh, R.; Gao, Y.; McGlade, C.; Healy, L.; Gallagher, P.; Timmons, S.; Molloy, D.W. Comparison of the Quick Mild Cognitive Impairment (Qmci) Screen and the SMMSE in Screening for Mild Cognitive Impairment. Age Ageing 2012, 41, 624–629. [Google Scholar] [CrossRef]
  86. Kalbe, E.; Calabrese, P.; Schwalen, S.; Kessler, J. The Rapid Dementia Screening Test (RDST): A New Economical Tool for Detecting Possible Patients with Dementia. Dement. Geriatr. Cogn. Disord. 2003, 16, 193–199. [Google Scholar] [CrossRef]
  87. Storey, J.E.; Rowland, J.T.J.; Conforti, D.A.; Dickson, H.G. The Rowland Universal Dementia Assessment Scale (RUDAS): A Multicultural Cognitive Assessment Scale. Int. Psychogeriatr. 2004, 16, 13–31. [Google Scholar] [CrossRef] [PubMed]
  88. Hopkins, R.W.; Kilik, L.A. The Mini-Kingston Standardized Cognitive Assessment. Am. J. Alzheimers Dis. Other Demen. 2013, 28, 239–244. [Google Scholar] [CrossRef] [PubMed]
  89. Brandt, J.; Welsh, K.A.; Breitner, J.C.; Folstein, M.F.; Helms, M.; Christian, J.C. Hereditary Influences on Cognitive Functioning in Older Men. A Study of 4000 Twin Pairs. Arch. Neurol. 1993, 50, 599–603. [Google Scholar] [CrossRef] [PubMed]
  90. Hsieh, S.; McGrory, S.; Leslie, F.; Dawson, K.; Ahmed, S.; Butler, C.R.; Rowe, J.B.; Mioshi, E.; Hodges, J.R. The Mini-Addenbrooke’s Cognitive Examination: A New Assessment Tool for Dementia. Dement. Geriatr. Cogn. Disord. 2015, 39, 1–11. [Google Scholar] [CrossRef]
  91. Julayanont, P.; Tangwongchai, S.; Hemrungrojn, S.; Tunvirachaisakul, C.; Phanthumchinda, K.; Hongsawat, J.; Suwichanarakul, P.; Thanasirorat, S.; Nasreddine, Z.S. The Montreal Cognitive Assessment—Basic: A Screening Tool for Mild Cognitive Impairment in Illiterate and Low-Educated Elderly Adults. J. Am. Geriatr. Soc. 2015, 63, 2550–2554. [Google Scholar] [CrossRef]
  92. Andrade, C. Z Scores, Standard Scores, and Composite Test Scores Explained. Indian J. Psychol. Med. 2021, 43, 555–557. [Google Scholar] [CrossRef]
  93. Campbell, D. T Scores. In Encyclopedia of Autism Spectrum Disorders; Volkmar, F.R., Ed.; Springer International Publishing: Cham, Switzerland, 2021; p. 4729. [Google Scholar]
  94. Greenacre, M.; Groenen, P.J.F.; Hastie, T.; D’Enza, A.I.; Markos, A.; Tuzhilina, E. Principal Component Analysis. Nat. Rev. Methods Primers 2022, 2, 100. [Google Scholar] [CrossRef]
  95. Fries, J.; Pietschnig, J. An Intelligent Mind in a Healthy Body? Predicting Health by Cognitive Ability in a Large European Sample. Intelligence 2022, 93, 101666. [Google Scholar] [CrossRef]
  96. Lifshitz-Vahav, H.; Shrira, A.; Bodner, E. The Reciprocal Relationship between Participation in Leisure Activities and Cognitive Functioning: The Moderating Effect of Self-Rated Literacy Level. Aging Ment. Health 2017, 21, 524–531. [Google Scholar] [CrossRef]
  97. Orsholits, D.; Cullati, S.; Ghisletta, P.; Aartsen, M.J.; Oris, M.; Studer, M.; Maurer, J.; Perna, L.; Gouveia, É.R.; Gouveia, B.R.; et al. How Welfare Regimes Moderate the Associations Between Cognitive Aging, Education, and Occupation. J. Gerontol. Ser. B 2022, 77, 1615–1624. [Google Scholar] [CrossRef]
  98. Biau, G.; Scornet, E. A Random Forest Guided Tour. TEST 2016, 25, 197–227. [Google Scholar] [CrossRef]
  99. Seidel, D.; Brayne, C.; Jagger, C. Limitations in Physical Functioning among Older People as a Predictor of Subsequent Disability in Instrumental Activities of Daily Living. Age Ageing 2011, 40, 463–469. [Google Scholar] [CrossRef] [PubMed]
  100. Barbosa, R.; Midão, L.; Almada, M.; Costa, E. Cognitive Performance in Older Adults across Europe Based on the SHARE Database. Aging Neuropsychol. Cogn. 2021, 28, 584–599. [Google Scholar] [CrossRef] [PubMed]
  101. Sterniczuk, R.; Theou, O.; Rusak, B.; Rockwood, K. Cognitive Test Performance in Relation to Health and Function in 12 European Countries: The SHARE Study. Can. Geriatr. J. 2015, 18, 144–151. [Google Scholar] [CrossRef] [PubMed]
  102. Rikos, N.; Linardakis, M.; Smpokos, E.; Spiridaki, E.; Symvoulakis, E.K.; Tsiligianni, I.; Philalithis, A. Assessment of Cognitive Function in European Adults Aged 50+ in Relation to Their Handgrip Strength and Physical Inactivity: The SHARE Study During 2019–2020. J. Res. Health Sci. 2024, 24, e00611. [Google Scholar] [CrossRef]
  103. Sterniczuk, R.; Theou, O.; Rusak, B.; Rockwood, K. Sleep Disturbance Is Associated with Incident Dementia and Mortality. CAR 2013, 10, 767–775. [Google Scholar] [CrossRef]
  104. Bourassa, K.J.; Memel, M.; Woolverton, C.; Sbarra, D.A. Social Participation Predicts Cognitive Functioning in Aging Adults over Time: Comparisons with Physical Health, Depression, and Physical Activity. Aging Ment. Health 2017, 21, 133–146. [Google Scholar] [CrossRef]
  105. Zhang, X.; Zeng, R.; Zhu, A.; Xie, F.; Ye, D.; Chen, L.; Xiao, Y.; Zhu, K.; Fan, T.; Zhu, W.; et al. Association between Sensory Impairment and Cognitive Frailty among Older People: Evidence from Four Nationwide Cohort Studies. J. Nutr. Health Aging 2025, 29, 100590. [Google Scholar] [CrossRef]
  106. Godin, J.; Armstrong, J.J.; Rockwood, K.; Andrew, M.K. Dynamics of Frailty and Cognition After Age 50: Why It Matters That Cognitive Decline Is Mostly Seen in Old Age. JAD 2017, 58, 231–242. [Google Scholar] [CrossRef]
  107. Godin, J.; Armstrong, J.J.; Wallace, L.; Rockwood, K.; Andrew, M.K. The Impact of Frailty and Cognitive Impairment on Quality of Life: Employment and Social Context Matter. Int. Psychogeriatr. 2019, 31, 789–797. [Google Scholar] [CrossRef] [PubMed]
  108. Hernandez, R.; Jin, H.; Lee, P.-J.; Schneider, S.; Junghaenel, D.U.; Stone, A.A.; Meijer, E.; Gao, H.; Maupin, D.; Zelinski, E.M. Attrition from Longitudinal Ageing Studies and Performance across Domains of Cognitive Functioning: An Individual Participant Data Meta-Analysis. BMJ Open 2024, 14, e079241. [Google Scholar] [CrossRef] [PubMed]
  109. Mitnitski, A.B.; Mogilner, A.J.; Rockwood, K. Accumulation of Deficits as a Proxy Measure of Aging. Sci. World J. 2001, 1, 323–336. [Google Scholar] [CrossRef] [PubMed]
  110. Jiang, Y.; Ding, Y.; Cao, Q.; Wu, X.; Li, X.; Xu, Y.; Zhao, Z.; Xu, M.; Lu, J.; Wang, T.; et al. Trajectories of Muscle Strength and Physical Performance Preceding Dementia in Older US and European Populations. J. Prev. Alzheimer’s Dis. 2025, 12, 100296. [Google Scholar] [CrossRef]
  111. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders: DSM-5, 5th ed.; American Psychiatric Association: Washington, DC, USA, 2013. [Google Scholar]
  112. Du, M.; Liu, M.; Liu, J. Effects of Physical and Psychological Multimorbidity on the Risk of Dementia: Multinational Prospective Cohorts and a Meta-Analysis. BMC Med. 2024, 22, 423. [Google Scholar] [CrossRef]
  113. Liu, G.; Hong, C.; Xu, S.; Huang, Y.; Zheng, F.; Gao, Y.; Luo, Y. Association of Sarcopenia with Parkinson’s Disease and Related Functional Degeneration among Older Adults: A Prospective Cohort Study in Europe. J. Affect. Disord. 2025, 374, 553–562. [Google Scholar] [CrossRef]
  114. Han, F.-F.; Wang, H.-X.; Wu, J.-J.; Yao, W.; Hao, C.-F.; Pei, J.-J. Depressive Symptoms and Cognitive Impairment: A 10-Year Follow-up Study from the Survey of Health, Ageing and Retirement in Europe. Eur. Psychiatr. 2021, 64, e55. [Google Scholar] [CrossRef]
  115. Werneck, A.O.; Araujo, R.H.O.; Silva, D.R.; Vancampfort, D. Handgrip Strength, Physical Activity and Incident Mild Cognitive Impairment and Dementia. Maturitas 2023, 176, 107789. [Google Scholar] [CrossRef]
  116. Meier, C.; Wieczorek, M.; Aschwanden, D.; Ihle, A.; Kliegel, M.; Maurer, J. Physical Activity Partially Mediates the Association between Health Literacy and Mild Cognitive Impairment in Older Adults: Cross-Sectional Evidence from Switzerland. Eur. J. Public Health 2025, 35, 134–140. [Google Scholar] [CrossRef]
  117. Luchetti, M.; Terracciano, A.; Aschwanden, D.; Lee, J.H.; Stephan, Y.; Sutin, A.R. Loneliness Is Associated with Risk of Cognitive Impairment in the Survey of Health, Ageing and Retirement in Europe. Int. J. Geriat. Psychiatry 2020, 35, 794–801. [Google Scholar] [CrossRef]
  118. You, Y.; Wu, X.; Zhang, Z.; Zhao, Z.; Lv, D.; Xie, F.; Lin, Y.; Xie, W.; Shang, Q.; Meng, X.; et al. Impact of Early-Life Deprivation and Threat on Physical, Psychological, and Cognitive Multimorbidity: Evidence from Multinational Prospective Cohorts. J. Affect. Disord. 2025, 391, 119877. [Google Scholar] [CrossRef]
  119. Ni, Y.; Zhou, Y.; Kivimäki, M.; Cai, Y.; Carrillo-Larco, R.M.; Xu, X.; Dai, X.; Xu, X. Socioeconomic Inequalities in Physical, Psychological, and Cognitive Multimorbidity in Middle-Aged and Older Adults in 33 Countries: A Cross-Sectional Study. Lancet Healthy Longev. 2023, 4, e618–e628. [Google Scholar] [CrossRef]
  120. Sutin, A.R.; Luchetti, M.; Stephan, Y.; Terracciano, A. Meaning in Life and Risk of Cognitive Impairment: A 9-Year Prospective Study in 14 Countries. Arch. Gerontol. Geriatr. 2020, 88, 104033. [Google Scholar] [CrossRef]
  121. Lugo-Palacios, D.G.; Gannon, B. Health Care Utilisation amongst Older Adults with Sensory and Cognitive Impairments in Europe. Health Econ. Rev. 2017, 7, 44. [Google Scholar] [CrossRef]
  122. Yan, R.; Liu, X.; Xue, R.; Duan, X.; Li, L.; He, X.; Cui, F.; Zhao, J. Association between Internet Exclusion and Depressive Symptoms among Older Adults: Panel Data Analysis of Five Longitudinal Cohort Studies. eClinicalMedicine 2024, 75, 102767. [Google Scholar] [CrossRef]
  123. Seblova, D.; Brayne, C.; Machů, V.; Kuklová, M.; Kopecek, M.; Cermakova, P. Changes in Cognitive Impairment in the Czech Republic. JAD 2019, 72, 693–701. [Google Scholar] [CrossRef] [PubMed]
  124. Zhou, Y.; Kivimäki, M.; Yan, L.L.; Carrillo-Larco, R.M.; Zhang, Y.; Cheng, Y.; Wang, H.; Zhou, M.; Xu, X. Associations between Socioeconomic Inequalities and Progression to Psychological and Cognitive Multimorbidities after Onset of a Physical Condition: A Multicohort Study. eClinicalMedicine 2024, 74, 102739. [Google Scholar] [CrossRef] [PubMed]
  125. Han, B.; Zeng, Z.; Wen, Y.; Chen, C.; Cheng, D.; Li, Y.; Huang, N.; Ruan, J.; Zhao, D.; Xue, Q. Cumulative Handgrip Strength and Longitudinal Changes in Cognitive Function and Daily Functioning among People Aged 50 Years and Older: Evidence from Two Longitudinal Cohort Studies. Arch. Public Health 2025, 83, 150. [Google Scholar] [CrossRef] [PubMed]
  126. Hofbauer, L.M.; Rodriguez, F.S. The Role of Social Deprivation and Depression in Dementia Risk: Findings from the Longitudinal Survey of Health, Ageing and Retirement in Europe. Epidemiol. Psychiatr. Sci. 2023, 32, e10. [Google Scholar] [CrossRef]
  127. Franse, C.B.; Rietjens, J.A.; Burdorf, A.; van Grieken, A.; Korfage, I.J.; van der Heide, A.; Raso, F.M.; van Beeck, E.; Raat, H. A Prospective Study on the Variation in Falling and Fall Risk among Community-Dwelling Older Citizens in 12 European Countries. BMJ Open 2017, 7, e015827. [Google Scholar] [CrossRef]
  128. Nielsen, C.R.; Ahrenfeldt, L.J.; Jeune, B.; Christensen, K.; Lindahl-Jacobsen, R. Development in Life Expectancy with Good and Poor Cognitive Function in the Elderly European Population from 2004-05 to 2015. Eur. J. Epidemiol. 2022, 37, 495–502. [Google Scholar] [CrossRef] [PubMed]
  129. Cleret De Langavant, L.; Bayen, E.; Yaffe, K. Unsupervised Machine Learning to Identify High Likelihood of Dementia in Population-Based Surveys: Development and Validation Study. J. Med. Internet Res. 2018, 20, e10493. [Google Scholar] [CrossRef] [PubMed]
  130. Cleret De Langavant, L.; Bayen, E.; Bachoud-Lévi, A.; Yaffe, K. Approximating Dementia Prevalence in Population-based Surveys of Aging Worldwide: An Unsupervised Machine Learning Approach. Alzheimers Dement. Transl. Res. Clin. Interv. 2020, 6, e12074. [Google Scholar] [CrossRef] [PubMed]
  131. Gharbi-Meliani, A.; Husson, F.; Vandendriessche, H.; Bayen, E.; Yaffe, K.; Bachoud-Lévi, A.-C.; Cleret De Langavant, L. Identification of High Likelihood of Dementia in Population-Based Surveys Using Unsupervised Clustering: A Longitudinal Analysis. Alzheimers Res. Ther. 2023, 15, 209. [Google Scholar] [CrossRef]
  132. Molloy, D.W.; Standish, T.I. A Guide to the Standardized Mini-Mental State Examination. Int. Psychogeriatr. 1997, 9, 87–94. [Google Scholar] [CrossRef]
  133. Hu, Y.; Ruiz, M.; Bobak, M.; Martikainen, P. Four-Year Trajectories of Episodic Memory Decline in Mid-Late Life by Living Arrangements: A Cross-National Comparison between China and England. J. Epidemiol. Community Health 2021, 75, 881–889. [Google Scholar] [CrossRef]
  134. Manly, J.J.; Jones, R.N.; Langa, K.M.; Ryan, L.H.; Levine, D.A.; McCammon, R.; Heeringa, S.G.; Weir, D. Estimating the Prevalence of Dementia and Mild Cognitive Impairment in the US: The 2016 Health and Retirement Study Harmonized Cognitive Assessment Protocol Project. JAMA Neurol. 2022, 79, 1242. [Google Scholar] [CrossRef]
  135. Piccininni, M.; Rohmann, J.L.; Wechsung, M.; Logroscino, G.; Kurth, T. Should Cognitive Screening Tests Be Corrected for Age and Education? Insights From a Causal Perspective. Am. J. Epidemiol. 2023, 192, 93–101. [Google Scholar] [CrossRef]
  136. Padovani, A.; Benussi, A.; Cantoni, V.; Dell’Era, V.; Cotelli, M.S.; Caratozzolo, S.; Turrone, R.; Rozzini, L.; Alberici, A.; Altomare, D.; et al. Diagnosis of Mild Cognitive Impairment Due to Alzheimer’s Disease with Transcranial Magnetic Stimulation. J. Alzheimers Dis. 2018, 65, 221–230. [Google Scholar] [CrossRef]
  137. Frisoni, G.B.; Boccardi, M.; Barkhof, F.; Blennow, K.; Cappa, S.; Chiotis, K.; Démonet, J.-F.; Garibotto, V.; Giannakopoulos, P.; Gietl, A.; et al. Strategic Roadmap for an Early Diagnosis of Alzheimer’s Disease Based on Biomarkers. Lancet Neurol. 2017, 16, 661–676. [Google Scholar] [CrossRef]
  138. Antonioni, A.; Raho, E.M.; Granieri, E.; Koch, G. Frontotemporal Dementia. How to Deal with Its Diagnostic Complexity? Expert Rev. Neurother. 2025, 25, 323–357. [Google Scholar] [CrossRef]
Figure 1. Methodological overview of the steps applied in this scoping review and mapping study. The primary goals were to review how cognition has been assessed in SHARE studies to date and to identify a comprehensive list of brief cognitive screening instruments (CSIs) for use in future SHARE studies.
Figure 2. PRISMA flow diagrams outlining the following: (a) the 234 SHARE studies identified from a register of SHARE studies and PubMed; (b) the 61 other cognitive screening instruments identified from pre-existing reviews by Cullen 2007 [21] and De Roeck 2019 [22].
Figure 3. Overview of the methods applied for assessing cognition in the identified SHARE studies, including both measures of cognitive function (continuous scores, mostly presented as the mean score) and cognitive impairment (categorical or probabilistic status). * One study also included in CSIs [36]. ** Two studies also included in CSIs [33,37]. A full list of all 234 studies is provided in the Supplementary Material Table S2.
Table 1. Overview of subtests available in the Survey of Health, Ageing and Retirement in Europe (SHARE) including which waves (w) they are available in.
Alternative Names/Details | Subtest Description | Original Instrument | w1 | w2 | w3 | w4 | w5 | w6 | w7 | w8 | w9
10-word registration (immediate recall or verbal learning)The participant is read a list of 10 words and asked to recall as many as possible. One attempt is given, but the need to remember the words is explained before starting. One of 4 random word lists is selected.TICSX
10-word recall (delayed recall)The participant is asked to recall as many of the words as they can from the 10-word registration task earlier.TICSX
Verbal fluency (semantic)Animal naming in exactly one minute (timed). All creatures within the animal kingdom, mythical, species, breeds, as well as male, female, infant name variations were accepted (max 100). Excluded repetitions and proper nouns.Henley et al. 1969 [35]X*
Orientation
(4 items)
Four temporal orientation questions were asked: date, month, year, day of the week.MMSEX***
Serial 7s (calculation)Numerical subtraction question: “One hundred minus 7 equals what? And 7 from that, And 7 from that, And 7 from that, And 7 from that.” Scored from 0 to 5.TICS
MMSE
XXX*
Numeracy
(5 items)
Numerical percentage questions scored from 1 to 5 [note the score may also be recoded as 0 to 4 in SHARE publications].Unique to SHAREX******
Counting backwardsThe participant is asked to count backward as quickly as possible starting at 20. Correct if counted from 19 to 10 or from 20 to 11 without error. Option for a second attempt was given.TICSXXXXXXX60+60+
Copying infinity loopThe participant is asked to draw a copy of an infinity loop diagram provided to them. A second attempt was allowed.ACE-IIIXXXXXXX*60+*60+
Copying cubeThe participant is asked to draw a copy of a cube diagram provided to them. A second attempt was allowed.ACE-IIIXXXXXXX*60+*60+
Clock drawing testThe participant is asked to draw a clock face (second attempt allowed). If the contour or numbers were drawn correctly then the patient was asked to add hands pointing to ten past five.ACE-IIIXXXXXXX*60+*60+
Object naming
(3 items)
This is a language rather than a visual test. The three objects to be named were as follows: scissors: “What do people usually use to cut paper?”, cactus: “What do you call the kind of prickly plant that grows in the desert?”, pharmacy: “Where do people usually go to buy medicine?”. Synonyms and specific cactus names were accepted.TICS (scissors, cactus) and CSI-D (pharmacy)XXXXXXX*65+*65+
✓ denotes that an item was assessed in that wave. * denotes that an item was assessed for a subsample of SHARE participants in that wave. X denotes that an item was not assessed in that wave. For example, wave 3 (SHARELIFE) did not include any cognitive assessments as it focused on childhood conditions, and wave 7, which combined SHARELIFE and the main questionnaire, had fewer cognitive assessments. Some items were limited to those aged ≥60 years or ≥65 years. ACE-III: Addenbrooke’s Cognitive Examination III [34]; CSI-D: Community Screening Instrument for Dementia [20]; MMSE: Mini-Mental State Examination [17]; TICS: Telephone Interview for Cognitive Status [19].
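For illustration, the serial 7s item described in Table 1 is commonly scored so that credit chains from the respondent's previous answer rather than from the true running total, so a single arithmetic slip does not penalise the subtractions that follow it. The sketch below assumes that chained-credit convention (used by TICS and related surveys); SHARE's exact scoring rule is not specified in the table.

```python
def score_serial_7s(responses, start=100):
    """Score the serial 7s subtest (0-5): one point for each answer that is
    exactly 7 less than the PREVIOUS answer given, so a single arithmetic
    slip does not penalise the subtractions that follow it."""
    score = 0
    previous = start
    for answer in responses[:5]:  # at most five subtractions are scored
        if answer == previous - 7:
            score += 1
        previous = answer  # later answers are judged against this response
    return score

print(score_serial_7s([93, 86, 79, 72, 65]))  # all correct -> 5
print(score_serial_7s([93, 86, 80, 73, 66]))  # one slip at 80, chained credit -> 4
```

Under this convention the 92, 85, 78, 71, 64 sequence still scores 4 of 5, because only the first subtraction is wrong.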
Table 2. Overview of the cognitive domains and subtests applied in published SHARE studies.
Cognitive Domain / Cognitive Test | All (N = 234) | Subtest Only (n = 94) | Composite Scores (n = 140) | CSIs (n = 56) * | Standardised Scores (n = 50) * | Statistical Modelling (n = 17) * | Cut-Offs (n = 15) | Other Methods (n = 5)
Memory | 223 (95%) | 85 (90%) | 138 (99%) | 56 (100%) | 49 (98%) | 17 (100%) | 14 (93%) | 5 (100%)
Registration (immediate word recall) | 193 (82%) | 56 (60%) | 137 (98%) | 56 (100%) | 48 (96%) | 17 (100%) | 14 (93%) | 5 (100%)
Delayed word recall | 217 (93%) | 79 (84%) | 138 (99%) | 56 (100%) | 49 (98%) | 17 (100%) | 14 (93%) | 5 (100%)
Language/fluency | 180 (77%) | 68 (72%) | 112 (80%) | 33 (59%) | 50 (100%) | 14 (82%) | 13 (87%) | 4 (80%)
Verbal fluency | 180 (77%) | 68 (72%) | 112 (80%) | 33 (59%) | 50 (100%) | 14 (82%) | 13 (87%) | 4 (80%)
Naming descriptions | 1 (<1%) | - | 1 (1%) | 1 (2%) | - | - | - | -
Orientation | 60 (26%) | 14 (15%) | 46 (33%) | 15 (27%) | 18 (36%) | 2 (12%) | 7 (47%) | 4 (80%)
Date/day | 60 (26%) | 14 (15%) | 46 (33%) | 15 (27%) | 18 (36%) | 2 (12%) | 7 (47%) | 4 (80%)
Executive functioning | 99 (42%) | 28 (30%) | 71 (51%) | 32 (57%) | 23 (46%) | 10 (59%) | 4 (27%) | 4 (80%)
Clock drawing | 1 (<1%) | - | 1 (1%) | 1 (2%) | - | - | - | -
Serial 7 | 53 (23%) | 11 (12%) | 42 (30%) | 23 (41%) | 11 (22%) | 4 (24%) | 2 (13%) | 4 (80%)
Numeracy | 51 (22%) | 18 (19%) | 33 (24%) | 10 (18%) | 12 (24%) | 8 (47%) | 2 (13%) | 1 (20%)
Counting backwards | 2 (1%) | | 2 (1%) | 2 (4%) | - | 1 (6%) | - | -
Visuospatial | 1 (<1%) | | 1 (1%) | 1 (2%) | - | - | - | -
Copy cube | 1 (<1%) | | 1 (1%) | 1 (2%) | - | - | - | -
Copy infinity loop | 1 (<1%) | | 1 (1%) | 1 (2%) | - | - | - | -
* Three studies were counted in more than one category: two used both CSIs and statistical modelling [33,37], and one used both CSIs and standardised scores [36]. Note that this table counts all subtests included in each study, not necessarily how they were used; for example, one study includes both the SHARE-Cog and a non-amnestic CSI [38], so the subtests from both are counted under CSIs. Bold text denotes a cognitive domain and the regular text describes the subtests within that domain.
Table 3. Overview of cognitive screening instruments for the SHARE including their availability and cognitive domains assessed.
Cognitive Screening Instruments (CSIs) in SHARE Studies (n = 22) | Number of Studies (n = 56) | Number of Waves Fully Available | Wave 1 | Wave 2 | Wave 4 | Wave 5 | Wave 6 | Wave 7 | Wave 8 | Wave 9 | Number of Cognitive Domains | Memory | Language | Orientation | Executive Function | Visuospatial
Validated internally/externally (n = 2)
SHARE-Cog (0–45) [38]1 **7*2XXX
Langa–Weir criteria (0–27) [33]10XXXXXX**2XXX
Used in multiple studies (n = 5)
CSI 0–125 (serial 7s) [44,45,46,47]45XX*3XX
CSI 0–129 [48,49]22******4X
CSI 0–25 [50,51] 25XX*2XXX
CSI 0–29 (serial 7s) [52,53]23XX***3XX
DemTect modified (0–20) [54,55]22******4X
Used in a single study (n = 13)
Recall and fluency (0–65) [58]17*2XXX
Recall and fluency (0–20) [59]17*2XXX
Recall and fluency (0–20 weighted) [60]17*2XXX
Recall and fluency (0–30 weighted) [36]17*2XXX
CSI 0–35 [61]15XX*3XX
Langa–Weir modified (0–26) [57]15XX*3XX
CSI 0–39 [62]13XX***4X
CSI 0–34 [63]15***3XX
CSI 0–70 [64]15XX*3XX
CSI 0–29 (numeracy) [65]12******3XX
CSI 1–125 (numeracy) [66]12******3XX
DemTect modified (0–19) [56] 12******4X
Non-amnestic battery (0–16) [38]1 **2XXXXXX4X
Monodomain—2 tests (n = 2)
20-point word recall test2781XXXX
10-point average recall [67,68,69]381XXXX
X not available. * available for a subset of participants. ✓ available for all. ** One study had two CSIs [38]. Bold text describes key features of the CSIs relating to their validity.
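The Langa–Weir criteria listed in Table 3 combine four SHARE subtests into a 27-point score with published cut-offs. A minimal sketch, assuming the version developed for HRS self-respondents (immediate and delayed 10-word recall, serial 7s, and counting backwards; cut-offs 0–6 dementia, 7–11 cognitive impairment no dementia, 12–27 normal); the SHARE adaptation [33] may differ in detail:

```python
def langa_weir_total(immediate, delayed, serial7s, backwards):
    """27-point Langa-Weir total: 10-word immediate recall (0-10),
    10-word delayed recall (0-10), serial 7s (0-5), and counting
    backwards from 20 (0-2, with two points for success on the first try)."""
    if not (0 <= immediate <= 10 and 0 <= delayed <= 10
            and 0 <= serial7s <= 5 and 0 <= backwards <= 2):
        raise ValueError("subtest score out of range")
    return immediate + delayed + serial7s + backwards

def langa_weir_category(total):
    """HRS cut-offs: 0-6 dementia, 7-11 cognitive impairment
    no dementia (CIND), 12-27 normal cognition."""
    if total <= 6:
        return "dementia"
    return "CIND" if total <= 11 else "normal"

print(langa_weir_category(langa_weir_total(3, 1, 2, 1)))  # total 7 -> "CIND"
```

The range checks guard against mixing in the 0–20 combined-recall coding used elsewhere in SHARE studies, which would silently inflate the total.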
Table 4. Overview of additional potential cognitive screening instruments with at least 50% fidelity in the SHARE including their longitudinal availability and the subtest domains assessed in the SHARE (n = 24).
Cognitive Screening Instrument (Potentially Available but Not Used in SHARE Yet) | Fidelity 1 in the SHARE | Number of Waves Fully Available | Wave 1 | Wave 2 | Wave 4 | Wave 5 | Wave 6 | Wave 7 | Wave 8 | Wave 9 | Number of Cognitive Domains | Memory (SHARE) | Language (SHARE) | Orientation (SHARE) | Executive Function (SHARE) | Visuospatial (SHARE)
10-point Cognitive Screener (10-CS) [70]100%5***3XX
Mini-Cog [71]100%0XXXXXX**2XXX
Six Item Screener (SIS) [72]100%5***2XXX
AB Cognitive Screen (ABCS) 135 [74]96%0XXXXXX**4X
Rapid Cognitive Screen [75]90%0XXXXXX**2XXX
Computer test battery by Inoue et al. [76]87%5***2XXX
Computer Self-Test [77]83%0XXXXXX**4X
Short MoCA [78]81%0XXXXXX**4X
Montpellier Screen [79]80%0XXXXXX**2XXX
DemTect [80]75%7*3XX
Brief Alzheimer Screen [81]73%5***3XX
Test for the Early Detection of Dementia from Depression [82]73%0XXXXXX**4X
Concise Cognitive Test [83]70%0XXXXXX**4X
Montreal Cognitive Assessment (MoCA) [18]67%0XXXXXX**5
MoCA computer tool [84]67%0XXXXXX**5
Quick Mild Cognitive Impairment (Qmci) Screen [85]67%0XXXXXX**4X
Rapid Dementia Screening Test [86] 267%7*1XXXX
Rowland Universal Dementia Assessment Scale (RUDAS) [87]63%0XXXXXX**2XX
Mini-Mental State Examination (MMSE) [17]60%0XXXXXX**5
Mini-Revised Kingston Standardized Cognitive Assessment [88]55%0XXXXXX**4X
Telephone Interview of Cognitive Status (TICS)—Modified [89] 54%0XXXXXX**4X
M-ACE [90]53%0XXXXXX**3XX
MoCA Basic Version [91]53%0XXXXXX**4X
Short and Sweet Screening Instrument [73]53%0XXXXXX**5
1 Fidelity represents the percentage of the instrument's total score that could be replicated using SHARE subtests. 2 Only verbal fluency was available. X not available. * available for a subset of participants. ✓ available for all.
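Fidelity as defined in footnote 1 of Table 4 is a simple proportion of replicable points. The sketch below illustrates the arithmetic; the 20-of-30-point MoCA split is purely a hypothetical example of how a 67% figure could arise, not the review's actual item mapping.

```python
def fidelity(points_replicable, points_total):
    """Fidelity as in footnote 1 of Table 4: the percentage of an
    instrument's total score that can be rebuilt from subtests
    available in SHARE, rounded to the nearest whole percent."""
    return round(100 * points_replicable / points_total)

# Hypothetical split: if 20 of the MoCA's 30 points mapped onto SHARE
# items, fidelity would be 67%, the level reported for the MoCA in Table 4.
print(fidelity(20, 30))  # -> 67
```

Instruments at 100% (e.g., the 10-CS, Mini-Cog, and SIS) are those whose every scored item has a SHARE counterpart, which is why they emerge as the leading candidates for replication.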