Article

The Efficacy of a Metacognitive Training Program in Amnestic Mild Cognitive Impairment: A 6-Month Follow-Up Clinical Study

by Grigoria Bampa 1,2,*, Despina Moraitou 1,2, Panagiota Metallidou 1, Elvira Masoura 1, Georgia Papantoniou 3,4, Maria Sofologi 3,4, Georgios A. Kougioumtzis 5,6 and Magdalini Tsolaki 2,7

1 Laboratory of Psychology, Department of Cognition, Brain and Behavior, School of Psychology, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
2 Laboratory of Neurodegenerative Diseases, Center of Interdisciplinary Research and Innovation (CIRI–AUTH), Balcan Center, Buildings A & B, 10th km Thessaloniki-Thermi, 54124 Thessaloniki, Greece
3 Laboratory of Psychology, Department of Early Childhood Education, School of Education, University of Ioannina, 45110 Ioannina, Greece
4 Institute of Humanities and Social Sciences, University Research Centre of Ioannina (URCI), 45110 Ioannina, Greece
5 Department of Turkish and Modern Asian Studies, National and Kapodistrian University of Athens, 15772 Athens, Greece
6 Department of Psychology, Neapolis University Pafos, 8042 Pafos, Cyprus
7 Greek Association of Alzheimer’s Disease and Related Disorders (GAADRD), 54643 Thessaloniki, Greece
* Author to whom correspondence should be addressed.
Healthcare 2024, 12(10), 1019; https://doi.org/10.3390/healthcare12101019
Submission received: 28 February 2024 / Revised: 23 April 2024 / Accepted: 30 April 2024 / Published: 14 May 2024

Abstract: This study was conducted in response to the increasing prevalence of Alzheimer’s disease (AD) dementia and the significant risk faced by individuals with amnestic mild cognitive impairment with multiple-domain deficits (aMCI-md). Given the promising effects of metacognitive training programs (MTPs), the primary aim of this study was to further explore their impact by assessing the maintenance of their benefits. Thus, 45 participants were randomly allocated to two groups: the Experimental group (n = 22), which received the metacognitive training program (MTP), and the Control group (n = 23), which received a cognitive exercises program (CEP). Both training programs comprised 10 individual one-hour sessions held once per week. To test the efficacy of the MTP, cognitive and metacognitive outcomes were compared between the two groups—Experimental (EG) and Control (CG)—at four distinct time points: before, immediately after, and 3 and 6 months after the intervention. Based on this study’s findings, the positive effects of the MTP were evident over a six-month period. Specifically, as early as three months post-training, the CG began to show a decline in training-related gains. In contrast, the EG’s performance consistently improved, highlighting the superior efficacy of the MTP. Gains attributed to the MTP were detected in cognitive measures (cognitive flexibility and immediate visual recall) as well as in metacognitive measures (metacognitive control, improved metacognitive beliefs about attention, and an increased use of cognitive strategies). In conclusion, the results demonstrated the sustained effects of the MTP on cognitive and metacognitive measures over a period of six months, providing novel insight into the application and efficacy of MTPs in individuals with MCI.

1. Introduction

Mild cognitive impairment (MCI) represents a transitional stage in the cognitive decline continuum, where individuals exhibit noticeable cognitive impairments beyond those expected for their age and education level but do not meet the criteria for dementia [1]. MCI is generally divided into two primary subtypes: amnestic MCI (aMCI), which is mainly characterized by memory deterioration, and non-amnestic MCI, where memory is preserved, but other cognitive functions (such as language, attention, or visuospatial skills) may be impaired [1]. People with aMCI, and particularly those with deficits in multiple domains beyond memory (aMCI-md), have an increased risk of progressing to Alzheimer’s disease (AD) [2,3]. The impact of MCI on a person’s life is multifaceted. The struggle with cognitive challenges often leads to the emergence of anxiety and depressive symptoms. These emotional disturbances can significantly affect individuals’ relationships and social life, thereby adversely impacting their overall quality of life [4,5]. Hence, effective interventions that not only delay the course of cognitive deterioration but also enhance overall quality of life are of primary importance.
Several reviews and meta-analyses have been conducted in order to evaluate the efficacy of cognitive training programs designed for this population [6,7,8,9,10,11,12]. The range of these programs is diverse, including both domain-specific and multiple-domain training programs (targeting various aspects like cognition, nutrition, physical activity, etc.). While results support the efficacy of multi-domain training with modest effects on cognitive performance and more significant effects on well-being [6,7,8,9], there is also an increasing interest in modern technology-based interventions. These include computerized [11] and virtual reality training programs [12,13] which have demonstrated promising effects on cognitive performance. However, questions remain regarding their generalizability and the long-term maintenance of effects.
Over the past few decades, there has been an increased scientific interest in metacognition and its role in MCI. This trend is driven by the critical role metacognition plays in controlling and regulating cognitive processes [14]. Metacognitive regulation draws from an individual’s accumulated knowledge about cognitive principles and their personal appraisals of their cognitive skills (metacognitive knowledge) [14,15,16]. It is also influenced by monitoring processes that mirror ongoing cognitive activities (metacognitive monitoring) and assist with decision-making and adjustments in support of these cognitive processes (metacognitive control) [15,17,18,19].
Findings have revealed that individuals with MCI are fairly accurate when assessing their general cognitive status [20,21,22]. Yet, compared to cognitively healthy older adults, they display less accuracy when monitoring their performance during task engagement [23,24,25,26]. Accordingly, in a previous work by our team, we found that, although people with MCI were aware of their struggles during task engagement, as reflected by their confidence ratings, they were less accurate in their decisions to choose between wrong and right responses than cognitively healthy individuals [27]. Further, previous evidence that people with MCI have difficulties with prospective, but not retrospective, memory self-monitoring [23] points to the need for training specific aspects of metamemory in MCI. Specifically, accurate Feeling of Knowing (FOK) ratings for future memory performance seem to be important not only for trying to access memory material but for employing memory strategies as well. FOK accuracy was found to be an important metamemory ability for episodic memory performance not only in individuals with MCI but also in patients with Parkinson’s disease with akinetic and rigidity-dominant motor symptoms [28].
Accurate monitoring has a significant impact on cognitive performance. As previous studies have highlighted, the over- or underestimation of one’s cognitive skills has negative consequences for the management of one’s deficits. Such inaccuracies can influence individuals’ motivation to undertake cognitively challenging situations, the effort they invest, and their application of compensatory strategies [29,30,31]. However, for individuals with MCI, the ability to accurately appraise their cognitive performance does not translate into effective deficit management. Studies have demonstrated that even though individuals with MCI may gain some degree of monitoring accuracy and acknowledge their cognitive difficulties, they often lack the necessary skills or knowledge to properly manage these difficulties [31,32,33]. Similarly, in a previous work by our team, it was found that individuals with MCI scored lower in metacognitive beliefs of efficacy, particularly in daily life scenarios involving memory and attention (divided and shifted). Despite these lower scores, they did not report using cognitive strategies more frequently than individuals with intact cognitive status [34]. Consequently, they need to acquire explicit knowledge on how to improve self-regulatory skills such as the effective allocation of time, the control of attention, the proper use of strategies or external aids, and the investment of effort and time to learn new information [35,36,37,38]. This underscores the significance of examining the impact of cognitive training programs based on metacognitive principles in individuals with MCI.
Hertzog et al. [39] presented a compelling approach that focuses on self-regulation with regards to everyday memory. Their findings emphasize the importance of personalized interventions, wherein everyone’s self-regulation skills are assessed to identify specific needs. Furthermore, the intervention should teach memory strategies and provide explicit knowledge on how, why, and when to utilize them effectively. Adequate feedback and encouragement are critical components to promote the successful transfer and application of these memory skills to everyday life. By adhering to these guidelines, memory enhancement interventions can have a more profound impact on older adults’ cognitive abilities, leading to improved overall well-being and autonomy.
Cognitive training programs that incorporate metacognitive aspects have been implemented across various populations, yielding significant benefits. In a systematic review conducted by our team [40], we investigated the effects of metacognitive training programs (MTPs) on adults, including older individuals and those with neurological conditions, presenting a total of nine studies that utilized a randomized controlled trial (RCT) design. Regarding older adults, existing research has shown that MTPs focusing on metamemory and the utilization of cognitive strategies can provide substantial benefits, including an enhanced understanding of how, why, and when to deploy cognitive strategies, along with improved self-regulatory processes, such as self-testing and study time allocation [33,35]. Furthermore, the role of self-efficacy—an individual’s confidence in their ability to perform cognitive tasks [41]—in learning how to effectively cope with cognitive difficulties cannot be overstated. Enhancing self-efficacy can motivate older adults to learn and apply more advanced cognitive strategies and improve their overall well-being [42,43]. Additional evidence supporting the metacognitive approach’s effectiveness comes from a recent meta-analysis conducted by Sella et al. (2023) on metacognitive interventions in older adults. Overall, the findings showed improvements in cognitive performance, particularly in memory, as well as positive effects on memory strategy use and subjective beliefs about memory efficacy. However, the long-term effects of MTPs still require more in-depth investigation. Future studies should incorporate follow-ups to determine whether the observed training-related gains persist over time [40,44].
Available studies testing the efficacy of cognitive training programs with metacognitive components in individuals with MCI are very limited, though they have demonstrated potential benefits. Moro et al. [45] conducted a study assessing the effects of a personalized cognitive training program focusing on executive functions’ enhancement. The training lasted six months and included metacognitive components such as metacognitive knowledge about cognitive strategies and bolstering metacognitive monitoring and regulation during homework assignments. The findings revealed significant improvements in memory and overall cognitive status, which lasted over a period of six months. Likewise, Youn et al. [46] applied a ten-session metamemory training (MMT) program using an RCT design in participants with MCI. The results showed positive effects of MMT on verbal memory and verbal fluency performance, along with a reduction in subjective memory complaints compared to a passive control group (waiting list). To our knowledge, this is the only study that tested the efficacy of an MTP designed for MCI using an RCT design. Nevertheless, no follow-up assessments were conducted to test the maintenance of these gains over time.
In conclusion, people with MCI face cognitive challenges that intrude upon their daily life, often leading to social and psychological distress and negatively impacting their overall well-being. Therefore, cognitive training programs need to be multidimensional, addressing all these areas of concern. Although emerging evidence highlights the potential of metacognitive-based training programs to address these diverse needs, studies focusing specifically on this population are limited. Moreover, an important question that remains to be addressed is the sustainability of these benefits over time. Hence, the present study aimed to assess the efficacy of a ten-session MTP for adults with aMCI-md over a period of six months, using an RCT design.

Aim and Hypotheses

This study was conducted in response to the increasing prevalence of AD dementia and the significant risk faced by individuals with aMCI-md. Given the promising effects of MTPs, the primary aim of this study was to further explore their impact by assessing the duration and maintenance of their benefits. Therefore, follow-up assessments were implemented at 3 and 6 months post-training. The impact of the MTP was tested on cognitive and metacognitive outcomes. To the best of our knowledge, this is the first study that investigates the effects of an MTP using online metacognitive measures. An RCT was employed that included an Experimental group (EG), which followed the metacognitive intervention, and an active Control group (CG), which followed a cognitive exercise program (CEP).
Based on the expected benefits of metacognitive training, we anticipated that the gains achieved by the EG would be sustained over the follow-up periods (3 and 6 months post-training), while any initially observed benefits in the CG were expected to fade over the same timeline.

2. Materials and Methods

2.1. Design

This study utilized a randomized longitudinal controlled design to compare two groups of MCI patients, matched in age, educational level, and gender: the Experimental group which followed the metacognitive intervention and the Control group which followed a cognitive intervention. Repeated measures were retrieved at four time points: (1) before the training (pre-training), (2) immediately after the training (post-training), (3) three months after the training (3-month follow-up), and (4) six months after the training (6-month follow-up). The main purpose of the repeated measures design was to assess whether metacognitive training vs. cognitive training can lead to longitudinal gains in terms of cognitive functioning improvement in MCI patients.

2.2. Participants

Based on a power analysis conducted using G*Power [47], a total sample size of 62 participants was recommended to detect an effect size of η2 = 0.15 with an alpha of 0.05 and a power of 0.80. In the current study, 50 aMCI patients agreed to participate in the training program. They were randomly divided into two equally sized groups, EG and CG, each containing 25 participants. However, attrition resulted in the completion of training sessions by 45 participants, as 5 individuals dropped out (3 from the EG and 2 from the CG). Due to the prolonged lockdown measures taken to suppress COVID-19, it was not feasible to enroll more participants. Consequently, our final sample comprised 15 men and 30 women with a mean age of 62.78 (SD = 6.24) years and a mean education of 13.11 (SD = 3.45) years. To participate in the study, individuals were required to be native Greek speakers, over the age of 50, and have a minimum of six years of education. They underwent an extended neuropsychological assessment in accordance with Petersen’s diagnostic criteria [3] and the DSM-5 [48].
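The a priori power analysis was run in G*Power; for readers who prefer a scriptable analogue, the minimal sketch below only illustrates how the reported effect size (η2 = 0.15) converts to Cohen’s f and how a required sample size can be solved for with statsmodels. The exact G*Power design options (number of repeated measurements, assumed correlation among them, nonsphericity correction) are not reported here, so this simplified between-groups approximation will not necessarily reproduce the recommended N of 62.

```python
# Sketch of an a priori power calculation (the original analysis used G*Power).
# Assumption: a simple two-group ANOVA; repeated-measures options are not modeled.
import math
from statsmodels.stats.power import FTestAnovaPower

eta_sq = 0.15                                # targeted effect size (eta-squared)
cohens_f = math.sqrt(eta_sq / (1 - eta_sq))  # convert eta^2 to Cohen's f (~0.42)

n_total = FTestAnovaPower().solve_power(
    effect_size=cohens_f, alpha=0.05, power=0.80, k_groups=2
)
print(f"Cohen's f = {cohens_f:.2f}, required total N ≈ {math.ceil(n_total)}")
```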
The neuropsychological evaluation was conducted at the Greek Association of Alzheimer’s Disease and Related Disorders by trained psychologists and included clinical scales and neuropsychological tests. Clinical Tools: the Geriatric Depression Scale [49,50], the Beck Depression Inventory [51], the Beck Anxiety Inventory [52], the Short Anxiety Screening Test [53,54], and the Neuropsychiatric Inventory [55,56] were used to test for affective disorders and/or neuropsychiatric symptoms. Cognitive Tools: for general cognitive status screening, the Mini Mental State Examination [57,58] and the Montreal Cognitive Assessment [59,60] were implemented. The Functional Cognitive Assessment [61] was used to measure executive functions in six daily activities. Standardized cognitive tests were also performed to assess memory, attention, executive functions, and language skills. Tsolaki et al. [62] provide a detailed description of all the neuropsychological tests employed. The Global Deterioration Scale (GDS) [63] was used to define the stage of the participants’ cognitive decline. Thus, based on the GDS, participants with no cognitive decline or impairments were assigned stage 1, while those with mild cognitive impairment (MCI) were designated stage 3. The present study was particularly focused on participants who displayed the amnestic subtype of MCI characterized by multiple deficits. Hence, if an individual’s memory and one or more additional cognitive domains were observed to be significantly below average (1.5 standard deviations below their age norm), they were classified as having aMCI-md [64].
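To make the classification rule concrete, the hypothetical sketch below applies it to a participant’s age-normed z-scores (memory plus at least one additional domain at or below 1.5 SD under the age norm); the domain names and function are illustrative and not part of the study protocol.

```python
# Hypothetical sketch of the aMCI-md rule described above:
# memory AND at least one other cognitive domain >= 1.5 SD below the age norm.
CUTOFF_Z = -1.5  # 1.5 standard deviations below the age-adjusted mean

def classify_amci_md(z_scores: dict[str, float]) -> bool:
    """z_scores maps domain name -> age-normed z-score (illustrative domains)."""
    impaired = {domain for domain, z in z_scores.items() if z <= CUTOFF_Z}
    return "memory" in impaired and len(impaired - {"memory"}) >= 1

# Example: memory and executive functions impaired -> classified as aMCI-md
participant = {"memory": -1.8, "attention": -0.9, "executive": -1.6, "language": -0.4}
print(classify_amci_md(participant))  # True
```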
The exclusion criteria were as follows: (a) a history of psychiatric disorder; (b) substance abuse or alcoholism; (c) a history of traumatic brain injury; (d) a history of neurological disorders (brain tumor, epilepsy, encephalitis, Parkinson’s disease, multiple sclerosis); (e) diabetes (type I and II); (f) cardiovascular diseases; (g) sensorimotor deficits that could interfere with study procedures; (h) vitamin B12 deficiency; and (i) a lack of memory deficits (naMCI).

2.3. Procedure

Participants were recruited from the “Agia Eleni” day care center of the Greek Association of Alzheimer’s Disease and Related Disorders and via Aristotle University of Thessaloniki. The recruitment process was assisted by undergraduate psychology students as part of their clinical internships. Potential participants who met the study’s eligibility criteria were invited to volunteer.
Upon receiving consent, the study’s neuropsychologist (first author) assessed the eligibility of each participant, providing them with an overview of the study, its objectives, and the procedures involved. Participants were informed that the testing process would be divided into two separate morning sessions, designed to eliminate the effects of fatigue. Each session was limited to one hour. During the initial meeting, participants were presented with written consent forms that explicitly stated the study’s objectives, assuring them of the strict confidentiality and privacy of their personal data in accordance with data protection guidelines. Participants were also informed about a potential 10-session training program that would be arranged at their convenience after the initial testing sessions, which would be followed by two post-training assessments (at three and six months after training’s completion) following the same format as the initial sessions. The participants did not receive details about the specific content of each training program, adhering to a single-blind procedure.
Due to extended lockdown restrictions for COVID-19 prevention, all procedures were conducted virtually through online platforms such as Skype, Zoom, Messenger, and Viber. As such, only participants with access to desktops, laptops, or tablets were eligible for this study, as mobile phones were deemed unsuitable due to their small screens. Participants in the EG were also asked to provide an email address, either their own or a relative’s, in order to receive the training-related materials. This information would be treated with the same strict confidentiality and privacy measures and used solely for the purpose of the study.
In addressing external factors, participants were instructed to be in a quiet room without external distractions, ensuring no other individuals were present and that televisions were turned off while phones were set to silent mode. These arrangements were thoroughly discussed with participants in advance of the testing procedure and programs’ initiation, with provisions made for assistance from a relative if needed. While the control of these external factors ultimately rested with the participants, it was assumed that they would adhere to the prescribed arrangements. Overall, compliance with these guidelines was largely observed, with only minor deviations noted.

2.4. Cognitive Measures

2.4.1. Wisconsin Card Sorting Test—64 Card Version (WCST-64)

The WCST-64 [65] is a compact version of the original test [66,67], consisting of 64 sorting cards based on color, shape, and number, compared to the original’s 128 cards. This test evaluates cognitive flexibility, the ability to switch cognitive sets, and the capacity to utilize feedback in problem-solving tasks. Despite its abbreviated length, the WCST-64 preserves the principal structure and administration guidelines of the original, prompting participants to match cards according to undisclosed, changing rules and offering feedback to influence their responses. The WCST-64 maintains robust psychometric characteristics, such as significant test–retest reliability [68,69,70] and construct validity, demonstrated by its ability to identify frontal lobe dysfunction [71] and its correlations with other executive functioning measures [72]. The WCST-64 serves as an efficient alternative to the full-length WCST, maintaining its diagnostic worth and adaptability across different population groups and clinical settings [73].
Some of the key scores derived from the WCST include the following [74]:
(1) Total Correct: the total number of correct responses given by the participant throughout the test. A higher score indicates better performance and cognitive flexibility.
(2) Total Errors: the total number of incorrect responses given by the participant throughout the test. A lower score indicates better performance and fewer mistakes made.
(3) Perseverative Responses: the number of times the participant sticks to a previously correct sorting rule even after it has become invalid. A lower score indicates better cognitive adaptability and readiness to alter rules.
(4) Perseverative Errors: the number of errors made by the participant due to continuously applying an incorrect rule or strategy, even after receiving feedback that it is no longer applicable. A lower score indicates better cognitive adaptability and responsiveness to new information.
(5) Non-Perseverative Errors: the number of incorrect responses that are not perseverative in nature. A lower score indicates better performance in terms of adaptability and problem-solving.
(6) Categories Completed: the number of categories (out of a possible six) that the participant successfully completed during the test. A higher score indicates better cognitive flexibility and abstract reasoning.
(7) Trials to Complete First Category: the number of trials needed for the participant to successfully complete the first category. A lower score indicates a quicker understanding of the sorting rules and more efficient problem-solving skills.
(8) Failure to Maintain Set: the number of times the participant failed to maintain a correct sorting rule after successfully applying it for a few consecutive trials. A lower score indicates better cognitive stability and consistency in applying learned rules.

2.4.2. Doors and People

Doors and People [75] is a test designed to assess memory function and includes four subtests, each of which assesses a different aspect of memory: People, Doors, Figures, and Names. This test has been adapted and validated in the Greek population [76]. It is a reliable tool with ecological validity and satisfactory internal consistency (Cronbach’s α = 0.80).
The People subtest evaluates verbal recall, both immediate (three trials) and delayed, by presenting a list of names and later asking the participant to recall them. The stimuli specifically consist of photos of four individuals, with their names and professions displayed below. Each picture is displayed for 3 s, and the character’s name and job are spoken aloud [e.g., This is a doctor. His name is Hλίας Τσακίρης (Elias Tsakiris)]. This process is carried out until all four names are accurately remembered (with a maximum of three attempts). Participants are then asked to recall this information immediately after the presentation and again after a 5–10 min delay. A single point is awarded for each correctly remembered first and last name, with an additional point given for each correct pairing. The total score is computed by adding the scores from each trial (score range: 0–36) [77].
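As a worked example of the scoring rule just described (one point per correctly recalled first name, one per surname, and one per correct pairing, summed over up to three learning trials for a 0–36 range), the following hypothetical sketch tallies two trials; the data structure and field names are illustrative only.

```python
# Hypothetical sketch of People subtest scoring as described in the text.
def score_people_trial(responses: list[dict]) -> int:
    """responses: one dict per target person, e.g.
    {"first_correct": True, "last_correct": True, "paired_correct": True}"""
    trial_score = 0
    for person in responses:
        trial_score += int(person.get("first_correct", False))   # first name recalled
        trial_score += int(person.get("last_correct", False))    # surname recalled
        trial_score += int(person.get("paired_correct", False))  # correct pairing
    return trial_score

trials = [
    [{"first_correct": True, "last_correct": True, "paired_correct": True}] * 4,   # trial 1
    [{"first_correct": True, "last_correct": False, "paired_correct": False}] * 4, # trial 2
]
print(sum(score_people_trial(t) for t in trials))  # 12 + 4 = 16 (out of a possible 36)
```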
The Doors subtest evaluates visual recognition by presenting images of doors and subsequently asking the participant to pick out the previously viewed doors from a set of new ones. Specifically, participants are exposed to 24 images of doors, divided into two sets (one easy set and one hard set). Following the presentation, they are asked to identify the previously seen door from a set of four doors (three distractors and the target door). In the first set (Part A), the distractors are varied types of doors in comparison to the target door (for instance, a garage door, a German door, a front door), whereas in the second set (Part B), the distractors are from the same category of doors as the target (for example, all stable doors). One point is given for each correct answer, and the total score is computed by adding the scores in each set (score range: 0–24) [77].
The Figures subtest evaluates immediate and delayed visual recall. This is accomplished by presenting a set of figures to the participants and subsequently asking them to draw as many as they can recall. Participants are shown four line-drawn crosses and asked to draw them immediately after the presentation and again after a 5–10 min delay. The figures are presented until the participant can accurately reproduce them (with a maximum of three attempts). Each accurately drawn shape earns three points, and the total score is computed by adding the scores from each trial (score range: 0–36) [77].
Finally, the Names subtest evaluates verbal recognition by displaying a list of names and then asking the participant to recall which names were shown previously. Participants are given twenty-four names (including both first and last names), separated into two groups (an easy set and a hard set). Each name is presented for 3 s, and participants are instructed to read them out loud. After the presentation, they are asked to identify the previously shown name from a set of four names (three distractors and the target name). The second set (Part B) consists of names where distractors closely resemble the target name. One point is given for each correct identification, and the total score is computed by adding the scores in each set (score range: 0–24) [77].

2.5. Metacognitive Measures

For the aim of the present study, the two cognitive tests—the Wisconsin Card Sorting Test and the Doors and People test—were used in a metacognitive version [78,79]. Following each response, participants were instructed to answer two questions: (1) “What is your degree of confidence in the correctness of this answer?” (feeling of confidence appraisals), and (2) “Would you like your response to be included in the total score?” (decisions on whether or not to volunteer a response to maximize final score performance, as a measure of metacognitive control). Participants replied on a 4-point Likert scale (1 = not at all certain, 4 = completely certain) for the first question and in a Yes/No format for the second question. It was specified that a correct “yes” answer would increase their score by one point, an incorrect “yes” answer would subtract a point, and a “no” answer, regardless of its accuracy, would leave their score unchanged.
Based on the two metacognitive questions, four metacognitive variables were extracted: (1) the relative confidence ratings (on a scale from 1 to 4), (2) the accuracy score (calculated as the ratio of correct volunteered responses, i.e., correct “yes,” compared to total volunteered responses, i.e., total “yes.” This variable reflects the reliability of a person’s responses and depends on monitoring and control procedures), (3) global monitoring (this refers to an individual’s ability to estimate their overall performance on a task. It was calculated as the difference between the total correct responses, i.e., actual performance, and the total volunteered responses, i.e., total “yes.” Scores below zero indicate overconfidence, while scores above zero suggest underconfidence), and (4) wrong “yes” (the quantity of incorrect volunteered responses, where fewer numbers suggest a more careful decision-making approach, and larger numbers indicate a riskier approach).
Metacognitive ability was determined using the ratio of the mean feeling of confidence to the cognitive score. This ratio establishes the relationship between reported confidence and actual performance. The feeling of confidence ranges from 1 (lowest) to 4 (highest), whereas the cognitive score (the number of correct responses divided by total items) ranges from 0 (no correct answers) to 1 (all answers correct). Thus, a calibration score of 4 signifies perfect alignment between confidence and performance. Scores below 4 suggest underconfidence, while scores above 4 indicate overconfidence.
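Because these derived variables are defined verbally above, a minimal sketch may help fix the definitions: the accuracy score, global monitoring, number of wrong “yes” responses, and the confidence/performance calibration ratio are computed below from hypothetical trial-level records (field names are illustrative, not the study’s actual data format).

```python
# Hypothetical sketch of the online metacognitive variables defined in the text.
# Each trial record holds: correct (bool), volunteered (bool, the yes/no decision),
# and confidence (1-4 Likert rating).
def online_metacognition(trials: list[dict]) -> dict:
    n_items = len(trials)
    total_correct = sum(t["correct"] for t in trials)
    total_yes = sum(t["volunteered"] for t in trials)
    correct_yes = sum(t["correct"] and t["volunteered"] for t in trials)
    mean_confidence = sum(t["confidence"] for t in trials) / n_items
    cognitive_score = total_correct / n_items  # proportion correct, 0-1
    return {
        # reliability of volunteered responses: correct "yes" / total "yes"
        "accuracy_score": correct_yes / total_yes if total_yes else float("nan"),
        # total correct minus total "yes": <0 overconfidence, >0 underconfidence
        "global_monitoring": total_correct - total_yes,
        # number of incorrect volunteered responses (riskier decision-making)
        "wrong_yes": total_yes - correct_yes,
        # calibration ratio: 4 = perfect alignment, <4 under-, >4 overconfidence
        "metacognitive_ability": mean_confidence / cognitive_score if cognitive_score else float("nan"),
    }

example = [
    {"correct": True, "volunteered": True, "confidence": 3},
    {"correct": False, "volunteered": True, "confidence": 4},
    {"correct": True, "volunteered": False, "confidence": 2},
]
print(online_metacognition(example))
```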

2.5.1. Metacognitive Knowledge for Everyday Memory (MKEM)

MKEM [80] is a 12-item self-report scale developed to assess older adults’ metacognitive beliefs about everyday life scenarios related to memory function. For each scenario, participants were requested to report their degree of efficacy on a 4-point Likert scale, ranging from 1 “not at all” to 4 “very well” (example: Imagine that you want to tell a story that you read earlier in a book or in a newspaper. How well do you manage to remember the details of that story, such as names, the place, and time?). The scale has a one-factor structure and high internal reliability (α = 0.88).

2.5.2. Metacognitive Knowledge for Everyday Attention (MKEA)

MKEA [80] is a 12-item self-report scale developed to assess older adults’ metacognitive beliefs about everyday life scenarios related to attention. For each scenario, participants were requested to report their degree of efficacy on a 4-point Likert scale, ranging from 1 “not at all” to 4 “very well” (example: Imagine that you are at the bank, and you are waiting for your number to appear on the announcement table. How well do you manage to stay focused so that you do not lose your turn when your number appears?). The scale has a two-factor structure, with factors reflecting “Divided and Shifted Attention” (α = 0.74) and “Concentration” (α = 0.75).

2.5.3. Metacognitive Knowledge for Everyday Executive Functions (MKEEFs)

MKEEFs [80] is a 10-item self-report scale developed to assess older adults’ metacognitive beliefs about everyday life scenarios related to executive functions. For each scenario, participants were requested to report their degree of efficacy on a 4-point Likert scale, ranging from 1 “not at all” to 4 “very well” (example: Imagine that you have planned to go on a walk with a friend, but it starts raining. How well do you manage to think of an alternative plan considering the weather (i.e., sit in a cafeteria)?). The scale has a two-factor structure, with factors reflecting “Planning” (α = 0.70) and “Inhibition” (α = 0.65).

2.5.4. Multifactorial Memory Questionnaire—Strategies Subscale (MMQ)

The MMQ [81] is a self-report scale comprising 57 items, developed to evaluate metamemory in older adults. It incorporates three subscales: (1) Contentment, which measures one’s sentiment towards their memory; (2) Ability, which measures an individual’s assessment of their memory skills; and (3) Strategies, which measures the frequency of an individual’s use of cognitive strategies. This tool exhibits robust psychometric characteristics. A Greek version of the scale, translated and slightly modified, is made available by Evdokia Emmanouilidou, Nikoleta Fratzi, and Despina Moraitou. The document can be found at https://www.baycrest.org/Baycrest_Centre/media/content/MMQ-Greek.pdf (accessed on 10 January 2023). For the purposes of the present study, we only used the Strategies subscale, which is a 19-item self-report questionnaire. The responses were given on a 5-point Likert scale ranging from 0 (never) to 4 (always) with higher scores indicating a more frequent use of strategies (example: How often do you make a list, such as a grocery list or a list of things to do?).
The scale has a two-factor structure (structural validity and internal reliability previously tested by our team using the responses of 100 participants). The MMQ—Strategies subscale was assessed in a previous work of ours, and a two-factor solution emerged as the best fit. The first factor “Simple Strategies” (α = 0.78) reflected the utilization of external aids or the implementation of simple cognitive regulatory processes, such as organizing information. The second factor “Complex Strategies” (α = 0.73) reflected more intricate and demanding cognitive strategies that necessitate greater effort and involve complex information processing, such as mental imagery and story creation.

2.6. Interventions

2.6.1. Metacognitive Training Program (MTP)

The content and structure of the MTP were designed based on existing research concerning metacognition and aging, as well as on training programs specifically targeted at improving metacognition in older adults [35,37,38,82]. Recognizing the documented deficits in metacognitive accuracy and the need for effective cognitive strategies in MCI, we developed a program aimed at enhancing metacognitive knowledge and the processes of metacognitive monitoring and control [40,44]. Moreover, given that MCI, particularly aMCI-md, impacts not just memory but also other cognitive domains, our MTP does not solely focus on memory [1]. It includes information and practical exercises related to attention and executive functions. Despite the impairment in other cognitive domains, it is vital for participants to gain knowledge about these areas to identify their own cognitive strengths and weaknesses, considering the interrelated and non-independent nature of cognitive functions. Before launching the study, the MTP was pilot-tested with five volunteers diagnosed with MCI to address practical considerations such as session duration, the clarity and comprehension of the content and exercises, the number of exercises, and the level of difficulty. The feedback from this pilot application was used to make the necessary adjustments to the program.
The MTP included 10 individual sessions that were carried out on a weekly basis. The duration of each session was approximately one hour. The content of each session varied, covering the following topics: memory, attention, executive functions, and cognitive strategies (see Table 1 for the detailed presentation of each session). Regarding cognitive strategies, four sessions were devoted to gaining practical experience and guidance on how to use them in different daily life tasks. Thus, an introductory session was conducted in which various strategies were presented and discussed, emphasizing the importance of using such strategies. Furthermore, the significance of evaluating a strategy’s effectiveness through self-testing and making the necessary adjustments was highlighted [35,37]. Three sessions then followed, focusing on the application and practice of four specific strategies: mental imagery, story creation, paired association (verbal and visual), and categorization. These strategies were chosen based on existing scientific evidence supporting their effectiveness and their applicability across various contexts and types of information [83,84,85,86].
The training material was delivered through PowerPoint presentations, employing clear, simple language while still including specific terminology such as episodic memory, short-term memory, and executive functions. This approach allowed for a comprehensive coverage of the topic at hand. The presentations were shared with participants through screen-sharing, allowing the study’s neuropsychologist to simultaneously explain the relevant material. Participants were actively encouraged to engage with the content by asking questions and sharing personal experiences related to the material. This interactive approach helps to promote a deeper understanding of the various concepts being discussed [84]. Through this methodology, we ensured that participants were not only recipients of knowledge but also active contributors to the learning process.
Between the sessions, participants were provided with homework assignments relevant to the topic of the preceding session, which were delivered via email. The purpose of these assignments was to reinforce metacognitive monitoring and regulatory skills through practical exercises [35,45,84]. At the beginning of each subsequent session, time was allocated to review the completed assignments, address any remaining questions, and facilitate a thorough understanding of the material. Then, the material of the new session was presented and discussed. At the end of each session, the new homework assignments were presented and explained to ensure that participants understood what they had to do.
The homework assignments were carefully designed to reflect the topic of each session and to integrate real-world tasks. For example, participants might be given a list of words or a story to learn and later recall, as a homework assignment related to the session on memory. In the session on attention, they might be tasked with counting how many advertisements are shown during a TV program. In the session addressing executive functions, participants were given a map of Athens’ public transportation and were instructed to navigate it to solve a series of tasks. To give an example, “Imagine you are in Ampelokipoi and you want to go to Larissa Station. However, the M2 metro line (red line) is closed from Neos Kosmos to Metaxourgio. Describe what you would do to reach your destination.”, and so on. These exercises incorporated real-world tasks to make the training as practical and applicable to daily life as possible.
To reinforce self-monitoring and self-regulation, after each exercise, participants were instructed to self-test by noting down their score and responding to certain questions [35,45]. Examples of these questions include the following:
  • “How easy was it for you to maintain your attention on the task at hand?”
  • “Did you have any difficulties in maintaining your attention? If yes, what were they?”
  • “Write down what you did in order to learn the story.”
  • “Of the two strategies you used, which one was more effective?”
  • “Think and describe other situations in which you could use the strategy that helped you the most. Provide 2 or 3 examples.”

2.6.2. Cognitive Exercises Program (CEP)

A training program of equal duration, also conducted through individual sessions, was provided to the CG. In this program, participants were given a variety of vocabulary and verbal fluency exercises for cognitive practice. These exercises were displayed via screen-sharing. Instructions were clearly explained, and at the end of each session, participants were provided with the correct answers to cross-check their responses. Participants were advised to have paper and a pen/pencil ready to solve the exercises. Unlike the MTP, the program provided to the CG did not involve any discussion about cognitive functions, age-related deficits, or the use of strategies. Their sessions were purely focused on completing cognitive exercises.

2.7. Statistical Analysis

The statistical analysis was conducted using IBM SPSS Statistics Version 27 (IBM Corp. Released 2020. IBM SPSS Statistics for Macintosh, Version 27.0. Armonk, NY: IBM Corp). The analyses carried out were (a) mixed-measures ANOVAs and (b) Kruskal–Wallis non-parametric tests. The aim of these analyses was to compare the performance between the two groups as well as the performance of each group at different time points. Partial eta-squared (η2) was used to estimate effect sizes. The threshold for statistical significance was set at p < 0.05.
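The analyses were run in SPSS; as a rough, scriptable analogue for readers working in Python, the sketch below shows a mixed-design ANOVA with partial η2 and sphericity correction, followed by Bonferroni-corrected pairwise comparisons, using the pingouin package. The long-format file and column names (“subject”, “group”, “time”, “score”) are hypothetical, not the study’s actual dataset.

```python
# Sketch of an SPSS-equivalent mixed-design ANOVA in Python (pingouin);
# the CSV file and column names are hypothetical.
import pandas as pd
import pingouin as pg

df = pd.read_csv("outcomes_long.csv")   # one row per participant x time point

aov = pg.mixed_anova(
    data=df, dv="score",                # e.g., WCST total correct responses
    within="time",                      # pre, post, 3-month, 6-month
    between="group",                    # EG vs. CG
    subject="subject",
    correction=True,                    # sphericity correction for the within factor
    effsize="np2",                      # partial eta-squared, as reported in the paper
)
print(aov)

# Bonferroni-corrected pairwise comparisons across time points (cf. Table 3)
posthoc = pg.pairwise_tests(
    data=df, dv="score", within="time", between="group",
    subject="subject", padjust="bonf",
)
print(posthoc)
```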

2.8. Ethics

The study’s purpose was communicated to the participants both verbally and in writing, and they were assured that their data would be kept confidential. They gave their written consent, acknowledging that their participation was voluntary and that they could opt out at any moment. Demographic information, such as age, gender, and education, was collected in accordance with European Union data protection law (GDPR, in effect since 25 May 2018), which allows the use of sensitive personal data for research purposes. Participants were informed, and consented, that upon a written request their data could be removed from the online database. The research protocol was approved by the Scientific and Ethics Committee of the Greek Association of Alzheimer’s Disease and Related Disorders (Approval Code: 29/15-02-2017), in compliance with the guidelines of the Helsinki Declaration.

3. Results

A univariate analysis of variance (ANOVA) was conducted to examine whether the two groups differed in age (in years) and years of education. The statistical analysis revealed no significant differences between the two groups for age, F(1, 43) = 0.26, p = 0.61, or for years of education, F(1, 43) = 0.83, p = 0.37. In addition, a chi-square analysis regarding gender distribution showed that there were also no statistically significant differences between the groups, χ2 (1) = 0.18, p = 0.67. Therefore, the two groups were matched in age, education, and gender distribution (see Table 2).
Mixed-design 4 × 2 ANOVAs (representing four time measurements × two groups, EG, CG) were conducted to examine the impact of the metacognitive training program on each of the cognitive performance variables and metacognitive outcomes. The study involved a total of 45 participants. However, four participants (two from each group) did not complete the post-training phase. Also, one participant from the CG withdrew after the post-training measurement for health reasons. The scores of these individuals were excluded from the ANOVAs. Thus, the mixed-design ANOVA analyses were conducted with 39 participants.

3.1. Effects of Metacognitive Training on Cognitive Performance

Cognitive performance was assessed through eight variables for the WCST: total correct responses, total errors, perseverative errors, non-perseverative errors, perseverative responses, categories completed, trials to complete the first category, and failure to maintain category, along with six variables for the Doors and People test: people, doors, figures, names, delayed verbal recall, and delayed visual recall. The within-subjects factor in all the mixed-design ANOVAs was the measurement time point, with four levels: pre-training, post-training, and follow-ups at 3 and 6 months; while the between-subjects factor was the group, with two levels: Experimental and Control groups (EG, CG).
For the WCST scores, the results revealed a significant main effect of measurement on the total correct responses score, F(1.96, 72.38) = 7.79, p = 0.002, η2 = 0.15, total errors, F(1.96, 72.33) = 7.87, p < 0.001, η2 = 0.18, non-perseverative errors, F(1.84, 67.98) = 15.19, p < 0.001, η2 = 0.29, and categories completed, F(2.39, 88.53) = 4.85, p = 0.007, η2 = 0.12, indicating improvement in these variables across the four time points. However, the main effect of the group and the measurement x group interaction were not significant for these variables, suggesting no significant differences between the two groups. Nevertheless, as Figure 1 indicatively shows, a clear trend emerges in the third and fourth measurements. Specifically, in Figure 1a, one can observe consistent improvement in the EG’s performance over time, whereas the CG’s performance stays stable. In Figure 1b, the performance of the EG continues to rise as the number of perseverative responses decreases, while the CG’s performance consistently declines as the number of perseverative responses grows. These observations suggest lasting effects of the MTP.
A significant interaction effect was only found on perseverative errors, F(2.40, 88.85) = 3.54, p = 0.03, η2 = 0.09, suggesting that the patterns of change in perseverative errors over time differed between the two groups. However, the effect size was small (η2 = 0.09), and no significant main effect was found for measurement, F(2.40, 88.85) = 0.26, p = 0.81, or group, F(1, 37) = 1.50, p = 0.23. Yet, as shown in Figure 2, the EG’s performance consistently improved over time through decreasing perseverative errors, while the CG exhibited a progressive decline (i.e., increasing errors) in performance on this measure. Again, the discrepancy between the two groups becomes more noticeable at the 6-month follow-up.
Bonferroni pairwise comparisons revealed that both groups—Experimental and Control—performed better at the 6-month follow-up compared to baseline (pre-training) in the following scores: total correct, total errors, non-perseverative errors, and categories completed (a detailed presentation of these results can be found in Table 3).
Moving on to the Doors and People scores, the results from the respective mixed-measures ANOVAs revealed a significant main effect of the time of measurement on people, F(2.67, 98.68) = 17.60, p < 0.001, η2 = 0.32, doors, F(2.88, 106.69) = 22.96, p < 0.001, η2 = 0.38, figures, F(2.05, 67.98) = 6.57, p = 0.002, η2 = 0.15, names, F(2.61, 96.50) = 9.12, p < 0.001, η2 = 0.20, and delayed verbal recall, F(2.76, 102.05) = 8.21, p < 0.001, η2 = 0.18, indicating an improvement in these variables across the four time points. Neither a main effect of group nor an interaction effect (measurement x group) was detected in any of these analyses, suggesting no significant differences between the two groups. However, once again, Figure 3 indicatively illustrates a trend favoring the EG in immediate verbal recall. Specifically, the EG’s ascending performance persists at the 6-month follow-up, while the CG begins to exhibit a decline after the 3-month follow-up.
Pairwise comparisons revealed that both—Experimental and Control—groups performed better after the interventions (at post-training, at 3-month follow-up, and at 6-month follow-up) compared to baseline (pre-training) in immediate verbal (People) and visual (Figures) recall and in visual recognition (Doors). Also, significant improvements for the two groups were detected in verbal recognition (Names) and in delayed verbal recall (People-Delayed) at the 3- and 6-month follow-up compared to pre-training (a detailed presentation of these results can be found in Table 3).

3.2. Effects of Metacognitive Training on Metacognitive Outcomes

Metacognition was assessed through four online metacognitive variables: monitoring accuracy, global monitoring, wrong yes, and the feeling of confidence, along with seven offline metacognitive variables: MMQ—Complex Strategies; MMQ—Simple Strategies; MKEM; MKEA—Divided and Shifted Attention; MKEA—Concentration; MKEEFs—Inhibition; and MKEEFs—Planning. For each time point, a mean score for each aspect of the offline metacognitive measures was calculated. Also, an average score was calculated for each online metacognitive measure (monitoring accuracy, global monitoring, ‘wrong yes’, and the feeling of confidence). Thus, each resulting score represents the mean value across all tasks at that time point, providing an overall view of participants’ metacognitive performance over time. Again, as regards the mixed measures ANOVAs, the within-subjects factor was the measurement time point, with four levels: pre-training, post-training, and follow-ups at 3 and 6 months; while the between-subjects factor was the group, with two levels: Experimental and Control.
For the online metacognitive measures, the results revealed a significant main effect of the time of measurement on all three metacognitive control variables—monitoring accuracy, F(2.47, 91.33) = 10.77, p < 0.001, η2 = 0.23, global monitoring, F(2.24, 82.85) = 8.57, p < 0.001, η2 = 0.20, and wrong yes, F(2.35, 86.90) = 20.01, p < 0.001, η2 = 0.35—as well as on metacognitive ability, F(2.15, 79.44) = 5.92, p = 0.003, η2 = 0.14, indicating an improvement in these variables across the four time points for both groups. A main effect of group or an interaction effect (measurement x group) was not detected, suggesting no significant differences between the two groups. However, as indicatively shown in Figure 4, the EG’s ascending performance regarding overall global monitoring continues at the 6-month follow-up, while the CG begins to exhibit a decline three months after training, suggesting the long-term impact of the MTP on these measures as well.
Likewise, as illustrated in Figure 5, participants exhibited an improved metacognitive ability score following the training programs, leading towards a better alignment between their confidence and actual performance; a trend emerges after the 3-month follow-up. Specifically, the EG showed a consistent pattern of improvement, gradually nearing a score of 4 which signifies a perfect alignment between confidence and actual performance, whereas the CG displayed a rising trend (above 4) indicative of increasing overconfidence.
Pairwise comparisons for online metacognitive measures revealed significant differences between pre-training and the subsequent three measures (post-training, 3-month follow-up, and 6-month follow-up) concerning metacognitive control measures. Additionally, significant variations were observed between the initial assessment and the 3-month as well as 6-month follow-up assessments for metacognitive ability. These results indicated that improvements occurred in both groups (Experimental and Control) following the interventions (a detailed presentation of these results can be found in Table 3).
Regarding the offline metacognitive measures, the results revealed a significant main effect of the time of measurement on MMQ—Complex Strategies, F(2.71, 100.11) = 10.23, p < 0.001, η2 = 0.22, and on MKEM, F(2.93, 108.66) = 4.31, p = 0.007, η2 = 0.10. These results indicate significant changes in these measures across the four time points. A main effect of group or an interaction effect (measurement x group) was not detected, suggesting no significant differences between the two groups.
Pairwise comparisons for the MMQ—Complex Strategies showed that the EG and CG reported an increased use of complex strategies at post-training and at the 3-month and 6-month follow-up compared to pre-training, while for MKEM, they did not reveal any significant difference between specific time points (a detailed presentation of these results can be found in Table 3).
Nevertheless, a significant main effect of the time of measurement, F(2.61, 96.56) = 3.61, p = 0.021, η2 = 0.09, and a significant measurement x group interaction effect, F(2.61, 96.56) = 4.57, p = 0.007, η2 = 0.11, were detected on MKEA—Divided and Shifted Attention, indicating that the patterns of change over time varied between the two groups. However, no significant main effect of the group variable was detected, F(1, 37) = 0.48, p = 0.50. Also, pairwise comparisons did not reveal any significant result between specific time points.
Given the study’s limited statistical power and the small number of participants in each group, together with the clear trends described in the figures, the mixed ANOVAs could not reveal exactly what happens to each group’s performance at the third and fourth measurements (for additional graphs, see Appendix A). Hence, to examine this, we proceeded with non-parametric Kruskal–Wallis tests to ascertain any potential differences between the two groups at the different times of measurement. The results are presented below.

3.3. Group Differences at Each Time of Measurement after the Training Sessions

The Kruskal–Wallis test, a non-parametric approach, was utilized to compare the Experimental and Control groups at post-training and 3-month and 6-month follow-up measurements. The group was identified as the independent variable, while each cognitive and metacognitive outcome was defined as a dependent variable. The Kruskal–Wallis test can be applied separately to each time point. This means that even if a participant’s data are missing for some time points, their data from other time points can still be used in the analysis. Given its compatibility with small sample sizes, the Kruskal–Wallis test was considered suitable for our study.
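Because the test is applied separately at each time point, participants with missing data at one visit can still contribute at the others. A minimal sketch of this per-timepoint, per-outcome comparison is shown below; the file and column names are hypothetical, not the study’s actual dataset.

```python
# Sketch of per-timepoint Kruskal-Wallis comparisons between EG and CG.
import pandas as pd
from scipy.stats import kruskal

df = pd.read_csv("outcomes_long.csv")  # hypothetical columns: subject, group, time, outcome, value

for outcome, per_outcome in df.groupby("outcome"):
    for time_point in ["post", "3-month", "6-month"]:
        # keep only participants with data at this time point
        subset = per_outcome[per_outcome["time"] == time_point].dropna(subset=["value"])
        eg = subset.loc[subset["group"] == "EG", "value"]
        cg = subset.loc[subset["group"] == "CG", "value"]
        if len(eg) and len(cg):
            h_stat, p_value = kruskal(eg, cg)
            print(f"{outcome} @ {time_point}: H = {h_stat:.2f}, p = {p_value:.3f}")
```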

3.3.1. Post-Training

Following the training, it was revealed that the EG used compensatory strategies more frequently than the CG. Specifically, there was a statistically significant difference in MMQ—Complex Strategies, H(1, 42) = 9.66, p = 0.002, between the rank scores of the EG (27.36) and the CG (15.64). Also, a significant difference was detected in MMQ—Simple Strategies, H(1, 42) = 4.38, p = 0.036, between the rank scores of the EG (25.45) and the CG (17.55) (see Figure 6). No other significant differences between the groups were found post-training.

3.3.2. Three-Month Follow-Up

At the 3-month follow-up, the EG demonstrated superior performance to the CG in terms of the WCST’s score for perseverative responses, H(1, 43) = 4.33, p = 0.038. Group differences were also significant in metacognitive outcomes. The EG demonstrated superior monitoring accuracy in DnP—Doors in comparison to the CG, H(1, 43) = 4.43, p = 0.035. Additionally, the EG reported a higher frequency of complex strategy use, H(1, 43) = 6.12, p = 0.013, and improved metacognitive beliefs of efficacy regarding attention compared to the CG: MKEA—Divided and Shifted Attention, H(1, 43) = 6.85, p = 0.009, and MKEA—Concentration, H(1, 43) = 6.36, p = 0.012. No other significant group differences were detected at the 3-month follow-up. To facilitate a comprehensive review of these findings, the complete dataset is available in Table 4, and for a visual representation of the difference, refer to Graphs a-e (Appendix B).

3.3.3. Six-Month Follow-Up

Six months after training, the EG showed significant improvement over the CG. This was evident in the cognitive outcomes, such as the WCST scores for total correct, H(1, 43) = 7.19, p = 0.007, perseverative errors, H(1, 43) = 7.69, p = 0.006, and perseverative responses, H(1, 43) = 10.75, p = 0.001, as well as the DnP results for immediate figure recall, H(1, 43) = 4.05, p = 0.044.
Similarly, there were significant differences in the metacognitive control outcomes between the two groups. Upon an initial examination of the overall scores for each variable for metacognitive control, as described in the mixed ANOVAs, we identified significant differences between groups. These were evident in the overall monitoring accuracy, H(1, 43) = 4.89, p = 0.027, overall global monitoring, H(1, 43) = 7.42, p = 0.006, and overall ‘wrong yes’ responses, H(1, 43) = 9.70, p = 0.002. Then, we conducted a further exploration of online metacognitive variables for each test. Thus, upon closer examination, we found that the EG demonstrated significantly better outcomes than the CG in online metacognitive measures associated with the WCST: monitoring accuracy, H(1, 43) = 11.05, p < 0.001, global monitoring, H(1, 43) = 9.12, p = 0.003, and ‘wrong yes’ responses, H(1, 43) = 11.59, p < 0.001. Furthermore, a significant difference was detected regarding the frequency of simple strategy use, where the EG reported a higher frequency compared to the CG, H(1, 43) = 4.10, p = 0.043. To facilitate a comprehensive review of these findings, the complete dataset is available in Table 5, while the differences are visually depicted in Graphs a–k (Appendix C).

4. Discussion

The aim of the present study was to longitudinally assess the efficacy of an MTP in people with aMCI. To do so, cognitive and metacognitive outcomes were compared between two groups, Experimental and Control, at four time points: before, immediately after, 3 months after, and 6 months after the intervention. Additionally, based on recent findings indicating that individuals with MCI often struggle to accurately monitor their performance during active cognitive processes, we aimed to determine whether the MTP could yield sustained positive effects on online metacognitive outcomes over time.
Regarding cognitive outcomes, the results revealed that both the CG and the EG showed improvements after the training sessions in the memory and executive function tasks. However, significant differences emerged between the two groups at the 3-month follow-up, with the EG making fewer perseverative responses on the WCST compared to the CG. The differences became more pronounced at the 6-month follow-up, where the EG outperformed the CG in the WCST scores for total correct responses, perseverative errors, and perseverative responses, indicating that the applied MTP bolsters cognitive flexibility and adaptability to new information over time. Additionally, superior performance was observed in the EG regarding immediate visual recall, as measured by the DnP Figures test. Therefore, while both the cognitive and the metacognitive training programs showed post-training benefits in cognition, the CG's improvements began to wane within three months of the training's completion. In contrast, participants in the EG not only preserved their gains but continued to show improvement, as indicated by the observed trends.
Notably, the MTP exhibited specific benefits for cognitive flexibility, an area where individuals with MCI also show deficiencies [87,88,89]. As a core aspect of executive functions, cognitive flexibility plays an essential role in metacognitive regulation [90]. It describes an individual's ability to recognize when their current cognitive strategies are not leading to the intended cognitive outcome. Instead of persistently applying ineffective methods, cognitive flexibility facilitates a shift away from the current strategy, enabling the adoption of alternative approaches to optimize learning outcomes [72]. The observed enhancement in cognitive flexibility following the MTP may be largely attributed to its emphasis on strengthening metacognitive monitoring skills. Through consistent training, participants seemingly developed a heightened self-awareness of their ongoing strategies; when confronted with inefficacy, they were primed to recognize it and adjust. This is in line with [91], which underscores the synergy between cognitive flexibility and metacognitive monitoring processes.
Similarly, regarding the online metacognitive outcomes following the interventions, both groups showed improvements in their online metacognitive scores compared to pre-training. Specifically, improvements were detected in metacognitive control as participants in both groups became more precise in deciding which responses to submit. These improvements may have been influenced by the design of the testing sessions, where participants were asked to report their confidence in each response and determine whether to include each response in their final score. Previous studies have shown that older adults without cognitive deterioration [92,93] and people with MCI [20] can improve their metacognitive accuracy through task-related experience. This could offer an additional explanation as to why the CG also demonstrated improvements in online metacognitive measures.
Nonetheless, although both the EG and the CG showed improvements at post-training, their trajectories diverged at the 3-month follow-up. In particular, at the 3-month follow-up, the EG showed significantly higher monitoring accuracy in DnP—Doors compared to the CG. Furthermore, at the 6-month follow-up, these gains were reflected in improved scores for monitoring accuracy, global monitoring, and ‘wrong yes’ responses in the WCST for the EG compared to the CG. These results emphasize the beneficial effects of the MTP on metacognitive control. More specifically, the EG demonstrated enhanced precision in volunteering correct responses and withholding incorrect ones, compared to the CG. Considering the previously discussed connection between cognitive flexibility and metacognition, it is not surprising that these enhancements were more noticeable in the WCST. Therefore, the MTP promotes metacognitive control in individuals with MCI, and the related gains are maintained over time.
Concerning metacognitive monitoring, participants from both groups exhibited improvements in their metacognitive ability scores over time. While no significant differences were detected between the two groups, a trend was evident for this measure as well: individuals in the EG consistently displayed greater alignment between their confidence ratings and actual performance, whereas the CG showed a slight decline in this measure six months after training (see Figure 5).
The positive impact of the MTP was also detected in the offline metacognitive measures. Specifically, at the 3-month follow-up, participants in the EG reported a more frequent use of complex cognitive strategies than those in the CG, and at the 6-month follow-up, a more frequent use of simple cognitive strategies. Interestingly, as time passed, EG participants demonstrated a preference for simple cognitive strategies over complex ones. Figure 6 graphically represents the declining trend in the use of complex strategies, while the upward trend for simple strategies is depicted in Graph (g) (see Appendix A for a visual representation). Although complex strategies, such as mental imagery, paired associations, and story creation, are effective in enhancing long-term memory due to their in-depth processing of information, they also require a greater investment of time and effort, making them potentially less practical for daily use. However, this finding is not necessarily a limitation. Throughout the training, even though the efficacy of these complex strategies was emphasized, we did not mandate their exclusive use. Instead, the purpose was to provide participants with new cognitive tools for use at their convenience. The primary focus of the training was to foster a more effective approach to cognitive tasks, underscoring the importance of investing time and effort to process information, recruit monitoring and control processes, and make necessary adjustments to achieve the intended outcome. We believe the MTP effectively served this purpose.
Furthermore, improvements in self-efficacy were also detected. At the 3-month follow-up, individuals in the EG reported higher beliefs of efficacy regarding daily life scenarios related to attention compared to the CG. Interestingly, memory self-efficacy also showed a subtle rise in both groups. However, no changes were observed in metacognitive beliefs about everyday executive functions. This outcome was anticipated given the already high scores both groups reported on this scale. The improved metacognitive beliefs concerning attention align with the principles of the MTP, as explained in the previous paragraph. Since the EG was trained in using and enhancing self-regulatory skills, with attention being pivotal in these processes, participants were specifically instructed on the significance of shifting and focusing their attention on the information of interest. This guidance likely fostered better control and monitoring of attentional processes, eventually leading to higher self-perceived efficacy in this domain.
Overall, the results confirmed our initial hypothesis suggesting sustained effects of the MTP on cognitive and metacognitive measures over a period of six months. Already three months post-training, the CG began to show a decline in training-related gains. In contrast, the EG's performance consistently improved, highlighting the superior efficacy of the MTP. Even though the effects observed at six months post-training might be seen as medium-term rather than long-term, we believe our findings are a significant contribution to the research on MTP efficacy. Remarkably, given the brief nature of the MTP, just ten sessions, it is impressive that individuals with MCI were not only able to gain benefits but also to sustain progress over six months, especially considering their challenges in acquiring new information. However, future research should certainly consider incorporating assessments at longer intervals, such as beyond a year, to effectively capture these extended effects. We believe that the individualized format of the MTP further enhanced these positive outcomes, since an individualized approach is tailored to each participant's needs and enables a more targeted intervention [39,45]. The clinical implications of this type of training are wide-ranging, as it can be tailored to the specific needs of various clinical populations. Accordingly, it would be valuable for clinical practice if future studies attempted to establish a specific protocol for designing and implementing an MTP, which clinicians could then adjust to the specific needs of each clinical population.
Furthermore, the current study introduces an innovative approach by pioneering the exploration of the effects of the MTP on online metacognitive measures—a subject yet to be thoroughly investigated. This exploration is crucial, as prior studies have indicated that deficiencies in metacognitive skills during cognitive tasks can lead to poorer outcomes and ineffective strategies when dealing with cognitive challenges [23,24,25,26]. Based on our findings, participants in the MTP showed improvement in their monitoring and metacognitive control skills. Specifically, they made more accurate decisions regarding which answers to volunteer and which to withhold. They also increased the use of compensatory cognitive strategies. It would be very informative for future studies to integrate neuroimaging data to understand how these changes manifest at the neural level. For example, they might observe increased functional connectivity within the frontoparietal network, which is pivotal for self-regulatory and control processes [94].

5. Limitations

This study has some important limitations that should be acknowledged. First, the small sample size limited the statistical power of the findings. To mitigate this limitation, we conducted a thorough statistical analysis, which is presented extensively in this paper. Second, the study's design was single-blind, meaning the researcher conducting the testing and training sessions was aware of the group to which each participant was allocated. To minimize this potential bias, the researcher was dedicated to fully complying with and strictly applying the randomization procedure. Although the researcher maintained a strict level of professionalism and impartiality, the possibility of unintended bias cannot be completely excluded. Furthermore, the research was conducted during the strict lockdown measures imposed in response to the COVID-19 pandemic. Under these conditions, the entire procedure had to be carried out online, which may have compromised the integrity of the testing procedures. In addition, due to the imposed lockdown, participants had limited opportunities to apply cognitive strategies in everyday life. Existing research has also highlighted the negative cognitive effects of prolonged lockdown [95,96]. Thus, the observed benefits might have been more pronounced under normal circumstances. Moreover, although the measurements and follow-up period employed in this study are meaningful, these outcomes would need to be tracked for at least 3 to 5 years to determine long-term effects and to examine the impact of the MTP on MCI arising from diverse pathologies and subtypes [28,97].
Future studies should aim to overcome the aforementioned limitations. Additionally, researchers could explore several avenues for further investigation. Firstly, the integration of neuroimaging data could provide invaluable insights into the neural impact of metacognitive training. Secondly, the use of sensitive neuropsychological tests may reveal clearer patterns or differentiation between healthy individuals and those with MCI, addressing concerns about the limited sensitivity and specificity of the WCST [98]. Lastly, it would be beneficial to investigate the potential necessity for booster sessions to sustain training gains over time, including determining their optimal frequency.

6. Conclusions

Despite its limitations, this study provided novel insight into the application and efficacy of the MTP in addressing cognitive decline, particularly in individuals with MCI. The sustained effects observed over a six-month period underscore the potential of the MTP to significantly impact cognitive and metacognitive functioning, offering promising avenues for intervention in clinical settings. Given these findings, future research endeavors should prioritize investigations aimed at elucidating the mechanisms underlying the observed effects and expanding upon the MTP's potential applications in diverse clinical contexts.

Author Contributions

Conceptualization, G.B.; formal analysis, G.B. and D.M.; investigation, G.B.; methodology, G.B., D.M. and E.M.; resources, G.P., M.S. and G.A.K.; supervision, M.T., D.M. and P.M.; writing—original draft, G.B.; writing—review and editing, G.B. and D.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research is cofinanced by Greece and the European Union (European Social Fund—ESF) through the Operational Program “Human Resources Development, Education and Lifelong Learning” in the context of the project “Strengthening Human Resources Research Potential via Doctorate Research” (MIS-5000432), implemented by the State Scholarships Foundation (ΙΚΥ).

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Scientific Committee of ‘Alzheimer Hellas’ (Approval Code: 29/15-02-2017).

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

The data of this study are available at DOI: 10.17632/s24bb5p42t.1.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Additional graphs in this appendix visualize the observed trends between the Experimental and Control groups for variables that did not reveal statistically significant differences between the two groups.

Appendix B

These plots offer a visual demonstration of the differences between the Experimental and the Control group at the 3-month follow-up.

Appendix C

These plots offer a visual demonstration of the differences between the Experimental and the Control group at the 6-month follow-up.

References

  1. Petersen, R.C. Mild cognitive impairment as a diagnostic entity. J. Intern. Med. 2004, 256, 183–194. [Google Scholar] [CrossRef]
  2. Gauthier, S.; Reisberg, B.; Zaudig, M.; Petersen, R.C.; Ritchie, K.; Broich, K.; Belleville, S.; Brodaty, H.; Bennett, D.; Chertkow, H.; et al. Mild cognitive impairment. Lancet 2006, 367, 1262–1270. [Google Scholar] [CrossRef]
  3. Petersen, R.C.; Caracciolo, B.; Brayne, C.; Gauthier, S.; Jelic, V.; Fratiglioni, L. Mild cognitive impairment: A concept in evolution. J. Intern. Med. 2014, 275, 214–228. [Google Scholar] [CrossRef]
  4. Monastero, R.; Mangialasche, F.; Camarda, C.; Ercolani, S.; Camarda, R. A Systematic Review of Neuropsychiatric Symptoms in Mild Cognitive Impairment. J. Alzheimer’s Dis. 2009, 18, 11–30. [Google Scholar] [CrossRef]
  5. Rickenbach, E.H.; Condeelis, K.L.; Haley, W.E. Daily stressors and emotional reactivity in individuals with mild cognitive impairment and cognitively healthy controls. Psychol. Aging 2015, 30, 420–431. [Google Scholar] [CrossRef]
  6. Butler, M.; McCreedy, E.; Nelson, V.A.; Desai, P.; Ratner, E.; Fink, H.A.; Hemmy, L.S.; McCarten, J.R.; Barclay, T.R.; Brasure, M.; et al. Does Cognitive Training Prevent Cognitive Decline?: A Systematic Review. Ann. Intern. Med. 2018, 168, 63. [Google Scholar] [CrossRef]
  7. Chandler, M.J.; Parks, A.C.; Marsiske, M.; Rotblatt, L.J.; Smith, G.E. Everyday Impact of Cognitive Interventions in Mild Cognitive Impairment: A Systematic Review and Meta-Analysis. Neuropsychol. Rev. 2016, 26, 225–251. [Google Scholar] [CrossRef]
  8. Salzman, T.; Sarquis-Adamson, Y.; Son, S.; Montero-Odasso, M.; Fraser, S. Associations of Multidomain Interventions With Improvements in Cognition in Mild Cognitive Impairment: A Systematic Review and Meta-analysis. JAMA Netw. Open 2022, 5, e226744. [Google Scholar] [CrossRef]
  9. Sherman, D.S.; Mauser, J.; Nuno, M.; Sherzai, D. The Efficacy of Cognitive Intervention in Mild Cognitive Impairment (MCI): A Meta-Analysis of Outcomes on Neuropsychological Measures. Neuropsychol. Rev. 2017, 27, 440–484. [Google Scholar] [CrossRef]
  10. Sherman, D.S.; Durbin, K.A.; Ross, D.M. Meta-Analysis of Memory-Focused Training and Multidomain Interventions in Mild Cognitive Impairment. J. Alzheimer’s Dis. 2020, 76, 399–421. [Google Scholar] [CrossRef]
  11. Zhang, H.; Huntley, J.; Bhome, R.; Holmes, B.; Cahill, J.; Gould, R.L.; Wang, H.; Yu, X.; Howard, R. Effect of computerised cognitive training on cognitive outcomes in mild cognitive impairment: A systematic review and meta-analysis. BMJ Open 2019, 9, e027062. [Google Scholar] [CrossRef]
  12. Zhong, D.; Chen, L.; Feng, Y.; Song, R.; Huang, L.; Liu, J.; Zhang, L. Effects of virtual reality cognitive training in individuals with mild cognitive impairment: A systematic review and meta-analysis. Int. J. Geriatr. Psychiatry 2021, 36, 1829–1847. [Google Scholar] [CrossRef]
  13. Bapka, V.; Bika, I.; Kavouras, C.; Savvidis, T.; Konstantinidis, E.; Bamidis, P.; Papantoniou, G.; Masoura, E.; Moraitou, D. Brain Plasticity in Older Adults: Could It Be Better Enhanced by Cognitive Training via an Adaptation of the Virtual Reality Platform FitForAll or via a Commercial Video Game? In Interactive Mobile Communication Technologies and Learning; Auer, M.E., Tsiatsos, T., Eds.; Springer International Publishing: Cham, Switzerland, 2018; Volume 725, pp. 728–742. [Google Scholar] [CrossRef]
  14. Flavell, J.H. Metacognition and cognitive monitoring: A new area of cognitive–developmental inquiry. Am. Psychol. 1979, 34, 906–911. [Google Scholar] [CrossRef]
  15. Efklides, A. Interactions of Metacognition With Motivation and Affect in Self-Regulated Learning: The MASRL Model. Educ. Psychol. 2011, 46, 6–25. [Google Scholar] [CrossRef]
  16. Schraw, G.; Moshman, D. Metacognitive Theories. Educ. Psychol. Rev. 1995, 7, 351–371. [Google Scholar] [CrossRef]
  17. Dunlosky, J.; Metcalfe, J. Metacognition; SAGE: Thousand Oaks, CA, USA, 2008. [Google Scholar]
  18. Efklides, A. Metacognition and affect: What can metacognitive experiences tell us about the learning process? Educ. Res. Rev. 2006, 1, 3–14. [Google Scholar] [CrossRef]
  19. Nelson, T. Metamemory: A theoretical framework and new findings. Psychol. Learn. Motiv. 1990, 26, 125–173. [Google Scholar] [CrossRef]
  20. Chudoba, L.A.; Schmitter-Edgecombe, M. Insight into Memory and Functional Abilities in Individuals with Amnestic Mild Cognitive Impairment. J. Clin. Exp. Neuropsychol. 2020, 42, 822–833. [Google Scholar] [CrossRef]
  21. Clare, L.; Whitaker, C.J.; Roberts, J.L.; Nelis, S.M.; Martyr, A.; Marková, I.S.; Roth, I.; Woods, R.T.; Morris, R.G. Memory Awareness Profiles Differentiate Mild Cognitive Impairment from Early-Stage Dementia: Evidence from Assessments of Performance Monitoring and Evaluative Judgement. Dement. Geriatr. Cogn. Disord. 2013, 35, 266–279. [Google Scholar] [CrossRef]
  22. Seelye, A.M.; Schmitter-Edgecombe, M.; Flores, J. Episodic memory predictions in persons with amnestic and nonamnestic mild cognitive impairment. J. Clin. Exp. Neuropsychol. 2010, 32, 433–441. [Google Scholar] [CrossRef]
  23. Anderson, J.W.; Schmitter-Edgecombe, M. Mild cognitive impairment and feeling-of-knowing in episodic memory. J. Clin. Exp. Neuropsychol. 2010, 32, 505–514. [Google Scholar] [CrossRef] [PubMed]
  24. Chi, S.Y.; Chua, E.F.; Kieschnick, D.W.; Rabin, L.A. Retrospective metamemory monitoring of semantic memory in community-dwelling older adults with subjective cognitive decline and mild cognitive impairment. Neuropsychol. Rehabil. 2022, 32, 429–463. [Google Scholar] [CrossRef] [PubMed]
  25. Perrotin, A.; Belleville, S.; Isingrini, M. Metamemory monitoring in mild cognitive impairment: Evidence of a less accurate episodic feeling-of-knowing. Neuropsychologia 2007, 45, 2811–2826. [Google Scholar] [CrossRef] [PubMed]
  26. Ryals, A.J.; O’Neil, J.T.; Mesulam, M.-M.; Weintraub, S.; Voss, J.L. Memory awareness disruptions in amnestic mild cognitive impairment: Comparison of multiple awareness types for verbal and visuospatial material. Aging Neuropsychol. Cogn. 2019, 26, 577–598. [Google Scholar] [CrossRef] [PubMed]
  27. Bampa, G.; Moraitou, D.; Metallidou, P.; Masoura, E.; Mintziviri, M.; Paparis, K.; Tsourou, D.; Papantoniou, G.; Sofologi, M.; Papaliagkas, V.; et al. Metacognitive Differences in Amnestic Mild Cognitive Impairment and Healthy Cognition: A Cross-Sectional Study Employing Online Measures. J. Intell. 2023, 11, 184. [Google Scholar] [CrossRef] [PubMed]
  28. Yu, R.L.; Wu, R.M. Mild cognitive impairment in patients with Parkinson’s disease: An updated mini-review and future outlook. Front. Aging Neurosci. 2022, 14, 943438. [Google Scholar] [CrossRef]
  29. Beaudoin, M. Memory performance in older adults: Experimental evidence for the indirect effect of memory self-efficacy on processing efficiency through worry. Motiv. Emot. 2018, 42, 885–895. [Google Scholar] [CrossRef]
  30. Cherry, K.E.; Lyon, B.A.; Boudreaux, E.O.; Blanchard, A.B.; Hicks, J.L.; Elliott, E.M.; Myers, L.; Kim, S.; Jazwinski, S.M. Memory Self-Efficacy and Beliefs about Memory and Aging in Oldest-Old Adults in the Louisiana Healthy Aging Study (LHAS). Exp. Aging Res. 2019, 45, 28–40. [Google Scholar] [CrossRef]
  31. Farias, S.T.; Schmitter-Edgecombe, M.; Weakley, A.; Harvey, D.; Denny, K.G.; Barba, C.; Gravano, J.T.; Giovannetti, T.; Willis, S. Compensation Strategies in Older Adults: Association With Cognition and Everyday Function. Am. J. Alzheimer’s Dis. Other Dement. 2018, 33, 184–191. [Google Scholar] [CrossRef]
  32. Froger, C.; Sacher, M.; Gaudouen, M.-S.; Isingrini, M.; Taconnat, L. Metamemory judgments and study time allocation in young and older adults: Dissociative effects of a generation task. Can. J. Exp. Psychol./Rev. Can. Psychol. Expérimentale 2011, 65, 269–276. [Google Scholar] [CrossRef]
  33. Hertzog, C.; Price, J.; Dunlosky, J. Age differences in the effects of experimenter-instructed versus self-generated strategy use. Exp. Aging Res. 2012, 38, 42–62. [Google Scholar] [CrossRef]
  34. Bampa, G.; Moraitou, D.; Metallidou, P.; Masoura, E.; Papantoniou, G.; Sofologi, M.; Kougioumtzis, G.; Papatzikis, E.; Tsolaki, M. Metacognitive Beliefs of Efficacy about Daily Life Situations and Use of Cognitive Strategies in Amnestic Mild Cognitive Impairment: A Cross-Sectional Study. Front. Psychol. 2024, 15, 1275678. [Google Scholar] [CrossRef] [PubMed]
  35. Bailey, H.; Dunlosky, J.; Hertzog, C. Metacognitive training at home: Does it improve older adults’ learning? Gerontology 2010, 56, 414–420. [Google Scholar] [CrossRef] [PubMed]
  36. Bottiroli, S.; Cavallini, E.; Dunlosky, J.; Vecchi, T.; Hertzog, C. The importance of training strategy adaptation: A learner-oriented approach for improving older adults’ memory and transfer. J. Exp. Psychol. Appl. 2013, 19, 205–218. [Google Scholar] [CrossRef] [PubMed]
  37. Bottiroli, S.; Cavallini, E.; Dunlosky, J.; Vecchi, T.; Hertzog, C. Self-guided strategy-adaption training for older adults: Transfer effects to everyday tasks. Arch. Gerontol. Geriatr. 2017, 72, 91–98. [Google Scholar] [CrossRef]
  38. Hertzog, C.; Dunlosky, J. Metacognition in Later Adulthood: Spared Monitoring Can Benefit Older Adults’ Self-regulation. Curr. Dir. Psychol. Sci. 2011, 20, 167–173. [Google Scholar] [CrossRef] [PubMed]
  39. Hertzog, C.; Pearman, A.; Lustig, E.; Hughes, M. Fostering Self-Management of Everyday Memory in Older Adults: A New Intervention Approach. Front. Psychol. 2021, 11, 560056. [Google Scholar] [CrossRef] [PubMed]
  40. Bampa, G.; Moraitou, D.; Metallidou, P. Metacognition in cognitive rehabilitation in adults: A systematic review. In Trends and Prospects in Metacognition Research across the Life Span: A Tribute to Anastasia Efklides; Springer: Cham, Switzerland, 2021. [Google Scholar] [CrossRef]
  41. Bandura, A. Self-efficacy: Toward a unifying theory of behavioral change. Psychol. Rev. 1977, 84, 191–215. [Google Scholar] [CrossRef]
  42. McDougall, G.J.; Becker, H.; Pituch, K.; Acee, T.W.; Vaughan, P.W.; Delville, C.L. The SeniorWISE Study: Improving Everyday Memory in Older Adults. Arch. Psychiatr. Nurs. 2010, 24, 291–306. [Google Scholar] [CrossRef]
  43. Wiegand, M.A.; Troyer, A.K.; Gojmerac, C.; Murphy, K.J. Facilitating change in health-related behaviors and intentions: A randomized controlled trial of a multidimensional memory program for older adults. Aging Ment. Health 2013, 17, 806–815. [Google Scholar] [CrossRef]
  44. Sella, E.; Carbone, E.; Vincenzi, M.; Toffalini, E.; Borella, E. Efficacy of memory training interventions targeting metacognition for older adults: A systematic review and meta-analysis. Aging Ment. Health 2023, 27, 674–694. [Google Scholar] [CrossRef] [PubMed]
  45. Moro, V.; Condoleo, M.T.; Valbusa, V.; Broggio, E.; Moretto, G.; Gambina, G. Cognitive stimulation of executive functions in mild cognitive impairment: Specific efficacy and impact in memory. Am. J. Alzheimer’s Dis. Other Dement. 2015, 30, 153–164. [Google Scholar] [CrossRef] [PubMed]
  46. Youn, J.-H.; Park, S.; Lee, J.-Y.; Cho, S.-J.; Kim, J.; Ryu, S.-H. Cognitive Improvement in Older Adults with Mild Cognitive Impairment: Evidence from a Multi-Strategic Metamemory Training. J. Clin. Med. 2020, 9, 362. [Google Scholar] [CrossRef] [PubMed]
  47. Faul, F.; Erdfelder, E.; Lang, A.-G.; Buchner, A. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav. Res. Methods 2007, 39, 175–191. [Google Scholar] [CrossRef]
  48. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders, 5th ed.; American Psychiatric Association: Arlington, TX, USA, 2013. [Google Scholar] [CrossRef]
  49. Fountoulakis, K.N.; Tsolaki, M.; Iacovides, A.; Yesavage, J.; O’Hara, R.; Kazis, A.; Ierodiakonou, C. The validation of the short form of the Geriatric Depression Scale (GDS) in Greece. Aging Clin. Exp. Res. 1999, 11, 367–372. [Google Scholar] [CrossRef] [PubMed]
  50. Yesavage, J.A.; Brink, T.L.; Rose, T.L.; Lum, O.; Huang, V.; Adey, M.; Leirer, V.O. Development and validation of a geriatric depression screening scale: A preliminary report. J. Psychiatr. Res. 1982, 17, 37–49. [Google Scholar] [CrossRef] [PubMed]
  51. Beck, A.T. An Inventory for Measuring Depression. Arch. Gen. Psychiatry 1961, 4, 561–571. [Google Scholar] [CrossRef] [PubMed]
  52. Beck, A.T.; Epstein, N.; Brown, G.; Steer, R.A. An inventory for measuring clinical anxiety: Psychometric properties. J. Consult. Clin. Psychol. 1988, 56, 893–897. [Google Scholar] [CrossRef] [PubMed]
  53. Grammatikopoulos, I.A.; Sinoff, G.; Alegakis, A.; Kounalakis, D.; Antonopoulou, M.; Lionis, C. The Short Anxiety Screening Test in Greek: Translation and validation. Ann. Gen. Psychiatry 2010, 9, 1. [Google Scholar] [CrossRef]
  54. Sinoff, G.; Liora, O.; Zlotogorsky, D.; Tamir, A. Short Anxiety Screening Test–a brief instrument for detecting anxiety in the elderly. Int. J. Geriatr. Psychiatry 1999, 14, 1062–1071. [Google Scholar] [CrossRef]
  55. Cummings, J.L.; Mega, M.; Gray, K.; Rosenberg-Thompson, S.; Carusi, D.A.; Gornbein, J. The Neuropsychiatric Inventory: Comprehensive assessment of psychopathology in dementia. Neurology 1994, 44, 2308. [Google Scholar] [CrossRef]
  56. Politis, A.M.; Mayer, L.S.; Passa, M.; Maillis, A.; Lyketsos, C.G. Validity and reliablity of the newly translated Hellenic Neuropsychiatric Inventory (H-NPI) applied to Greek outpatients with Alzheimer’s disease: A study of disturbing behaviors among referrals to a memory clinic. Int. J. Geriatr. Psychiatry 2004, 19, 203–208. [Google Scholar] [CrossRef]
  57. Folstein, M.F.; Folstein, S.E.; McHugh, P.R. “Mini-mental state”. A practical method for grading the cognitive state of patients for the clinician. J. Psychiatr. Res. 1975, 12, 189–198. [Google Scholar] [CrossRef]
  58. Fountoulakis, K.N.; Tsolaki, M.; Chantzi, H.; Kazis, A. Mini Mental State Examination (MMSE): A validation study in Greece. Am. J. Alzheimer’s Dis. Other Dement. 2000, 15, 342–345. [Google Scholar] [CrossRef]
  59. Nasreddine, Z.S.; Phillips, N.A.; Bédirian, V.; Charbonneau, S.; Whitehead, V.; Collin, I.; Cummings, J.L.; Chertkow, H. The Montreal Cognitive Assessment, MoCA: A brief screening tool for mild cognitive impairment. J. Am. Geriatr. Soc. 2005, 53, 695–699. [Google Scholar] [CrossRef]
  60. Poptsi, E.; Moraitou, D.; Eleftheriou, M.; Kounti-Zafeiropoulou, F.; Papasozomenou, C.; Agogiatou, C.; Bakoglidou, E.; Batsila, G.; Liapi, D.; Markou, N.; et al. Normative Data for the Montreal Cognitive Assessment in Greek Older Adults With Subjective Cognitive Decline, Mild Cognitive Impairment and Dementia. J. Geriatr. Psychiatry Neurol. 2019, 32, 265–274. [Google Scholar] [CrossRef]
  61. Kounti, F.; Tsolaki, M.; Kiosseoglou, G. Functional cognitive assessment scale (FUCAS): A new scale to assess executive cognitive function in daily life activities in patients with dementia and mild cognitive impairment. Hum. Psychopharmacol. 2006, 21, 305–311. [Google Scholar] [CrossRef]
  62. Tsolaki, M.; Poptsi, E.; Aggogiatou, C.; Markou, N.; Zafeiropoulos, S. Computer-Based Cognitive Training Versus Paper and Pencil Training: Which is more Effective? A Randomized Controlled Trial in People with Mild Cognitive Impairment. JSM Alzheimer’s Dis. Relat. Dement. 2017, 4, 1032. [Google Scholar]
  63. Reisberg, B.; Ferris, S.H.; Leon, M.J.D.; Crook, T. The Global Deterioration Scale for assessment of primary degenerative dementia. Am. J. Psychiatry 1982, 139, 1136–1139. [Google Scholar] [CrossRef]
  64. Winblad, B.; Palmer, K.; Kivipelto, M.; Jelic, V.; Fratiglioni, L.; Wahlund, L.O.; Nordberg, A.; Bäckman, L.; Albert, M.; Almkvist, O.; et al. Mild cognitive impairment—Beyond controversies, towards a consensus: Report of the International Working Group on Mild Cognitive Impairment. J. Intern. Med. 2004, 256, 240–246. [Google Scholar] [CrossRef]
  65. Kongs, S.K.; Thompson, L.L.; Iverson, G.L.; Heaton, R.K. Wisconsin Card Sorting Test, 64 Card Version: WCST-64; PAR: Lutz, FL, USA, 2000. [Google Scholar]
  66. Berg, E.A. A Simple Objective Technique for Measuring Flexibility in Thinking. J. Gen. Psychol. 1948, 39, 15–22. [Google Scholar] [CrossRef] [PubMed]
  67. Grant, D.A.; Berg, E. A behavioral analysis of degree of reinforcement and ease of shifting to new responses in a Weigl-type card-sorting problem. J. Exp. Psychol. 1948, 38, 404–411. [Google Scholar] [CrossRef]
  68. Axelrod, B.N.; Goldman, R.S.; Woodard, J.L. Interrater reliability in scoring the Wisconsin card sorting test. Clin. Neuropsychol. 1992, 6, 143–155. [Google Scholar] [CrossRef] [PubMed]
  69. Chiu, E.-C.; Lee, S.-C. Test–retest reliability of the Wisconsin Card Sorting Test in people with schizophrenia. Disabil. Rehabil. 2021, 43, 996–1000. [Google Scholar] [CrossRef] [PubMed]
  70. Greve, K.W.; Love, J.M.; Sherwin, E.; Mathias, C.W.; Ramzinski, P.; Levy, J. Wisconsin Card Sorting Test in chronic severe traumatic brain injury: Factor structure and performance subgroups. Brain Inj. 2002, 16, 29–40. [Google Scholar] [CrossRef]
  71. Nyhus, E.; Barceló, F. The Wisconsin Card Sorting Test and the cognitive assessment of prefrontal executive functions: A critical update. Brain Cogn. 2009, 71, 437–451. [Google Scholar] [CrossRef] [PubMed]
  72. Miyake, A.; Friedman, N.P.; Emerson, M.J.; Witzki, A.H.; Howerter, A.; Wager, T.D. The unity and diversity of executive functions and their contributions to complex “Frontal Lobe” tasks: A latent variable analysis. Cogn. Psychol. 2000, 41, 49–100. [Google Scholar] [CrossRef]
  73. Axelrod, B.N. Are Normative Data From the 64-Card Version of the WCST Comparable to the Full WCST? Clin. Neuropsychol. 2002, 16, 7–11. [Google Scholar] [CrossRef] [PubMed]
  74. Heaton, R.K.; Chelune, G.J.; Talley, J.L.; Kay, G.G.; Curtiss, G. Wisconsin Card Sorting Test Manual—Revised and Expanded; Psychological Assessment Resources: Odessa, FL, USA, 1993. [Google Scholar]
  75. Baddeley, A.D.; Emslie, H.; Nimmo-Smith, I. Doors and People: A Test of Visual and Verbal Recall and Recognition; [Manual]; Thames Valley Test Company: Bury-St-Edmunds, UK, 1994. [Google Scholar]
  76. Arabatzi, X.; Masoura, E. Episodic Memory and Norms’ Development for the Battery “Doors and People” in the Greek Population. Master’s Thesis, Aristotle University of Thessaloniki, Thessaloniki, Greece, 2012. [Google Scholar]
  77. Hess, R.S. Book Review: Doors and People: A Test of Visual and Verbal Recall and Recognition. J. Psychoeduc. Assess. 1999, 17, 175–180. [Google Scholar] [CrossRef]
  78. Koren, D.; Seidman, L.J.; Poyurovsky, M.; Goldsmith, M.; Viksman, P.; Zichel, S.; Klein, E. The neuropsychological basis of insight in first-episode schizophrenia: A pilot metacognitive study. Schizophr. Res. 2004, 70, 195–202. [Google Scholar] [CrossRef]
  79. Koriat, A.; Goldsmith, M. Monitoring and Control Processes in the Strategic Regulation of Memory Accuracy. Psychol. Rev. 1996, 103, 490–517. [Google Scholar] [CrossRef] [PubMed]
  80. Bampa, G.; Kouroglou, D.; Metallidou, P.; Tsolaki, M.; Kougioumtzis, G.; Papantoniou, G.; Sofologi, M.; Moraitou, D. Metacognitive Scales: Assessing Metacognitive Knowledge in Older Adults Using Everyday Life Scenarios. Diagnostics 2022, 12, 2410. [Google Scholar] [CrossRef] [PubMed]
  81. Troyer, A.K.; Rich, J.B. Psychometric Properties of a New Metamemory Questionnaire for Older Adults. J. Gerontol. Ser. B Psychol. Sci. Soc. Sci. 2002, 57, P19–P27. [Google Scholar] [CrossRef] [PubMed]
  82. Moro, V.; Condoleo, M.T.; Sala, F.; Pernigo, S.; Moretto, G.; Gambina, G. Cognitive stimulation in a-MCI: An experimental study. Am. J. Alzheimer’s Dis. Other Dement. 2012, 27, 121–130. [Google Scholar] [CrossRef]
  83. Lim, M.H.X.; Liu, K.P.Y.; Cheung, G.S.F.; Kuo, M.C.C.; Li, R.; Tong, C.-Y. Effectiveness of a Multifaceted Cognitive Training Programme for People with Mild Cognitive Impairment: A One-Group Pre- and Posttest Design. Hong Kong J. Occup. Ther. 2012, 22, 3–8. [Google Scholar] [CrossRef]
  84. Vranić, A.; Španić, A.M.; Carretti, B.; Borella, E. The efficacy of a multifactorial memory training in older adults living in residential care settings. Int. Psychogeriatr. 2013, 25, 1885–1897. [Google Scholar] [CrossRef] [PubMed]
  85. Vranic, A.; Martincevic, M.; Borella, E. Mental imagery training in older adults: Which are benefits and individual predictors? Int. J. Geriatr. Psychiatry 2021, 36, 334–341. [Google Scholar] [CrossRef] [PubMed]
  86. Wenisch, E.; Cantegreil-Kallen, I.; De Rotrou, J.; Garrigue, P.; Moulin, F.; Batouche, F.; Richard, A.; De Sant’Anna, M.; Rigaud, A.S. Cognitive stimulation intervention for elders with mild cognitive impairment compared with normal aged subjects: Preliminary results. Aging Clin. Exp. Res. 2007, 19, 316–322. [Google Scholar] [CrossRef]
  87. Corbo, I.; Casagrande, M. Higher-Level Executive Functions in Healthy Elderly and Mild Cognitive Impairment: A Systematic Review. J. Clin. Med. 2022, 11, 1204. [Google Scholar] [CrossRef]
  88. Gonçalves, A.P.B.; Tarrasconi, M.A.; Holz, M.R.; Kochhann, R.; Fonseca, R.P. Cognitive flexibility and inhibition in single-versus multiple-domain mild cognitive impairment: A comparative and discriminative analysis. Psychol. Neurosci. 2019, 12, 209–223. [Google Scholar] [CrossRef]
  89. Rattanavichit, Y.; Chaikeeree, N.; Boonsinsukh, R.; Kitiyanant, K. The age differences and effect of mild cognitive impairment on perceptual-motor and executive functions. Front. Psychol. 2022, 13, 906898. [Google Scholar] [CrossRef]
  90. Fernandez-Duque, D.; Baird, J.A.; Posner, M.I. Executive Attention and Metacognitive Regulation. Conscious. Cogn. 2000, 9, 288–307. [Google Scholar] [CrossRef]
  91. Pennequin, V. Metacognition and Flexibility: What are the Theoretical Links and What Links have been Observed? In Cognitive Flexibility: The Cornerstone of Learning, 1st ed.; Wiley: Hoboken, NJ, USA, 2022. [Google Scholar] [CrossRef]
  92. McGillivray, S.; Castel, A.D. Older and Younger Adults’ Strategic Control of Metacognitive Monitoring: The Role of Consequences, Task Experience and Prior Knowledge. Exp. Aging Res. 2017, 43, 233–256. [Google Scholar] [CrossRef]
  93. Siegel, A.L.M.; Castel, A.D. Age-related differences in metacognition for memory capacity and selectivity. Memory 2019, 27, 1236–1249. [Google Scholar] [CrossRef]
  94. Fleur, D.S.; Bredeweg, B.; van den Bos, W. Metacognition: Ideas and insights from neuro- and educational sciences. npj Sci. Learn. 2021, 6, 13. [Google Scholar] [CrossRef]
  95. Chen, Z.-C.; Liu, S.; Gan, J.; Ma, L.; Du, X.; Zhu, H.; Han, J.; Xu, J.; Wu, H.; Fei, M.; et al. The Impact of the COVID-19 Pandemic and Lockdown on Mild Cognitive Impairment, Alzheimer’s Disease and Dementia With Lewy Bodies in China: A 1-Year Follow-Up Study. Front. Psychiatry 2021, 12, 711658. [Google Scholar] [CrossRef]
  96. Ingram, J.; Hand, C.J.; Maciejewski, G. Social isolation during COVID-19 lockdown impairs cognitive function. Appl. Cogn. Psychol. 2021, 35, 935–947. [Google Scholar] [CrossRef]
  97. Yeung, M.K.; Chau, A.K.; Chiu, J.Y.; Shek, J.T.; Leung, J.P.; Wong, T.C. Differential and subtype-specific neuroimaging abnormalities in amnestic and nonamnestic mild cognitive impairment: A systematic review and meta-analysis. Ageing Res. Rev. 2022, 80, 101675. [Google Scholar] [CrossRef]
  98. Hammers, D.; Ramirez, G.; Persad, C.; Heidebrink, J.; Barbas, N.; Giordani, B. Diagnostic Profiles of Patients Differentially Failing Executive Functioning Measures. Am. J. Alzheimer’s Dis. Other Dement. 2016, 31, 214–222. [Google Scholar] [CrossRef]
Figure 1. The figure displays the performance of the Experimental and Control group on (a) the WCST: total correct and (b) WCST: perseverative responses at four time points.
Figure 2. Performance of Experimental and Control group on WCST: perseverative errors at four time points.
Figure 3. Performance of Experimental and Control group on immediate verbal recall at four time points.
Figure 4. Overall global monitoring for the Experimental and Control group at four time points.
Figure 5. Overall metacognitive ability for the Experimental and Control group at four time points.
Figure 6. Experimental group vs. Control group and use of strategies post-training.
Table 1. Metacognitive training program: overview of each session’s specific content and homework assignments.
Session | Content | Homework
Introduction
The aim of the metacognitive training program (MTP).
Introduction to cognitive functions.
An exploration of their basic domains.
Discussion on factors that affect cognitive functions.
The importance of understanding these processes.
To answer the following questions:
  • “How often do I forget to do something?”
  • “What can I do to remember better (think about what we’ve said: factors that affect cognitive functions, cognitive functions affect each other)?”
  • “Do I use any strategy or aid?”
  • “If yes, which one(s)?”
Memory I
Introduction to memory function and subsystems:
 Memory stages;
 Short-term memory;
 Long-term memory.
A set of exercises with lists to learn.
Immediate recall and delayed recall.
Instructions for self-testing.
Answering questions such as giving a brief description of the techniques used to learn the required material or suggesting what could have been performed differently to enhance performance.
Memory II
Memory subsystems II:
 Episodic and autobiographical memory;
 Prospective memory;
 Semantic memory;
 Procedural memory.
A set of exercises with different types of material to learn (story, lists, numbers) including immediate and delayed recall.
Instructions for self-testing.
Answering questions such as giving a brief description of the techniques used to learn the required material or suggesting what could have been performed differently to enhance performance.
Attention
Divisions of attention:
 Selective attention;
 Sustained attention;
 Divided attention;
 Shifted attention.
Stress and attention
A set of exercises targeting the different dimensions of attention that mirror real-world situations.
Responding to questions about the perceived level of difficulty encountered during the exercise’s completion, and if any difficulties were faced, what those challenges were.
Executive Functions
Executive functions and their role in everyday life.
Aspects of executive functions:
 Decision-making and inhibition;
 Problem-solving and flexible thinking;
 Planning.
A set of exercises targeting the planning of daily activities and problem-solving tasks that mirror real-world situations.
Responding to questions about the perceived level of difficulty encountered during the exercise’s completion, and if any difficulties were faced, what those challenges were.
Aging
Aspects of aging: cognitive, social, physical, and psychological.
Positive and negative consequences.
Mild cognitive impairment: myths and truths.
A small essay based on 3 questions:
  • What aspects of my life are better now compared to 20 or 30 years ago?
  • Based on my current understanding of cognitive functions, where do I identify challenges? Provide specific examples from your everyday life.
  • What changes can I make in my life to improve my cognitive functions? Provide specific examples from your everyday life.
Cognitive Strategies: Introduction
Introduction to different types of cognitive strategies: external and internal.
Understanding the significance of using cognitive strategies.
Showcase and training on strategies: categorization, verbal and visual association, story creation, and mental imagery.
The provision of diverse exercises to practice cognitive strategies, covering different types of information: lists of words or pictures, stories, a program of daily activities, or numbers.
Encouragement to use any chosen strategy from the presented options, with the stipulation that each strategy should be used at least once.
An evaluation of the effectiveness of the implemented strategies.
Encouragement to consider alternate daily situations where each strategy could be effectively applied.
The distribution of a “strategy-diary” for participants to record when and what strategy they applied in their daily life during the week.
Cognitive Strategies I
Q&A about cognitive strategies and the related homework.
Further practice.
Cognitive Strategies II
Q&A about cognitive strategies and the related homework.
Further practice.
Cognitive Strategies: Closing
Q&A about cognitive strategies and the related homework.
Further practice.
Overall discussion and encouragement to keep using cognitive strategies.
Table 2. Participants’ demographic characteristics.
Variable | EG a (n = 22): Mean (SD c) | CG b (n = 23): Mean (SD) | F | p d
Demographics
Age | 63.27 (5.62) | 62.30 (6.88) | 0.266 | n.s. e
Education | 13.59 (3.76) | 12.65 (3.14) | 0.828 | n.s.
Gender (f/m) | 14/8 | 16/7 | χ2 | n.s.
Note. a EG = Experimental group received metacognitive training. b CG = Control group received cognitive training. c SD = standard deviation. d p < 0.05. e n.s. = non-significant.
Table 3. Cognitive and metacognitive outcomes between Control and Experimental groups at different time points.
Outcome | Comparison | Mean Difference | Standard Error | Sig. | 95% CI Lower Bound | 95% CI Upper Bound
WCST a: Total Correct | Pre-training vs. Post-training | −2.60 | 1.61 | n.s. b | −7.09 | 1.89
WCST: Total Correct | Pre-training vs. 3m follow-up | −4.40 | 1.50 | n.s. | −8.57 | −0.23
WCST: Total Correct | Pre-training vs. 6m follow-up | −5.23 | 1.49 | 0.01 | −9.38 | −1.08
WCST: Total Errors | Pre-training vs. Post-training | 2.76 | 1.62 | n.s. | −1.76 | 7.27
WCST: Total Errors | Pre-training vs. 3m follow-up | 4.71 | 1.52 | n.s. | 0.48 | 8.95
WCST: Total Errors | Pre-training vs. 6m follow-up | 5.67 | 1.49 | 0.00 | 1.53 | 9.82
WCST: Non-Perseverative Errors | Pre-training vs. Post-training | 3.04 | 1.17 | n.s. | −0.23 | 6.31
WCST: Non-Perseverative Errors | Pre-training vs. 3m follow-up | 4.36 | 1.15 | 0.00 | 1.17 | 7.55
WCST: Non-Perseverative Errors | Pre-training vs. 6m follow-up | 5.91 | 1.09 | 0.00 | 2.89 | 8.94
WCST: Categories | Pre-training vs. Post-training | −0.44 | 0.25 | n.s. | −1.14 | 0.27
WCST: Categories | Pre-training vs. 3m follow-up | −0.63 | 0.27 | n.s. | −1.37 | 0.11
WCST: Categories | Pre-training vs. 6m follow-up | −0.82 | 0.26 | 0.02 | −1.53 | −0.10
DnP c—People: Immediate Verbal Recall | Pre-training vs. Post-training | −3.81 | 1.00 | 0.00 | −6.60 | −1.02
DnP—People: Immediate Verbal Recall | Pre-training vs. 3m follow-up | −6.20 | 1.04 | 0.00 | −9.10 | −3.30
DnP—People: Immediate Verbal Recall | Pre-training vs. 6m follow-up | −6.20 | 0.99 | 0.00 | −8.95 | −3.45
DnP—Figures: Immediate Visual Recall | Pre-training vs. Post-training | −1.61 | 0.53 | 0.02 | −3.08 | −0.15
DnP—Figures: Immediate Visual Recall | Pre-training vs. 3m follow-up | −1.82 | 0.56 | 0.01 | −3.39 | −0.26
DnP—Figures: Immediate Visual Recall | Pre-training vs. 6m follow-up | −2.24 | 0.74 | 0.03 | −4.32 | −0.17
DnP—Doors: Visual Recognition | Pre-training vs. Post-training | −1.81 | 0.37 | 0.00 | −2.83 | −0.78
DnP—Doors: Visual Recognition | Pre-training vs. 3m follow-up | −2.57 | 0.36 | 0.00 | −3.58 | −1.56
DnP—Doors: Visual Recognition | Pre-training vs. 6m follow-up | −2.93 | 0.43 | 0.00 | −4.11 | −1.74
DnP—Names: Verbal Recognition | Pre-training vs. Post-training | −1.29 | 0.51 | n.s. | −2.72 | 0.14
DnP—Names: Verbal Recognition | Pre-training vs. 3m follow-up | −1.72 | 0.51 | 0.01 | −3.15 | −0.29
DnP—Names: Verbal Recognition | Pre-training vs. 6m follow-up | −2.25 | 0.44 | 0.00 | −3.48 | −1.03
DnP—People: Delayed Verbal Recall | Pre-training vs. Post-training | −0.52 | 0.19 | n.s. | −1.05 | 0.02
DnP—People: Delayed Verbal Recall | Pre-training vs. 3m follow-up | −0.65 | 0.17 | 0.00 | −1.13 | −0.16
DnP—People: Delayed Verbal Recall | Pre-training vs. 6m follow-up | −0.84 | 0.20 | 0.00 | −1.40 | −0.29
Monitoring Accuracy | Pre-training vs. Post-training | −0.08 | 0.02 | 0.01 | −0.14 | −0.01
Monitoring Accuracy | Pre-training vs. 3m follow-up | −0.10 | 0.02 | 0.00 | −0.15 | −0.05
Monitoring Accuracy | Pre-training vs. 6m follow-up | −0.11 | 0.02 | 0.00 | −0.15 | −0.06
Global Monitoring | Pre-training vs. Post-training | −0.70 | 0.37 | n.s. | −1.73 | 0.34
Global Monitoring | Pre-training vs. 3m follow-up | −1.37 | 0.38 | 0.01 | −2.43 | −0.31
Global Monitoring | Pre-training vs. 6m follow-up | −1.34 | 0.35 | 0.00 | −2.31 | −0.36
Wrong Yes | Pre-training vs. Post-training | 0.91 | 0.28 | 0.01 | 0.13 | 1.69
Wrong Yes | Pre-training vs. 3m follow-up | 1.45 | 0.23 | 0.00 | 0.81 | 2.09
Wrong Yes | Pre-training vs. 6m follow-up | 1.60 | 0.27 | 0.00 | 0.84 | 2.37
Feeling of Confidence | Pre-training vs. Post-training | −0.25 | 0.05 | 0.00 | −0.38 | −0.11
Feeling of Confidence | Pre-training vs. 3m follow-up | −0.28 | 0.04 | 0.00 | −0.39 | −0.16
Feeling of Confidence | Pre-training vs. 6m follow-up | −0.31 | 0.05 | 0.00 | −0.43 | −0.18
MMQ d: Complex Strategies | Pre-training vs. Post-training | −0.53 | 0.13 | 0.00 | −0.89 | −0.17
MMQ: Complex Strategies | Pre-training vs. 3m follow-up | −0.45 | 0.10 | 0.00 | −0.72 | −0.17
MMQ: Complex Strategies | Pre-training vs. 6m follow-up | −0.49 | 0.11 | 0.00 | −0.78 | −0.19
Note. a WCST = Wisconsin Card Sorting Test. b n.s. = non-significant. c DnP = Doors and People. d MMQ = Multifactorial Metamemory Questionnaire.
Table 4. Group differences at 3-month follow-up.
Outcome | Mean Rank: EG a (n = 22) | Mean Rank: CG b (n = 21) | χ2 | p-Value
WCST c: Perseverative Responses | 26 | 18.18 | 4.33 | 0.038
Monitoring Accuracy: DnP d—Doors | 25.93 | 17.88 | 4.43 | 0.035
MMQ e—Complex Strategies | 26.61 | 17.17 | 6.12 | 0.013
MKEA f: Divided and Shifted Attention | 26.84 | 16.93 | 6.85 | 0.009
MKEA: Concentration | 26.66 | 17.12 | 6.36 | 0.012
Note. a EG = Experimental group. b CG = Control group. c WCST = Wisconsin Card Sorting Test. d DnP = Doors and People. e MMQ = Multifactorial Metamemory Questionnaire. f MKEA = Metacognitive Knowledge of Everyday Attention.
Table 5. Group differences at 6-month follow-up.
Outcome | Mean Rank: EG a (n = 21) | Mean Rank: CG b (n = 22) | χ2 | p-Value
WCST c: Total Correct | 27.21 | 17.02 | 7.19 | 0.007
WCST: Perseverative Errors | 16.62 | 27.14 | 7.69 | 0.006
WCST: Perseverative Responses | 15.76 | 27.95 | 10.75 | 0.001
DnP d—Figures I: Immediate Visual Recall | 25.48 | 18.68 | 4.05 | 0.044
Overall Monitoring Accuracy | 26.33 | 17.86 | 4.89 | 0.027
Overall Global Monitoring | 27.33 | 16.91 | 7.42 | 0.006
Overall Wrong Yes | 27.82 | 15.90 | 9.70 | 0.002
Monitoring Accuracy: WCST | 28.50 | 15.80 | 11.05 | <0.001
Global Monitoring: WCST | 27.90 | 16.36 | 9.12 | 0.003
Wrong Yes: WCST | 15.38 | 28.32 | 11.59 | <0.001
MMQ e—Simple Strategies | 25.95 | 18.23 | 4.10 | 0.043
Note. a EG = Experimental group. b CG = Control group. c WCST = Wisconsin Card Sorting Test. d DnP = Doors and People. e MMQ = Multifactorial Metamemory Questionnaire.
