Abstract
Stroke survivors often experience psychological difficulties, yet specialist provision is limited. The Wellbeing After Stroke (WAterS) study co-developed a nine-week, online Acceptance and Commitment Therapy (ACT) group programme designed to be delivered by non-specialist practitioners using structured, script-informed clinical protocols for each session. This study explored whether fidelity of delivery could be assessed, both to the clinical protocol (what was delivered) and to ACT therapeutic processes (how it was delivered). Eight practitioners were trained, and four delivered almost all sessions across three groups of four stroke survivors. Fidelity was assessed using a bespoke WAterS checklist, completed by practitioners after each session, and the ACT-Fidelity Measure (ACT-FM), completed by researchers rating a sub-set of recorded sessions. Practitioners delivered 92–100% of planned content, indicating high fidelity to protocol. ACT-FM ratings suggested some consistency with ACT processes, though there was variability across practitioners. These findings provide preliminary, proof-of-principle evidence that non-specialists can deliver a structured ACT-based group intervention with fidelity to protocol, and that both self-completed and observer-rated methods can feasibly assess fidelity. However, the small sample size means these results should be considered exploratory. The study highlights the potential value of these methods for informing training and fidelity assessment in future research.
1. Introduction
Stroke survivors frequently experience mental health difficulties, and many services cannot fully meet their needs (Campbell Burton et al., 2013; Hackett & Pickles, 2014; Stroke Association, 2018). Psychological interventions for those with mild-to-moderate needs can be delivered by non-specialist staff, with appropriate training and supervision (Intercollegiate Stroke Working Party, 2016; NHS Improvement-Stroke, 2011). One such intervention is Acceptance and Commitment Therapy (ACT) (Hayes, 2016), a trans-diagnostic, third-wave, cognitive behavioural therapy, with growing evidence for its use for psychological difficulties post-stroke (Graham et al., 2015; Majumdar & Morris, 2019; Niu et al., 2022; Rauwenhoff et al., 2022; Sathananthan et al., 2022).
The Wellbeing After Stroke study (WAterS) developed a nine-week, online, highly protocolised (script-informed) ACT-based intervention, and demonstrated the feasibility of delivering it to groups of stroke survivors (Patchwood et al., 2024). An adjunct training programme was designed to upskill practitioners without previous expertise in ACT to deliver the intervention under the weekly supervision of a clinical neuropsychologist. Each intervention group was run by two trained practitioners: a lead and a support. Intervention content covered ACT components including mindfulness meditations, values clarification, and goal setting, and was adapted to stroke with relevant examples and provision of audio-visual resources and a client handbook. See Supplementary Material (S1) for a more detailed intervention and training description. Findings of WAterS are reported elsewhere, including feasibility (Patchwood et al., 2024), acceptability to stroke survivors (Foote et al., 2025a), and acceptability to practitioners (Foote et al., 2025b). The current paper focuses on intervention fidelity: a multi-dimensional construct referring to the extent to which an intervention is delivered and received as intended and which can affect outcomes (Bellg et al., 2004; Carroll et al., 2007).
Previous studies investigating ACT interventions for people with acquired brain injury (including stroke) have measured fidelity of delivery to intervention content using checklists, either completed by the qualified therapist delivering the session (Rauwenhoff et al., 2022), or by an independent neuropsychologist rating a sub-set of sessions (Sathananthan et al., 2022). While fidelity studies like these are well established, ours is novel in examining a protocolised, online group-based ACT intervention delivered by non-specialists through workforce upskilling. Similar task-shifting and digitally delivered models are being explored internationally to extend access to psychological care in low-resource and post-acute rehabilitation settings (WHO, 2007; Patel et al., 2018).
The Acceptance and Commitment Therapy-Fidelity Measure (ACT-FM) (O’Neill et al., 2019) was developed as a concise tool to measure deliverer fidelity to ACT processes, by capturing observable ACT-consistent and ACT-inconsistent behaviours. The tool was designed to be usable across therapy contexts; however, it was field tested and validated on a 1:1 ACT session delivered by an ACT expert (O’Neill et al., 2019). To our knowledge, the ACT-FM has not previously been used for a post-stroke intervention.
The present paper reports a proof-of-principle exploration of fidelity of delivery in WAterS. Firstly, a bespoke checklist was developed to examine what the practitioners delivered, i.e., whether the components of the clinical protocol were covered. Secondly, the study piloted the use of the ACT-FM to consider how the intervention was delivered (i.e., whether practitioners’ behaviours aligned with ACT processes), in the novel context of a protocolised online, group intervention for stroke survivors. Research objectives were to explore the following:
- The reliability of practitioners self-monitoring their fidelity;
- Whether the intervention was delivered to protocol and reasons for any deviations;
- Whether the intervention showed indications of being delivered in a manner consistent with ACT processes.
2. Materials and Methods
Research ethics approval was secured from the University of Manchester (ref 2021-11134-18220; awarded 19 March 2021). Participant recruitment occurred between June 2021 and March 2022, with intervention delivery between April and October 2022. Analysis was completed in 2023.
Participants were those already recruited and trained as lead and support practitioners in the broader WAterS study (Patchwood et al., 2024). The eligibility criteria were as follows: employed as frontline practitioners by Stroke Association (a UK national charity specialising in stroke) for at least 6 months; capacity and willingness to participate with clearance and support from their line manager; experience and knowledge of facilitating groups of stroke survivors.
Two tools were used to meet the research objectives.

The WAterS fidelity tool (see Supplementary Material S2 for an example) was a bespoke tool developed by the authors to monitor delivery of components of the clinical protocol. Data collected included the following: (1) Whether practitioners delivered each content component of each session (112 components across all nine sessions). Scoring was a binary yes/no rating: ‘yes’ indicated that a component was fully delivered; ‘no’ indicated that it was partially delivered or not delivered. (2) Date, time, and attendance at each session, and practitioner judgement on session length (‘too short’, ‘about right’, or ‘too long’). (3) Practitioners’ reasons for not fully delivering any component (free-text question). (4) Any other comments on the session (free-text question).

The Acceptance and Commitment Therapy-Fidelity Measure (ACT-FM) (O’Neill et al., 2019) is a published tool designed to explore practitioner fidelity to ACT processes. The ACT-FM explores four areas of ACT delivery: Therapist Stance, Open Response Style, Aware Response Style, and Engaged Response Style. Each area is scored from 0 to 9 for consistency and 0 to 9 for inconsistency, giving a total consistency score and a total inconsistency score, each from 0 to 36. Two researchers (HF and EP) completed training on use of the tool and score calibration prior to use (training plans agreed with ACT-FM authors Lucy O’Neill and Christopher Graham, via emails exchanged in February 2021).
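To make the scoring arithmetic concrete, the ACT-FM aggregation described above can be sketched in a few lines of Python. This is our own illustration, not the published measure’s scoring software; the area keys are shorthand of our own devising:

```python
# Illustrative sketch of ACT-FM score aggregation (assumption: our own
# shorthand keys, not part of the published measure). Four areas, each
# rated 0-9 for ACT-consistent and 0-9 for ACT-inconsistent behaviour,
# so each total ranges from 0 to 36.
AREAS = ("therapist_stance", "open", "aware", "engaged")

def actfm_totals(consistency, inconsistency):
    """Return (total consistency, total inconsistency), each 0-36."""
    for scores in (consistency, inconsistency):
        if set(scores) != set(AREAS):
            raise ValueError("score all four ACT-FM areas")
        if any(not 0 <= v <= 9 for v in scores.values()):
            raise ValueError("each area is scored 0-9")
    return sum(consistency.values()), sum(inconsistency.values())
```

For example, a session rated 7, 4, 6, and 6 for consistency across the four areas would carry a total consistency score of 23 out of 36.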
The WAterS intervention consisted of nine weekly sessions (Patchwood et al., 2024). Three intervention groups ran with four stroke survivors in each group (N = 12 total) and one person’s spouse also attended in one group (Groups A, B, and C). Details on stroke survivor eligibility and baseline characteristics are published elsewhere (Patchwood et al., 2024), but to summarise: stroke survivors were at least 4 months post-stroke; living in community settings; and self-reporting some levels of psychological distress/challenges adjusting. Each group was delivered by two practitioners (one lead and one support). All intervention sessions were video recorded.
To test self-completion of the new WAterS fidelity tool, lead and support practitioners were trained on its use and asked to independently complete the tool immediately following each session (to aid recall), and email this to the research team within 24 h of each session finishing. They were prompted by an email reminder within this time period. This was carried out for all sessions for all three groups.
The researchers used the video recordings of the sessions to collect data from a sub-set of nine sessions using both fidelity tools. Sessions were recorded via the video-conferencing platform. Recordings were stored on secure, encrypted servers accessible only to named researchers for fidelity analysis, in line with ethical approval. All participants gave informed consent for recordings. Table 1 summarises how the two fidelity tools were used to address each research objective. Due to time and resource limitations and the exploratory nature of the study, we felt it sufficient to include sessions from Groups A and B only. When two independent researchers rated sessions on the ACT-FM, they gave a score for each session as a whole, based on the combined behaviour of both the lead and the support practitioners within the session. There was not an individual rating per practitioner in the session.
Table 1.
Summary of how the tools were used to answer each research objective.
To explore the reliability of practitioners’ self-monitoring of fidelity, inter-rater reliability on completion of the WAterS fidelity tool was calculated, first between the lead and support practitioners, and second between the lead practitioner and the researcher, using the Prevalence-Adjusted and Bias-Adjusted Kappa (PABAK) statistic (Chen et al., 2009). Sufficient reliability was set a priori as at least good agreement (PABAK of 0.61 or higher) (Altman, 1991).
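For the binary yes/no component ratings used here, PABAK has a simple closed form: with k rating categories, PABAK = (k·p_o − 1)/(k − 1), which for k = 2 reduces to 2·p_o − 1, where p_o is the observed proportion of agreement. A minimal Python sketch of the statistic and the Altman (1991) interpretation bands (our own illustration, not the study’s analysis code):

```python
def pabak(rater_a, rater_b):
    """Prevalence- and Bias-Adjusted Kappa for two raters of binary items.

    For two categories, PABAK = 2 * p_o - 1, where p_o is the observed
    proportion of agreement between the raters.
    """
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("ratings must be paired and non-empty")
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
    return 2 * p_o - 1

def altman_band(kappa):
    """Map a kappa value onto Altman's (1991) agreement bands."""
    if kappa > 0.80:
        return "very good"
    if kappa > 0.60:
        return "good"
    if kappa > 0.40:
        return "moderate"
    if kappa > 0.20:
        return "fair"
    return "poor"
```

For instance, two raters agreeing on 106 of 112 components gives p_o ≈ 0.946 and PABAK ≈ 0.89, falling in the ‘very good’ band.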
To explore if the intervention was delivered to protocol and summarise reasons for protocol deviations, data from the WAterS fidelity tool were analysed and summarised in several ways: the percentage of content items delivered (per group and totalled for all three groups); frequency and duration of sessions, including summarising practitioners’ judgement on session length; and the total percentage of sessions attended by stroke survivors across all three groups. For the free-text questions, both the lead and support practitioners’ data were analysed to add insight to the quantitative data.
Data analysis used the researcher ratings available for the sub-set of nine sessions, and, if inter-rater reliability on the tool had been established, used the lead practitioner ratings for other sessions.
To explore whether the WAterS intervention showed indications of being delivered in a manner consistent with ACT therapeutic processes, the ACT-FM (O’Neill et al., 2019) was rated by the researchers observing videos of the sub-set of nine intervention sessions (as per Table 1). Published guidelines for this measure provide neither cut-off scores indicating an adequate level of consistency or inconsistency nor guidance on how to combine the scores. Therefore, no a priori threshold was set for this proof-of-principle study. Scores were calculated for each group, both overall (total and mean) and per area.
3. Results
3.1. Practitioners and Delivery
Eight practitioners were trained and recruited as part of the broader WAterS study; five were leads and three were supports. In addition to meeting the eligibility criteria, all lead practitioners held a Level 4 counselling qualification. All eight trained practitioners were female, with a median age of 54.5 years (range 34 to 61). The mean number of years working for the Stroke Association was 5.3 (range 1 to 15 years). We deliberately trained more practitioners than were needed, as a contingency in case of sickness or absence. However, almost all sessions across Groups A, B, and C were delivered by two of the five trained lead practitioners alongside two of the three support practitioners. There were only three practitioner absences across the three groups (27 sessions), each covered by an alternative trained practitioner. Overall, seven of the eight trained practitioners delivered at least one intervention session, and all seven contributed data to the present study.
3.2. Objective 1: Reliability of Practitioners Self-Monitoring Their Fidelity
The inter-rater reliability of the WAterS fidelity tool suggests that agreement in Group A was ‘very good’, both between lead and support practitioners (PABAK 0.89; 95% CI 0.81 to 0.98) and between lead practitioner and researcher (PABAK 0.90; 95% CI 0.79 to 1.00). Agreement in Group C between lead and support was ‘perfect’ (i.e., ratings did not vary) and was not calculated between lead and researcher (as Group C sessions were not included in the researcher-rated sub-set). Agreement in Group B was ‘good’, both between lead and support practitioners (PABAK 0.80; 95% CI 0.69 to 0.91) and between lead practitioner and researcher (PABAK 0.65; 95% CI 0.44 to 0.77). Practitioners were therefore able to complete the tool reliably, with at least ‘good’ agreement in all cases, and analyses of WAterS fidelity tool data for objective two could be completed as planned.
3.3. Objective 2: Intervention Delivered to Protocol and Reasons for Any Deviations
In consideration of whether the intervention was delivered to protocol, analysis of the WAterS fidelity tool found that Group A delivered 92% (103/112) of all protocolised components, Group B delivered 96% (108/112), and Group C delivered 100% (112/112). In total, 96% (323/336) of all protocol components were delivered across the three groups.
Free-text practitioner comments stated that having two practitioners per group and having the group supervision sessions (held weekly with a clinical neuropsychologist) supported the successful delivery of the intervention. Technological issues were occasionally reported as negatively impacting (but not preventing) the delivery of components, e.g., ‘my internet dropped at the end of the session and [support practitioner] took over’ [ID05]. Suggestions to improve the clinical protocol were given, such as moving the order of activities to better suit timings.
All sessions happened weekly as planned, with 98% attendance across the three groups (two stroke survivors missed one session each; one due to being away and one due to a power outage).
The sessions were all planned to last 120 min. Overall, the mean length of delivered sessions was 128 min (shortest: 60 min; longest: 150 min), and mean duration varied by session number, from 95 to 145 min (see Table 2). Mean session length was shorter than planned for sessions one and three, and longer than planned for all other sessions.
Table 2.
Actual intervention session duration across all groups (planned to be 120 min).
Of the twenty-seven lead practitioner ratings of duration (nine sessions for each of the three groups), twenty sessions were rated as ‘about right’, six as ‘too short’, and one (Group C, session 1) as ‘too long’. The free-text responses give some insight into why practitioners rated the time available for a session as ‘about right’ despite a longer duration than anticipated. For example, ‘we overran by half an hour but there was quite a lot of reflecting and talking. I don’t think the session is too short.’ [ID01].
When practitioners did rate a session as ‘too short’, comments indicate concern that components were not sufficiently explained or discussed, e.g., ‘we rushed through the home practice for next week’ [ID07] and ‘more time could be given to such an emotive topic’ [ID05]. Lack of time was the most common reason practitioners gave for partially delivering or not delivering an intervention component, suggesting they were trimming content to avoid even longer sessions. For example, ‘I was mindful of the time and although I invited feedback and checked for understanding, I felt that I skipped over this quite quickly.’ [ID05].
Other comments made by the practitioners related to whether the intervention was received with fidelity by the stroke survivors. Most of these comments were positive, for example, ‘[stroke survivors] engaged well in the session both with the facilitators [practitioners] and each other’ [ID01], and reported the stroke survivors engaging in the homework, e.g., ‘all [stroke survivors] are engaging well in the home practice’ [ID07]. Conversely, some comments noted that stroke survivors struggled with understanding (‘one [stroke survivor] could not grasp the concept’ [ID01]) and with homework engagement (‘I don’t feel that many [stroke survivors] are actively engaging with much of the home practice’ [ID05]). Post-stroke difficulties presented a barrier to engagement on occasion, e.g., one stroke survivor ‘could physically write but found it difficult to organise [their] thoughts and language to record’ [ID06].
3.4. Objective 3: Intervention Delivered in a Manner Consistent with ACT Processes
In relation to whether the WAterS intervention showed indications of being delivered in a manner consistent with ACT processes (objective three), Table 3 shows researcher scores on the ACT-FM (O’Neill et al., 2019). The data are limited but the total in/consistency scores suggest that both Group A and B practitioners may have been more ACT consistent than they were inconsistent. However, there is a different profile of scores, with Group A having both higher consistency and lower inconsistency scores than Group B. Group B practitioners were observed to stick less closely to the scripts given in the protocol and scored as less consistent with ACT during the unscripted sections of the protocol.
Table 3.
Acceptance and Commitment Therapy in/consistency scores for Groups A and B.
The Group A results are more ACT consistent than inconsistent across all subscales, as are the Group B results, with the exception of the ‘Open’ subscale. This subscale also has the highest inconsistency score in Group A results.
4. Discussion
This proof-of-principle study provides insights to strengthen the evidence base for measuring fidelity of delivery to protocolised ACT interventions within acquired brain injury populations. Non-psychologist practitioners, trained and supervised by a clinical neuropsychologist, were reliable in self-monitoring fidelity. They delivered an online, ACT-based, script-informed group intervention to stroke survivors according to clinical protocol, meeting the 80% threshold recommended by Borrelli (2011) and comparing favourably to previous findings from a single group intervention (Sathananthan et al., 2022) and four individual cases (Rauwenhoff et al., 2022). Intervention dose was as planned, and attendance was high. Almost all intervention components were delivered, but most sessions were slightly longer than planned. In both groups, the total consistency scores exceeded the inconsistency scores, suggesting some fidelity to ACT therapeutic processes, but there was variability in fidelity across practitioners. As a proof-of-principle study, these findings are based on a deliberately small and tightly defined sample and data. As such, these results should be seen as exploratory and hypothesis-generating rather than definitive.
The ACT-FM (O’Neill et al., 2019) was selected because, to the best of our knowledge, it is the only validated tool developed by experts to explore fidelity in ACT interventions. However, it requires further psychometric evaluation and lacks guidance on score interpretation. It was found to be usable in a novel context (a protocolised, online, group intervention for stroke survivors), but adaptations could improve fit to context. For example, certain WAterS protocolised sessions had a specific focus, so opportunities for scoring on all aspects of the measure were limited. Tailoring the measure to different intervention contexts may be beneficial and has been carried out in studies with other populations by adapting scoring to reflect didactic delivery (Shepherd et al., 2022) and weighting scores differently per session to reflect the session’s focus (Reeve et al., 2021).
The guidelines for the ACT-FM (O’Neill et al., 2019) do not provide cut-off scores as to what is adequate consistency with ACT processes. A recently published trial protocol of ACT for a multiple sclerosis population proposed cut-off scores for low/high fidelity, which have yet to be implemented or tested (Giovannetti et al., 2022). When applying these exploratory cut-offs to the present study’s results, practitioners in Group B would be considered to have ‘low fidelity’ and practitioners in Group A ‘high fidelity’. This may be due to variability in how closely the practitioners stuck to the scripts provided in the practitioner handbook, with Group B practitioners using the scripts less in general. This indicates that the WAterS training may require development to enable all practitioners to deliver the intervention with adequate fidelity to ACT processes and to gain more comfort with the use of scripts. However, this variation did not have a clear impact on how the groups were received, with participating stroke survivors reporting all groups as acceptable and valuable (Foote et al., 2025a).
The WAterS fidelity tool was developed and used for the first time in this study. It was developed specifically for the content, session structure, and therapeutic elements of this particular online ACT intervention, so there is no off-the-shelf tool that would have captured our protocol nuances (Ginsburg et al., 2021). Practitioner comments were included descriptively in this paper to provide useful illustrative insight into practical delivery issues (e.g., time pressures and technology use), but systematic qualitative analysis was beyond the study’s scope and resources.
The fidelity tool is self-completed by practitioners, which can be less resource-intensive than external rating and may be practicable for real-world implementation. However, we acknowledge that self-completed fidelity checklists carry the risk of response bias or recall error, particularly when used without external verification (Breitenstein et al., 2010). To mitigate these risks, researchers rated a sub-sample of the sessions, and reliability was found to be good. Tailored fidelity assessment tools can inform training, supervision, and service-level quality assurance, providing a practical link to clinical service delivery (Breitenstein et al., 2010), but this should be explored robustly in future research.
Researcher ratings of the practitioners were collected from two of the three groups, due to limited resources. Practitioners were delivering the intervention for the first time in these groups, so we do not know whether levels of fidelity would change over time. Lead practitioners all had a counselling qualification (although this was not an eligibility criterion), which may have affected their levels of fidelity. This study focused primarily on fidelity of delivery, with only incidental data on fidelity of receipt. However, other publications demonstrate that the WAterS intervention was feasible to deliver (Patchwood et al., 2024) and acceptable to stroke survivors (Foote et al., 2025a) and practitioners (Foote et al., 2025b).
Beyond its proof-of-principle value, our findings may be used to inform pragmatic ways to support delivery at scale. Incorporating a brief, session-specific fidelity checklist into practitioner training and supervision can promote self-reflection, consolidate skills, and provide quality assurance signals for service leads—approaches long recommended in fidelity guidance for behavioural and psychotherapy trials (Bellg et al., 2004; Borrelli, 2011). In services where specialist psychology is limited, structured fidelity monitoring can help maintain intervention integrity when upskilling non-specialist workforces (task-sharing), provided it is paired with routine supervision (WHO, 2023). Embedding periodic observational checks (e.g., sampled ratings) within supervision is feasible and aligns with implementation drivers linking adherence to better outcomes, e.g., higher treatment integrity is generally associated with improved clinical outcomes (Schoenwald et al., 2000; Esposito et al., 2024). Future studies could test whether fidelity checks of this nature, integrated into standard supervision, improve skill retention and subsequent clinical outcomes in larger, more diverse brain injury populations (Fixsen et al., 2019).
Although the present study was conducted in the UK, the model outlined in WAterS could be adapted for diverse contexts. In low-resource or non-Western settings, the task-sharing approach may align with the World Health Organization’s mhGAP guidance for extending psychological care through supervised non-specialists (WHO, 2023). Adaptation would require attention to local workforce structures, cultural contexts, and access to reliable digital platforms (Patel et al., 2018; Kola et al., 2021). Future research could utilise co-design with local practitioners and stroke survivors to ensure cultural relevance while maintaining fidelity to ACT principles.
Further research on the ACT-FM (O’Neill et al., 2019) with a larger sample of practitioners might strengthen our promising findings in the current UK context. This could be used to establish cut-off scores for the delivery of a protocolised intervention with adequate fidelity to ACT processes and explore whether adaptations to the measure would be beneficial. Further research to optimise the WAterS training would be beneficial, aiming to ensure the training is successful across practitioners in enabling the achievement of an adequate level of fidelity to ACT processes, particularly in having an ‘open response style’. Including assessment of practitioner understanding and skill during the WAterS training course could support consistency in training outcomes across practitioners. The trained practitioners reported (Foote et al., 2025b) that opportunities to experience ACT for themselves and to role-play delivery were beneficial. Some reported that they would appreciate more time spent in training on other practical components of delivery, e.g., use of technology and scripts. Practitioners self-rating ACT consistency could lead to increased self-monitoring, and potentially increase fidelity. Future research (WAterS-2 (University of Manchester, 2024)) has exposed practitioners to the detail of the ACT-FM, aiming to support practitioner understanding of ACT consistency in practice, although the ACT-FM rating is still planned to be carried out by a third-party observer. Future research exploring whether interventions are received with fidelity may enable intervention optimisation and support implementation (Toomey et al., 2020).
5. Conclusions
In conclusion, this proof-of-principle study demonstrates that non-psychologist practitioners can deliver a structured, online, ACT-based group for stroke survivors with high fidelity following training and clinical supervision. Findings provide preliminary support for the feasibility of a scalable, task-shared model of psychological care in stroke rehabilitation, consistent with global recommendations (WHO, 2023; Patel et al., 2018). Fidelity monitoring using a bespoke checklist and the ACT-FM was feasible and informative, aligning with best-practice guidance that fidelity assessment can enhance training, supervision, and service-level quality assurance (Bellg et al., 2004; Borrelli, 2011; Breitenstein et al., 2010). Although exploratory and based on a small UK sample, findings highlight the potential to adapt the WAterS model for diverse and low-resource settings, with appropriate cultural and workforce tailoring (Kola et al., 2021). Future research should examine how fidelity assessment and training can support high-quality delivery and equitable access to post-stroke psychological support (Fixsen et al., 2019; Toomey et al., 2020).
Supplementary Materials
The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/psycholint7040100/s1, S1 presents a full description of the WAterS intervention and practitioner training programme, structured using the Template for Intervention Description and Replication (TIDieR) framework (see references within supplement) (Cotterill et al., 2018; Hoffmann et al., 2014; Hill et al., 2017a, 2017b; Foote, 2024). S2 provides an example of the WAterS fidelity checklist used to capture delivery of protocol components; a bespoke version was created for each of the nine sessions, and the Session 1 checklist is shown as an illustration.
Author Contributions
Conceptualization, E.P., H.F., A.B., and S.C.; methodology, E.P., H.F., A.B., and S.C.; formal analysis, H.F., E.P., S.C., and A.B.; writing—original draft preparation, H.F. and E.P.; writing—review and editing, H.F., E.P., S.C., and A.B.; supervision, A.B., S.C., and E.P.; funding acquisition, E.P. and A.B. All authors have read and agreed to the published version of the manuscript.
Funding
This independent research was funded by a Stroke Association Postdoctoral Fellowship Award (Ref SA PDF 18100024) and supported by a University of Manchester Research Impact Scholarship (JACS Code: C800). The views expressed are those of the authors and not necessarily those of the funders. Funders had no role in study design, execution, analysis, or results interpretation.
Institutional Review Board Statement
The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of the University of Manchester (ref 2021-11134-18220) on 19 March 2021.
Informed Consent Statement
Informed consent was obtained from all participants involved in the study.
Data Availability Statement
Dataset available on request from the authors.
Acknowledgments
The authors extend thanks to participants and the Clinical Lead for the WAterS study, Geoff Hill.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| ACT | Acceptance and Commitment Therapy |
| WAterS | Wellbeing After Stroke |
| ACT-FM | ACT-Fidelity Measure |
| PABAK | Prevalence-Adjusted and Bias-Adjusted Kappa |
References
- Altman, D. G. (1991). Practical statistics for medical research. Chapman & Hall.
- Bellg, A. J., Borrelli, B., Resnick, B., Hecht, J., Minicucci, D. S., Ory, M., Ogedegbe, G., Orwig, D., Ernst, D., & Czajkowski, S. (2004). Enhancing treatment fidelity in health behavior change studies: Best practices and recommendations from the NIH Behavior Change Consortium. Health Psychology, 23(5), 443–451.
- Borrelli, B. (2011). The assessment, monitoring, and enhancement of treatment fidelity in public health clinical trials. Journal of Public Health Dentistry, 71(Suppl. S1), S52–S63.
- Breitenstein, S. M., Gross, D., Garvey, C. A., Hill, C., Fogg, L., & Resnick, B. (2010). Implementation fidelity in community-based interventions. Research in Nursing & Health, 33(2), 164–173.
- Campbell Burton, C. A., Murray, J., Holmes, J., Astin, F., Greenwood, D., & Knapp, P. (2013). Frequency of anxiety after stroke: A systematic review and meta-analysis of observational studies. International Journal of Stroke, 8(7), 545–559.
- Carroll, C., Patterson, M., Wood, S., Booth, A., Rick, J., & Balain, S. (2007). A conceptual framework for implementation fidelity. Implementation Science, 2, 40.
- Chen, G., Faris, P., Hemmelgarn, B., Walker, R. L., & Quan, H. (2009). Measuring agreement of administrative data with chart data using prevalence unadjusted and adjusted kappa. BMC Medical Research Methodology, 9, 5.
- Cotterill, S., Knowles, S., Martindale, A. M., Elvey, R., Howard, S., Coupe, N., Wilson, P., & Spence, M. (2018). Getting messier with TIDieR: Embracing context and complexity in intervention reporting. BMC Medical Research Methodology, 18, 12.
- Esposito, G., Di Maro, A., & Passeggia, R. (2024). The relationship between treatment integrity and outcome in group psychotherapy: A systematic review. Clinical Psychology & Psychotherapy, 31(1), e2952.
- Fixsen, D. L., Van Dyke, M., & Blase, K. A. (2019). Implementation science: Fidelity predictions and outcomes. Active Implementation Research Network. Available online: https://whale-accordion-cssw.squarespace.com/s/Implementation-Science-FidelityPredictionsOutcomes-5jyh.pdf (accessed on 16 October 2025).
- Foote, H. (2024). Acceptance and commitment therapy to support psychological adjustment after stroke: Investigating acceptability, fidelity and measurement [Doctoral dissertation, The University of Manchester].
- Foote, H., Bowen, A., Cotterill, S., & Patchwood, E. (2025a). An online, group Acceptance and Commitment Therapy is acceptable to stroke survivors: A qualitative interview study. Neuropsychological Rehabilitation, 35, 1865–1883.
- Foote, H., Bowen, A., Cotterill, S., & Patchwood, E. (2025b). The acceptability of training to deliver online, group-based Acceptance and Commitment Therapy to stroke survivors: The experience of third-sector practitioners. Neuropsychological Rehabilitation, in press. [Google Scholar]
- Ginsburg, L. R., Hoben, M., Easterbrook, A., Anderson, R. A., Estabrooks, C. A., & Norton, P. G. (2021). Fidelity is not easy! Challenges and guidelines for assessing fidelity in complex interventions. Trials, 22, 372. [Google Scholar] [CrossRef] [PubMed]
- Giovannetti, A. M., Pakenham, K. I., Presti, G., Quartuccio, M. E., Confalonieri, P., Bergamaschi, R., Grobberio, M., Di Filippo, M., Micheli, M., Brichetto, G., Patti, F., Copetti, M., Kruger, P., & Solari, A. (2022). A group resilience training program for people with multiple sclerosis: Study protocol of a multi-centre cluster-randomized controlled trial (multi-READY for MS). PLoS ONE, 17(5), e0267245. [Google Scholar] [CrossRef]
- Graham, C. D., Gillanders, D., Stuart, S., & Gouick, J. (2015). An acceptance and commitment therapy (ACT)-based intervention for an adult experiencing post-stroke anxiety and medically unexplained symptoms. Clinical Case Studies, 14(2), 83–97. [Google Scholar] [CrossRef]
- Hackett, M. L., & Pickles, K. (2014). Part I: Frequency of depression after stroke: An updated systematic review and meta-analysis of observational studies. International Journal of Stroke, 9(8), 1017–1025. [Google Scholar] [CrossRef]
- Hayes, S. (2016). Acceptance and commitment therapy, relational frame theory, and the third wave of behavioral and cognitive therapies—Republished article. Behavior Therapy, 47(6), 869–885. [Google Scholar] [CrossRef] [PubMed]
- Hill, G., Hynd, N., Price, J., Evans, S., & Brechin, D. (2017a). Living well with neurological conditions: An eight-week series of group workshops informed by acceptance and commitment therapy (ACT). South Tees NHS Foundation Trust. [Google Scholar]
- Hill, G., Hynd, N., Price, J., Evans, S., & Brechin, D. (2017b). Living well with neurological conditions: Evaluation of an ACT-informed group intervention for psychological adjustment in outpatients with neurological problems. The Neuropsychologist, 4, 59–63. [Google Scholar]
- Hoffmann, T. C., Glasziou, P. P., Boutron, I., Milne, R., Perera, R., Moher, D., Altman, D. G., Barbour, V., Macdonald, H., Johnston, M., Lamb, S. E., Dixon-Woods, M., McCulloch, P., Wyatt, J. C., Chan, A.-W., & Michie, S. (2014). Better reporting of interventions: Template for intervention description and replication (TIDieR) checklist and guide. The BMJ, 348, g1687. [Google Scholar] [CrossRef] [PubMed]
- Intercollegiate Stroke Working Party. (2016). National clinical guideline for stroke: Fifth edition. Royal College of Physicians. [Google Scholar] [CrossRef]
- Kola, L., Kohrt, B. A., Hanlon, C., Naslund, J. A., Sikander, S., Balaji, M., Benjet, C., Cheung, E. Y. L., Eaton, J., Gonsalves, P., Hailemariam, M., Luitel, N. P., Machado, D. B., Misganaw, E., Omigbodun, O., Roberts, T., Salisbury, T. T., Shidhaye, R., Sunkel, C., … Patel, V. (2021). COVID-19 mental health impact and responses in low-income and middle-income countries: Reimagining global mental health. The Lancet Psychiatry, 8(6), 535–550. [Google Scholar] [CrossRef] [PubMed]
- Majumdar, S., & Morris, R. (2019). Brief group-based acceptance and commitment therapy for stroke survivors. British Journal of Clinical Psychology, 58(1), 70–90. [Google Scholar] [CrossRef] [PubMed]
- NHS Improvement-Stroke. (2011). Psychological care after stroke: Improving stroke services for people with cognitive and mood disorders. Available online: www.improvement.nhs.uk/stroke (accessed on 11 September 2025).
- Niu, Y., Sheng, S., Chen, Y., Ding, J., Li, H., Shi, S., Wu, J., & Ye, D. (2022). The efficacy of group acceptance and commitment therapy for preventing post-stroke depression: A randomized controlled trial. Journal of Stroke and Cerebrovascular Diseases, 31(2), 106225. [Google Scholar] [CrossRef] [PubMed]
- O’Neill, L., Latchford, G., McCracken, L. M., & Graham, C. D. (2019). The development of the Acceptance and Commitment Therapy Fidelity Measure (ACT-FM): A delphi study and field test. Journal of Contextual Behavioral Science, 14, 111–118. [Google Scholar] [CrossRef]
- Patchwood, E., Foote, H., Vail, A., Cotterill, S., Hill, G., & Bowen, A. (2024). Wellbeing After Stroke (WAterS): Feasibility testing of a co-developed acceptance and commitment therapy intervention to support psychological adjustment after stroke. Clinical Rehabilitation, 38(7), 979–989. [Google Scholar] [CrossRef] [PubMed]
- Patel, V., Saxena, S., Lund, C., Thornicroft, G., Baingana, F., Bolton, P., Chisholm, D., Collins, P. Y., Cooper, J. L., Eaton, J., Herrman, H., Herzallah, M. M., Huang, Y., Jordans, M. J. D., Kleinman, A., Medina-Mora, M. E., Morgan, E., Niaz, U., Omigbodun, O., … Unützer, J. (2018). The Lancet Commission on global mental health and sustainable development. The Lancet, 392(10157), 1553–1598. [Google Scholar] [CrossRef] [PubMed]
- Rauwenhoff, J. C. C., Bol, Y., Peeters, F., van den Hout, A. J. H. C., Geusgens, C. A. V., & van Heugten, C. M. (2022). Acceptance and commitment therapy for individuals with depressive and anxiety symptoms following acquired brain injury: A non-concurrent multiple baseline design across four cases. Neuropsychological Rehabilitation, 33(6), 1018–1048. [Google Scholar] [CrossRef]
- Reeve, A., Moghaddam, N., Tickle, A., & Young, D. (2021). A brief acceptance and commitment intervention for work-related stress and burnout amongst frontline homelessness staff: A single case experimental design series. Clinical Psychology & Psychotherapy, 28(5), 1001–1019. [Google Scholar] [CrossRef]
- Sathananthan, N., Dimech-Betancourt, B., Morris, E., Vicendese, D., Knox, L., Gillanders, D., Das Nair, R., & Wong, D. (2022). A single-case experimental evaluation of a new group-based intervention to enhance adjustment to life with acquired brain injury: VaLiANT (valued living after neurological trauma). Neuropsychological Rehabilitation, 32(8), 2170–2202. [Google Scholar] [CrossRef]
- Schoenwald, S. K., Henggeler, S. W., Brondino, M. J., & Rowland, M. D. (2000). Multisystemic therapy: Monitoring treatment fidelity. Family Process, 39(1), 83–103. [Google Scholar] [CrossRef] [PubMed]
- Shepherd, K., Golijani-Moghaddam, N., & Dawson, D. L. (2022). ACTing towards better living during COVID-19: The effects of Acceptance and Commitment therapy for individuals affected by COVID-19. Journal of Contextual Behavioral Science, 23, 98–108. [Google Scholar] [CrossRef] [PubMed]
- Stroke Association. (2018). Lived experience of stroke report—Chapter 1. Available online: https://www.stroke.org.uk/lived-experience-of-stroke-report/chapter-1-hidden-effects-of-stroke (accessed on 11 September 2025).
- Toomey, E., Hardeman, W., Hankonen, N., Byrne, M., McSharry, J., Matvienko-Sikar, K., & Lorencatto, F. (2020). Focusing on fidelity: Narrative review and recommendations for improving intervention fidelity within trials of health behaviour change interventions. Health Psychology and Behavioral Medicine, 8(1), 132–151. [Google Scholar] [CrossRef] [PubMed]
- University of Manchester. (2024). Wellbeing After Stroke-2 (WAterS-2) study website. Available online: https://sites.manchester.ac.uk/waters2/ (accessed on 11 September 2025).
- WHO. (2007, December 31). Task shifting: Rational redistribution of tasks among health workforce teams: Global recommendations and guidelines. World Health Organization. Available online: https://iris.who.int/items/b5c624db-537c-4e80-83f2-269f063fb8bb (accessed on 16 October 2025).
- WHO. (2023). Mental Health Gap Action Programme (mhGAP) guideline for mental, neurological and substance use disorders. World Health Organization Guidelines Review Committee, Mental Health, Brain Health and Substance Use (MSD). Available online: https://www.who.int/publications/i/item/9789240084278 (accessed on 16 October 2025).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).