2. Method: The Stages in the Journey
2.1. Planning
The authors of this article first came together in 2023 via Zoom from Australia, Northern Ireland (UK), and the Republic of Ireland, to discuss whether we would like to work on a project that expanded our interest in inclusive research to inclusive evaluation and regulation activities. We are a group of three university researchers (traditionally employed by our respective universities) and two researchers who identify as having lived experience. All five of us have had experience of, and membership in, different inclusive research networks. We had previously collaborated on an article published in the first
Social Sciences Special Issue on inclusive research (
https://doi.org/10.3390/socsci11100483 (accessed on 26 September 2025)). For this current article we agreed to write up the lessons learned from collaborating on what we called the inclusive evaluation project.
Initially, our team discussions were guided by a set of accessible slides covering the following questions: Are there articles we should read before we start? Who should be interviewed and by whom? How will we go about discussing what people tell us? How will we write up what people say?
Authors 1, 2, 3, and 4 also met face-to-face in Sydney before the interviewing began and after ethics applications drafted by authors 1 and 5 had been approved in 2024 by the University of Sydney Human Ethics Committee (2024/037, approved on 30 January 2024) and the Trinity College Dublin Ethics Committee (1130, approved on 2 April 2024).
2.2. Checking the Literature
Preliminary scanning by the first author in 2023, at the outset of the project, indicated that a formal review of the literature on inclusive service evaluation activities would not be fruitful. We used Scopus, Web of Science, and Google Scholar for the peer-reviewed literature. We applied a range of descriptors that focused on people with intellectual disabilities being included as evaluators/regulators in service settings, including search terms such as “people with disability and human services; people with intellectual disabilities and service evaluators; people with disabilities and evaluation and human services; evaluation of services for people with intellectual disabilities and consumers as evaluators; adults with intellectual disabilities and service evaluation teams and human services.”
Three articles of interest emerged, with one describing a system in New Zealand where consumers of services, including people with intellectual disabilities, had been trained by the
Standards and Monitoring Services (SAMS) (
n.d.), a non-governmental agency, to join integrated evaluation teams (
Capie and Ahrens 1996). Using a set of standards developed by SAMS, people with intellectual disabilities joined family members and professional SAMS staff to visit a disability service, observing, interviewing, and feeding back outcomes of the evaluation to management as recommendations for change. The work of SAMS was developmental in New Zealand, with other auditing groups, in more recent times, including team members of people with disabilities and family/whānau working in partnership with professional evaluation staff (
New Zealand Ministry of Health n.d.).
A second article by
Meas (
2003) challenged organisations to involve people with intellectual disabilities in the evaluation of services beyond being a source of information, through undertaking interviews and identifying service improvements.
In 2003, Inclusion Europe published a document on how to achieve quality in service evaluation that advocated for service users to no longer be viewed in a passive role but one where they would be “viewed as strong consumers who actively evaluate and influence the quality of their support—of which they expect that it meets their needs and wishes” (
Inclusion Europe 2003, p. 1). They argued that service systems needed to be built on the perspectives of their users.
Of note was that all three articles were published over 20 years ago and preceded the United Nations Convention on the Rights of Persons with Disabilities (
United Nations 2006).
The ‘grey literature’ was more forthcoming in documenting the involvement of people with lived experience of intellectual disabilities in the evaluation/regulation of services. For example, a system of Quality Checking was evident in the UK, where people with intellectual disabilities were trained, supported, and partnered on a one-to-one basis with people without disabilities to check the quality of people’s lives.
Quality Checkers, as an evaluation system with its related tools, was developed by Choice Support, a UK Not-For-Profit (NFP) service (
Choice Support n.d.), and then commissioned by the Care Commission, UK, to run the evaluation programme throughout designated parts of the UK. In 2018, Choice Support was contracted to work with Achieve Australia to pilot a Quality Checker Programme for use locally (
Achieve Australia n.d.). Most of those trained as Quality Checkers had previously been trained as inclusive researchers through the Centre for Disability Studies at the University of Sydney. Following the outcomes of the pilot phase in 2021, Achieve Australia revised the programme, aligning it more closely with Australian culture and the Australian Disability Standards (
Australian Human Rights Commission n.d.) and rebranding it as Quality Champions (
Achieve Australia 2024, DSC panel).
Overall, the initial search yielded little documentation of people with intellectual disabilities acting as service evaluators and/or service regulators, underscoring the need for an exploratory study. However, several studies of relevance have been published from 2023 onwards and are spotlighted later in the discussion.
2.3. Inviting People to Join Us for a Conversation
We aimed to obtain a purposeful sample with a representative in each of the four categories outlined below, which was achieved through the authors’ own networks across four jurisdictions. A total of 13 informants agreed to participate (see
Table 1). (Note: The codes will be used to identify their quotations, which are cited in the Findings section.)
Overall, 11 of the 13 participants had had some involvement in an aspect of inclusive evaluation, including being on an evaluation team, organising such teams, inviting critique of disability resources/projects by people with intellectual disabilities, and providing funding for inclusive evaluation activities. The other two participants had only been involved in research with people with intellectual disabilities as part of an inclusive research team.
In terms of categories of participants, the breakdown was as follows: people with intellectual disabilities who had been involved in research and/or evaluation (n = 2); professionals who had supported people with intellectual disabilities performing either inclusive research and/or evaluation (n = 4); regulators (n = 3); and support service directors/managers (n = 4).
2.4. Procedure
The interviews were online and mostly with one interviewee, except for one group interview with three managers of different support services and one duo in which a support professional with a research background and a researcher/evaluator with intellectual disabilities were interviewed together. The interviews were semi-structured, covering any known examples of people with intellectual disabilities being involved in evaluation/regulation activities; the advantages, challenges, and risks of inclusive evaluator/regulator roles being filled by people with intellectual disabilities; and first steps in getting government/Not-For-Profit (NFP) services to fund inclusive evaluation. In total, eight individual interviews, one duo, and one focus group were spread across four countries as follows: 4 interviews and 1 focus group in Northern Ireland; 3 interviews in Australia; 1 in New Zealand; and 1 in the Republic of Ireland. Interviews were conducted by the authors, mainly within their geographic base, and took place from mid-2024 to early 2025. The two researchers with lived experience of intellectual disabilities were Sydney-based and shared the task of asking the interview questions across three interviews alongside the first author and, similarly, in one interview with the second author.
2.5. Analysis
Each conversation was audio recorded and transcribed, and elements of grounded theory were used to inform the thematic content analysis that was undertaken under the three main aims of the study (
Corbin and Strauss 2015). Through the use of open and axial coding, major themes were identified across the interview and focus group data, covering the perspectives of the 13 informants. Authors 1 and 2 met to discuss the themes and their credibility. Author 1 prepared an easy-read version of the identified themes for a Zoom discussion with authors 3 and 4, and author 5 gave digital feedback. Consensus was achieved on the main themes relating to each research question, which are now described.
4. Discussion: Where to from Here?
The rationale for inclusive evaluation within support services for persons with disabilities—and indeed all recipients in need of social supports—has been clearly articulated within Human Rights and Quality of Life frameworks. Moreover, democratic governments that fund social services increasingly recognise the importance of giving persons in receipt of these services a stronger voice in ensuring that their support meets their needs. These conceptual frameworks have seen growing acceptance internationally, especially with respect to ensuring that health and social services, which are funded through national taxation, are equitable, efficient, and effective. Nonetheless, the rhetoric has been slow to become a reality in even the most affluent countries, especially for the most marginalised of their citizens, among whom people with intellectual disabilities feature prominently (
World Health Organization 2011). Thus far, efforts to change mindsets and long-established practices have often been driven more by the passion of individuals than by the commitment of the senior managers who commission and deliver services (
Scourfield 2015).
The challenge now is to translate these visions into practice. This small study, with its three main aims and allied with the emerging literature, confirms that people with intellectual disabilities have brought added value to service evaluations and to the processes required in the inspection and regulation of support services. We are also more aware of the extra training and supports that they may require to enhance their engagement as team members undertaking inclusive evaluations. Significant challenges remain to be overcome, but a range of mitigation strategies have been identified and tested. Recommendations have also emerged to guide future actions aimed at extending and sustaining inclusive evaluations.
These are early days in this new venture, but they are reminiscent of the early emergence of inclusive research. Looking back, its growth was fuelled more by academic researchers striving to put inclusive research into practice than by scholarly debates about it (
O’Brien 2023). Hence, we end by identifying what we perceive to be the core actions needed to nurture inclusive evaluations. We offer for discussion an initial implementation plan with indicative activities, as shown in
Figure 1, for those interested in undertaking inclusive evaluation activities. It combines the actions described in this article and the wider inclusive research literature, to which this Special Issue is a valuable addition (
https://www.mdpi.com/journal/socsci/special_issues/GF4S06N1TC (accessed on 26 September 2025)). Our hope is that others will expand the plan in the years ahead.
The plan starts with actions in Stage 1, which value the lived experience of people with intellectual disabilities (
Curryer et al. 2024;
Kelly et al. 2024;
Koning et al. 2024;
Love 2023) and build respect for their capacity to be informed and competent evaluators. This stage embraces all the stakeholders involved in evaluations, from frontline staff to senior managers of services and advocacy groups, as well as professional evaluators in regulatory agencies and academia (
O’Brien et al. 2025).
This leads into Stage 2, co-designing (
JFA Purple Orange 2021) and piloting a small-scale inclusive evaluation, possibly emulating the approaches that our informants have described. This ‘proof-of-concept’ stage will yield valuable insights into unique and common challenges and help find ways around them (
Moxley et al. 2013). It is likely that each evaluation will need to be attuned to the particular service context and culture in any case, so there is little to be gained by waiting for the ‘perfect’ inclusive evaluation model to be discovered.
Stage 3 works towards extending inclusive evaluations, primarily through building solidarity with others committed to inclusive evaluations through communities-of-practice approaches (
Ranmuthugala et al. 2011;
Wenger-Trayner n.d.) and seeking funds to support this new style of evaluation. Communities of practice encourage the exchange of knowledge and good practices and will also assist with lobbying government agencies for the necessary funding, primarily to cover the cost of training and resourcing of co-evaluators with intellectual disabilities. These practices have proven successful in promoting inclusive research. It remains to be seen if they can transfer to inclusive evaluations.
In Stage 4, the focus shifts to making changes within systems. It may be desirable for leadership for change to come from the top, but in our judgement this is very unlikely to happen in statutory systems. It is better to build from the bottom up (
Green 2016;
Sergeant et al. 2022). The goal is not just to ensure that financial resources are available for more effective monitoring and evaluations of services but that the policy and procedures that guide them are redesigned to make them inclusive (
Dew et al. 2018).
We envisage the plan spiralling into a further round of the same stages, as we anticipate that the first four-stage cycle we have outlined is likely to be only partially achieved in certain locations, for particular services, or for some of their users. For example, a further Stage 1 would widen the recognition of the lived experiences of people with intellectual disabilities among service commissioners, before moving forward again to pilot inclusive evaluation in other parts of the geographic service ecosystem, and so on.
Two further points of note: although we present the stages in order, they can be worked on simultaneously or in a different order, depending on local contexts; and further sub-stages may become apparent as the plan is used in a variety of settings. This is only the beginning.
Finally, we end by summarising the key values that we believe need to drive frameworks for inclusive evaluation in
Table 7, just as
Walmsley and Johnson (
2003) did for inclusive research. That these values overlap with theirs is no surprise, although we have adapted the wording in light of the insights gained from our informants and the recent literature.
5. Conclusions
As inclusive research becomes more widely accepted, the time has come to explore how it might grow out from academia and extend into the monitoring and evaluation of services and allied functions such as their regulation and inspection, inquiries into malpractices, and the design of new support services. The informants across four jurisdictions confirmed the feasibility of inclusive evaluation and voiced strong support for it, highlighting benefits such as greater trust and empathy during evaluations with users of services, more meaningful feedback for service providers, and increased confidence and employment opportunities for evaluators with disabilities. However, challenges remain, including funding and fair pay for the engagement of people with intellectual disabilities, training opportunities that meet the support needs of all stakeholders, and changing the cultural attitudes in support services that underestimate the abilities of people with intellectual disabilities. Steps to overcome these challenges are proposed, such as piloting inclusive evaluation programmes, providing inclusive evaluation training to all involved, and lobbying governments to fund these roles. We created an implementation plan to guide practitioners wishing to undertake inclusive evaluation. We conclude with a set of guiding principles that will nurture a spirit of inclusion and respect. Finally, threaded through the information we garnered was the theme of acceptance—accepting the competence and experience of people with intellectual disabilities and how they grow through being valued within inclusive programmes. Gaining acceptance of difference by others is the primary and arguably the more daunting challenge to be faced in making inclusive evaluations a reality.