Editorial

Implementation of Programmatic Assessment: Challenges and Lessons Learned

by Marjan Govaerts 1,*, Cees Van der Vleuten 1 and Suzanne Schut 2

1 School for Health Professions Education, Maastricht University, 6200 MD Maastricht, The Netherlands
2 Department of Science Education and Communication, Delft University of Technology, 2600 AA Delft, The Netherlands
* Author to whom correspondence should be addressed.
Educ. Sci. 2022, 12(10), 717; https://doi.org/10.3390/educsci12100717
Submission received: 9 October 2022 / Accepted: 12 October 2022 / Published: 18 October 2022
(This article belongs to the Special Issue Programmatic Assessment in Education for Health Professions)
In the past few decades, health professions education programmes around the world have embraced the competency-based paradigm to guide the education and assessment of future healthcare workers. Competency-based education (CBE) hinges upon the basic principle that predetermined outcomes (competencies) guide teaching, learning, and assessment, in order to ensure that graduates demonstrate proficiency in essential competency domains and are able to deliver high-quality patient care throughout their professional careers [1,2]. CBE aims at transforming learners into healthcare workers who are committed to excellence and have developed competencies for life-long learning. In CBE, learners are typically placed at the centre and actively engaged in the learning and assessment process: the provision of frequent and meaningful performance feedback is assumed to enable learners to reflect on and shape their learning trajectories through the identification of appropriate learning opportunities for further development [3,4,5].
Assessment is crucial in achieving the goals of CBE. However, traditional approaches to assessment, which almost exclusively focus on the summative function of assessment, may no longer be appropriate. In competency-based education, assessment programmes should not only ensure robust decision making regarding learners’ achievement and competence development, but should also facilitate the generation of high-quality feedback for learning and support reflective practice, feedback seeking, and the use of feedback for ongoing performance improvement. In CBE, assessment programmes thus have to combine and integrate the assessment functions of, for, and as learning [6].
Programmatic assessment (PA) is a whole-system approach to assessment that theoretically aligns with the key principles of CBE, as it aims to optimise assessment for and as learning while ensuring justifiable decision making regarding learners’ achievement of intended outcomes [7]. In PA, frequent low-stakes assessments are purposefully designed to provide meaningful longitudinal performance feedback to foster the development of learners’ competences, whereas high-stakes decision making is based on meaningful aggregation of multiple assessment data collected over longer periods of time, across different assessment formats, contexts, and assessors [8]. However, while programmatic assessment approaches are increasingly embraced as essential in the implementation of effective CBE [9], research findings rather consistently show that the integration of different assessment functions in assessment systems is often problematic [10,11,12,13,14]. The implementation of PA requires a fundamental change not only in assessment design, but also in teachers’ and learners’ views and assumptions about what constitutes the purpose and practice of high-quality assessment. The implementation of programmatic assessment may therefore be not so much about changes in assessment structures and procedures, but first and foremost about transforming the assessment culture and the basic underlying assumptions.
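To make the aggregation principle concrete, the sketch below shows, in Python, one way in which frequent low-stakes data points might be combined into a longitudinal overview for a progress committee. It is a minimal illustration only: the data structure, rating scale, and summary statistics are hypothetical assumptions, not a description of any system discussed in this Special Issue.

```python
# Minimal, hypothetical sketch of longitudinal aggregation in programmatic
# assessment: many low-stakes data points are summarised per competency
# domain so that a progress committee can weigh trends, breadth of contexts
# and assessors, and narrative feedback rather than any single score.
# All field names, scales, and example data are invented for illustration.
from dataclasses import dataclass
from statistics import mean


@dataclass
class DataPoint:
    week: int          # when the low-stakes observation took place
    competency: str    # e.g., "communication"
    context: str       # assessment format/setting (mini-CEX, OSCE, ...)
    assessor: str
    rating: float      # hypothetical 1-5 scale; in PA this is one datum, not a verdict
    narrative: str     # feedback comment; central to the feedback function


def aggregate(points: list[DataPoint], competency: str) -> dict:
    """Summarise all low-stakes points for one competency domain."""
    relevant = sorted((p for p in points if p.competency == competency),
                      key=lambda p: p.week)
    half = len(relevant) // 2
    return {
        "n_points": len(relevant),
        "n_contexts": len({p.context for p in relevant}),
        "n_assessors": len({p.assessor for p in relevant}),
        "overall_mean": round(mean(p.rating for p in relevant), 2),
        # crude trend: later half of the trajectory versus the earlier half
        "trend": round(mean(p.rating for p in relevant[half:])
                       - mean(p.rating for p in relevant[:half]), 2),
        "narratives": [p.narrative for p in relevant],
    }


if __name__ == "__main__":
    trajectory = [
        DataPoint(1, "communication", "mini-CEX", "dr_a", 2.5, "structure unclear"),
        DataPoint(6, "communication", "ward round", "dr_b", 3.0, "rapport improving"),
        DataPoint(12, "communication", "OSCE", "dr_c", 3.5, "clear and empathic"),
        DataPoint(20, "communication", "mini-CEX", "dr_a", 4.0, "handles conflict well"),
    ]
    print(aggregate(trajectory, "communication"))
```

A real system would add safeguards (e.g., for domains with no data points yet) and, crucially, would present the narratives to decision makers rather than reduce them to numbers, in line with PA’s emphasis on meaningful rather than purely numerical aggregation.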
However, achieving transformational goals and sustainable change is challenging, as it not only aims to improve what people are already doing, but also calls for a fundamental change in beliefs, perceptions, goals, roles, and norms [15] (pp. 3–20). A recent systematic review of research on programmatic assessment revealed a range of key challenges related to implementation, hindering the achievement of intended educational and assessment goals [16]. Given the rise of programmatic assessment as a leading assessment paradigm in modern health professions education, important lessons can be learned from programmes that have changed their assessment approaches according to the principles underpinning PA. This Special Issue on programmatic assessment presents six case reports across different settings, each of which describes a model of programmatic assessment practice, approaches to and key challenges in implementation, and important take-home messages.
The paper by Tait and Kulasegaram describes the implementation of programmatic assessment in a Canadian medical school [17]. In this case, the full-scale implementation of programmatic assessment went hand in hand with a fundamental renewal of the curriculum and the use of an overarching competency framework to guide education and assessment design. A key success factor in implementation proved to be a project-based approach in which the concepts of PA provided the foundation for the design, continuous monitoring, and adjustment of the implementation process. Additionally, central governance, strong leadership, and the availability of resources that supported new assessment processes and procedures were found to be critical components in the change process.
Another example of the fully fledged implementation of PA in a newly developed educational programme is presented in the case report by Bonvin and colleagues [18]. This paper describes how a Swiss medical school adapted the key principles underpinning PA to its local context. It illustrates this medical school’s implementation journey, which required the “deconstruction of old habits and certainties” and development of new narratives to help key stakeholders (students and faculty) to understand and engage with the core principles of PA. The ongoing support of students and faculty, careful re-allocation of resources, and investment in frequent, short-cycle improvement planning throughout the implementation process were essential for bringing about the intended change.
The papers by King et al. and Ryan and Judd, respectively, present examples of the incremental implementation of programmatic assessment. The paper by King and colleagues describes the introduction of coaching as an integral component of PA and, more specifically, the successes and failures in fostering a coaching culture (i.e., a culture that helps learners to thrive through coaching) [19]. The case report shows how PA can be implemented incrementally by first focusing on specific components of PA that can realistically be embedded in existing organisational structures. More specifically, the authors argue that enacting small-scale changes that do not disrupt existing structures and beliefs, and building upon the successes and positive impacts of pilot projects, can serve as a catalyst for broader, large-scale innovations, preparing organisations and the stakeholders within them for more disruptive change.
Likewise, the paper by Ryan and Judd describes the stepwise approach in an Australian medical school’s transition from a traditional assessment system to a more programmatic assessment approach, with each step pushing the school’s ‘zone of proximal development’ [20]. Step 1 involved the introduction of a series of formative assessment tools and feedback, followed by an organisational change to improve the coordination of assessment delivery and the reform of assessment technology (Step 2). Step 3 involved a substantial and ongoing process of curriculum renewal and the assessment reform associated with that renewal, optimising the alignment of assessment with the key principles of PA. Similar to the paper by King et al., this case report presents an example of how the implementation of PA is context-dependent and influenced by existing structures and culture [21]. Both papers, however, also illustrate how programmatic assessment is a concept rather than a prescription or a recipe, elements of which can be implemented gradually, in feasible steps towards more extensive and complex transformational change.
Effective and sustainable change requires buy-in from all the key stakeholders involved in the assessment process [22,23]. The paper by Colbert and Bierer presents the Cleveland Clinic Lerner College of Medicine’s (CCLCM) systematic and comprehensive approach to the professional development of both learners and faculty, essential for the successful and sustainable implementation of programmatic assessment [24]. One of the main characteristics of the CCLCM programme is learner agency; i.e., the programme is intentionally designed to encourage learners to take personal ownership of their learning and assessment. The paper emphasises the need to invest in ongoing professional development programmes, in which longitudinal coaching and advising “on the job” are essential. At CCLCM, the professional development of faculty members involved in PA consists of a mixture of “traditional PD approaches” (workshops, training, lectures, guidelines, etc.), on-the-job coaching by more knowledgeable others (e.g., experienced faculty), and learning through networking within communities of practice. The programme is embedded in a system-wide faculty development programme, reflecting systems-level commitment, and uses multiple professional development venues. The paper furthermore argues that supporting students in longitudinal mentoring programmes is a sine qua non in programmatic assessment aimed at enhancing learner agency and fostering the development of self-regulatory skills. The complexity of a change towards PA, even when implemented in newly developed educational programmes, is reflected in the authors’ comments on ongoing and major challenges in changing the assessment culture, despite long-term investment in stakeholder training and development.
The paper by Rich and colleagues takes a learner-centred approach to explore how residents’ performance levels and engagement with the assessment system may influence the effectiveness and outcomes of assessment programmes [25]. The findings from this study show how differences in resident engagement and performance levels may influence assessment in both adaptive and maladaptive ways. More specifically, the findings suggest that residents who engage with formative feedback and perform strongly may not receive the high-quality feedback needed to guide the further development of their expertise, potentially deterring these residents from continuing to invest time in programmatic assessment for learning. The findings furthermore show that weaker-performing residents seem to promote the adoption of a problem-identification assessment paradigm rather than a developmental approach to the use of assessment data. The assessment programme therefore does not seem to realise its potential benefits equitably for all residents. The findings from this study thus illustrate how key stakeholders’ assumptions and beliefs, and their interactions within and with the assessment system, may shape the assessment culture and assessment outcomes in potentially unintended and undesirable ways.
We believe there are several key lessons to be learned from these case reports, related to context, capacity building for change, and leadership, in line with insights from the field of change management [22,26,27,28]. All the case reports included in this Special Issue clearly show that successful implementation requires a receptive context. Obviously, a ‘compelling story’ and a clear vision of what needs to be achieved are fundamentals of successful change. However, transformative and sustainable change requires paying attention to what Gersick (1991), as cited in Clausen and Kragh (2019), defines as socially constructed “deep structures” within the organisation [28,29]. Deep structures shape and are shaped by the organisational culture, and include communication patterns, competing values and priorities, motivational forces among individuals and groups, and the narratives that construct reality. Over time, the assessment beliefs, assumptions, and behaviours in educational organisations may result in “taken-for-granted facts” [28] or core rigidities that shape the assessment culture and its narratives, potentially hampering change, even if individuals are willing to do things differently. The experiences described in this Special Issue as well as previous research findings show that organisational as well as national culture influences a medical programme’s readiness for change [30,31]. The case reports illustrate how regulations at the level of the university as well as national legislation or accreditation frameworks may both facilitate and hinder the implementation of programmatic assessment. For instance, implementation can be facilitated by university regulations that allow for the alignment of assessment procedures with the purposes of programmatic assessment [18]. Similarly, national requirements to implement competency-based education may force educational programmes to re-think educational and assessment designs, and to shift towards longitudinal learning and assessment trajectories in which formative feedback and assessment for learning are central to competence development [17,18]. At the same time, however, the existence of national licensing exams or university policies regarding grading may interfere with the development of new narratives, or the internalisation of new values and assessment behaviours [18,19,20].
Capacity building is a key element in any change process. All the stakeholders involved in programmatic assessment need to develop the competencies required to make the desired change in assessment culture happen, and training in new skills and task performance is crucial. However, research findings and experiences described in the case report by Rich and colleagues, for example, show that stakeholders may not always be willing or able to fulfil new roles as intended: rather, old behaviours are “simply” fitted to the new environment [15,25]. Therefore, an exclusive focus on “how-to-do-it” workshops is not enough for stakeholders to make the principles of PA their own, and implementation is likely to fail (or be less successful) if we do not pay attention to what drives people in assessment and education: their behaviours, norms, and beliefs. The papers in this Special Issue very clearly show how the goals, priorities, and motivational drivers for change may differ across stakeholders and stakeholder groups within an educational programme, shaped by underlying beliefs and assumptions about, for example, assessment purposes and the role of grading and feedback, or by differing perceptions of the value of competency-based assessment. Dominant, persistent narratives about formative versus summative assessment, accountability, roles and responsibilities in the assessment process, or conceptions of what constitutes “professional competence” may impact the change process and its outcome. Commitment building might therefore very well be the most difficult part of implementation in an otherwise receptive environment. This may require long-term and iterative efforts to sustainably change the assessment culture, targeting all stakeholder groups and all levels of the organisation. It requires investment in the development of new narratives that reflect the new assessment reality, and in their consistent use in assessment formats and procedures. Illustrative examples are provided in the paper by Colbert and Bierer, which describes a longitudinal, system-wide approach to the development of staff members and students beyond the “how-to” level, to support ongoing engagement with the programmatic approach to assessment [24]. The activities aimed at mentoring and the creation of communities of practice, in which newcomers learn from and with more experienced (staff) members and collaborate in working towards the achievement of clear assessment goals, may be particularly effective in realising successful and sustainable change.
The successful implementation of programmatic assessment requires strong leadership and central governance. Leaders, however, are not a panacea for making change happen. A tolerance for failure, a willingness to experiment, psychological safety, and a collaborative and non-hierarchical approach have all been identified as key characteristics of an innovative culture [32]. The case reports in this Special Issue seem to underline these premises. The successful implementation of PA requires a top-down as well as a bottom-up approach in which all key stakeholders are involved and collaborate in the change process. Identifying and mobilising “champions” (i.e., leaders in change; change agents) across the organisation and stakeholder groups is likely to enhance the implementation and consolidation of innovative assessment approaches. The implementation of PA, especially in existing educational practice, is a complex and unpredictable process, within a potentially unstable environment [19,20]. As a consequence, the implementation of PA may call for pragmatic, adaptable approaches that acknowledge that change is not always plannable or rational, and that build in the capability to respond quickly and adequately to changes in the internal or external environment [15]. The strategies to support successful change therefore include the creation of an environment in which people feel safe to express concerns about how the change in assessment affects their working routines, goals, or ambitions, and to experiment with new behaviours. Strong commitment to change then has to be combined with openness to the realities of others, and a willingness to modify plans and ideas whenever necessary. Making sure that all stakeholders’ voices are heard in short-cycle improvement activities may thus be a key strategy supporting the adoption of the innovation and sustainable change [15,17,18].
Finally, the case reports clearly illustrate the importance of resources, and in particular of the use of and investment in technology that facilitates the meaningful aggregation of assessment data. The development of a user-friendly assessment evidence database, dashboard, or electronic portfolio, for instance, is essential for PA to be effective. Paying attention to how best to support the achievement of assessment goals, by removing unnecessary (administrative) burdens from overloaded staff and learners while still stimulating them to engage in assessment for and as learning, is a key factor determining the acceptance and sustainability of the change process.
This Special Issue shows that the successful implementation of PA can proceed in many different ways. The implementation processes and outcomes, however, are dependent on the alignment of many different factors in the (local) setting. It is important to realise that the implementation of PA is not about transferring predefined structures and processes, but about translating underlying concepts and principles into assessment programmes that fit the organisation’s context. It is of utmost importance to create a shared understanding of the nature and purpose of programmatic assessment, and to construct and use (new) narratives aligned with the ambition for change. As with many educational models, the key to success lies in the people engaging in the process of change.

Author Contributions

M.G. wrote a first draft of this paper; C.V.d.V. and S.S. substantially contributed to revisions and the final version. All authors have read and agreed to the published version of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Frank, J.R.; Snell, L.S.; Cate, O.T.; Holmboe, E.S.; Carraccio, C.; Swing, S.R.; Harris, P.; Glasgow, N.; Campbell, C.; Dath, D.; et al. Competency-based medical education: Theory to practice. Med. Teach. 2010, 32, 638–645. [Google Scholar] [CrossRef]
  2. Frank, J.R.; Snell, L.; Englander, R.; Holmboe, E.S.; ICBME collaborators. Implementing competency-based medical education: Moving forward. Med. Teach. 2017, 39, 568–573. [Google Scholar] [CrossRef]
  3. Holmboe, E.S.; Sherbino, J.; Long, D.M.; Swing, S.R.; Frank, J.; International CBME Collaborators. The role of assessment in competency-based medical education. Med. Teach. 2010, 32, 676–682. [Google Scholar] [CrossRef] [PubMed]
  4. Harris, P.; Bhanji, F.; Topps, M.; Ross, S.; Lieberman, S.; Frank, J.R.; Snell, L.; Sherbino, J.; ICBME Collaborators. Evolving concepts of assessment in a competency-based world. Med. Teach. 2017, 39, 603–608. [Google Scholar] [CrossRef]
  5. Lockyer, J.; Carraccio, C.; Chan, M.-K.; Hart, D.; Smee, S.; Touchie, C.; Holmboe, E.S.; Frank, J.R.; ICBME Collaborators. Core principles of assessment in competency-based medical education. Med. Teach. 2017, 39, 609–616. [Google Scholar] [CrossRef]
  6. Schellekens, L.H.; Bok, H.G.; de Jong, L.H.; van der Schaaf, M.F.; Kremer, W.D.; van der Vleuten, C.P. A scoping review on the notions of Assessment as Learning (AaL), Assessment for Learning (AfL), and Assessment of Learning (AoL). Stud. Educ. Eval. 2021, 71, 101094. [Google Scholar] [CrossRef]
  7. Van Der Vleuten, C.P.M.; Schuwirth, L.W.T.; Driessen, E.W.; Dijkstra, J.; Tigelaar, D.; Baartman, L.K.J.; Van Tartwijk, J. A model for programmatic assessment fit for purpose. Med. Teach. 2012, 34, 205–214. [Google Scholar] [CrossRef] [PubMed]
  8. Heeneman, S.; de Jong, L.H.; Dawson, L.J.; Wilkinson, T.J.; Ryan, A.; Tait, G.R.; van der Vleuten, C.P. Ottawa 2020 consensus statement for programmatic assessment–1. Agreement on the principles. Med. Teach. 2021, 43, 1139–1148. [Google Scholar] [CrossRef] [PubMed]
  9. Iobst, W.F.; Holmboe, E.S. Programmatic Assessment: The Secret Sauce of Effective CBME Implementation. J. Grad. Med. Educ. 2020, 12, 518–521. [Google Scholar] [CrossRef]
  10. Eva, K.W.; Bordage, G.; Campbell, C.; Galbraith, R.; Ginsburg, S.; Holmboe, E.; Regehr, G. Towards a program of assessment for health professionals: From training into practice. Adv. Health Sci. Educ. Theory Pract. 2016, 21, 897–913. [Google Scholar] [CrossRef] [PubMed]
  11. Bennett, R.E. Formative assessment: A critical review. Assess. Educ. 2011, 18, 5–25. [Google Scholar] [CrossRef]
  12. Harlen, W.; James, M. Assessment and Learning: Differences and relationships between formative and summative assessment. Assess. Educ. 1997, 4, 365–379. [Google Scholar] [CrossRef]
  13. Pryor, J.; Crossouard, B. A socio-cultural theorisation of formative assessment. Oxf. Rev. Educ. 2008, 34, 1–20. [Google Scholar] [CrossRef]
  14. Taras, M. Assessment—Summative and Formative—Some Theoretical Reflections. Br. J. Educ. Stud. 2005, 53, 466–478. [Google Scholar] [CrossRef]
  15. Evans, R. Changing Paradigms. In The Human Side of School Change: Reform, Resistance, and the Real-Life Problems of Innovation, 1st ed.; The Jossey-Bass Education Series; Jossey-Bass Inc.: San Francisco, CA, USA, 1996; pp. 3–20. [Google Scholar]
  16. Schut, S.; Maggio, L.A.; Heeneman, S.; van Tartwijk, J.; van der Vleuten, C.; Driessen, E. Where the rubber meets the road—An integrative review of programmatic assessment in health care professions education. Perspect. Med. Educ. 2020, 10, 6–13. [Google Scholar] [CrossRef] [PubMed]
  17. Tait, G.R.; Kulasegaram, K.M. Assessment for Learning: The University of Toronto Temerty Faculty of Medicine M.D. Program Experience. Educ. Sci. 2022, 12, 249. [Google Scholar] [CrossRef]
  18. Bonvin, R.; Bayha, E.; Gremaud, A.; Blanc, P.-A.; Morand, S.; Charrière, I.; Mancinetti, M. Taking the Big Leap: A Case Study on Implementing Programmatic Assessment in an Undergraduate Medical Program. Educ. Sci. 2022, 12, 425. [Google Scholar] [CrossRef]
  19. King, S.M.; Schuwirth, L.W.T.; Jordaan, J.H. Embedding a Coaching Culture into Programmatic Assessment. Educ. Sci. 2022, 12, 273. [Google Scholar] [CrossRef]
  20. Ryan, A.; Judd, T. From Traditional to Programmatic Assessment in Three (Not So) Easy Steps. Educ. Sci. 2022, 12, 487. [Google Scholar] [CrossRef]
  21. Roberts, C.; Khanna, P.; Lane, A.S.; Reimann, P.; Schuwirth, L. Exploring complexities in the reform of assessment practice: A critical realist perspective. Adv. Health Sci. Educ. 2021, 26, 1641–1657. [Google Scholar] [CrossRef]
  22. Bland, C.J.; Starnaman, S.; Wersal, L.; Moorhead-Rosenberg, L.; Zonia, S.; Henry, R. Curricular change in medical schools: How to succeed. Acad. Med. 2000, 75, 575–594. [Google Scholar] [CrossRef]
  23. Driessen, E.W.; Van Tartwijk, J.; Govaerts, M.; Teunissen, P.; Van Der Vleuten, C.P.M. The use of programmatic assessment in the clinical workplace: A Maastricht case report. Med. Teach. 2012, 34, 226–231. [Google Scholar] [CrossRef] [PubMed]
  24. Colbert, C.Y.; Bierer, S.B. The Importance of Professional Development in a Programmatic Assessment System: One Medical School’s Experience. Educ. Sci. 2022, 12, 220. [Google Scholar] [CrossRef]
  25. Rich, J.V.; Cheung, W.J.; Cooke, L.; Oswald, A.; Gauthier, S.; Hall, A.K. Do Resident Archetypes Influence the Functioning of Programs of Assessment? Educ. Sci. 2022, 12, 293. [Google Scholar] [CrossRef]
  26. Griffiths, J.; Dalgarno, N.; Schultz, K.; Han, H.; Van Melle, E. Competency-Based Medical Education implementation: Are we transforming the culture of assessment? Med. Teach. 2019, 41, 811–818. [Google Scholar] [CrossRef] [PubMed]
  27. Aiken, C.; Keller, S. The irrational side of change management. McKinsey Q. 2009, 2, 101–109. [Google Scholar]
  28. Clausen, B.; Kragh, H. Why Don’t They Just Keep on Doing It? Understanding the Challenges of the Sustainability of Change. J. Chang. Manag. 2019, 19, 221–245. [Google Scholar] [CrossRef]
  29. Gersick, C.J.G. Revolutionary change theories: A multilevel exploration of the punctuated equilibrium paradigm. Acad. Manag. Rev. 1991, 16, 10–36. [Google Scholar] [CrossRef]
  30. Jippes, M.; Driessen, E.W.; Majoor, G.D.; Gijselaers, W.H.; Muijtjens, A.M.; Van Der Vleuten, C.P. Impact of national context and culture on curriculum change: A case study. Med. Teach. 2013, 35, 661–670. [Google Scholar] [CrossRef] [PubMed]
  31. Jippes, M.; Driessen, E.W.; Broers, N.J.; Majoor, G.D.; Gijselaers, W.H.; van der Vleuten, C.P. Culture matters in successful curriculum change: An international study of the influence of national and organizational culture tested with multilevel structural equation modeling. Acad. Med. 2015, 90, 921–929. [Google Scholar] [CrossRef]
  32. Pisano, G.P. The hard truth about innovative culture. Harv. Bus. Rev. 2019, 97, 62–71. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
