Abstract
Junior faculty are often called upon to deliver high-stakes large-group presentations, yet training in the skills needed to do this effectively is often lacking. We devised a 1.25 h coaching program in which a coach analyzed a practice run of the presentation using a locally developed assessment tool. Areas covered included public speaking skills, promoting learner understanding and retention, creating a dynamic learning climate, and optimal use of slides. COVID-19 necessitated a switch to virtual coaching, and we studied the impact of virtual vs. in-person coaching. We also added two additional coaches and studied the transferability of the coaching component. Uptake of the offered coaching was high. In surveys, participants reported improved comfort with large-group presentations, a sense that their presentation skills had improved, an increased likelihood of volunteering for future speaking opportunities, and a high likelihood of recommending the program. Comparisons between virtual and in-person coaching showed no statistically significant difference, and there was little difference between the original coach and the two subsequent coaches. Qualitative assessments revealed the broad areas in which faculty felt the program had been most impactful. The coaching program was well received and resulted in concrete positive changes in presenter behavior. Conducting the coaching virtually may increase the feasibility of the intervention, as would expanding the coach pool.
1. Introduction
Peer mentoring is being increasingly studied as a tool to improve teaching skills of medical faculty [1,2,3]. Its effectiveness is based on the principles of direct observation and deliberate practice. The direct observation of clinical skills has been shown to improve feedback [4,5,6] and increase learner confidence [2,7] and skill [7,8]; it is likely that direct observation of teaching skills confers similar benefit, but more data are needed. Deliberate practice, which involves repetitively executing specific skills with an ongoing receipt of feedback to gain mastery of those skills, is a critical component of achieving competency [9].
Many junior faculty members in academic institutions are required to give didactic presentations to showcase their academic niche and introduce their work to the medical community. These large-group presentations, often involving esoteric scientific content, require special skill to capture and maintain the attention of the audience. While there are well-established pedagogical techniques that can be applied to this format, most junior faculty are not provided with any kind of training in presentation skills. Peer mentoring, which incorporates both direct observation and deliberate practice, has not been well studied specifically as a means of improving the presentation skills of medical faculty, and questions of sustainability [10] and the ideal peer mentoring model [11] have not yet been answered in the literature.
The goal of this project was to enhance the presentation skills of junior faculty through the mentoring process of direct observation, targeted feedback, and deliberate practice. Our primary research objective was to evaluate the effect of this intervention on mentored faculty. We also sought to assess the impact of moving this mentoring from an in-person to virtual format with the onset of COVID-19, as well as the outcome of employing additional mentors.
2. Materials and Methods
Roughly 10–12 junior (i.e., assistant professor-level) faculty members deliver a Department of Medicine Grand Rounds presentation each year at the University of Wisconsin. These 60 min presentations are typically delivered in a large auditorium to an in-person audience of more than 50 members of the Department of Medicine, as well as an online audience of 50–100 members. The content usually includes the clinical and/or research focus of the faculty member. Each assistant professor scheduled to give Grand Rounds in 2015–2022 was prospectively invited to participate in the mentoring program.
Junior faculty scheduled to present Grand Rounds were contacted via email by the chair of the Department of Medicine Education Committee and offered the opportunity to receive feedback and guidance on their upcoming presentation. From 2015 to 2020, those who chose to participate were referred to a faculty member who had previously undergone training in public speaking skills and had significant experience with the direct observation and mentoring of teaching and presentation skills. Starting in 2020, with the transition to virtual presentations because of the COVID-19 pandemic, the mentoring sessions were also conducted remotely, and two other faculty members were added as mentors to reduce the burden on the primary mentor. These additional mentors were also experienced medical educators; they received training from the original mentor, including instruction in how to use and apply the assessment tool, and took part in a discussion about mentoring presentation skills.
The presenter and one of the three mentors met in the auditorium, and the presenter delivered a practice run of the presentation. During the practice run, the mentor conducted a comprehensive analysis of the teaching performance using a locally developed assessment tool adapted from the literature [12] and previously validated instruments [13,14,15]. We added recommendations for virtual presentations during the pandemic adapted from published recommendations [16] (Table 1).
Table 1.
Locally developed assessment tool.
The mentor recorded the practice run on an iPad to augment feedback. Feedback focused on specific, low-inference behaviors to maximize the likelihood of actual behavior change. Broad areas covered included techniques of public speaking; creating a dynamic learning climate; techniques to promote understanding and retention by the learner; and optimal use of slides. After feedback was delivered, the presenter repeated certain portions of the lecture to allow deliberate practice of the targeted skills. The entire mentoring session took approximately 1.25 h. Table 2 illustrates specific examples of changes made after mentoring. Each mentored faculty member was asked to complete an evaluation of the program after their presentation.
Table 2.
Two examples of before-mentoring and after-mentoring introductory statements and slides.
All descriptive numerical data are summarized as means and standard deviations, and numerical comparisons were made using Student’s t-test. Categorical data are expressed as counts and frequencies and were compared using Fisher’s exact test. Odds ratios were estimated using a logistic regression model. All figures were constructed as slide plots. All data were analyzed using SAS 9.4 and STATA version 17. Open-ended responses were analyzed for themes, grouped into categories, and reported as counts and percentages.
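As an illustrative sketch only (the study used SAS and STATA; the ratings below are hypothetical, not study data), the two comparison tests described above can be reproduced in Python with SciPy:

```python
# Sketch of the statistical comparisons described above, on hypothetical data.
import numpy as np
from scipy import stats

# Hypothetical 1-10 evaluation ratings by mentoring mode
virtual = np.array([9, 8, 10, 9, 7, 9, 8, 10])
in_person = np.array([8, 9, 9, 7, 8, 10, 9, 8])

# Student's t-test for the numerical comparison
t_stat, p_numeric = stats.ttest_ind(virtual, in_person)

# Fisher's exact test for a 2x2 categorical comparison,
# e.g., would recommend the program (yes/no) by mentoring mode
table = [[7, 1],   # virtual: recommend / would not
         [6, 2]]   # in-person: recommend / would not
odds_ratio, p_categorical = stats.fisher_exact(table)

print(f"t-test p = {p_numeric:.3f}; Fisher's exact p = {p_categorical:.3f}")
```

With samples this small, neither test would be expected to reach significance, consistent with the null comparisons reported in Tables 3 and 4.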
3. Results
A total of 51 of 53 (96%) assistant professors scheduled to give Grand Rounds between September 2015 and April 2022 accepted the invitation to be mentored. All mentored faculty completed the evaluation form, which included four questions answered on a 1–10 scale (where 10 was the top of the scale). The questions were: do you feel more comfortable delivering a large-group presentation; to what extent do you feel your presentation skills have improved; are you more likely to volunteer for speaking opportunities in the future; and would you recommend this mentoring program to others? High ratings were seen in all domains, with relatively lower ratings for likelihood of volunteering for future speaking opportunities.
To assess the possible variable impact of virtual mentoring, we compared the ratings of those mentored virtually with those mentored in person and found no difference (Table 3). To assess the effect of using different mentors, we compared the original mentor (Mentor A) with the additional mentors (Mentors B and C). Program evaluations were similar between speakers mentored by the additional mentors and those mentored by the original mentor, with one statistically significant, but minimally important, difference in the likelihood of recommending the mentoring program to others (Table 4).
Table 3.
Evaluation of Grand Rounds mentoring program by teaching mode.
Table 4.
Evaluation of Grand Rounds mentoring program by faculty mentored.
Speakers were asked to qualitatively describe the three areas in which mentoring most impacted or changed their presentation. Thematic analysis found that improvements in slide design, audience interaction, and body language were the most common (Table 5).
Table 5.
Speaker self-evaluation comments.
4. Discussion
Our effort to improve the presentation skills of faculty using a mentoring model has been successful, with clear lessons learned, some of which are identified in the literature [18]. Our primary objective was to evaluate the impact of this mentoring on participants, and mentored faculty clearly reported self-assessed improvements in their presentation skills, along with increased comfort and confidence in large-group teaching formats. We have been gratified that almost all invited participants engaged in this non-mandatory opportunity, which we believe speaks to the desire of faculty, particularly junior faculty who are often thrust into prominent teaching situations without training in the necessary skills, to receive feedback on their teaching. We note that the high evaluation scores for self-efficacy and confidence did not translate into an equally high desire to volunteer for future speaking opportunities; this may reflect the inherent human reluctance to speak in front of large audiences, which will require more effort to overcome.
Our participants’ most frequently stated areas of improvement (slide design, audience interaction, body language) were different when compared to a study of lecturers in emergency medicine [18]. Participants in that study rated the most significant improvements in “provides a brief outline”, “provides a conclusion for the talk”, and “clearly states goal of the talk”. This may be due to differences in mentoring strategies and emphasis.
Introducing the option of virtual mentoring seemed to have no impact on the speakers’ perception of the effectiveness of the mentoring. Virtual mentoring adds flexibility and convenience to the mentoring process and allows for mentoring in environments where in-person meetings cannot occur.
The additional mentors’ evaluation outcomes were very similar to those of the original mentor, suggesting that the standardization of mentoring is feasible. This would reduce the burden on a single mentor and implies that the mentoring program described above is transferable and potentially generalizable.
Limitations of the Study
This study has several limitations. The sample size is relatively small and is limited to assistant professors in one department of a single academic medical center who were mentored by more senior faculty members. We might not have seen such markedly positive results had we also mentored more experienced faculty. The program is relatively time-intensive, and departmental support is needed for time spent mentoring. Other departments may not be able to offer similar levels of support or have the resources and expertise to build a formal program in presentation mentoring. An important limitation is that learners may view presentation quality differently than the mentors reviewing it. Furthermore, the effectiveness and success of the program were measured by self-assessment, which may introduce bias and heterogeneity. Finally, we recognize that mentoring, by its nature, can depend on the style and technique of the mentor, which could limit the generalizability of this intervention to other mentors. However, with our use of standardizing techniques, evaluations did not differ significantly by mentor.
5. Conclusions
Overall, the results of this study suggest that direct observation and deliberate practice, incorporated into a program of peer mentoring, are effective at increasing self-efficacy and confidence in didactic presentation skills. The maintenance of effectiveness when moving the mentoring to a virtual format and when broadening the mentor pool may enhance the feasibility and sustainability of this program.
Author Contributions
J.S.: conceptualization and design; data curation; formal analysis; methodology; project administration; writing—original draft; writing—review and editing. Z.D.G.: design; data curation; formal analysis; methodology; writing—original draft; writing—review and editing. L.J.Z.: conceptualization and design; data curation; formal analysis; methodology; project administration; writing—original draft; writing—review and editing. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
The University of Wisconsin Institutional Review Board deemed this program as not requiring IRB review because, in accordance with federal regulations, our project was considered to be a Program Evaluation and did not constitute research.
Informed Consent Statement
Not applicable.
Data Availability Statement
The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.
Acknowledgments
The authors wish to thank all faculty who participated in the program to improve presentation skills.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Gusic, M.; Hageman, H.; Zenni, E. Peer review: A tool to enhance clinical teaching. Clin. Teach. 2013, 10, 287–290. [Google Scholar] [CrossRef] [PubMed]
- Mookherjee, S.; Monash, B.; Wentworth, K.L.; Sharpe, B.A. Faculty development for hospitalists: Structured peer observation of teaching. J. Hosp. Med. 2014, 9, 244–250. [Google Scholar] [CrossRef] [PubMed]
- Hyland, K.M.; Dhaliwal, G.; Goldberg, A.N.; Chen, L.; Land, K.; Wamsley, M. Peer review of teaching: Insights from a 10-year experience. Med. Sci. Educ. 2018, 28, 675–681. [Google Scholar] [CrossRef]
- Li, J.T. Assessment of basic physical examination skills of internal medicine residents. Acad. Med. 1994, 69, 296–299. [Google Scholar] [CrossRef] [PubMed]
- Cydulka, R.K.; Emerman, C.L.; Jouriles, N.J. Evaluation of resident performance and intensive bedside teaching during direct observation. Acad. Emerg. Med. 1996, 3, 345–351. [Google Scholar] [CrossRef] [PubMed]
- Dattner, L.; Lopreiato, J.O. Introduction of a direct observation program into a pediatric resident continuity clinic: Feasibility, acceptability, and effect on resident feedback. Teach. Learn. Med. 2010, 22, 280–286. [Google Scholar] [CrossRef] [PubMed]
- Chen, W.; Liao, S.; Tsai, C.; Huang, C.; Lin, C.; Tsai, C. Clinical skills in final-year medical students: The relationship between self-reported confidence and direct observation by faculty of residents. Ann. Acad. Med. Singap. 2008, 37, 3–8. [Google Scholar] [CrossRef] [PubMed]
- Perera, J.; Mohamadou, G.; Kaur, S. The use of objective structured self-assessment and peer-feedback (OSSP) for learning communication skills: Evaluation using a controlled trial. Adv. Health Sci. Educ. Theory Pract. 2010, 15, 185–193. [Google Scholar] [CrossRef] [PubMed]
- Ericsson, K.A. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad. Med. 2004, 79, S70–S81. [Google Scholar] [CrossRef] [PubMed]
- Stockdill, M.; Hendricks, B.; Barnett, M.D.; Bakitas, M.; Harada, C.N. Peer observation of teaching: A feasible and effective method of physician faculty development. Gerontol. Geriatr. Educ. 2023, 44, 261–273. [Google Scholar] [CrossRef] [PubMed]
- Bell, A.E.; Meyer, H.S.; Maggio, L.A. Getting Better Together: A Website Review of Peer Coaching Initiatives for Medical Educators. Teach. Learn. Med. 2020, 32, 53–60. [Google Scholar] [CrossRef] [PubMed]
- Lucas, S. The Art of Public Speaking; McGraw-Hill Press: New York, NY, USA, 2020. [Google Scholar]
- Lenz, P.H.; McCallister, J.W.; Luks, A.M.; Le, T.T.; Fessler, H.E. Practical strategies for effective lectures. Ann. Am. Thorac. Soc. 2015, 12, 561–566. [Google Scholar] [CrossRef] [PubMed]
- Litzelman, D.K.; Stratos, G.A.; Marriott, D.J.; Skeff, K.M. Factorial validation of a widely disseminated educational framework for evaluating clinical teachers. Acad. Med. 1998, 73, 688–695. [Google Scholar] [CrossRef] [PubMed]
- Newman, L.R.; Lown, B.A.; Jones, R.N.; Johansson, A.; Schwartzstein, R.M. Developing a Peer Assessment of Lecturing Instrument: Lessons Learned. Acad. Med. 2009, 84, 1104–1110. [Google Scholar] [CrossRef] [PubMed]
- Gartner-Schmidt, J. The New Normal–Virtual and Hybrid Presentations: Developing Content, Designing Slides, and Delivery Guidelines. Ear Nose Throat J. 2022, 101, 20S–28S. [Google Scholar] [CrossRef] [PubMed]
- Available online: http://www.cdc.gov/nchs/fastats/accidental-injury.htm (accessed on 27 May 2024).
- Pedram, K.; Marcelo, C.; Paletta-Hobbs, L.; Meadors, E.; Dow, A. Twelve tips for creating and sustaining a peer assessment program of clinical faculty. Med. Teach. 2024, 46, 183–187. [Google Scholar] [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

