Article

AI-Generated Mnemonic Images Improve Long-Term Retention of Coronary Artery Occlusions in STEMI: A Comparative Study

by Zahraa Alomar 1,2, Meize Guo 3 and Tyler Bland 2,*
1 School of Medicine, University of Washington, Seattle, WA 98195, USA
2 WWAMI Medical Education Department, University of Idaho, Moscow, ID 83844, USA
3 The CSEveryone Center for Computer Science Education, University of Florida, Gainesville, FL 32611, USA
* Author to whom correspondence should be addressed.
Technologies 2025, 13(6), 217; https://doi.org/10.3390/technologies13060217
Submission received: 27 March 2025 / Revised: 22 April 2025 / Accepted: 18 May 2025 / Published: 26 May 2025
(This article belongs to the Special Issue Application of Artificial Intelligence in Medical Image Analysis)

Abstract
Medical students face significant challenges retaining complex information, such as interpreting ECGs for coronary artery occlusions, amidst demanding curricula. While artificial intelligence (AI) is increasingly used for medical image analysis, this study explored using generative AI (DALLE-3) to create mnemonic-based images to enhance human learning and retention of medical images, in particular, electrocardiograms (ECGs). This study is among the first to investigate generative AI as a tool not for automated diagnosis but as a human-centered educational aid designed to enhance long-term retention in complex visual tasks like ECG interpretation. We conducted a comparative study with 275 first-year medical students across six campuses; an experimental group (n = 40) received a lecture supplemented with AI-generated mnemonic ECG images, while control groups (n = 235) received standard lectures with traditional ECG diagrams. Student achievement and retention were assessed by course examinations, and student preference and engagement were measured using the Situational Interest Survey for Multimedia (SIS-M). Control groups showed a significant decline in scores on the relevant exam question over time, whereas the experimental group’s scores remained stable, indicating improved long-term retention. Experimental students also reported significantly higher situational interest in the mnemonic-based images over traditional images. AI-generated mnemonic images can effectively improve long-term retention of complex ECG interpretation skills and enhance student engagement and preference, highlighting generative AI’s potential as a valuable cognitive tool in image analysis during medical education.

1. Introduction

The field of medicine is a vast and ever-evolving domain, placing increasingly immense demands on medical students [1,2]. Due to the large volume of knowledge that must be acquired, retained, and applied, medical students must dedicate considerable time to memorizing and understanding these diverse materials [3,4]. To accomplish this, medical students spend an average of 7.8 h per weekday engaged in academic-related endeavors and an additional 4.9 h on weekends [5]. This underscores the significant time commitment required of medical students to master the vast body of knowledge essential to their education and future practice, often driving both students and educators to seek innovative methods that enhance learning efficiency and retention [6].
The application of generative artificial intelligence (genAI) in medical imaging is a rapidly advancing field, with significant focus on developing algorithms for automated image analysis, disease detection, and diagnostic support [7,8,9,10]. However, alongside the development of AI as an analyst, there is a critical, complementary need to enhance the abilities of human clinicians who remain central to the diagnostic process. These clinicians must interpret complex images, often working in collaboration with or validating AI findings, requiring interpretive skills learned during their training and practice [11]. One such emerging approach that can contribute to developing these crucial human skills is the use of genAI in medical education [12].
The integration of genAI into medical education has opened new avenues for creating engaging and personalized learning materials [12]. Image-generating AI, such as DALLE-3, Stable Diffusion, and Midjourney, can produce high-quality images and visual aids that have the potential to enhance the learning experience [13]. Crucially, in the context of medical image interpretation, these tools offer novel ways to represent complex diagnostic information, potentially accelerating the development of pattern recognition skills essential for accurate analysis by future practitioners [14,15]. Recent studies have highlighted the potential of AI-generated content to improve educational outcomes by providing interactive and visually appealing resources [13,16]. Furthermore, we have shown that the use of AI-generated videos can improve student understanding and retention of medical topics [17]. Generative AI also allows for the customization of educational content to meet individual learning needs, making it a valuable tool in personalized education [18].
Despite the promising potential of genAI, there are several challenges and concerns associated with its use in medical education [19]. One significant issue is the accuracy and reliability of AI-generated images. Medical education requires highly accurate and precise visuals to ensure that students learn correct information. However, genAI models often produce images that lack the necessary anatomical or clinical accuracy, leading to potential misunderstandings [20]. While genAI applications involving direct image analysis are still in their relative infancy, educational tools leveraging genAI’s generative capabilities can adopt different strategies, particularly when the goal is to enhance the human learning process related to image interpretation [21]. Additionally, there are concerns about the integration of AI-generated images into existing curricula, as educators may be hesitant to adopt new technologies without clear evidence of their effectiveness and reliability [22,23]. To address these challenges, and recognizing that proficient human interpretation remains a cornerstone of clinical practice even in the age of genAI analysis, we propose the use of genAI to create mnemonic images rather than strictly medically accurate images. This approach focuses genAI’s capabilities on enhancing cognitive processes—memory and association—fundamental to learning how to interpret complex visual medical data, such as electrocardiogram (ECG) readings.
Mnemonic techniques, considered the art of memory, can be useful for learning difficult and complex information [24]. These techniques involve transforming hard-to-remember material into something more memorable [25]. Visual mnemonics, in particular, aid in recalling abstract or complex information and facilitate both the sequential and immediate retrieval of memorized material [26]. Numerous studies have demonstrated the effectiveness of pictorial mnemonics in improving the recall of factual knowledge, long-term memory retention in college students, and enhancing students’ memory for important textual information integrated by an underlying central theme [27]. In medical education, where the breadth and depth of required knowledge are extensive, memory tools like mnemonics can significantly augment the learning process [28]. This is evident in the widespread use of visual learning platforms such as Sketchy Medical and Picmonic, which have become favored study sources among medical students [29,30]. Mnemonic strategies have proven to be invaluable in equipping students with materials for acquiring straightforward, but nonetheless not easily remembered, facts and information [24].
The problem addressed in this study is the difficulty medical students face in retaining complex information about coronary artery occlusions using traditional ECG diagrams—a critical skill in medical image interpretation. Diagnosing a heart attack, or ST-elevation myocardial infarction (STEMI), involves first identifying ST-segment elevations, often accompanied by reciprocal ST-segment depressions, and then determining which ECG leads show these changes to localize the affected coronary artery. Our intervention involved using genAI (DALLE-3) to create mnemonic-based images overlaid on a 12-lead ECG. These mnemonic overlays are not designed to detect ST-segment elevations in the waveform but to facilitate the correlation of ST-segment elevations in specific leads with the underlying coronary artery territories they represent. By focusing on mnemonics, the AI-generated images can leverage the strengths of visual memory aids while circumventing the need for precise anatomical accuracy. This approach can enhance the learning experience by making complex information more memorable and easier to recall.
The aim of our research was to evaluate the effectiveness of these mnemonic images in enhancing long-term retention and student preference. We conducted a comparative study with a control and an experimental group, measuring exam performance and student preferences using surveys. We hypothesized that the mnemonic-based images would improve long-term retention and be preferred by students over traditional ECG images. To the best of our knowledge, this is the first study to evaluate the effectiveness of generative AI-generated mnemonic images specifically designed to support the interpretation of ECGs in the context of coronary artery localization. While previous work has examined AI tools for image analysis or generic educational support, this study uniquely explores the cognitive benefits of AI-generated images as a personalized learning aid for a high-stakes diagnostic skill.
The remainder of this paper is structured as follows: Section 2 describes the methodology, including participant recruitment, intervention design, and data analysis procedures. Section 3 presents the results from both achievement data and student survey responses. Section 4 discusses the educational implications and limitations of the findings. Finally, Section 5 concludes with a summary of key takeaways and directions for future research.

2. Materials and Methods

This educational research was approved as exempt by the institutional review board of the University of Idaho (21-223). This study was conducted at the University of Idaho WWAMI Medical Education Program, which is part of the six-campus/site-collaborative University of Washington School of Medicine program that serves Washington, Wyoming, Alaska, Montana, and Idaho. The WWAMI program allows students to complete their first two preclinical years of medical education in their home states before transitioning to clinical training, providing an accessible pathway for medical education across these five states. This study involved first-year medical students from all six WWAMI sites (n = 275) who received uniform material and exam questions across all sites. The primary aim was to evaluate the effectiveness of a mnemonic-based image, generated using generative AI, in improving exam performance and educational material preference in ECG interpretation for acute coronary syndrome.

2.1. Participants

The participants included first-year medical students from six different locations or sites. All students received the same lecture content and exam questions. The experimental group comprised 40 students (n = 40) attending the corresponding author’s site (Site 6), while the remaining 235 students served as the control group across the other 5 sites.

2.2. Intervention

All students were enrolled in a 6-week cardiovascular course during which they received a one-hour lecture covering material related to ECG interpretation in acute coronary syndrome, including an image correlating changes in each lead of a 12-lead ECG with its corresponding coronary artery (Figure 1A). The experimental group (n = 40) received additional material featuring a graphic with mnemonic-based images overlaid onto a localization ECG image (Figure 1B). These mnemonic images were generated using DALLE-3 through the ChatGPT interface (the generation prompts and intermediate outputs are shown in Appendix A, Figure A1, Figure A2 and Figure A3), upscaled with Krea.ai, and further refined in Adobe Photoshop.
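The mnemonic images in this study were produced interactively through the ChatGPT interface. For readers who wish to script a comparable pipeline, the sketch below shows how a single mnemonic illustration could be requested from DALLE-3 via the OpenAI Python SDK; the model identifier, prompt, filename, and download step are illustrative assumptions rather than the exact workflow used here.

```python
# Minimal sketch, assuming the OpenAI Python SDK (openai>=1.0) and an
# OPENAI_API_KEY environment variable; this is NOT the exact workflow used in
# the study, which generated images interactively through the ChatGPT interface.
from openai import OpenAI
import urllib.request

client = OpenAI()

# Example prompt adapted from Appendix A (Figure A2a).
prompt = (
    "Generate an image of a labrador dog wearing a cowboy hat and swinging a lasso. "
    "It should be a side view."
)

response = client.images.generate(
    model="dall-e-3",   # DALL-E 3 text-to-image model
    prompt=prompt,
    size="1024x1024",
    n=1,
)

# Save the generated image locally for later upscaling (e.g., Krea.ai) and
# manual refinement (e.g., Adobe Photoshop); the filename is a placeholder.
urllib.request.urlretrieve(response.data[0].url, "mnemonic_labrador.png")
```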

2.3. Assessments

Students were assessed using a multiple-choice exam question (MCQ) related to the localization of coronary artery occlusion in acute coronary syndrome on their weekly exam (Exam 3), conducted 6 days after the lecture, and again on their final course exam (Exam 4), 11 days after the lecture. The same exams were administered to all students at all sites at similar times.

2.4. Data Collection

Following the final exam, students in the experimental group were invited to participate in a survey. A total of 31 students from the experimental group completed the Situational Interest Survey of Multimedia (SIS-M) (Table 1) through the Qualtrics platform. The SIS-M was developed by Dr. Tonia Dousay, a professor specializing in instructional design and educational technology, to assess different aspects of situational interest in multimedia learning environments. Designed for use in educational settings, the SIS-M targets adult learners and evaluates constructs such as triggered situational interest (initial engagement with multimedia), maintained interest, and value interest (perceived usefulness of the content). Initially used to assess the effectiveness of multimedia in promoting engagement and motivation in higher education and adult learning [31,32], the SIS-M has recently been applied to medical education research [17,33,34], making it a suitable tool for evaluating learner engagement in this study.
The survey asked students to consent to participate and then to complete the 12-item SIS-M twice, first referencing the original image and then the experimental image. Items were rated on a 1–5 scale (1 = strongly disagree; 5 = strongly agree), followed by a question asking which image format students preferred and an open-ended question asking, “Why do you think this is your preference?”.
Student exam grades and specific grades on the material-specific exam question were recorded to measure baseline achievement and material-specific achievement, respectively.

2.5. Data Analysis

Researchers utilized SPSS (30.0.0.0 (172)) to analyze the students’ grades and SIS-M survey results. Achievement data were reported as the average exam score at each site and the average score on material-specific exam questions for each site. Because we had access only to average site-level grades for each exam and exam question, rather than individual student grades, differences in exam question achievement between the weekly exam and the final exam were measured using a 2 × 2 contingency table with a one-tailed chi-squared test. Linear regression of site scores between the control groups and the experimental group was performed using GraphPad Prism (v 9.5.1 (733)).
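As a concrete illustration of this site-level comparison, the sketch below shows how a 2 × 2 chi-squared test on aggregated correct/incorrect counts could be computed with SciPy. The counts are placeholders rather than the study data, and SciPy was not part of the original analysis (SPSS and GraphPad Prism were used).

```python
# Minimal sketch with placeholder counts (not the study data): a 2 x 2
# chi-squared comparison of correct vs. incorrect answers on the
# material-specific question between Exam 3 and Exam 4 for one group.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: Exam 3, Exam 4; columns: correct, incorrect (hypothetical counts
# reconstructed from site-level percentages and known group sizes).
observed = np.array([
    [190, 45],   # Exam 3
    [150, 85],   # Exam 4
])

chi2, p_two_sided, dof, expected = chi2_contingency(observed, correction=False)

# With 1 degree of freedom, a directional (one-tailed) p-value can be taken as
# half the two-sided p when the score drop is in the hypothesized direction.
p_one_tailed = p_two_sided / 2
print(f"chi2({dof}) = {chi2:.2f}, one-tailed p = {p_one_tailed:.4f}")
```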
The SIS-M survey analysis considered multiple dimensions of situational interest: triggered interest (Trig), maintained interest (MT), maintained feeling (MF), and maintained value (MV). Given the parametric nature of the data, four paired t-tests were used to evaluate experimental group students’ interest in the original and experimental images.
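A minimal sketch of these four paired comparisons is shown below, using synthetic per-student subscale ratings rather than the actual survey responses; the original analysis was performed in SPSS.

```python
# Minimal sketch with synthetic ratings (not the survey responses): the four
# paired t-tests comparing SIS-M subscale scores for the original (OR) vs.
# experimental (EX) image, mirroring Table 2.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n = 31  # number of survey respondents

# Hypothetical per-student subscale means on the 1-5 Likert scale.
subscales = {
    "Trig": (rng.normal(2.5, 0.9, n), rng.normal(4.7, 0.4, n)),
    "MT":   (rng.normal(3.8, 0.9, n), rng.normal(4.6, 0.5, n)),
    "MF":   (rng.normal(3.5, 1.0, n), rng.normal(4.5, 0.6, n)),
    "MV":   (rng.normal(4.2, 0.9, n), rng.normal(4.7, 0.5, n)),
}

for name, (original, experimental) in subscales.items():
    original = np.clip(original, 1, 5)           # keep synthetic data in range
    experimental = np.clip(experimental, 1, 5)
    t_stat, p_value = ttest_rel(original, experimental)  # paired (dependent) t-test
    print(f"{name}: t = {t_stat:.3f}, two-sided p = {p_value:.4g}")
```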
For the open-ended question in the SIS-M survey, thematic analysis was conducted using multiple large language models (LLMs), including ChatGPT (GPT4o and o1-preview) and Claude 3.5 Sonnet (Figure 2). This involved generating initial codes and identifying themes, followed by the researcher combining and refining these themes for overlap and relevancy between the three LLM models [17,33,34]. Prompt engineering techniques used included Persona Prompting [35,36], Zero-Shot Chain of Thought (CoT) [37], and Self-Criticism [38]. Zero-Shot Chain of Thought prompting was omitted in prompts utilizing the ChatGPT o1-preview model as it has built-in Tree-of-Thought functionality in every output. The initial prompt was the following:
“Act like a brilliant medical education researcher. I am doing a study on the use of a graphic with mnemonic-based images overlayed onto an ECG image that teaches the localization coronary artery obstructions during STEMI. These mnemonic-based images were generated with generative AI. I surveyed the participants on their preference of the mnemonic image over the traditional image that only included colored boxes over the ECG image and asked them to explain their preference. Please perform a thematic analysis on the below participant responses marked between <response> </response>. Let’s work this out in a step by step way to be sure we have the right answer.
<response>
Participant responses here
</response>”
The follow-up query in the conversation was a Self-Criticism prompt: “Please reflect on your previous answer for any errors”.
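The thematic analysis itself was carried out through the ChatGPT and Claude web interfaces. For reproducibility, the sketch below shows one way the same two-turn conversation (the persona/zero-shot CoT prompt followed by the Self-Criticism prompt) could be scripted against the OpenAI chat API; the model name and the placeholder prompt text are assumptions, not part of the original workflow.

```python
# Minimal sketch (assumed API usage): scripting the two-turn thematic-analysis
# conversation with the OpenAI Python SDK. The study ran these prompts through
# the ChatGPT and Claude web interfaces; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model identifier

responses_text = "Participant responses here"  # open-ended SIS-M answers

initial_prompt = (
    "Act like a brilliant medical education researcher. "
    "[full thematic-analysis prompt from Section 2.5 goes here] "
    "Let's work this out in a step by step way to be sure we have the right answer.\n"
    f"<response>\n{responses_text}\n</response>"
)

messages = [{"role": "user", "content": initial_prompt}]
first = client.chat.completions.create(model=MODEL, messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Self-Criticism follow-up, as described in the Methods.
messages.append({"role": "user", "content": "Please reflect on your previous answer for any errors"})
second = client.chat.completions.create(model=MODEL, messages=messages)
print(second.choices[0].message.content)
```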

2.6. Ethical Considerations

This educational research was approved as exempt by the institutional review board of the University of Idaho (21-223). According to OpenAI’s Content Policy and Terms of Use, users retain ownership of images generated with DALLE-3. The images used in this study were not representations of real individuals but rather generic, non-identifiable subjects such as dogs, insects, and a race car, minimizing any ethical or privacy concerns related to likeness or identity.

3. Results

To address the question of increased performance with the experimental media, we first measured baseline knowledge of students across the six sites. The average exam score for each of the four exams in the course was used for this measure (Figure 3A). The experimental site (Site 6) did not show significantly higher overall exam scores compared to the other sites, indicating comparable baseline knowledge levels across the experimental and control groups.
Students were assessed on their knowledge of ECG interpretation during acute coronary syndrome presentation, six days after learning the material (Exam 3) and eleven days after learning the material on the course final exam (Exam 4). This was performed by having them answer an MCQ question related to the material covered in the lecture. Regarding Exam 3, there was no significant difference in achievement between any sites, including the experimental site (Figure 3B,C). This suggests that the initial understanding of the material was similar across all groups shortly after the lecture.
However, on the final exam (Exam 4) question related to the same material, a significant difference was observed. Students in the control group at all sites, but not those in the experimental group, showed a significant drop in exam question scores from the previous exam (Figure 3B,D). This drop indicates a decline in long-term retention of the material. In contrast, students in the experimental group had a slight drop in scores that was statistically insignificant, demonstrating that the mnemonic-based images promoted better long-term memory retention of the material.
Linear regression analysis of the material-related question scores on the weekly exam and final exam supports these findings. The analysis showed a non-statistically significant but trending interaction effect between the group (control vs. experimental) and time (weekly exam vs. final exam) on exam scores. The experimental group exhibited a smaller decline in scores on the final exam compared to the control group, supporting the effectiveness of the mnemonic-based images in promoting long-term retention (Figure 3C,D).
To address the question of increased interest and preference for the experimental media, four paired sample t-tests were conducted to explore the students’ preference for the redesigned learning materials over the original ones. The results revealed significant differences on all four interest dimensions among the 31 participants (Table 2). Notably, 80% (n = 25) of the participants preferred the experimental image, while 13% (n = 4) preferred the original image, and 6% (n = 2) had no preference. The fact that the vast majority of students preferred the mnemonic-based image underscores the importance of the study’s findings.
The participants’ average triggered situational interest (Trig) in the experimental image (M = 4.69, SD = 0.43) was significantly higher than in the original image (M = 2.52, SD = 0.90), t = −11.709, p < 0.001. The 95% confidence interval for the mean difference between the two ratings was −2.55 to −1.79, suggesting a preference for the experimental image.
The findings for maintained (MT) interest indicated that the participants’ interest rating of the experimental image (M = 4.60, SD = 0.48) was significantly greater than that of the original learning image (M = 3.83, SD = 0.90), t = −3.816, p < 0.001. The 95% confidence interval for the mean difference between the two ratings was −1.18 to −0.36.
The results for maintained-feeling (MF) interest revealed that the participants’ interest rating of the experimental image (M = 4.46, SD = 0.62) was significantly greater than that of the original image (M = 3.49, SD = 1.01), t = −4.069, p < 0.001. The 95% confidence interval for the mean difference between the two ratings was −1.45 to −0.48.
The outcomes for maintained-value (MV) interest suggested that the participants’ interest rating of the experimental image (M = 4.74, SD = 0.47) was significantly greater than that of the original image (M = 4.17, SD = 0.94), t = −2.956, p = 0.006. The 95% confidence interval for the mean difference between the two ratings was −0.97 to −0.18.
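As a worked check (not reported in the original analysis), the t statistic and confidence interval for the triggered-interest comparison can be reproduced from the summary values in Table 2, with n = 31, df = 30, and a two-tailed critical value of approximately 2.042:

```latex
t \;=\; \frac{\bar{d}}{s_d/\sqrt{n}}
  \;=\; \frac{-2.17}{1.03/\sqrt{31}}
  \;\approx\; \frac{-2.17}{0.185}
  \;\approx\; -11.7,
\qquad
\text{95\% CI} \;=\; \bar{d} \pm t_{0.975,\,30}\,\frac{s_d}{\sqrt{n}}
  \;\approx\; -2.17 \pm 2.042 \times 0.185
  \;\approx\; (-2.55,\; -1.79).
```

The small differences from the reported values reflect rounding of the summary statistics.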
A thematic analysis of the open-ended survey responses in the SIS-M provided insights into why students preferred the experimental image over the traditional image. Four primary themes emerged from the responses:
  • Mnemonic’s Impact on Retention and Learning: The mnemonic-based image helped with long-term retention and made memorization easier, both for exams and in the long term.
  • Engagement and Interest: The mnemonic-based image made the material more fun and interactive compared to traditional learning methods.
  • Preference for Mnemonic Imagery: Participants favored mnemonic-based characters and visuals over the traditional colored boxes, citing better memorization aids.
  • Challenges with Mnemonic:
    • Confusion with Mnemonic Imagery: Some found the mnemonic elements confusing or hard to connect to the content.
    • Prior Memorization of Traditional Image: Some participants struggled with switching to the mnemonic image after memorizing the traditional one.
In summary, participants overwhelmingly preferred the mnemonic-based image for its ability to enhance memorability, make learning more engaging, and simplify complex information. Achievement data supported this finding by demonstrating improved long-term retention of students’ ability to correlate ST-segment elevations in specific ECG leads with their corresponding obstructed coronary artery. However, some participants noted challenges, such as confusion with certain mnemonic elements or the influence of image order on their preference. Overall, the mnemonic-based image provided a more interactive and memorable learning experience for most participants.

4. Discussion

This study presents a novel application of genAI by using it to create mnemonic-based images that support medical students in learning ECG interpretation for coronary artery localization. Unlike prior uses of genAI in medical education that have focused on automated analysis or content delivery, this approach evaluates AI-generated images as a tool to enhance cognitive processes, specifically the long-term retention of complex visual associations. The results revealed two main findings: First, students exposed to mnemonic-based images generated by genAI showed improved long-term retention of the material as compared to those exposed to the traditional ECG image. This was demonstrated by a lesser decline in scores on the final exam for the experimental group compared to the control group. Second, there was a notable increase in student preference for the mnemonic images, with 80% of participants favoring the redesigned materials as opposed to the original ECG diagrams. These mnemonic images significantly improved the students’ ability to recall and apply information over time due to the effective use of word/image associations. This study showed that such associations help in memorizing, as they create visually interesting and captivating materials, thus reducing cognitive load. This approach made learning more interesting and interactive while at the same time reinforcing memory through powerful visual cues for complex medical concepts, ultimately leading to the enhanced long-term retention of interpreting ECGs related to acute coronary syndrome.
The enhanced memory retention observed in this study can be attributed to the mnemonic images’ ability to create strong visual associations, aligning with established principles of memory formation and recall [39]. The principal goal of mnemonic instruction is to help students remember facts and concepts, which is imperative for academic success, as content in every area needs to be memorized and quickly retrieved [10]. The increased student engagement with mnemonic images likely stems from their reported captivating and entertaining nature, which aligns with previous findings on the effectiveness of pictorial mnemonics in improving recall of factual knowledge [27]. The reduction in cognitive load, as reported by the students, contributed to improved performance by allowing more cognitive resources to be allocated to understanding and retaining the material, rather than struggling with memorization. These findings are consistent with the existing literature on mnemonic use in medical education; notably, Sketchy Medical is heavily utilized throughout the organ-system-based curriculum during the first year of medical education [30]. However, this study extends beyond traditional mnemonic techniques by incorporating AI-generated images, offering a new approach to creating personalized and engaging educational content. Previous studies have shown the potential of AI-generated content in improving educational outcomes and knowledge acquisition [20]. This research specifically demonstrates its effectiveness in creating mnemonic aids for complex medical concepts, bridging the gap between traditional mnemonics and cutting-edge AI technology in educational materials.
The efficacy of AI-generated mnemonic images in enhancing the retention of ECG interpretation skills indicates significant potential for this approach to be used across various domains within medical education. Complex subjects, such as anatomy, pharmacology, and pathophysiology, could benefit from the implementation of similar mnemonic-based resources, thereby transforming the way students assimilate extensive medical information [14,40,41,42]. The greater part of the material that must be learned in medical school consists of words and numbers. Mnemonics employ a form of chunking [4,43], decreasing the number of items to remember by grouping them together, which is particularly beneficial in the context of medical education, where continuing education and the maintenance of knowledge are of utmost importance. Visual mnemonics serve the brain by building associations between diagnoses and disease processes with easy-to-recall images [43]. The incorporation of these tools into current educational frameworks could involve supplementing standard lectures with AI-generated mnemonics, promoting a more diverse learning atmosphere. This may influence teaching strategies by encouraging educators to incorporate more visual and associative learning modalities. This can further lead to the development of comprehensive, AI-enhanced mnemonic resources tailored for medical education. Increased student engagement and preference for mnemonic images indicate that this approach could significantly enhance student satisfaction and overall learning experience. By decreasing cognitive load and making hard-to-remember information more memorable, these tools could lessen the pressure associated with the demanding nature of medical education. AI-generated mnemonics can be applied as a resource to aid students in efficiently retaining important information as medical knowledge continues to expand.

Limitations

The strengths of this study are evident in its rigorous design, including the multicampus setting, which offered a varied student demographic and ensured uniform lecture content across all locations. The integration of both qualitative and quantitative data, such as exam performance indicators and the Situational Interest Survey of Multimedia (SIS-M), provided a thorough understanding of the intervention’s effects. Moreover, the novel application of genAI for developing mnemonic-based educational resources marks a notable progression in research methodology within education. Nonetheless, the study also faced limitations. There is a risk of self-selection bias in the survey responses, possibly skewing the qualitative data, as students with strong opinions on the mnemonic images may have been more inclined to respond and participate. The applicability of the findings to other medical schools or educational contexts might be restricted due to the study’s particular circumstances. Furthermore, the relatively small size of the experimental group (n = 40) compared to the control group (n = 235) could have affected the statistical power of the results and may not fully reflect the broader student population.
Furthermore, the use of genAI technologies also poses a challenge in environments with varying technological resources. However, these limitations are potentially mitigated by the growing accessibility of user-friendly AI platforms, which are simplifying the use of AI in educational contexts and may broaden the applicability of such innovative teaching tools.
To overcome these constraints and further investigate the potential of AI-generated mnemonic tools in medical education, future studies should aim to broaden the scope and scale of similar research. Examining the use of this technique in other medical fields and disciplines, such as anatomy, pharmacology, or pathophysiology, could yield important insights into its overall effectiveness. Longitudinal studies that assess the long-term impact of mnemonic-based images on medical education would be useful in understanding their lasting influence on knowledge retention and clinical application. To improve the generalizability and statistical significance of the results, future research should seek to involve larger and more varied groups of students from different medical institutions and educational backgrounds. Moreover, investigating various types of mnemonic images and their effects on different learning styles could help customize this method to meet the diverse needs and preferences of students, potentially resulting in more personalized and effective educational strategies in medical training.

5. Conclusions

This study demonstrates the significant potential of AI-generated mnemonic images to improve learning and retention in medical education, especially in challenging subjects and hard-to-remember content. The enhancement of long-term retention and heightened student engagement observed through these innovative educational resources emphasize their significance in tackling the issues of information overload in medical training. These findings encourage a future where genAI may transform educational practices in medicine, providing personalized, engaging, and effective learning experiences. As the medical field continues to evolve, the incorporation of innovative methods like AI-generated mnemonic images becomes increasingly fundamental. By adopting these technological advancements, medical educators can better prepare future healthcare professionals with the essential knowledge and skills to navigate the complexities of medicine, ultimately leading to improved patient care and outcomes. Future studies should investigate the application of this approach in other subject areas to evaluate its generalizability across the medical curriculum.

Author Contributions

Conceptualization, T.B.; data curation, T.B.; formal analysis, M.G. and T.B.; investigation, T.B.; methodology, T.B.; visualization, T.B.; writing—original draft, Z.A., M.G. and T.B.; writing—review and editing, Z.A. and T.B. All authors have read and agreed to the published version of the manuscript.

Funding

No funding was used in support of this study.

Institutional Review Board Statement

This study was approved as exempt by the institutional review board of the University of Idaho (21-223).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets presented in this article are not readily available because of the sensitive nature of the students’ grades. Requests to access the datasets should be directed to Tyler Bland (tbland@uidaho.edu).

Acknowledgments

We would like to thank all the experimental-site students for their participation in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SIS-M    Situational Interest Survey of Multimedia
genAI    Generative artificial intelligence
Trig     Triggered interest
MT       Maintained interest
MF       Maintained feeling
MV       Maintained value
LLM      Large language model
MCQ      Multiple-choice question
ECG      Electrocardiogram
STEMI    ST-elevation myocardial infarction

Appendix A

Figure A1. DALLE-3 image generation of an ant climbing up a ladder. (a) Prompt: “Please generate an image of an ant body bent at 90 degrees”. (b) Prompt: “Please generate an image of a wooden ladder propped up against a wall and slightly angling away from the camera. White background”.
Figure A2. DALLE-3 image generation of a labrador with a lasso. (a) Prompt: “Generate an image of a labrador dog wearing a cowboy hat and swinging a lasso. It should be a side view”. (b) The image from (a) was modified with the prompt: “The dog should be using his paws and legs to look like he is actively swinging the lasso”.
Figure A3. DALLE-3 image generation of a terrier driving a race car. (a) Prompt: “Please generate an image of a terrier driving a race car. Use a realistic style”. (b) The image from (a) was modified with the prompt: “Please generate it from a side view”. (c) The image from (b) was modified with the prompt: “It looks like the car is backwards. Please generate an image of the entire car with the terrier driving from a side view. Use a 16:9 ratio and have it on a white background”.

References

  1. Gutierrez, C.; Cox, S.; Dalrymple, J. The Revolution in Medical Education. Tex. Med. 2016, 112, 58–61. [Google Scholar] [PubMed]
  2. Samarasekera, D.D.; Goh, P.S.; Lee, S.S.; Gwee, M.C.E. The clarion call for a third wave in medical education to optimise healthcare in the twenty-first century. Med. Teach. 2018, 40, 982–985. [Google Scholar] [CrossRef] [PubMed]
  3. Klatt, E.C.; Klatt, C.A.M. How much is too much reading for medical students? Assigned reading and reading rates at one medical school. Acad. Med. 2011, 86, 1079–1083. [Google Scholar] [CrossRef] [PubMed]
  4. Densen, P. Challenges and Opportunities Facing Medical Education. Trans. Am. Clin. Climatol. Assoc. 2011, 122, 48–58. [Google Scholar]
  5. Zeeman, J.M.; Kang, I.; Angelo, T.A. Assessing student academic time use: Assumptions, predictions and realities. Med. Educ. 2018, 53, 285–295. [Google Scholar] [CrossRef]
  6. Ho, P.A.; Girgis, C.; Rustad, J.K.; Noordsy, D.; Stern, T.A. Advancing Medical Education Through Innovations in Teaching During the COVID-19 Pandemic. Prim. Care Companion J. Clin. Psychiatry 2021, 23, 20nr02847. [Google Scholar] [CrossRef]
  7. Musalamadugu, T.S.; Kannan, H. Generative AI for medical imaging analysis and applications. Future Med. AI 2023, 1. [Google Scholar] [CrossRef]
  8. Pinto-Coelho, L. How Artificial Intelligence Is Shaping Medical Imaging Technology: A Survey of Innovations and Applications. Bioengineering 2023, 10, 1435. [Google Scholar] [CrossRef]
  9. Esteva, A.; Chou, K.; Yeung, S.; Naik, N.; Madani, A.; Mottaghi, A.; Liu, Y.; Topol, E.; Dean, J.; Socher, R. Deep learning-enabled medical computer vision. NPJ Digit. Med. 2021, 4, 5. [Google Scholar] [CrossRef]
  10. Lang, O.; Yaya-Stupp, D.; Traynis, I.; Cole-Lewis, H.; Bennett, C.R.; Lyles, C.R.; Lau, C.; Irani, M.; Semturs, C.; Webster, D.R.; et al. Using generative AI to investigate medical imagery models and datasets. EBioMedicine 2024, 102, 105075. [Google Scholar] [CrossRef]
  11. Najjar, R. Redefining Radiology: A Review of Artificial Intelligence Integration in Medical Imaging. Diagnostics 2023, 13, 2760. [Google Scholar] [CrossRef] [PubMed]
  12. Narayanan, S.; Ramakrishnan, R.; Durairaj, E.; Das, A. Artificial Intelligence Revolutionizing the Field of Medical Education. Cureus 2023, 15, e49604. [Google Scholar] [CrossRef]
  13. Aktay, S. The usability of Images Generated by Artificial Intelligence (AI) in Education. Int. Technol. Educ. J. 2022, 6, 51–62. [Google Scholar]
  14. Arango-Ibanez, J.P.; Posso-Nuñez, J.A.; Díaz-Solórzano, J.P.; Cruz-Suárez, G. Evidence-Based Learning Strategies in Medicine Using AI. JMIR Med. Educ. 2024, 10, e54507. [Google Scholar] [CrossRef]
  15. dos Santos, D.P.; Giese, D.; Brodehl, S.; Chon, S.H.; Staab, W.; Kleinert, R.; Maintz, D.; Baeßler, B. Medical students’ attitude towards artificial intelligence: A multicentre survey. Eur. Radiol. 2018, 29, 1640–1646. [Google Scholar] [CrossRef] [PubMed]
  16. Chassignol, M.; Khoroshavin, A.; Klimova, A.; Bilyatdinova, A. Artificial Intelligence trends in education: A narrative overview. Procedia Comput. Sci. 2018, 136, 16–24. [Google Scholar] [CrossRef]
  17. Worthley, B.; Guo, M.; Sheneman, L.; Bland, T. Antiparasitic Pharmacology Goes to the Movies: Leveraging Generative AI to Create Educational Short Films. AI 2025, 6, 60. [Google Scholar] [CrossRef]
  18. Holmes, W.; Bialik, M.; Fadel, C. Artificial Intelligence in Education: Promises and Implications for Teaching and Learning; Center for Curriculum Redesign: Boston, MA, USA, 2019; p. 228. [Google Scholar]
  19. Dave, M.; Patel, N. Artificial intelligence in healthcare and education. Br. Dent. J. 2023, 234, 761. [Google Scholar] [CrossRef]
  20. Noel, G.P.J.C. Evaluating AI-powered text-to-image generators for anatomical illustration: A comparative study. Anat. Sci. Educ. 2024, 17, 979–983. [Google Scholar] [CrossRef]
  21. Masters, K. Ethical use of Artificial Intelligence in Health Professions Education: AMEE Guide No. 158. Med. Teach. 2023, 45, 574–584. [Google Scholar] [CrossRef]
  22. Schiff, D. Out of the laboratory and into the classroom: The future of artificial intelligence in education. AI Soc. 2020, 36, 331–348. [Google Scholar] [CrossRef]
  23. Watty, K.; McKay, J.; Ngo, L. Innovators or inhibitors? Accounting faculty resistance to new educational technologies in higher education. J. Account. Educ. 2016, 36, 1–15. [Google Scholar] [CrossRef]
  24. Mostafa, E.A.; El Midany, A.A. Review of mnemonic devices and their applications in cardiothoracic surgery. J. Egypt. Soc. Cardio-Thorac. Surg. 2017, 25, 79–90. [Google Scholar] [CrossRef]
  25. Richland, L.E.; Kornell, N.; Kao, L.S. The Pretesting Effect: Do Unsuccessful Retrieval Attempts Enhance Learning? J. Exp. Psychol. Appl. 2009, 15, 243–257. [Google Scholar] [CrossRef] [PubMed]
  26. Rummel, N.; Levin, J.R.; Woodward, M.M. Do pictorial mnemonic text-learning aids give students something worth writing about? J. Educ. Psychol. 2003, 95, 327–334. [Google Scholar] [CrossRef]
  27. Lubin, J.; Polloway, E.A. Mnemonic Instruction in Science and Social Studies for Students with Learning Problems: A Review. Learn. Disabil. Contemp. J. 2016, 14, 207–224. [Google Scholar]
  28. O’hanlon, R.; Laynor, G. Responding to a new generation of proprietary study resources in medical education. J. Med. Libr. Assoc. 2019, 107, 251–257. [Google Scholar] [CrossRef]
  29. Fischetti, C.; Bhatter, P.; Frisch, E.; Sidhu, A.; Helmy, M.; Lungren, M.; Duhaime, E. The Evolving Importance of Artificial Intelligence and Radiology in Medical Trainee Education. Acad. Radiol. 2022, 29, S70–S75. [Google Scholar] [CrossRef]
  30. Wu, J.H.; Gruppuso, P.A.; Adashi, E.Y. The Self-directed Medical Student Curriculum. JAMA 2021, 326, 2005–2006. [Google Scholar] [CrossRef]
  31. Dousay, T.A. Effects of redundancy and modality on the situational interest of adult learners in multimedia learning. Educ. Technol. Res. Dev. 2016, 64, 1251–1271. [Google Scholar] [CrossRef]
  32. Dousay, T.A.; Trujillo, N.P. An examination of gender and situational interest in multimedia learning environments. Br. J. Educ. Technol. 2018, 50, 876–887. [Google Scholar] [CrossRef]
  33. Bland, T.; Guo, M.; Dousay, T.A. Multimedia design for learner interest and achievement: A visual guide to pharmacology. BMC Med. Educ. 2024, 24, 113. [Google Scholar] [CrossRef]
  34. Bland, T. Enhancing Medical Student Engagement Through Cinematic Clinical Narratives: Multimodal Generative AI–Based Mixed Methods Study. JMIR Med. Educ. 2025, 11, e63865. [Google Scholar] [CrossRef] [PubMed]
  35. Wang, J.; Liu, Z.; Zhao, L.; Wu, Z.; Ma, C.; Yu, S.; Dai, H.; Yang, Q.; Liu, Y.; Zhang, S.; et al. Review of Large Vision Models and Visual Prompt Engineering. Meta-Radiology 2023, 1, 100047. [Google Scholar] [CrossRef]
  36. White, J.; Fu, Q.; Hays, S.; Sandborn, M.; Olea, C.; Gilbert, H.; Elnashar, A.; Spencer-Smith, J.; Schmidt, D.C. A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT. Available online: https://arxiv.org/abs/2302.11382v1 (accessed on 23 September 2024).
  37. Kojima, T.; Gu, S.S.; Reid, M.; Matsuo, Y.; Iwasawa, Y. Large Language Models are Zero-Shot Reasoners. In Proceedings of the 36th Conference on Neural Information Processing Systems, New Orleans, LA, USA, 28 November–9 December 2022; Volume 35, pp. 22199–22213. Available online: https://arxiv.org/abs/2205.11916v4 (accessed on 23 September 2024).
  38. Huang, J.; Gu, S.S.; Hou, L.; Wu, Y.; Wang, X.; Yu, H.; Han, J. Large Language Models Can Self-Improve. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Singapore, 6–10 December 2023; 2023; pp. 1051–1068. [Google Scholar] [CrossRef]
  39. Dresler, M.; Shirer, W.R.; Konrad, B.N.; Müller, N.C.; Wagner, I.C.; Fernández, G.; Czisch, M.; Greicius, M.D. Mnemonic Training Reshapes Brain Networks to Support Superior Memory. Neuron 2017, 93, 1227–1235.e6. [Google Scholar] [CrossRef] [PubMed]
  40. Al-Maqbali, F.; Ambusaidi, A.; Shahat, M.A.; Alkharusi, H. The effect of teaching science based on mnemonics in reducing the sixth-grade female students’ cognitive load according to their imagery style. J. Posit. Psychol. 2022, 6, 2069–2084. [Google Scholar]
  41. Zachariah, A.M.; Meenu, S.; Vijayalakshmi, G.; Pothen, L. Effectiveness of mnemonics based teaching in medical education. Int. J. Health Sci. 2022, 6, 9635–9640. [Google Scholar]
  42. Bland, T.; Guo, M. Visual Mnemonics and Gamification: A New Approach to Teaching Muscle Physiology. J. Technol.-Integr. Lessons Teach. 2024, 3, 73–82. [Google Scholar] [CrossRef]
  43. Lewis, J.B.; Mulligan, R.; Kraus, N. The Importance of Medical Mnemonics in Medicine. Pharos 2018, 30–35. [Google Scholar]
Figure 1. Image to support coronary artery obstruction localization via ECG. (A) Original image presenting the relationship between ECG leads and coronary artery and heart localization (lateral, inferior, and anterior). (B) Experimental image with mnemonic-based illustrations overlaid on their respective ECG leads and explanations of the illustrations below.
Figure 2. Overview of thematic analysis. The initial prompt utilized prompt engineering techniques that included Persona Prompting [35,36], Zero-Shot Chain of Thought (CoT) [37], and Self-Criticism [38]. The output from all three models was reviewed, refined, and combined into the final analysis by the researcher.
Figure 3. Student achievement analysis. (A) Exam scores for all exams across all sites. Site 6 received the experimental image along with the original image. (B) Scores on the exam question relating to material covered by the coronary artery obstruction localization with ECG images. A similar question was assessed on Exam 3 (six days post-lecture) and Exam 4 (eleven days post-lecture). (C,D) Linear regression analysis of the percentage of students scoring a correct answer on Exam 3 (C) and Exam 4 (D) questions covered by the original and experimental images. * p < 0.05, ** p < 0.01, and *** p < 0.001. Exp: experimental; Org: original.
Table 1. SIS-M Items.
SIS Type | Survey Item
SI-triggered | The image was interesting.
SI-triggered | The image grabbed my attention.
SI-triggered | The image was often entertaining.
SI-triggered | The image was so exciting, it was easy to pay attention.
SI-maintained-feeling | What I learned in the image is fascinating to me.
SI-maintained-feeling | I am excited about what I learned in the image.
SI-maintained-feeling | I like what I learned in the image.
SI-maintained-feeling | I found the information in the image interesting.
SI-maintained-value | What I studied in the image is useful for me to know.
SI-maintained-value | The things I studied in the image are important to me.
SI-maintained-value | What I learned in the image can be applied to my job.
SI-maintained-value | I learned valuable things in the image.
Table 2. Paired sample t-tests for triggered situational interest (Trig), maintained interest (MT), maintained feeling (MF), and maintained value (MV). OR: original image; EX: experimental image.
Pair | Mean | Std. Deviation | Std. Error Mean | 95% CI Lower | 95% CI Upper | t | df | One-Sided p | Two-Sided p
OR Trig-EX Trig | −2.17 | 1.03 | 0.19 | −2.55 | −1.79 | −11.709 | 30 | <0.001 | <0.001
OR MT-EX MT | −0.77 | 1.12 | 0.20 | −1.18 | −0.36 | −3.816 | 30 | <0.001 | <0.001
OR MF-EX MF | −0.97 | 1.32 | 0.24 | −1.45 | −0.48 | −4.069 | 30 | <0.001 | <0.001
OR MV-EX MV | −0.57 | 1.08 | 0.19 | −0.97 | −0.18 | −2.956 | 30 | 0.003 | 0.006
