Article

The Curriculum in IDD Healthcare (CIDDH) eLearn Course: Evidence of Continued Effectiveness Using the Streamlined Evaluation and Analysis Method (SEAM)

1 Department of Sociology and Demography, The University of Texas at San Antonio, San Antonio, TX 78249, USA
2 Bartkowski & Associates Research Team, San Antonio, TX 78258, USA
* Author to whom correspondence should be addressed.
Knowledge 2024, 4(1), 68-84; https://doi.org/10.3390/knowledge4010004
Submission received: 14 November 2023 / Revised: 28 January 2024 / Accepted: 12 February 2024 / Published: 21 February 2024

Abstract

Medical professionals are rarely trained to treat the unique healthcare needs and health disparities of people with intellectual and developmental disabilities (IDD). The Curriculum in IDD Healthcare (CIDDH) eLearn course aims to redress gaps in the delivery of medical care to people with IDD. An initial comprehensive evaluation of CIDDH in-person training content had previously underscored its knowledge and skill transfer efficacy for Mississippi healthcare providers. Training content has recently become available to medical professionals nationwide through an online self-paced modality to address physicians’ IDD education needs. This study introduces and applies a new evaluation framework called SEAM (Streamlined Evaluation and Analysis Method) that offers a promising avenue for rendering a follow-up appraisal after rigorous evidence of program effectiveness has been previously established. SEAM reduces the data-reporting burden on trainees and maximizes instructor–trainee contact time by relying on an abbreviated post-only questionnaire focused on subjective trainee appraisals. It further reduces methodological and analytical complexity to enhance programmatic self-assessment and facilitate sound data interpretation when an external evaluator is unavailable. Ratings from a small sample of early-cohort trainees provide an important test of effectiveness during CIDDH’s transition to online learning for clinicians nationwide. Using SEAM, CIDDH achieved high ratings from this initial wave of trainees across various evaluative domains. The study concludes by highlighting several promising implications for CIDDH and SEAM.

1. Introduction

During the past several years, clinician training programs designed to improve healthcare delivery to patients with intellectual and developmental disabilities (IDD) have proliferated given these patients’ significant health disparities. Many of these programs have been evaluated [1,2,3]. Systematic reviews of such evaluations have underscored the positive impacts of providing clinician training in IDD healthcare [4,5]. Benefits of such training programs include increased clinician knowledge about the unique health challenges faced by patients with IDD, improved understanding of effective treatment techniques, and enhanced clinician comfort when delivering such care [6]. An expanded awareness of health disparities commonly observed among patients with IDD and increased knowledge of interventions proven to ameliorate these disparities have also been evident [7,8,9].
Not all clinician trainings focused on IDD healthcare delivery are equally effective due to variation in disability type, severity, and complexity. For example, clinician trainings can offset the skill gap among doctors who treat patients with autism spectrum disorder (ASD) and people with moderate forms of IDD, but effectively treating patients who are deaf, blind, or affected by more severe disabilities can remain elusive [10,11]. Prior experience treating patients with IDD can be a tremendous asset, as can non-clinical contact opportunities, holistic clinician education, and consultations among medical colleagues and health professionals [7,11,12,13]. Yet, problematic consequences are commonly observed when clinicians or other medical professionals are ill-equipped to manage behavioral challenges presented by some patients with IDD [14,15,16]. Clinicians unaware of the behavioral dimensions of some IDD conditions may misinterpret unusual actions or habits as “patient noncompliance”. Consequently, IDD-related comorbid developmental disorders often go unaddressed [4]. Moreover, mental health issues often co-occur with IDD, a circumstance that can create a mental health literacy gap because primary care providers often miss early signs of these issues [17,18]. The benefits of experiential learning and persistent training for IDD practitioners have been emphasized in recent research [19]. Behavior support plans have been shown to increase practitioners’ skills while reducing challenging IDD-related behaviors [5]. A balanced approach (varied educational training with hands-on applications) to treating individuals with IDD has been emphasized since the start of the COVID-19 pandemic and has been outlined in new training paradigms and frameworks [20,21,22]. Overall, this body of research underscores the value of offering practical training strategies and ongoing medical education on treating patients with IDD.
The present study seeks to build on this scholarship by offering an evaluation of a holistic (multi-module, wide-ranging) clinician training program designed to enhance care for patients with IDD, namely, the Curriculum in IDD Healthcare (CIDDH). Because this program has recently transitioned to online implementation, data from an initial wave of trainees are used to conduct an early-cohort analysis focused on discerning eLearning programmatic effectiveness. This investigation also advances an innovative approach to evaluation called SEAM (Streamlined Evaluation and Analysis Method). A substantial portion of core content associated with CIDDH has been rigorously evaluated in a previous publication with compelling evidence of implementation effectiveness [1]. That earlier comprehensive evaluation relied principally on pretest/post-test knowledge gains and a subjective skills barometer to define and discern training success associated with face-to-face medical training sessions. Statistically significant changes from pretest to post-test were commonly observed on objective knowledge and skill items in that earlier evaluation, as were significant pre-to-post improvements in the subjective skills barometer.
SEAM is appropriate for this investigation because it relies on a ratio-based analysis of response distributions, namely, the percent of superlative (best possible) responses as effectiveness thresholds for each evaluation item. SEAM does not, therefore, require a large sample. Consequently, analyses of data from an early-cohort sample of approximately 100 CIDDH trainees are sufficient to apply this newly developed method. SEAM is employed here as a concise replication of that earlier evaluation [1] given the newly expanded clinician training base and delivery through a self-paced online learning platform. Ancillary content related to advancements in the field has also been integrated into CIDDH. An early-cohort analysis of the CIDDH eLearn course with a small sample of trainees provides a stringent test of the program’s effectiveness at the transition point to online learning. Collecting and analyzing data from a larger sample would take considerably longer, by which point post-transition adaptations would make the online transition point itself impossible to evaluate. This study aims to capture an evaluation snapshot of a program previously deemed effective, shortly after its transition from in-person didactic instruction to self-paced online learning.
SEAM is implemented in this study as a minimal-burden data collection effort to determine if positive evaluations persist as a program has evolved from an in-person format to an online modality with a newly expanded nationwide trainee base. The streamlined approach offered by SEAM is a viable follow-up to an earlier comprehensive evaluation. SEAM relies on post-only assessments designed to minimize any time spent away from training delivery given the various time demands on clinicians. SEAM incorporates evaluative domains to discern overall and component-specific effectiveness (e.g., handout and presentation quality). Most importantly, SEAM can be employed by organizations without sufficient capacity to secure an external evaluator as a means of conducting a self-assessment with easily applied rules for sound data interpretation. SEAM is not proposed as a model for establishing the newfound evidence-based status of a program. However, it can be used to ensure the quality of service delivery in a training program that previously produced scientific evidence of its effectiveness.
As noted, prior research highlights the many advantages associated with trainings and curricula that bolster healthcare practitioners’ capacity to treat patients with IDD. As people with IDD have moved into community settings and away from institutionalized environments, medical clinicians play a crucial role in delivering essential care to such patients [1]. Craig Escude, M.D., President of IntellectAbility and a physician with more than 25 years of experience in the field, created the Curriculum in IDD Healthcare (CIDDH) eLearn course to help clinicians cultivate the requisite knowledge (e.g., IDD awareness) and skills (e.g., technical expertise, treatment capacities) to bolster care for patients with IDD. CIDDH was developed to empower practicing clinicians with enhanced medical techniques for delivering high-quality IDD healthcare (https://replacingrisk.com/curriculum-in-idd-healthcare-elearn/, accessed on 1 January 2024). The primary aim of this evaluation is to test the educational efficacy of this curriculum, as evidenced by a variety of clinician ratings, with the ultimate goal of improving patient outcomes. This curriculum was created and delivered by Dr. Escude, along with other physicians, to teach their colleagues the fundamentals of healthcare for patients with IDD. CIDDH consists of several modules that provide pertinent information to healthcare professionals to improve the health and lives of their patients. CIDDH features content that was previously implemented and evaluated with great success [1] but had initially been provided in a different format (face-to-face instruction) and more circumscribed context (Mississippi). Unlike its predecessor curriculum, CIDDH is a series of modules offered online and nationwide. In this evaluation, each module from the curriculum is evaluated. In short, this study is two-pronged, such that it (1) renders an updated evaluation of training content previously shown to be highly effective but now offered in a different format and (2) features an innovative model for program evaluation, namely, SEAM (Streamlined Evaluation and Analysis Method), which is more compact and straightforward to implement than conventional approaches to evaluation. The fields of medicine and evaluation science alike can benefit from this project.

2. Materials and Methods

2.1. Making the Case for SEAM

The Streamlined Evaluation and Analysis Method (SEAM) was developed by the CIDDH evaluator (this study’s first author) in the context of this project as an alternative to conventional evaluation approaches. Conventional evaluations often employ pretest/post-test surveys and then generate time-series comparisons before and after training to discern the degree to which vital knowledge (informational capacity) and skills (technical proficiency) were transferred to trainees. This pre/post method, particularly when used with a comparable control group not exposed to the treatment (training), is the gold standard in evaluation given its scientific rigor. At its core is the experimental or, often, quasi-experimental method that controls for potential confounding influences by aiming to isolate treatment effects. However, this method places a heavy data collection burden on trainees and limits instruction time, thereby reducing instructor–trainee contact, question-and-answer sessions, etc. The pre/post method may also cause discomfort among less knowledgeable trainees, especially when completing a pretest at the outset, despite assurances that a lack of knowledge coming into a training is completely acceptable. Additionally, work-related time constraints may lead program participants to opt out of any surveys beyond the baseline instrument, that is, post-test and follow-up surveys. The common result is an abundance of pretest surveys complemented by a dearth of their post-test counterparts. This survey disparity may complicate efforts to conduct matched-pair analyses between pretest and post-test surveys unless a unique identifier is provided to or by trainees, which is itself a somewhat complex task that may entail requesting personal information (e.g., a cell phone number as a unique identifier). However, there is no better way to establish the effectiveness of a program during its early stages of implementation. But what about after such evidence has been found? Should such onerous methods always be utilized? Perhaps not. SEAM is proposed as a compact follow-up evaluation alternative.
SEAM is not intended to replace conventional evaluation, as scientific rigor is still the best way to determine initial evidence of effectiveness. Yet, a program that has already been rigorously evaluated could benefit from abbreviated follow-up assessments to ensure continued quality. SEAM is designed to achieve this aim. In this study, SEAM was developed and employed because of ample prior evidence of program effectiveness [1]. SEAM serves as a cross-check of consistent training effectiveness. In a world that is now filled with evaluation surveys for consumers, product users, etc., SEAM is less likely to produce survey fatigue and the pre-training embarrassment of ignorance that a pretest/post-test methodology inevitably invites. Finally, SEAM can be very useful in situations when funds for external evaluation are no longer available, as is often the case after a grant concludes. In such circumstances, efforts to determine persistent effectiveness, a key sustainability consideration, may need to be downscaled. By utilizing post-only surveys and relying on descriptive statistics that can be easily interpreted by non-scientists, SEAM is an effective model to use given its economized design features.
Both data collection and data analysis are reconsidered through SEAM, which emphasizes “A” as analysis. SEAM provides a brief evaluation framework and a technique for assessing (analyzing and interpreting) data that is suitable for non-scientists without complex statistical techniques. SEAM’s analytical approach hinges on drawing contrasts between superlative and non-superlative response categories. A superlative response category is the highest possible rating on a given measure. Therefore, on a Likert scale with a salutary (positive) prompt, “Strongly agree” would be the superlative response. A related commonly used three-point scale solicits a rating in relation to expectations: “Exceeds expectations” (superlative), “Meets expectations” (mid-range), and “Fails to meet expectations” (deficient). On a salutary binary yes/no measure, with or without a “Don’t know” response option, “Yes” would serve as the superlative response. Similarly, a satisfaction scale often proposes three rating categories: “Very satisfied”, “Somewhat satisfied”, and “Not satisfied”, with the first of these being the superlative category.
Streamlined analysis in the context of SEAM operates on one of two ratio rules for superlative (best possible) responses versus non-superlative (all other) responses. A 3-to-1 ratio is used to determine highly effective programming, and a 2-to-1 ratio rule defines moderately effective programming (designated “effective”). A highly effective success threshold is thus met when triple the number of superlative responses is observed compared with all non-superlative responses combined. Essentially, SEAM’s logic is that a highly effective threshold is achieved when 75% of responses reflect a superlative performance rating. This stringent threshold creates a challenging bar for success and therefore indicates highly effective program functioning. Of course, the 3-to-1 ratio is more difficult to achieve with a greater number of response categories and depends on the naming of the categories. Achieving 75% superlative responses is likely more difficult on a four-point or five-point Likert scale (strongly agree…strongly disagree) than on a binary (yes/no) measure because the former provides gradations of positive rating options. Moreover, 75% “exceeds expectations” ratings may be especially difficult to attain from trainees who enter a training with very high expectations at the outset (perhaps through positive recruitment messaging or word-of-mouth encouragement). Therefore, moderately effective program functioning is achieved when the number of superlative responses is double that of all other responses, in accordance with a 2-to-1 ratio rule (67% superlative responses). This streamlined approach to analysis is not as rigorous as statistical significance testing and is limited to univariate (single-measure) analysis. Therefore, SEAM does have limitations. But it is an accessible and efficient quality check suitable for a follow-up evaluation.
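To make these ratio rules concrete, the minimal Python sketch below classifies a single evaluation item from its superlative and non-superlative response counts. The function name and return labels are illustrative conveniences rather than part of SEAM as formally proposed; only the thresholds themselves (75% for the 3-to-1 rule, 67%, i.e., two-thirds, for the 2-to-1 rule) come directly from the rules described above.

```python
def seam_rating(superlative: int, non_superlative: int) -> str:
    """Classify one evaluation item under SEAM's superlative-response ratio rules."""
    total = superlative + non_superlative
    if total == 0:
        raise ValueError("No valid responses to classify.")
    share = superlative / total
    if share >= 0.75:       # 3-to-1 rule: highly effective programming
        return "highly effective"
    if share >= 2 / 3:      # 2-to-1 rule: moderately effective ("effective")
        return "effective"
    return "below SEAM effectiveness thresholds"

# Worked example: 70 of 92 valid responses are superlative ("excellent"),
# i.e., 70 superlative vs. 22 non-superlative responses (76.09%).
print(seam_rating(70, 22))  # -> highly effective
```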

2.2. Training Modules, Evaluation Instruments, and Analytical Approach

The CIDDH evaluation instrument was developed with streamlined considerations in mind. Led by the program creator and informed by the evaluation that preceded it [1], the CIDDH instrument is intentionally compact. Subjective trainee performance ratings were collected with a post-only digital survey administered to participants after completing a course module. The data collection period (April 2020–January 2023) was somewhat protracted due to the COVID-19 pandemic, during which many healthcare workers attended to pandemic response concerns and additional training was not a priority. As a result, trainings were offered over an extended period. Evaluation questions tap into core module components that are clearly outlined on the CIDDH website (https://replacingrisk.com/curriculum-in-idd-healthcare-elearn, accessed on 1 January 2024). The CIDDH modules are as follows, with descriptions for each module featured in quotes as found on the program’s website:
  • IDD Basics Then, Now and Next: “Diagnosis, causes, prevalence, and classifications are covered. The history of IDD treatment and the reasons for the move from institutional to community-based support are covered and much more”.
  • Healthcare Basics in IDD: “A discussion of common medical issues, their presentations and general treatment options are covered. This discussion includes the ‘Fatal 5+’—Constipation, Aspiration, Dehydration, Seizures and Sepsis, as well as other topics like GERD, osteoporosis, contraception, feeding tubes, wheelchairs, end-of-life considerations and basic dental issues”.
  • Common Behavioral Presentations of Medical Conditions in People with IDD: “Using real-life case studies, this module effectively illustrates how numerous challenging behaviors may point to specific, underlying medical conditions in people with IDD. Topics include head-banging, refusing meals, hand-mouth behavior, aggressiveness in particular situations, resisting lying down or sleeping, pica and many more”.
  • Dual Diagnosis in IDD: “This module covers the challenges of diagnosing mental health conditions in people with IDD. It covers how certain non-verbal behaviors may point to a mental health condition and the importance of evaluating for underlying medical conditions before instituting treatment for adverse behaviors with psychotropic medications”.
  • Effective Communication for IDD Healthcare: “This talk covers topics including understanding the differing baselines of people with IDD, fostering good communication between healthcare providers and support staff, speaking TO the person who is the patient rather than directing communication to the support staff, how to garner the most helpful information from support staff, and understanding the structure of the team model of support”.
  • Bringing It All Together: Case Studies in IDD Healthcare: “While case studies are sprinkled through the other presentations, this enjoyable and highly interactive workshop-type module focuses on specific cases to drive home the previously discussed topics illustrating practical applications of the information”.
It is worth noting that, separate from the evaluation questions (discussed below), learners needed to complete a brief end-of-module objective knowledge quiz with a score of 80% correct or better before gaining access to the next module. Unlimited attempts were permitted, and objective knowledge quiz scores were not logged. Since these quizzes served only to gate access to the subsequent module, their results were not suitable for evaluation: with unlimited attempts granted to learners and a minimum correct threshold in place, score variation on these quizzes would be limited and the number of attempts would vary minimally. Therefore, a separate series of evaluation items was used to ascertain learner feedback, described more fully below.
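As a minimal sketch of the gating logic just described, the snippet below checks the 80%-correct threshold for unlocking the next module. The function and constant names are hypothetical; the course platform's actual implementation is not described in the source beyond the threshold and the unlimited-attempts policy.

```python
PASS_THRESHOLD = 0.80  # minimum proportion correct to unlock the next module

def next_module_unlocked(correct_items: int, total_items: int) -> bool:
    """Gate module access on an 80%-or-better quiz score.

    Attempts are unlimited and scores are not logged, so the quiz
    serves an access function only, not an evaluation function.
    """
    return correct_items / total_items >= PASS_THRESHOLD

# Example: 8 of 10 correct unlocks the next module; 7 of 10 does not.
print(next_module_unlocked(8, 10), next_module_unlocked(7, 10))  # True False
```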
For each module, a concise and consistent cluster of similar evaluation questions was posed: (1) overall evaluation (general appraisal of the module); (2) handout evaluation (quality of distributed document(s) for that module); (3) presentation evaluation (quality of content and logic of information provided in the module); and (4) topic (relevance of module subject matter to trainees’ professional responsibilities). For each module-specific evaluative domain, the following response options were provided: excellent, good, average, fair, and poor. In the results that follow, these categories are featured verbatim for all valid responses using data visualization techniques. A series of summative questions was also posed, including but not limited to expectation ratings, inclinations to recommend the training to others, willingness to attend additional trainings, and so forth. For ease of interpretation, all response categories generating valid responses are reported below as featured on completed surveys. For all such measures, counts (number of valid responses per category) and percentages (proportions of each response option) are reported. To enhance data visualization, applications of the 3-to-1 (highly effective) and 2-to-1 (effective) ratio rules for superlative responses are featured in the narrative interpretation of findings. Finally, one item asked trainees to rate their ability to provide care to patients with IDD before versus after the training. Before-training ability was measured retrospectively on the post-only survey, while post-training ability was measured upon training completion. This self-rating ability scale ranges from 1–100 and has been used in previously published research on CIDDH’s predecessor [1]. Means (averages) on this capability barometer were generated before and after the training, as were the number of responses clustered by quartile (less than 25, 25–50, 51–75, and more than 75) at both time periods. No personal data were collected in terms of trainee race/ethnicity, gender, occupation, years of practice, and so forth. While the omission of such information is a limitation, it is an intentional aspect of this streamlined approach to data collection. This limitation is revisited in Section 4.
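For illustration, the following Python sketch mirrors the reporting conventions just described: per-category counts and percentages for one evaluation item, plus the quartile clustering used for the 1–100 self-rated ability barometer. The helper names are hypothetical and are not taken from the study's actual analysis code.

```python
from collections import Counter

def tabulate(responses):
    """Counts and percentages of valid responses per category."""
    counts = Counter(responses)
    total = sum(counts.values())
    return {category: (n, round(100 * n / total, 2))
            for category, n in counts.items()}

def ability_quartile(score):
    """Cluster a 1-100 self-rated ability score into the study's quartiles."""
    if score < 25:
        return "less than 25"
    if score <= 50:
        return "25-50"
    if score <= 75:
        return "51-75"
    return "more than 75"  # the superlative quartile (scores of 76 or greater)

# Example: four valid "overall rating" responses.
print(tabulate(["excellent", "excellent", "good", "excellent"]))
# -> {'excellent': (3, 75.0), 'good': (1, 25.0)}
```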

2.3. Trainee Survey Completion, Data Management, and Data Analysis

Some trainees took the course to meet continuing education unit (CEU) requirements, which are sometimes called CME (continuing medical education) requirements in this field. Others took the course simply for professional development purposes. Users were presented with a CEU Opt-In slide on the final module. Users who selected “Yes” were redirected to a CEU Survey page to fill out the post-only survey and download a CEU/CME certificate. Trainees were not required to answer all questions on the evaluation instrument, thus preserving trainee choice in the survey completion process. Selecting “No, Thanks” on the CEU page automatically redirected the trainee to a Zoho version of the course survey. Both surveys featured the same questions, but professional development (non-CEU) trainees had less of an incentive to provide their feedback.
The CME and non-CME data sources captured by the online post-training survey were unioned (combined into a single dataset). Survey item skips resulted in blanks that were not all true missing data, given different motives for course completion and generally higher rates of CME trainee survey completion. Given the impossibility of ascertaining the origin of missing data (true missing data could result from premature browser closure), all skips were eliminated from the analysis. Therefore, only valid responses are featured in the results reported below. Given CME trainee over-representation in the submission of completed surveys, missing data could be systematic, which is a potential study limitation. However, a case can be made that CME participants, who rely on the utility of training to deliver effective medical services, have more stringent training expectations than their non-CME peers. CME trainees, therefore, are likely to present a higher bar for training effectiveness thresholds. They may be among the most difficult trainees to satisfy, so positive results with them (as found here) could be especially difficult to attain. As noted, results are displayed in terms of counts (the number of valid responses per response category) with percentages (valid responses for a particular response category divided by the total valid responses for that survey item). Interpretations are rendered in the narrative that accompanies the tables.
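A minimal pandas sketch of this data management step appears below. The file names and item label are hypothetical placeholders (the actual survey exports are proprietary); the logic simply unions the two survey sources and restricts each item's tabulation to valid (non-blank) responses, as described above.

```python
import pandas as pd

# Hypothetical file names; the actual survey exports are proprietary.
cme = pd.read_csv("ceu_survey_export.csv")
non_cme = pd.read_csv("zoho_survey_export.csv")

# Union the CME and non-CME surveys into a single dataset.
combined = pd.concat([cme, non_cme], ignore_index=True)

def valid_counts(df, item):
    """Counts and percentages for one item, excluding skipped (blank) responses."""
    valid = df[item].dropna()
    table = valid.value_counts().to_frame("n")
    table["percent"] = (100 * table["n"] / len(valid)).round(2)
    return table

# Example usage with a hypothetical item label:
# print(valid_counts(combined, "overall_rating"))
```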

3. Results

Turning to the results of this study, a maximum of 221 completed surveys featuring valid responses were collected overall: up to 172 from CEU trainees and up to 49 from non-CEU trainees. All surveys were combined for comprehensive analysis as featured in the accompanying figures. As noted, actual response counts fluctuate across modules because trainees may have taken some but not all modules. Response counts also vary across items within a survey because trainees were allowed to skip questions that they preferred not to answer on any single post-module instrument. The module-specific surveys are the best starting point for this evaluation, with results featured seriatim for each module in the sequence in which the modules are completed in the course.
We conducted several reliability and validity analyses for the modules and scales used in the survey. For each module and scale, we report Cronbach’s alpha coefficients as measures of internal consistency, i.e., reliability. All the coefficients are excellent (see Table 1), ranging from 0.85 to 0.94 [23]. Face validity for these items is supported because clinician trainees commonly encounter survey questions like those featured on the CIDDH evaluation instruments due to mandatory continuing education units (CEUs). The questions are therefore readily understood (interpretable) by surveyed trainees. In addition to face validity, the overall rating measure within each module or scale was used as the criterion measure, a generic and widely accepted external benchmark in the evaluation field. If this criterion measure is highly and significantly correlated with the other components of each module or scale, we can then establish criterion validity. Bivariate Pearson correlation analyses, as shown in Table 1, reveal that correlation coefficients range from 0.51 (only one coefficient) to 0.95. It is important to note that all coefficients are consistently and statistically significant at least at the 0.001 level. Such statistical evidence may also suggest satisfactory content validity, that is, the degree to which each module or scale satisfactorily covers a range of measures that reflect the material included in a module or construct. Any discrepancies between the valid response counts featured in Table 1 and those featured in subsequent tables are due to data management procedures used to address missing values for the results presented in that table: Table 1 reports the results of bivariate correlation analyses in which n was determined by non-missing values for both variables. Additional information on these procedures is available upon request from the authors.
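As a sketch of how such reliability and validity statistics can be computed, the snippet below derives Cronbach's alpha for a block of module items and the item-criterion Pearson correlations (with pairwise non-missing n, as in Table 1). The DataFrame layout and column names are assumptions for illustration, not the authors' actual analysis code.

```python
import pandas as pd
from scipy.stats import pearsonr

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for item columns scored per trainee (listwise deletion)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def criterion_correlations(items: pd.DataFrame, criterion: str) -> dict:
    """Pearson r and p between each item and the overall-rating criterion."""
    results = {}
    for column in items.columns:
        if column == criterion:
            continue
        pair = items[[criterion, column]].dropna()  # pairwise non-missing n
        r, p = pearsonr(pair[criterion], pair[column])
        results[column] = (round(r, 2), p, len(pair))
    return results
```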
All remaining results are featured compactly to convey trainees’ subjective assessments overall, followed by their appraisals of presentation content quality, handouts, and the topic’s relevance to their work. To ensure scientific rigor, chi-squared tests or, when methodologically warranted, t-tests were conducted, with p-values reported accordingly. Actual chi-squared values are available from the first author upon request. The four-part reporting pattern for CIDDH modules begins with Table 2, which features the results of the first CIDDH learning module. For this foundational module, which aimed to provide a general IDD overview, four quality measures were posed, namely, how trainees rated the overall training, the quality of module content, the informational handouts, and the value of the topic in relation to their work. Seventy of the 92 total responses (76.09%) rated the overall module experience as excellent, crossing the 75% superlative response threshold that indicates highly effective performance. The presentations (74 of 92, 80.43%) also met the highly effective threshold. Handouts were rated excellent in 28 of 40 responses (70.00%) and the topic in 34 of 49 responses (69.39%), both of which surpass the SEAM threshold for moderate effectiveness. The quality of the module and its training materials thus met SEAM efficacy standards.
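The chi-squared results can be illustrated with the first module's overall ratings (Table 2). The article reports only the resulting p-values, so the uniform expected distribution used below (SciPy's default) is an assumption about the null hypothesis rather than a documented detail of the authors' procedure.

```python
from scipy.stats import chisquare

# Table 2, overall rating: 4 "average", 18 "good", 70 "excellent" (n = 92).
observed = [4, 18, 70]

# Goodness-of-fit test against equal expected counts (SciPy's default null).
statistic, p_value = chisquare(observed)
print(f"chi2 = {statistic:.2f}, p = {p_value:.2e}")  # p < 0.001, as reported
```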
Table 3 indicates general success for the second CIDDH module, Healthcare Basics in IDD. The overall quality of the module received 71 excellent ratings out of a total 92 responses (77.17%). Similarly, presentation quality received 72 excellent ratings out of a total 92 (78.26%). Handouts were rated excellent by 29 out of 40 total responses (72.50%). Topic relevance attracted 71 excellent responses from among 92 total (77.17%). So, three of these four ratings meet the highly effective threshold of 75% superlative responses.
Table 4 turns to evaluations for the Common Behavioral Presentations of Medical Conditions in People with IDD module. For this module’s overall quality, 71 excellent ratings are again observed out of 92 responses (77.17%), which crosses the highly effective threshold. Superlative ratings are further observed at rates of 80.43% (74/92) for presentation quality, 67.50% (27/40) for handout quality, and 78.26% (72/92) for topic, the first and last of which surpass the highly effective threshold of 75% superlative responses.
Table 5 features results for Dual Diagnosis in IDD, with a mix of highly effective and moderately effective ratings. There were 68 excellent ratings out of 91 (74.73%) for overall module quality. For presentation quality, 70 of 92 responses (76.09%) were excellent. Handout quality received 28 excellent out of 40 responses (70.00%). Topic relevance prompted 67 excellent responses from 90 in total (74.44%). For the Effective Communication for IDD Healthcare module (Table 6), three rating categories in this module clear the highly effective SEAM threshold of at least 75% superlative responses. For overall module quality, 70 excellent ratings are observed out of a total 91 responses (76.92%). Further superlative ratings observed include 78.26% (72/92) for both presentation quality and topic, both of which surpass SEAM’s highly effective margin. Handout quality was moderately effective at 70.11% (61/87) of responses rated as excellent.
Table 7 indicates continued success for the sixth CIDDH module, Bringing It All Together. Ratings for this module were evenly split between moderately and highly effective according to SEAM standards. The training sample rendered a 75.00% (70/92) excellent rating for the module overall. For presentation quality, 75.00% (69/92) of responses were excellent. Handout quality received a 70.79% (63/89) excellent rating. Topic relevance received a 73.63% (67/91) excellent rating. Thus, a mix of highly and moderately effective ratings was observed.
Table 8 presents results for the self-rated ability to deliver IDD care measure. Higher scores reflect greater trainee confidence in their ability to deliver effective care to individuals with IDD. In this table, the superlative quartile is reflected in a score of 76 or greater (more than 75) on this 100-point scale. Before training completion, only approximately four in ten trainees (38.91%, 86/221) indicated high confidence (a self-assigned score of more than 75) in care delivery. After training completion, nearly seven in ten trainees (67.87%, 150/221) rendered this superlative self-rating in care confidence, which is just short of double the proportion at baseline.
Table 9 displays a favorable outcome for the content appropriateness of all CIDDH modules. A strong majority (91.49%, 43/47) indicated that the CIDDH training content quality was just right, as opposed to too advanced (4.26%, 2/47) or too basic (4.26%, 2/47). CME certification quality further resulted in 90.00% (45/50) of trainees rating the curriculum as just right, with 10.00% (5/50) rating it as too basic. Table 10 assesses the informational material quality of CIDDH resources, focused mostly on freedom from informational bias across all module materials. A strong majority of the trainees (96.91%, 94/97) identified no bias within the CIDDH modules, with only 3.09% (3/97) identifying at least some bias.
Table 11 displays results concerning personal training impacts. These items assess trainee gains from all CIDDH modules. On ordinal scales, respondents exhibited robust agreement that knowledge, confidence, practical skill acquisition, and other desirable abilities had increased. The overall training sample reported an increase in knowledge at 81.70% (76/93) strong agreement, surpassing the highly effective threshold. Similarly, more than three quarters of respondents strongly agreed that they learned new things from the training (75.27%, 70/93) and would change their professional actions moving forward (77.42%, 72/93). An increase in confidence was expressed in nearly three fourths of responses (73.12%, 68/93), which is just shy of the highly effective margin. A host of additional training outcomes are also featured in Appendix A to this article, generally demonstrating highly effective training impacts.

4. Discussion

This study set out to provide a follow-up evaluation of training content featured in the Curriculum in IDD Healthcare (CIDDH) eLearn course. Because impressive evidence for this course content was previously established through a refereed publication [1], a more accessible approach to evaluation was introduced here. This narrowly focused replication study was designed to gauge the effectiveness of a programmatic transition from in-person didactic instruction (CIDDH’s predecessor at Mississippi DETECT) to self-paced online trainings (CIDDH) with limited change in content, primarily updates to remain current with the evolving field. This study also provided the opportunity to develop and employ a novel evaluation model called SEAM (Streamlined Evaluation and Analysis Method). SEAM relies largely on subjective post-training assessments rendered by program trainees across a range of evaluative domains. To determine if implementation changes are warranted, responses from approximately 100 trainees were collected shortly after this transition to online learning was made. While the argument for a greater number of responses is understood, the early-cohort analysis conducted here provides the truest test of continued effectiveness early in the learning curve of the online transition. SEAM uses a ratio-based method of analysis to determine levels of effectiveness rather than statistical methods that rely on large sample sizes, thereby permitting a small early-cohort analysis to be conducted.
To be clear, SEAM should not be used when initially attempting to determine if scientific evidence does or does not support a training program. Scientific rigor at the outset of program delivery should never be foregone. However, once such evidence has been amassed and a follow-up assessment is needed, SEAM minimizes the data collection burden on trainees, maximizes instructor–trainee contact time, and provides an opportunity for organizational self-assessment in the absence of an external evaluator. Moreover, the straightforward results provided by descriptive statistics can be understood by non-scientists, so interpretive accessibility is a bonus with SEAM. In the world of evidence-based programming, external evaluation with advanced statistical techniques remains the gold standard for performance monitoring and impact analysis. But, once evidence has been established, smaller-scale follow-up evaluations using SEAM can ensure consistent quality in programmatic offerings.
In general, the results presented here indicate that CIDDH content is deemed very effective by trainees. The results were tabulated from a combined sample of those who completed the course for continuing medical education (CME) units and those who simply took the course for professional development purposes (preliminary analyses, available from the author by request, reveal a general absence of meaningful differences between these training subgroups). For most quality measures, the results are salutary and remarkably similar, often entailing approximately three superlative (highest rating) responses for every non-superlative response. SEAM proposes this 3-to-1 ratio (75% superlative responses) as evidence of highly effective programming, while a 2-to-1 ratio (67% superlative responses) indicates moderate effectiveness. This point underscores an important analytical feature of SEAM. One of SEAM’s goals is interpretive simplicity in the absence of a professional evaluator. Without a trained evaluator, many non-scientists understandably do not have the analytical expertise to conduct t-tests, execute chi-squared tests, or calculate thresholds of statistical significance (p-values). Even software programs that can conduct such tests (e.g., Excel, online survey engines) require knowledge of how to conduct them properly (e.g., one-tailed versus two-tailed t-tests, verifying equal variances across samples).
With these practical and somewhat daunting considerations in mind, this study introduced a straightforward ratio-based interpretive method for post-only results. This ratio-based interpretive method is not foolproof. It permits inspection of only one variable at a time (univariate descriptive statistical analysis), so it cannot detect relationships among variables, nor can it control for confounding influences as would regression. It can be hampered by creaming, that is, serving or collecting valid responses from only the trainees who remain with a program through its full duration, which could introduce selectivity bias. (In fairness, many statistical methods are subject to selectivity bias if this threat is not carefully managed.) In short, ratio-based interpretations are subject to more limitations than advanced statistical techniques. Yet, with the 3-to-1 and 2-to-1 ratios on training outcomes in mind, some confidence in positive program impact is warranted.
By this standard, CIDDH often proved to be quite effective. In many cases, superlative (highest possible) ratings outnumbered the combination of mid-range and deficient ratings by a factor of 3 to 1, meeting or exceeding the 75% superlative response threshold. In most of the remaining outcomes, a 2-to-1 ratio indicating a 67% superlative response threshold or better was observed. Highly effective results were especially evident in measures of the program overall, where the presentations were rated with respect to expectations, improvement of a trainee’s medical practice, presenter knowledge, and willingness to recommend the training to others. For each module, ratings were positive overall as well as for handouts, presentation quality, and the topic’s relevance to the trainees’ work. Overall ratings were especially strong across modules. Finally, a self-rated ability-to-deliver-care measure moved upward when ratings before training (retrospectively gauged at post-test) and after the training were compared. Quartile distributions of these scores revealed appreciable gains in ability. This measure exhibited gains comparable to the earlier evaluation of CIDDH’s predecessor training program at DETECT of Mississippi [1]. This is an especially noteworthy achievement given CIDDH’s nationwide reach.
CIDDH has consistently produced salutary results for its trainees with modules addressing many issues (e.g., dual diagnosis, effective communication techniques) that often escape attention in the provision of IDD healthcare delivery. Healthcare training programs tailored to individuals with IDD play a crucial role in promoting health, well-being, and self-advocacy skills. By addressing the unique needs of individuals with IDD, healthcare training programs like CIDDH can reduce health disparities and improve the quality of life among patients with IDD. In short, continued effectiveness for CIDDH seems evident, and efforts to collect more data with fewer missing values have already begun in earnest.
The limitations of SEAM have been specified in this study but bear some reiteration. SEAM is only suitable for use in follow-up evaluations of programs that have exhibited prior evidence of effectiveness. SEAM cannot control for complex statistical relationships because its focus is univariate descriptive statistics that are easily rendered and interpreted by non-scientists. Additionally, the streamlined surveys developed for this follow-up evaluation of CIDDH did not capture trainee demographics. Even if such data were available, case numbers were not sufficient to warrant split-sample bivariate analyses (e.g., performance appraisals disaggregated by trainee race, ethnicity, gender, years of professional experience, or other factors). While this streamlined approach to survey burden reduction is advantageous in this context, it also means that disparities in the number and types of persons served (e.g., race, ethnicity, gender, occupational role, and years in practice), as well as subgroup-specific outcomes, cannot be analyzed for this investigation as they were for its predecessor evaluation [1]. So, the collection of additional data from trainees for a richer repository of ratings is warranted. Tracking patient outcomes would be the next logical step to gauge the ultimate impact of clinician improvement trainings focused on IDD healthcare but is beyond the scope of this study. Nevertheless, the advantages of a more economized evaluation model are compelling. These benefits include less time diverted from implementation to evaluation, minimized data burden for program participants, and prospects for self-assessment when an external evaluator is not available. Future evaluation inquiries would do well to apply SEAM in other contexts, medical and non-medical, to verify a promising future for this method when evidence of program effectiveness has already been established.

Author Contributions

Conceptualization, J.P.B.; methodology, J.P.B. and X.X.; software, X.X.; validation, X.X. and J.P.B.; formal analysis, J.P.B., X.X. and K.K.; investigation, J.P.B.; resources, K.K., J.P.B. and X.X.; data curation, J.P.B. and X.X.; writing—original draft preparation, J.P.B. and K.K.; writing—review and editing, J.P.B., K.K. and X.X.; visualization, J.P.B. and X.X.; supervision, J.P.B.; project administration, J.P.B.; funding acquisition, J.P.B. All authors have read and agreed to the published version of the manuscript.

Funding

This study received no external funding through a conventional research grant. The evaluator was compensated for effort expended through a flat fee that reflected a competitive market rate.

Institutional Review Board Statement

As a program evaluation project that is not defined as research by the Office of Human Research Protections in the United States, this study did not require ethical approval. Therefore, no institutional review board (IRB) approval was necessary to complete this work. The author has served on various university IRBs for over 20 years and is well-versed in IRB requirements and protocols.

Informed Consent Statement

Because this study is based on program evaluation rather than formal research, standard IRB consenting procedures did not apply. Nevertheless, all trainees were informed of the purpose and voluntary nature of the evaluation instruments used to collect their feedback on the training modules they completed. CME credit certificates were accessed through the post-training evaluation survey, but no field in the survey was required. Therefore, skips of specific items or of the instrument as a whole were possible. As with any evaluation, trainees could forego responding to any items they preferred to skip or could decline to participate in the evaluation altogether. The evaluator has taken all necessary steps to preserve data confidentiality and related considerations.

Data Availability Statement

The data used to conduct this evaluation are proprietary and not suitable for public release. Please contact the first author for more information about the data used to conduct this study.

Acknowledgments

The authors express their gratitude to Craig Escude, the training content designer and medical advisor, for valuable feedback on this manuscript prior to its submission and a colleague at Bartkowski & Associates Research Team for his assistance in manuscript preparation and support activities. The interpretations offered in this paper are those of the study authors alone.

Conflicts of Interest

The lead author served as the project evaluator for this study and was compensated at a competitive market rate for the services rendered. Instrument design was informed by efforts to complement and streamline the initial comprehensive evaluation of this curriculum, published by Bartkowski and colleagues in 2018 [1], as described in this study. A collaborative approach to evaluation was used to conduct this study. Data were collected by the organization that delivered the training. The data were transmitted in raw form to the evaluator for coding, cleaning, and analyses. All data analyses, data interpretations, writing, and publication decisions were conducted by the authors.

Appendix A

Table A1. Other training outcomes.

Category | n | Percent | Cumulative Percent | Significance (χ²)
1. Would Recommend the Training Program
Disagree | 1 | 0.82 | 0.82 | p < 0.001
Neither Agree nor Disagree | 2 | 1.64 | 2.46
Agree | 15 | 12.30 | 14.75
Strongly Agree | 104 | 85.25 | 100
Total | 122 | 100
2. Would Attend Future Programs
Disagree | 1 | 0.82 | 0.82 | p < 0.001
Neither Agree nor Disagree | 5 | 4.10 | 4.92
Agree | 20 | 16.39 | 21.31
Strongly Agree | 96 | 78.69 | 100
Total | 122 | 100
3. Good Method of Presentation
Strongly Disagree | 2 | 1.64 | 1.64 | p < 0.001
Neither Agree nor Disagree | 2 | 1.64 | 3.28
Agree | 18 | 14.75 | 18.03
Strongly Agree | 100 | 81.97 | 100
Total | 122 | 100
4. Teaching Effectiveness
Disagree | 1 | 0.82 | 0.82 | p < 0.001
Neither Agree nor Disagree | 2 | 1.64 | 2.46
Agree | 15 | 12.30 | 14.75
Strongly Agree | 104 | 85.25 | 100
Total | 122 | 100
5. Outcomes Met
Neither Agree nor Disagree | 1 | 0.82 | 0.82 | p < 0.001
Agree | 15 | 12.30 | 13.11
Strongly Agree | 106 | 86.89 | 100
Total | 122 | 100
6. Facilities Conducive to Learning
Yes | 120 | 98.36 | 98.36 | p < 0.001
No | 2 | 1.64 | 100
Total | 122 | 100
Table A2. Opinion of training.

Category | n | Percent | Cumulative Percent | Significance (χ²)
1. Recommend the Training to a Colleague
Strongly Disagree | 1 | 1.08 | 1.08 | p < 0.001
Somewhat Disagree | 1 | 1.08 | 2.15
Somewhat Agree | 14 | 15.05 | 17.20
Strongly Agree | 77 | 82.80 | 100
Total | 93 | 100
2. Opinion of Curriculum
1 (Less Positive) | 1 | 2.13 | 2.13 | p < 0.001
2 | 1 | 2.13 | 4.26
3 | 1 | 2.13 | 6.38
4 | 11 | 23.40 | 29.79
5 (More Positive) | 33 | 70.21 | 100
Total | 47 | 100
3. Opinion of CME Activity
Satisfactory | 1 | 2.00 | 2.00 | p < 0.001
Good | 9 | 18.00 | 20.00
Excellent | 40 | 80.00 | 100
Total | 50 | 100

References

  1. Bartkowski, J.P.; Kohler, J.; Escude, C.L.; Xu, X.; Bartkowski, S. Evaluating the impact of a clinician improvement program for treating patients with intellectual and developmental disabilities: The challenging case of Mississippi. Healthcare 2018, 6, 3. [Google Scholar] [CrossRef] [PubMed]
  2. Pinals, D.A.; Hovermale, L.; Mauch, D.; Anacker, L. Persons with intellectual and developmental disabilities in the mental health system: Part 1. Clinical considerations. Psychiatr. Serv. 2021, 73, 313–320. [Google Scholar] [CrossRef] [PubMed]
  3. Pinals, D.A.; Hovermale, L.; Mauch, D.; Anacker, L. Persons with intellectual and developmental disabilities in the mental health system: Part 2. Policy and systems considerations. Psychiatr. Serv. 2021, 73, 321–328. [Google Scholar] [CrossRef] [PubMed]
  4. Adirim, Z.; Sockalingam, S.; Thakur, A. Post-graduate medical training in intellectual and developmental disabilities: A systematic review. Acad. Psychiatry 2021, 45, 371–381. [Google Scholar] [CrossRef]
  5. Mahon, D.; Walsh, E.; Holloway, J.; Lydon, H. A systematic review of training methods to increase staff’s knowledge and implementation of positive behaviour support in residential and day settings for individuals with intellectual and developmental disabilities. J. Intellect. Disabil. 2022, 26, 732–757. [Google Scholar] [CrossRef] [PubMed]
  6. Bogenschutz, M.; Nord, D.; Hewitt, A. Competency-based training and worker turnover in community supports for people with IDD: Results from a group randomized controlled study. Intellect. Dev. Disabil. 2015, 53, 182–195. [Google Scholar] [CrossRef] [PubMed]
  7. Agarwal, R.; Heron, L.; Naseh, M.; Burke, S.L. Mentoring students with intellectual and developmental disabilities: Evaluation of role-specific workshops for mentors and mentees. J. Autism Dev. Disord. 2020, 51, 1281–1289. [Google Scholar] [CrossRef] [PubMed]
  8. Keesler, J.M. From the DSP perspective: Exploring the use of practices that align with trauma-informed care in organizations serving people with intellectual and developmental disabilities. Intellect. Dev. Disabil. 2020, 58, 208–220. [Google Scholar] [CrossRef]
  9. Keesler, J.M.; Purcell, A.; Thomas-Giyer, J. Advancing trauma-informed care in intellectual and developmental disability services: A pilot study of a digital training with direct service providers. J. Appl. Res. Intellect. Disabil. 2023, 36, 615–628. [Google Scholar] [CrossRef]
  10. Havercamp, S.M.; Ratliff-Schaub, K.; Macho, P.N.; Johnson, C.N.; Bush, K.L.; Souders, H.T. Preparing tomorrow’s doctors to care for patients with autism spectrum disorder. Intellect. Dev. Disabil. 2016, 54, 202–216. [Google Scholar] [CrossRef]
  11. Smith, S.E.; McCann, H.P.; Urbano, R.C.; Dykens, E.M.; Hodapp, R.M. Training healthcare professionals to work with people with intellectual and developmental disabilities. Intellect. Dev. Disabil. 2021, 59, 446–458. [Google Scholar] [CrossRef] [PubMed]
  12. Morin, D.; Valois, P.; Rivard, M.; Bardon, C.; Faust, C.; Robitaille, C. Impact of participation in Special Olympics Healthy Athletes® on attitudes of health professionals through direct contact with people with intellectual disability. Int. J. Dev. Disabil. 2023, 1–10. [Google Scholar] [CrossRef]
  13. Singh, N.N.; Lancioni, G.E.; Karazsia, B.T.; Myers, R.E. Caregiver training in mindfulness-based positive behavior supports (MBPBS): Effects on caregivers and adults with intellectual and developmental disabilities. Front. Psychol. 2016, 7, 98. [Google Scholar] [CrossRef]
  14. Brock, M.E.; Anderson, E.J. Training paraprofessionals who work with students with intellectual and developmental disabilities: What does the research say? Psychol. Sch. 2021, 58, 702–722. [Google Scholar] [CrossRef]
  15. Amir, N.; Smith, L.D.; Valentine, A.M.; Mitra, M.; Parish, S.L.; Simas, T.A.M. Clinician perspectives on the need for training on caring for pregnant women with intellectual and developmental disabilities. Disabil. Health J. 2022, 15, 101262. [Google Scholar] [CrossRef] [PubMed]
  16. Schalock, R.L.; Luckasson, R.; Tassé, M.J. Ongoing transformation in the field of intellectual and developmental disabilities: Taking action for future progress. Intellect. Dev. Disabil. 2021, 59, 380–391. [Google Scholar] [CrossRef]
  17. Aller, T.B.; Russo, R.B.; Kelley, H.H.; Bates, L.; Fauth, E.B. Mental health concerns in individuals with developmental disabilities: Improving mental health literacy trainings for caregivers. Intellect. Dev. Disabil. 2023, 61, 49–64. [Google Scholar] [CrossRef]
  18. Constantino, J.N.; Strom, S.; Bunis, M.; Nadler, C.; Rodgers, T.; LePage, J.; Cahalan, C.; Stockreef, A.; Evans, L.; Jones, R.; et al. Toward actionable practice parameters for “dual diagnosis”: Principles of assessment and management for co-occurring psychiatric and intellectual/developmental disability. Curr. Psychiatry Rep. 2020, 22, 9. [Google Scholar] [CrossRef]
  19. Golub-Victor, A.C.; Peterson, B.; Calderón, J.; Dias, J.L.; Fitzpatrick, D.F. Student confidence in providing healthcare to adults with intellectual disability: Implications for health profession curricula. Intellect. Dev. Disabil. 2022, 60, 477–483. [Google Scholar] [CrossRef] [PubMed]
  20. Luckasson, R.; Schalock, R.L. A balanced approach to decision-making in supporting people with IDD in extraordinarily challenging times. Res. Dev. Disabil. 2020, 105, 103719. [Google Scholar] [CrossRef] [PubMed]
  21. Schalock, R.L.; Luckasson, R. Enhancing research practices in intellectual and developmental disabilities through person-centered outcome evaluation. Res. Dev. Disabil. 2021, 117, 104043. [Google Scholar] [CrossRef] [PubMed]
  22. Gómez, L.E.; Schalock, R.L.; Verdugo, M.A. A new paradigm in the field of intellectual and developmental disabilities: Characteristics and evaluation. Psicothema 2021, 33, 28–35. [Google Scholar]
  23. DeVellis, R.F.; Thorpe, C.T. Scale Development: Theory and Applications; Sage Publications: Thousand Oaks, CA, USA, 2021. [Google Scholar]
Table 1. Reliability and validity analyses.

Modules/Scales | Bivariate Pearson Correlation Coefficient (95% CI) | Significance (n) | Reliability: Cronbach's Alpha
IDD Basics Then, Now and Next ¹
Overall Rating | | | 0.865
Presentation | 0.66 (0.527–0.762) | <0.001 (92)
Handout | 0.69 (0.475–0.822) | <0.001 (40)
Topic | 0.77 (0.626–0.865) | <0.001 (49)
Healthcare Basics in IDD ¹
Overall Rating | | | 0.941
Presentation | 0.83 (0.758–0.887) | <0.001 (92)
Handout | 0.67 (0.453–0.812) | <0.001 (40)
Topic | 0.86 (0.794–0.905) | <0.001 (92)
Common Behavioral Presentations of Medical Conditions in People with IDD ¹
Overall Rating | | | 0.933
Presentation | 0.86 (0.801–0.908) | <0.001 (92)
Handout | 0.87 (0.760–0.927) | <0.001 (40)
Topic | 0.87 (0.802–0.909) | <0.001 (92)
Dual Diagnosis in IDD ¹
Overall Rating | | | 0.854
Presentation | 0.71 (0.591–0.799) | <0.001 (91)
Handout | 0.51 (0.238–0.710) | <0.001 (40)
Topic | 0.79 (0.691–0.854) | <0.001 (90)
Effective Communication for IDD Healthcare ¹
Overall Rating | | | 0.941
Presentation | 0.84 (0.772–0.894) | <0.001 (91)
Handout | 0.76 (0.651–0.836) | <0.001 (86)
Topic | 0.95 (0.921–0.965) | <0.001 (91)
Bringing It All Together: Case Studies in IDD Healthcare ¹
Overall Rating | | | 0.936
Presentation | 0.86 (0.791–0.903) | <0.001 (92)
Handout | 0.72 (0.601–0.807) | <0.001 (89)
Topic | 0.84 (0.764–0.890) | <0.001 (91)
Training Impacts ²
Overall Knowledge Increase | | | 0.930
New Information Learned | 0.77 (0.665–0.838) | <0.001 (93)
Increase in Confidence | 0.84 (0.768–0.892) | <0.001 (93)
Changes in Professional Actions | 0.71 (0.593–0.799) | <0.001 (93)
¹ 1 = poor, 2 = fair, 3 = average, 4 = good, and 5 = excellent. ² 1 = strongly disagree, 2 = somewhat disagree, 3 = somewhat agree, and 4 = strongly agree.
Table 2. Frequency distribution: IDD basics then, now, and next.

Category | n | Percent | Cumulative Percent | Significance (χ²)
1. Overall Rating
  1. Average | 4 | 4.35 | 4.35 | p < 0.001
  2. Good | 18 | 19.57 | 23.91
  3. Excellent | 70 | 76.09 | 100
  4. Total | 92 | 100
2. Quality of Presentation
  1. Average | 3 | 3.26 | 3.26 | p < 0.001
  2. Good | 15 | 16.30 | 19.57
  3. Excellent | 74 | 80.43 | 100
  4. Total | 92 | 100
3. Handouts
  1. Good | 12 | 30.00 | 30.00 | p < 0.001
  2. Excellent | 28 | 70.00 | 100
  3. Total | 40 | 100
4. Value of Topic
  1. Average | 2 | 4.08 | 4.08 | p = 0.011
  2. Good | 13 | 26.53 | 30.61
  3. Excellent | 34 | 69.39 | 100
  4. Total | 49 | 100
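The significance column in Tables 2 through 11 is consistent with a chi-square goodness-of-fit test on each frequency distribution. The tables do not state the expected distribution under the null, so the sketch below assumes the common default of equal expected frequencies across the observed categories; with the Table 2 overall-rating counts this yields χ² ≈ 78.9 on 2 degrees of freedom, matching the reported p < 0.001. This assumption does not recover every entry (e.g., the p = 0.011 for Value of Topic), so the sketch is indicative rather than an exact reproduction.

```python
# Hedged sketch of the chi-square tests reported in Tables 2-11, assuming a
# goodness-of-fit test against equal expected category frequencies (the null
# hypothesis is not stated in the tables themselves).
from scipy import stats

# Observed "Overall Rating" counts from Table 2: Average, Good, Excellent.
observed = [4, 18, 70]
result = stats.chisquare(observed)   # expected defaults to uniform counts
print(f"chi-square = {result.statistic:.1f}, p = {result.pvalue:.2e}")
```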
Table 3. Frequency distribution: Healthcare basics in IDD.

Category | n | Percent | Cumulative Percent | Significance (χ²)
1. Overall Rating
  1. Average | 2 | 2.17 | 2.17 | p < 0.001
  2. Good | 19 | 20.65 | 22.83
  3. Excellent | 71 | 77.17 | 100
  4. Total | 92 | 100
2. Quality of Presentation
  1. Average | 2 | 2.17 | 2.17 | p < 0.001
  2. Good | 18 | 19.57 | 21.74
  3. Excellent | 72 | 78.26 | 100
  4. Total | 92 | 100
3. Handouts
  1. Good | 11 | 27.50 | 27.50 | p = 0.004
  2. Excellent | 29 | 72.50 | 100
  3. Total | 40 | 100
4. Value of Topic
  1. Average | 2 | 2.17 | 2.17 | p < 0.001
  2. Good | 19 | 20.65 | 22.83
  3. Excellent | 71 | 77.17 | 100
  4. Total | 92 | 100
Table 4. Frequency distribution: Common behavioral presentations of medical conditions in people with IDD.

Category | n | Percent | Cumulative Percent | Significance (χ²)
1. Overall Rating
  1. Average | 1 | 1.09 | 1.09 | p < 0.001
  2. Good | 20 | 21.74 | 22.83
  3. Excellent | 71 | 77.17 | 100
  4. Total | 92 | 100
2. Quality of Presentation
  1. Average | 1 | 1.09 | 1.09 | p < 0.001
  2. Good | 17 | 18.48 | 19.57
  3. Excellent | 74 | 80.43 | 100
  4. Total | 92 | 100
3. Handouts
  1. Average | 1 | 2.50 | 2.50 | p < 0.001
  2. Good | 12 | 30.00 | 32.50
  3. Excellent | 27 | 67.50 | 100
  4. Total | 40 | 100
4. Value of Topic
  1. Average | 1 | 1.09 | 1.09 | p < 0.001
  2. Good | 19 | 20.65 | 21.74
  3. Excellent | 72 | 78.26 | 100
  4. Total | 92 | 100
Table 5. Frequency distribution: Dual diagnosis in IDD.

Category | n | Percent | Cumulative Percent | Significance (χ²)
1. Overall Rating
  1. Average | 2 | 2.20 | 2.20 | p < 0.001
  2. Good | 21 | 23.08 | 25.27
  3. Excellent | 68 | 74.73 | 100
  4. Total | 91 | 100
2. Quality of Presentation
  1. Fair | 1 | 1.09 | 1.09 | p < 0.001
  2. Average | 2 | 2.17 | 3.26
  3. Good | 19 | 20.65 | 23.91
  4. Excellent | 70 | 76.09 | 100
  5. Total | 92 | 100
3. Handouts
  1. Poor | 1 | 2.50 | 2.50 | p < 0.001
  2. Average | 1 | 2.50 | 5.00
  3. Good | 10 | 25.00 | 30.00
  4. Excellent | 28 | 70.00 | 100
  5. Total | 40 | 100
4. Value of Topic
  1. Average | 1 | 1.11 | 1.11 | p < 0.001
  2. Good | 22 | 24.44 | 25.56
  3. Excellent | 67 | 74.44 | 100
  4. Total | 90 | 100
Table 6. Frequency distribution: Effective communication for IDD healthcare.

Category | n | Percent | Cumulative Percent | Significance (χ²)
1. Overall Rating
  1. Average | 2 | 2.20 | 2.20 | p < 0.001
  2. Good | 19 | 20.88 | 23.08
  3. Excellent | 70 | 76.92 | 100
  4. Total | 91 | 100
2. Quality of Presentation
  1. Poor | 1 | 1.09 | 1.09 | p < 0.001
  2. Average | 2 | 2.17 | 3.26
  3. Good | 17 | 18.48 | 21.74
  4. Excellent | 72 | 78.26 | 100
  5. Total | 92 | 100
3. Handouts
  1. Average | 2 | 2.30 | 2.30 | p < 0.001
  2. Good | 24 | 27.59 | 29.89
  3. Excellent | 61 | 70.11 | 100
  4. Total | 87 | 100
4. Value of Topic
  1. Fair | 1 | 1.09 | 1.09 | p < 0.001
  2. Average | 2 | 2.17 | 3.26
  3. Good | 17 | 18.48 | 21.74
  4. Excellent | 72 | 78.26 | 100
  5. Total | 92 | 100
Table 7. Frequency distribution: Bringing it all together.

Category | n | Percent | Cumulative Percent | Significance (χ²)
1. Overall Rating
  1. Average | 3 | 3.26 | 3.26 | p < 0.001
  2. Good | 19 | 21.74 | 25.00
  3. Excellent | 70 | 75.00 | 100
  4. Total | 92 | 100
2. Quality of Presentation
  1. Average | 3 | 3.26 | 3.26 | p < 0.001
  2. Good | 20 | 21.74 | 25.00
  3. Excellent | 69 | 75.00 | 100
  4. Total | 92 | 100
3. Handouts
  1. Average | 4 | 4.49 | 4.49 | p < 0.001
  2. Good | 22 | 24.72 | 29.21
  3. Excellent | 63 | 70.79 | 100
  4. Total | 89 | 100
4. Value of Topic
  1. Average | 3 | 3.30 | 3.30 | p < 0.001
  2. Good | 21 | 23.08 | 26.37
  3. Excellent | 67 | 73.63 | 100
  4. Total | 91 | 100
Table 8. Trainees’ self-rated ability to deliver IDD care.

Category | n | Percent | Cumulative Percent | Significance (χ²)
1. Ability Before Training
  1. 1st Quartile | 29 | 24.89 | 24.89 | p < 0.001
  2. 2nd Quartile | 55 | 23.08 | 47.96
  3. 3rd Quartile | 51 | 13.12 | 61.09
  4. 4th Quartile | 86 | 38.91 | 100
  5. Total | 221 | 100
2. Ability After Training
  1. 1st Quartile | 36 | 2.71 | 2.71 | p < 0.001
  2. 2nd Quartile | 6 | 13.12 | 15.84
  3. 3rd Quartile | 29 | 16.29 | 32.13
  4. 4th Quartile | 150 | 67.87 | 100
  5. Total | 221 | 100
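Table 8 reports a significance test for each distribution separately; a natural follow-up question is whether the before- and after-training distributions differ from one another. The sketch below illustrates one way to pose that question, a chi-square test on the 2 × 4 contingency table of quartile counts. This is not an analysis the article reports, and because the same trainees supplied both ratings, a test that assumes independent samples should be read as illustrative only.

```python
# Speculative illustration (not an analysis reported in the article):
# contrasting the before/after quartile distributions of Table 8 as a
# 2 x 4 contingency table. The ratings are paired (same trainees before
# and after), so this independent-samples test is indicative only.
import numpy as np
from scipy import stats

before = [29, 55, 51, 86]   # counts per quartile before training (Table 8)
after = [36, 6, 29, 150]    # counts per quartile after training (Table 8)
chi2, p, dof, _ = stats.chi2_contingency(np.array([before, after]))
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```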
Table 9. Evaluation of training quality.

Category | n | Percent | Cumulative Percent | Significance (χ²)
1. Content Quality
  1. Too Basic | 2 | 4.26 | 4.26 | p < 0.001
  2. Just Right | 43 | 91.49 | 95.74
  3. Too Advanced | 2 | 4.26 | 100
  4. Total | 47 | 100
2. CME Quality
  1. Too Basic | 5 | 10.00 | 10.00 | p < 0.001
  2. Just Right | 45 | 90.00 | 100
  3. Total | 50 | 100
Table 10. Evaluation of informational material quality: Free of bias.

Category | n | Percent | Cumulative Percent | Significance (χ²)
1. Yes | 94 | 96.91 | 96.91 | p < 0.001
2. No | 3 | 3.09 | 100
3. Total | 97 | 100
Table 11. Evaluation of personal training impacts.

Category | n | Percent | Cumulative Percent | Significance (χ²)
1. Increase in Knowledge
  1. Strongly Disagree | 1 | 1.10 | 1.10 | p < 0.001
  2. Somewhat Agree | 16 | 17.20 | 18.30
  3. Strongly Agree | 76 | 81.70 | 100
  4. Total | 93 | 100
2. Learning New Things
  1. Strongly Disagree | 2 | 2.15 | 2.15 | p < 0.001
  2. Somewhat Disagree | 2 | 2.15 | 4.30
  3. Somewhat Agree | 19 | 20.43 | 24.73
  4. Strongly Agree | 70 | 75.27 | 100
  5. Total | 93 | 100
3. Increase in Confidence
  1. Strongly Disagree | 1 | 1.08 | 1.08 | p < 0.001
  2. Somewhat Disagree | 1 | 1.08 | 2.15
  3. Somewhat Agree | 23 | 24.73 | 26.88
  4. Strongly Agree | 68 | 73.12 | 100
  5. Total | 93 | 100
4. Changes in Professional Actions
  1. Strongly Disagree | 2 | 2.15 | 2.15 | p < 0.001
  2. Somewhat Disagree | 1 | 1.08 | 3.23
  3. Somewhat Agree | 18 | 19.35 | 22.58
  4. Strongly Agree | 72 | 77.42 | 100
  5. Total | 93 | 100