Systematic Review

The Effectiveness of Interactive Dashboards to Optimise Antibiotic Prescribing in Primary Care: A Systematic Review

School of Public Health, Physiotherapy and Sports Science, University College Dublin, D04 V1W8 Dublin, Ireland
Insight Centre for Data Analytics, University of Galway, H91 AEX4 Galway, Ireland
Author to whom correspondence should be addressed.
Antibiotics 2023, 12(1), 136;
Submission received: 5 December 2022 / Revised: 2 January 2023 / Accepted: 5 January 2023 / Published: 10 January 2023
(This article belongs to the Section Antibiotics Use and Antimicrobial Stewardship)


Governments and healthcare organisations collect data on antibiotic prescribing (AP) for surveillance. These data can support visualisation and feedback tools for GPs, such as dashboards, that may prompt a change in prescribing behaviour. The objective of this systematic review was to assess the effectiveness of interactive dashboards to optimise AP in primary care. Six electronic databases were searched for relevant studies up to August 2022. A narrative synthesis of findings was conducted to evaluate the intervention processes and results. Two independent reviewers assessed the relevance, risk of bias and quality of the evidence. A total of ten studies were included (eight RCTs and two non-RCTs). Overall, seven studies showed a slight reduction in AP. However, offering a dashboard may not in itself result in reductions in AP; reductions were seen only when the dashboard was combined with educational components, public commitment or behavioural strategies. Only one study recorded dashboard engagement and showed a difference of 10% (95% CI 5% to 15%) between intervention and control. None of the studies reported on the development, piloting or implementation of dashboards or the involvement of stakeholders in design and testing. Interactive dashboards may reduce AP in primary care, but most likely only when combined with other educational or behavioural intervention strategies.

1. Introduction

The rise in antibiotic consumption has resulted in the spread of antimicrobial resistance, which was responsible for 1.27 million deaths in 2019 [1,2]. Up to 20% of antibiotic prescribing (AP) is deemed to be inappropriate, translating to 20,000 unnecessary APs in the UK daily [3,4]. Despite extensive efforts to promote prudent use of antibiotics, ambulatory AP has only slightly decreased over the past decade, and this positive trend varied between countries [5]. In Europe in 2020, the AP rate was 600 per 1000 persons per year, while in the US, it was around 800 prescriptions per 1000 persons per year [5]. In the EU/EEA, an overall reduction from 19.3 (2012) to 15 (2020) defined daily doses (DDD) per 1000 inhabitants per day was recorded, which translates to a 22% reduction in antibiotic consumption in the community. Bulgaria was the only country where total antibiotic consumption over this period increased [6].
The introduction of technology to optimise prescribing and quality improvement coincided with the introduction of clinical decision support systems (CDSSs) and audit and feedback (A&F) [7]. CDSSs provide information on best-evidence guidelines to close the gap between optimal practice and actual clinical care [8]. However, CDSSs have been shown to have low to moderate effects on improving appropriate AP [9]. CDSS implementation in primary care has revealed workflow barriers, resulting in alert fatigue and negative experiences among General Practitioners (GPs) (due to limiting the prescriber to approved treatment options) [10,11,12].
On the other hand, A&F systems do not interfere with doctors’ prescribing autonomy and deliver options for education through a feedback tool [13]. Traditional A&F interventions utilising peer comparison, where individuals are compared to top-performing peers, along with positive reinforcement, have shown a 16% decrease in AP in primary care [14]. Additionally, A&F interventions often provide a visual element or dashboard to deliver clinical performance feedback [14,15]. In 2015, Dowding et al. published a comprehensive overview of clinical and quality dashboards in general healthcare environments and concluded that introducing dashboards can decrease ventilator-associated pneumonia rates, increase on-time AP and improve turnaround time for signing reports. However, the relationship between dashboard features (graphical display type and presentation methods to users) and improvements in outcomes or incorporation into everyday clinical practice was unclear [16]. A more recent systematic review (SR) of randomised controlled trials (RCTs) that evaluated the use of clinical dashboards integrated into patient management systems showed improved medication adherence (for patients with inflammatory arthritis) and test ordering (cardiovascular risk screening of patients). However, this review also reported limited impact of dashboards on the prescription of antibiotics and statins [17].
With increased digitalisation of healthcare, new technologies and improved data visualisation techniques, advances have been made to integrate A&F and CDSSs into clinical care [18,19]. Today, some patient management systems include interactive dashboards with or without integrated A&F or CDSS. This review aims to assess the effectiveness of interactive dashboards to optimise AP in primary care.

2. Results

After removing duplicates, a total of 6539 potentially relevant reports were recovered. After evaluating 47 full text reports, 10 studies [20,21,22,23,24,25,26,27,28,29] were included in the synthesis of evidence (Figure 1). The list of excluded studies and the reasons for their exclusion are shown in Table S1. The characteristics of the included studies are detailed in Table S2.

2.1. Included Studies

2.1.1. Study Design

Ten studies were included: three individual RCTs [20,26,28], four cluster trials [22,23,25,27], a crossover trial [24] and two non-RCTs (one a controlled before and after study [21] and one ITS [29]) (Table S2).

2.1.2. Participants and Settings

The number of participants was reported in seven studies (individual level). The participants included 3609 physicians [20,21,24,26,27,28] and 2566 dentists [25]. The study setting included general practices [22,23], dental practices [25], primary care institutions [24,27,29], community health centres [23], community-based practices [23], hospital-based practices [23] and emergency departments (ED) [21] (Table S2). The ED was included as a primary care setting as some countries provide GP/primary care services within the ED [31].

2.1.3. Description of the Intervention

The characteristics of the included studies’ interventions and controls are available in Table S2. Overall, ten studies used a visualisation tool or dashboard to provide A&F on AP to the prescriber, and the duration of the interventions varied between three months and two years [20,21,22,23,24,25,26,27,28,29]. However, six of these added other elements to the intervention. Du Yan (2021) included an education component, which provided the national consensus treatment guidelines and an online education course [26]. Hemkens (2017) included recommendations from evidence-based guidelines for optimised antibiotic use in primary care [28]. Curtis (2021) delivered the behavioural impact intervention in three waves to optimise engagement: tailored broad-spectrum antibiotic feedback, to which a reminder (dashboard link) was added in wave 2 and information on potential cost savings in wave 3 [22]. Shen (2018) added information on operational guidelines, public commitment and take-home information for patients [27]. Elouafkaoui (2016) provided a behaviour-change message, which was created following guidance recommendations for AP [25]. Davidson (2022) included an education campaign for patients and providers [29]. Across the 10 studies, the control group consisted of education components only [26], usual static email attachments [20,27] or no intervention (usual care) [20,21,22,23,24,25,28].
Table 1 describes the visualisation tools or dashboards. Daneman (2021) [20], Linder (2010) [23] and Davidson (2022) [29] briefly mentioned details of dashboard development; however, this process varied widely between these studies. The data for the dashboard and main outcome measurements came predominantly from patient management systems. However, Hemkens (2017) used data from statutory health insurers for drug prescription and healthcare service claims [28] and Curtis (2021) used data from national monthly datasets [22]. The data summaries and features varied among studies, describing AP by diagnosis [21,23,24,26,27,29] or type of antibiotic [23,24,28,29]. Furthermore, some included peer comparison [20,21,24,25,27,28], information on other medications [20] and practice overviews [20,23,26,28,29]. Five studies included reminders about using the dashboard, at intervals ranging from ten days up to six months [21,22,23,24,28].

2.1.4. Outcomes

Eight studies measured changes in AP (primary outcome) but used different outcomes and measure types (Tables S2 and S3). Four studies reported changes as the overall rate of AP [24] or by diagnostic categories (acute respiratory infections [21,23]; upper respiratory infection, bronchitis, sinusitis and pharyngitis [26]). Hemkens (2017) reported the change from baseline (between-group difference) in AP per year (defined daily doses, DDD/100c) according to patient group, type of antibiotic and age group [28]. Elouafkaoui (2016) reported the change from baseline of all antibiotic items/100 claims and defined daily doses (all antibiotics)/100 claims [25]. Shen (2018) reported the percentage of patients that received an antibiotic when presenting with symptomatic respiratory tract infections or gastrointestinal tract infections [27]. Curtis (2021) measured the proportion of broad-spectrum antibiotics out of the total number of antibiotics prescribed [22].
Regarding secondary outcomes, Linder (2010) and Jones (2021) reported appropriate and inappropriate antibiotic use outcomes [21,23] and Davidson (2022) reported inappropriate antibiotic use [29]. These outcomes were defined by Linder (2010) as antibiotic appropriate and inappropriate Acute Respiratory Infection (ARI) visits when compared to the International Classification of Diseases (ICD-9-CM) code for pneumonia, streptococcal pharyngitis, sinusitis and otitis media [35]. Jones (2021) followed the Meeker et al. [36] methods to develop a list of ICD-10 Codes for upper respiratory system conditions for which antibiotics were considered appropriate and inappropriate (clinician experts reviewed the list) and also mentioned using evidence-based guidelines for when antibiotics were appropriate (however, they did not reference which guidelines were used) [21]. Davidson (2022) included conditions (ICD-10) for which antibiotics are not indicated (acute sinusitis, otitis media (nonsuppurative), acute bronchitis, pharyngitis (nonbacterial), cough, upper respiratory infection (URI), common cold, allergic rhinitis and influenza) [29].
Du Yan (2021) described the proportion of total visits where an antibiotic was prescribed for sinusitis or pharyngitis over time [26]. Daneman (2021) included prolonged antibiotic duration (proportion of antibiotic treatments exceeding 7 days during the quarter (four quarters of 2018 and four quarters of 2019)) and antibiotic initiation, defined as the proportion of residents initiated on an antibiotic during the quarter [20]. Two studies reported dashboard engagement, Curtis (2021) recorded practices having at least one dashboard view [22] and Linder (2010) measured the proportion of intervention physicians who used the dashboard at least once [23] (Tables S2 and S3).

2.2. Excluded Studies

Of the 53 potentially eligible studies, 40 were excluded after assessing the full text. See Table S1 for the characteristics of the studies excluded at the full-text stage and Figure 1 for the PRISMA study flowchart.

2.3. Effects of the Interventions

The summary of the results is described in Table S3. There was high heterogeneity due to varying definitions of the outcome, measure types, type of infection and the combination of other interventions. As a result, an overall pooled effect outcome could not be determined. Figure 2 shows the effect of AP changes (primary outcome) in odds ratios, for three studies reporting this outcome. The results illustrate a larger decrease in AP in the intervention group compared to the control group, except Du Yan (2021) [26] for sinusitis and pharyngitis and Linder (2010) who compared the ARI Quality Dashboard versus usual care [23]. However, in Linder (2010), the intervention physicians who used the dashboard at least once were less likely to prescribe antibiotics for all ARIs compared to those who did not use the dashboard [23]. It is notable that only Linder (2010) sent monthly e-mails reminding physicians of the dashboard (Table 1) [23].
Figure 3 illustrates the effect of AP changes quantified as the percentage change from baseline (between-group difference) for two studies; no overall differences in AP were observed. However, Hemkens (2017) showed that quarterly written personalised prescription feedback was associated with reduced AP in the 6 to 18 year age group in the first year (−9% (95% CI −15% to −2%)) and in adults (19 to 65 years) in the second year (−5% (95% CI −8% to −1%)) [28]. Elouafkaoui (2016) reported a 6% reduction in AP in the intervention group (written behaviour-change message with peer comparison and A&F) relative to the control group (no A&F). Table S3 describes the results for the other intervention subgroups of this factorial study. Elouafkaoui (2016) was the only cluster study that reported an intra-cluster correlation coefficient (0.2 (95% CI 0.1 to 0.2)). Furthermore, the interval between receiving A&F varied according to allocation (either at 0 and 6 months or at 0, 6 and 9 months) [25].
Curtis (2021) found that adding reminders and information on the impact of prescribing to tailored broad-spectrum antibiotic feedback (behavioural impact intervention) resulted in lower broad-spectrum prescribing compared with feedback without reminders (plain intervention) (plain versus behavioural intervention, regression coefficient 0.0041, 95% CI 0.00007 to 0.008). However, when both interventions were compared to no intervention, no difference in AP reduction was shown (Table S3). The crossover study (Chang 2020) reported no significant difference in AP between the intervention (feedback with an individual ranking score) and no intervention, which may be due to a carryover effect (see RoB section). The controlled before and after study (Jones 2021) showed a reduction in AP of 5% in the intervention group (from 36% at baseline to 31% during follow-up), compared with an increase of 3% in the control group (from 38% to 41%) (Table S3) [21].
Appropriate and inappropriate prescribing was assessed as a secondary outcome (see definitions in the outcomes section) in Linder (2010), but no differences were observed between the ARI Quality Dashboard and the control [23]. Jones (2021) showed less inappropriate AP in the intervention group, with 22% at baseline and 15% during follow-up, compared to 23% and 24% in the control group [21]. Davidson (2022) reported a significant reduction of 19% in inappropriate AP between the pre-intervention and intervention periods [29].
Daneman (2021) reported no difference in the antibiotic duration and antibiotic initiation in the intervention group (dashboard tool) compared to the control group (usual static email attachments) [20]. For dashboard engagement outcomes, Curtis (2021) observed a difference of 10% (95% CI 5% to 15%) between the intervention and control group in the number of times they interacted with the dashboard [22]. The effect of the intervention on other secondary outcomes is described in Table S3.

2.4. Risk of Bias

The RoB is summarised in Figure 4 for the eight RCTs, while the controlled before and after study and the ITS are described in Table S4. The RCTs from Daneman (2021) and Hemkens (2017) had a low overall RoB [20,28]. The Du Yan (2021) study showed bias due to deviations from the intended intervention, measurement of the outcome and selection of the reported result, because no participants were blinded, outcome assessors were aware of the intervention received by study participants, and the trial protocol was not available [26].
For the cluster RCTs, Shen (2018) had a high RoB due to the sequence generation process, concealment of the allocation sequence, measurement of the outcome and selection of the reported result [27]. Two cluster RCTs had deviations from the intended interventions (effect of assignment to intervention) [22,25]. Linder (2010) reported challenges with the randomisation process and selection of the reported result, and the trial protocol was unavailable [23].
The crossover RCT from Chang (2020) had a high RoB arising from period and carryover effects (no washout period) [24]. Finally, the two non-randomised studies (Jones (2021) a controlled before and after study [21] and Davidson (2022) an ITS [29]), showed moderate RoB due to confounding and lack of protocol to evaluate or pre-specify the methods (Table S4).

2.5. Grading the Quality of Evidence

Based on the assessment of eight studies (Table S3), the overall level of certainty in the evidence (GRADE) for the primary outcome (overall change of AP) was judged at low (Table S5).

3. Discussion

This SR identified 10 studies that evaluated the impact of dashboards or visual analytical tools to optimise AP. Overall, seven studies indicated a slight reduction in AP [21,22,25,26,27,28,29], and six of these added other elements to the intervention [22,25,26,27,28,29]. Interventions that included an educational component (including national guidelines), public commitment or behavioural strategies (reminders, cost savings, written behaviour-change messages) were more likely to show an impact. Appropriate AP showed improvements between the intervention and control groups or over follow-up periods. Of the two studies that assessed dashboard engagement, one showed improved AP with increased engagement.
Our findings are consistent with a previous SR by Xie et al. (2022), which assessed the use of clinical dashboards for medication prescription and test ordering [17]. Xie et al. reported limited evidence for dashboards in relation to medication adherence in general, reduction of opioids, AP and test ordering [17]. Furthermore, dashboard interventions are frequently part of multifaceted interventions to improve and generate positive changes in healthcare outcomes, making the effect of each component challenging to evaluate separately [17]. Our findings complement the SR of Xie et al. by extending the number of studies focused on AP (three RCTs [20,22,27] and two non-RCTs [21,29]), providing a more detailed description of the visualisation tool or dashboard and restricting the scope to one priority setting (AP in primary care). Nevertheless, our assessment of the level of certainty of the evidence differs from that of Xie et al. [17], who judged the primary outcomes individually as between moderate and very low quality [17] using the standard GRADE approach [37] without considering inconsistency and publication bias. For our primary outcome (overall changes in AP), certainty was low using a GRADE approach adapted for SRs summarised narratively without meta-analysis [38].
To implement an interactive dashboard, good knowledge of operating systems and user interaction, contextual factors, barriers and facilitators is required, combined with effective use of data, the application of best knowledge and continuous improvement [39]. New methods have a learning cycle of evaluating and monitoring users’ technological competence [11], which covers interaction effectiveness, user experiences and system efficacy [40]. The studies included in this SR do not describe the development and design of the dashboards and only include a short description (Table 1). In addition, only two studies reported the inclusion of qualitative and mixed-methods perspectives in the design and implementation phase. Daneman (2021) [20] explored how A&F influences AP through semi-structured interviews with prescribers [32]. Shen (2018) [27] used preliminary results of mixed-methods [41] and qualitative research [42] to develop and adapt the intervention.
In any behavioural intervention, the situational context of delivering information is essential [36,43], and for feedback to be effective, it should be frequent, individualised and available at the recipient’s request [43]. An interesting aspect highlighted by Tsang et al. (2022) in their SR of computerised A&F systems in healthcare is to make the feedback actionable (specific to user roles). However, organisational context, resources and user characteristics considerably influence the potential effects of A&F systems [14]. Therefore, it is relevant to involve various perspectives, including organisational, patient and public [44], in these systems, as Shen (2018) and Davidson (2022) briefly incorporated. Shen (2018) integrated a component of the intervention targeting patients and the general public through public commitment and detailed patient information explaining diagnosis, treatment, symptom relief and activities to prevent future infection [27]. Davidson (2022) included an antimicrobial stewardship education campaign for patients and providers which included dashboards. Moreover, both had access to information from consumer web pages and multimedia pitches on Facebook, Twitter, Instagram and other social media platforms [29]. In general, without regular individual engagement in antimicrobial stewardship strategies, there are no shortcuts to improving AP, and using digital tools alone seems to be insufficient.
A limitation of this SR was the high heterogeneity of the included studies, which did not allow an overall conclusion to be drawn through meta-analysis. Nonetheless, the findings were consolidated, aggregated and narratively reported. Additionally, an accepted technique was employed to assess the certainty of the evidence for our primary outcome and to support informed judgements. Another constraint was publication bias, which was mitigated by comprehensively scanning electronic databases and using other approaches, but which nevertheless remains a limitation.

4. Materials and Methods

The protocol of this systematic review was registered in PROSPERO (CRD42022313006) and followed the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guideline [30].

4.1. Criteria for Considering Studies for this Review

  • Participants: general practices and primary care settings focused on GPs or other health professionals.
  • Intervention: Any intervention using prescription data illustrated in a visual analytical tool (i.e., dashboard). Decision support tools which were incorporated as alerts or risk calculators were excluded.
  • Comparator: usual care or any other intervention without visual analytical tools (dashboard).
  • Outcomes of interest included:
    Change in AP (primary outcome)
    Prescribed antibiotic class
    Change in prescription of inappropriate (i.e., not recommended) antibiotics
    Antibiotic duration
    Patients’ re-consultation
    Dashboard engagement (not initially included in the protocol).
  • Types of studies: RCTs and non-randomised controlled trials (non-RCTs) (controlled before and after studies, interrupted time series studies (ITS) and controlled trials using non-random methods) assessing the effectiveness of dashboards including prescription data in general practice.

4.2. Search Methods for Identification of Studies

A systematic search strategy was carried out in the following electronic databases: Cochrane Central Register of Controlled Trials (OVID), MEDLINE (OVID), EMBASE, SCOPUS, Web of Science Core Collection and LILACS (Table S6). Reference lists of included studies, Google Scholar and relevant webpages were searched for additional papers. The cut-off point for inclusion was 15 August 2022.

4.3. Data Collection and Analysis

4.3.1. Selection of Studies

Researchers (N.G.-O., S.P., D.A. and H.V.) independently (in duplicate) screened the abstracts and titles of articles retrieved from each search to assess eligibility. Copies of the full text of all eligible papers were obtained and independently evaluated by two researchers (N.G.-O., S.P., D.A. and H.V.) according to the prespecified selection criteria. In case of discrepancies, a third researcher (A.V.) provided resolution. Authors of conference abstracts and protocols were contacted to request results and full texts.

4.3.2. Data Extraction and Management

The data extraction form included the study ID, available information, study eligibility, a summary of assessment for inclusion, population and setting, methods, the risk of bias assessment, intervention and control groups, outcomes, results, limitations, conclusions of study authors and funding. One researcher (N.G.-O.) extracted the data, which were checked by a second researcher (S.P., D.A. or H.V.).

4.3.3. Risk of Bias (Quality) Assessment

The risk of bias (RoB) of the RCTs (individual, cluster and crossover trial) was assessed independently by two researchers (N.G.-O., S.P., D.A. and H.V.) using the Cochrane “RoB 2” tool [45,46,47], with non-RCTs assessed with the ROBINS-I tool [48]. This assessment was based on the effect of the assignment to intervention (the ‘intention-to-treat’ effect) and focused on the primary outcome of each study. Disagreements were resolved by discussion.

4.3.4. Measures of Treatment Effect

Dichotomous outcomes were expressed as odds ratio (OR) or absolute risk difference (AR) with 95% confidence intervals (CI). Continuous data were reported as changes from baseline through mean differences (MD) with 95% CIs. Meta-analysis was considered but could not be performed due to the high heterogeneity between studies. When the study only reported the outcome in a proportion or percentage, an OR was estimated (fixed effect) in Review Manager 5.4.1 [49].
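Where a study reported a dichotomous outcome only as proportions, the standard way to recover an OR and its 95% CI is from the underlying 2 × 2 event counts. The following is a minimal sketch of that calculation using the conventional log-OR standard error; the counts are entirely hypothetical, and the review itself performed the estimation in Review Manager 5.4.1:

```python
import math

def odds_ratio_ci(events_int, total_int, events_ctl, total_ctl, z=1.96):
    """Odds ratio with an approximate 95% CI from two event counts.

    Uses the standard log-OR standard error
    sqrt(1/a + 1/b + 1/c + 1/d), where a/b and c/d are the
    event and non-event counts in each group.
    """
    a = events_int
    b = total_int - events_int   # non-events, intervention
    c = events_ctl
    d = total_ctl - events_ctl   # non-events, control
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: antibiotics prescribed for 150/500 consultations
# in the intervention arm versus 200/500 in the control arm.
or_, lo, hi = odds_ratio_ci(150, 500, 200, 500)
print(f"OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
# → OR 0.64 (95% CI 0.49 to 0.84)
```

An OR below 1 with a CI excluding 1, as in this made-up example, would indicate lower prescribing odds in the intervention group.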

4.3.5. Missing Data

The data extraction form captured information on missing outcome data from each study. Imputation methods, where applied, were recorded.

4.3.6. Assessment of Heterogeneity

The evaluation of heterogeneity (I² statistic) was not possible due to the small number of included studies sharing the same study type and outcome. Results are illustrated in a forest plot, excluding an overall pooled-effect diamond, using R software. Heterogeneity was assessed through visual and qualitative assessment.

4.3.7. Data Synthesis

Effect sizes were presented for each study (with a range). A narrative synthesis of findings was conducted to assess the intervention processes and results.

4.3.8. Grading the Quality of Evidence

The quality of evidence for our primary outcome was assessed using the constructs of the GRADE “Grades of Recommendation, Assessment, Development, and Evaluation” approach for SRs summarised in a narrative without meta-analysis, which include RoB, imprecision, indirectness, inconsistency and risk of publication bias [38].

5. Conclusions

Dashboards visualising health data may reduce AP in primary care, but generally only in combination with frequent and individualised feedback. None of the included studies reported on the dashboards’ development and implementation phase, and it was unclear whether users were involved in delivering tailored dashboards. Future research on the use of interactive dashboards to optimise AP in primary care should consider and report on operating systems, user interaction, involvement of stakeholders in design and testing, context factors, barriers and facilitators for implementation and the sustainability of these visualisation tools.

Supplementary Materials

The following supporting information can be downloaded at: Table S1. List of studies excluded and reasons for their exclusion; Table S2. Characteristics of studies included; Table S3. Results of outcomes; Table S4. Risk of bias in included studies; Table S5. GRADE (Grading of Recommendations, Assessment, Development and Evaluation) approach to assess the quality of evidence for the primary outcome (overall change of antibiotic prescriptions); Table S6. Evidence search report in electronic databases.

Author Contributions

Conceptualization, N.G.-O. and A.V.; methodology, N.G.-O., S.P., D.A., H.V., C.B. and A.V.; organizing, synthesizing and writing—original draft preparation, N.G.-O.; writing—review and editing, A.V. and C.B.; critically revising drafts, A.V.; visualization, N.G.-O.; supervision, A.V. and C.B. N.G.-O., S.P., D.A., H.V. and A.V. were involved in the screening process, risk of bias assessment, and data extraction. All authors have read and agreed to the published version of the manuscript.


Funding

This work was funded by grant number RL-20200-03 from Research Leader Awards (RL) 2020, Health Research Board, Ireland and conducted as part of the SPHeRE Programme.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.


References

  1. Murray, C.J.; Ikuta, K.S.; Sharara, F.; Swetschinski, L.; Robles Aguilar, G.; Gray, A.; Han, C.; Bisignano, C.; Rao, P.; Wool, E.; et al. Global Burden of Bacterial Antimicrobial Resistance in 2019: A Systematic Analysis. Lancet 2022, 399, 629–655.
  2. Klein, E.Y.; Milkowska-Shibata, M.; Tseng, K.K.; Sharland, M.; Gandra, S.; Pulcini, C.; Laxminarayan, R. Assessment of WHO Antibiotic Consumption and Access Targets in 76 Countries, 2000–2015: An Analysis of Pharmaceutical Sales Data. Lancet Infect. Dis. 2021, 21, 107–115.
  3. Davies, S.C. Reducing Inappropriate Prescribing of Antibiotics in English Primary Care: Evidence and Outlook. J. Antimicrob. Chemother. 2018, 73, 833–834.
  4. Research Reveals Levels of Inappropriate Prescriptions in England—GOV.UK. Available online: (accessed on 2 October 2022).
  5. Richards, A.R.; Linder, J.A. Behavioral Economics and Ambulatory Antibiotic Stewardship: A Narrative Review. Clin. Ther. 2021, 43, 1654–1667.
  6. European Centre for Disease Prevention and Control. Antimicrobial Consumption in the EU/EEA (ESAC-Net)—Annual Epidemiological Report 2020; European Centre for Disease Prevention: Stockholm, Sweden, 2021.
  7. Tsuchida, R.E.; Haggins, A.N.; Perry, M.; Chen, C.M.; Medlin, R.P.; Meurer, W.J.; Burkhardt, J.; Fung, C.M. Developing an Electronic Health Record–Derived Health Equity Dashboard to Improve Learner Access to Data and Metrics. AEM Educ. Train. 2021, 5, S116–S120.
  8. Carvalho, É.; Estrela, M.; Zapata-Cachafeiro, M.; Figueiras, A.; Roque, F.; Herdeiro, M.T. E-Health Tools to Improve Antibiotic Use and Resistances: A Systematic Review. Antibiotics 2020, 9, 505.
  9. Holstiege, J.; Mathes, T.; Pieper, D. Effects of Computer-Aided Clinical Decision Support Systems in Improving Antibiotic Prescribing by Primary Care Providers: A Systematic Review. J. Am. Med. Inform. Assoc. 2015, 22, 236–242.
  10. Chima, S.; Reece, J.C.; Milley, K.; Milton, S.; McIntosh, J.G.; Emery, J.D. Decision Support Tools to Improve Cancer Diagnostic Decision Making in Primary Care: A Systematic Review. Br. J. Gen. Pract. 2019, 69, e809–e818.
  11. Sutton, R.T.; Pincock, D.; Baumgart, D.C.; Sadowski, D.C.; Fedorak, R.N.; Kroeker, K.I. An Overview of Clinical Decision Support Systems: Benefits, Risks, and Strategies for Success. NPJ Digit. Med. 2020, 3, 17.
  12. Harada, T.; Miyagami, T.; Kunitomo, K.; Shimizu, T. Clinical Decision Support Systems for Diagnosis in Primary Care: A Scoping Review. Int. J. Environ. Res. Public Health 2021, 18, 8435.
  13. Chung, G.W.; Wu, J.E.; Yeo, C.L.; Chan, D.; Hsu, L.Y. Antimicrobial Stewardship: A Review of Prospective Audit and Feedback Systems and an Objective Evaluation of Outcomes. Virulence 2013, 4, 151–157.
  14. Tsang, J.Y.; Peek, N.; Buchan, I.; van der Veer, S.N.; Brown, B. Systematic Review and Narrative Synthesis of Computerized Audit and Feedback Systems in Healthcare. J. Am. Med. Inform. Assoc. 2022, 29, 1106–1119.
  15. Tuti, T.; Nzinga, J.; Njoroge, M.; Brown, B.; Peek, N.; English, M.; Paton, C.; van der Veer, S.N. A Systematic Review of Electronic Audit and Feedback: Intervention Effectiveness and Use of Behaviour Change Theory. Implement. Sci. 2017, 12, 61.
  16. Dowding, D.; Randell, R.; Gardner, P.; Fitzpatrick, G.; Dykes, P.; Favela, J.; Hamer, S.; Whitewood-Moores, Z.; Hardiker, N.; Borycki, E.; et al. Dashboards for Improving Patient Care: Review of the Literature. Int. J. Med. Inform. 2015, 84, 87–100.
  17. Xie, C.X.; Chen, Q.; Hincapié, C.A.; Hofstetter, L.; Maher, C.G.; Machado, G.C. Effectiveness of Clinical Dashboards as Audit and Feedback or Clinical Decision Support Tools on Medication Use and Test Ordering: A Systematic Review of Randomized Controlled Trials. J. Am. Med. Inform. Assoc. 2022, 29, 1773–1785.
  18. Palin, V.; Tempest, E.; Mistry, C.; Van Staa, T.P. Developing the Infrastructure to Support the Optimisation of Antibiotic Prescribing Using the Learning Healthcare System to Improve Healthcare Services in the Provision of Primary Care in England. BMJ Health Care Inform. 2020, 27, e100147.
  19. BRIT Project. Available online: (accessed on 15 February 2022).
  20. Daneman, N.; Lee, S.M.; Bai, H.; Bell, C.M.; Bronskill, S.E.; Campitelli, M.A.; Dobell, G.; Fu, L.; Garber, G.; Ivers, N.; et al. Population-Wide Peer Comparison Audit and Feedback to Reduce Antibiotic Initiation and Duration in Long-Term Care Facilities with Embedded Randomized Controlled Trial. Clin. Infect. Dis. 2021, 73, e1296–e1304. [Google Scholar] [CrossRef]
  21. Jones, G.F.; Fabre, V.; Hinson, J.; Levin, S.; Toerper, M.; Townsend, J.; Cosgrove, S.E.; Saheed, M.; Klein, E.Y. Improving Antimicrobial Prescribing for Upper Respiratory Infections in the Emergency Department: Implementation of Peer Comparison with Behavioral Feedback. Antimicrob. Steward. Healthc. Epidemiol. 2021, 1, e70. [Google Scholar] [CrossRef]
  22. Curtis, H.J.; Bacon, S.; Croker, R.; Walker, A.J.; Perera, R.; Hallsworth, M.; Harper, H.; Mahtani, K.R.; Heneghan, C.; Goldacre, B. Evaluating the Impact of a Very Low-Cost Intervention to Increase Practices’ Engagement with Data and Change Prescribing Behaviour: A Randomized Trial in English Primary Care. Fam. Pract. 2021, 38, 373–380. [Google Scholar] [CrossRef]
  23. Linder, J.A.; Schnipper, J.L.; Tsurikova, R.; Yu, D.T.; Volk, L.A.; Melnikas, A.J.; Palchuk, M.B.; Olsha-Yehiav, M.; Middleton, B. Electronic Health Record Feedback to Improve Antibiotic Prescribing for Acute Respiratory Infections. Am. J. Manag. Care 2010, 16, 311–319. [Google Scholar]
  24. Chang, Y.; Sangthong, R.; McNeil, E.B.; Tang, L.; Chongsuvivatwong, V. Effect of a Computer Network-Based Feedback Program on Antibiotic Prescription Rates of Primary Care Physicians: A Cluster Randomized Crossover-Controlled Trial. J. Infect. Public Health 2020, 13, 1297–1303. [Google Scholar] [CrossRef] [PubMed]
  25. Elouafkaoui, P.; Young, L.; Newlands, R.; Duncan, E.M.; Elders, A.; Clarkson, J.E.; Ramsay, C.R. An Audit and Feedback Intervention for Reducing Antibiotic Prescribing in General Dental Practice: The RAPiD Cluster Randomised Controlled Trial. PLoS Med. 2016, 13, e1002115. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Du Yan, L.; Dean, K.; Park, D.; Thompson, J.; Tong, I.; Liu, C.; Hamdy, R.F. Education vs. Clinician Feedback on Antibiotic Prescriptions for Acute Respiratory Infections in Telemedicine: A Randomized Controlled Trial. J. Gen. Intern. Med. 2021, 36, 305–312. [Google Scholar] [CrossRef]
  27. Shen, X.R.; Lu, M.; Feng, R.; Cheng, J.; Chai, J.; Xie, M.; Dong, X.; Jiang, T.; Wang, D. Web-Based Just-in-Time Information and Feedback on Antibiotic Use for Village Doctors in Rural Anhui, China: Randomized Controlled Trial. J. Med. Internet Res. 2018, 20, e53. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Hemkens, L.G.; Saccilotto, R.; Reyes, S.L.; Glinz, D.; Zumbrunn, T.; Grolimund, O.; Gloy, V.; Raatz, H.; Widmer, A.; Zeller, A.; et al. Personalized Prescription Feedback Using Routinely Collected Data to Reduce Antibiotic Use in Primary Care: A Randomized Clinical Trial. JAMA Intern. Med. 2017, 177, 176–183. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Davidson, L.E.; Gentry, E.M.; Priem, J.S.; Kowalkowski, M.; Spencer, M.D. A Multimodal Intervention to Decrease Inappropriate Outpatient Antibiotic Prescribing for Upper Respiratory Tract Infections in a Large Integrated Healthcare System. Infect. Control Hosp. Epidemiol. 2022, 1–8. [Google Scholar] [CrossRef]
  30. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
  31. Weisz, D.; Gusmano, M.K.; Wong, G.; Trombley, J. Emergency Department Use: A Reflection of Poor Primary Care Access? Am. J. Manag. Care 2015, 21, e152–e160. [Google Scholar]
  32. Laur, C.; Sribaskaran, T.; Simeoni, M.; Desveaux, L.; Daneman, N.; Mulhall, C.; Lam, J.; Ivers, N.M. Improving Antibiotic Initiation and Duration Prescribing among Nursing Home Physicians Using an Audit and Feedback Intervention: A Theory-Informed Qualitative Analysis. BMJ Open Qual. 2021, 10, e001088. [Google Scholar] [CrossRef]
  33. Hemkens, L.G.; Saccilotto, R.; Reyes, S.L.; Glinz, D.; Zumbrunn, T.; Grolimund, O.; Gloy, V.; Raatz, H.; Widmer, A.; Zeller, A.; et al. Personalized Prescription Feedback to Reduce Antibiotic Overuse in Primary Care: Rationale and Design of a Nationwide Pragmatic Randomized Trial. BMC Infect. Dis. 2016, 16, 421. [Google Scholar] [CrossRef] [Green Version]
  34. Prior, M.; Elouafkaoui, P.; Elders, A.; Young, L.; Duncan, E.M.; Newlands, R.; Clarkson, J.E.; Ramsay, C.R.; Black, I.; Bonetti, D.; et al. Evaluating an Audit and Feedback Intervention for Reducing Antibiotic Prescribing Behaviour in General Dental Practice (the RAPiD Trial): A Partial Factorial Cluster Randomised Trial Protocol. Implement. Sci. 2014, 9, 50. [Google Scholar] [CrossRef] [PubMed]
  35. Hueston, W.J. Improving Quality or Shifting Diagnoses?: What Happens When Antibiotic Prescribing Is Reduced for Acute Bronchitis? Arch. Fam. Med. 2000, 9, 933–935. [Google Scholar] [CrossRef] [PubMed]
  36. Meeker, D.; Linder, J.A.; Fox, C.R.; Friedberg, M.W.; Persell, S.D.; Goldstein, N.J.; Knight, T.K.; Hay, J.W.; Doctor, J.N. Effect of Behavioral Interventions on Inappropriate Antibiotic Prescribing Among Primary Care Practices. JAMA 2016, 315, 562. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  37. Guyatt, G.H.; Oxman, A.D.; Vist, G.E.; Kunz, R.; Falck-Ytter, Y.; Alonso-Coello, P.; Schünemann, H.J. GRADE: An Emerging Consensus on Rating Quality of Evidence and Strength of Recommendations. BMJ 2008, 336, 924–926. [Google Scholar] [CrossRef] [Green Version]
  38. Murad, M.H.; Mustafa, R.A.; Schünemann, H.J.; Sultan, S.; Santesso, N. Rating the Certainty in Evidence in the Absence of a Single Estimate of Effect. Evid. Based. Med. 2017, 22, 85–87. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Wasylewicz, A.; Scheepers-Hoeks, A. Chapter 11 Clinical Decision Support Systems. In Fundamentals of Clinical Data Science; SpringerOpen: Cham, Switzerland, 2018; pp. 153–169. ISBN 9783319997131. [Google Scholar]
  40. Zhuang, M.; Concannon, D.; Manley, E. A Framework for Evaluating Dashboards in Healthcare. IEEE Trans. Vis. Comput. Graph. 2022, 28, 1715–1731. [Google Scholar] [CrossRef]
  41. Davis, M.E.; Liu, T.L.; Taylor, Y.J.; Davidson, L.; Schmid, M.; Yates, T.; Scotton, J.; Spencer, M.D. Exploring Patient Awareness and Perceptions of the Appropriate Use of Antibiotics: A Mixed-Methods Study. Antibiotics 2017, 6, 23. [Google Scholar] [CrossRef] [Green Version]
  42. Yates, T.D.; Davis, M.E.; Taylor, Y.J.; Davidson, L.; Connor, C.D.; Buehler, K.; Spencer, M.D. Not a Magic Pill: A Qualitative Exploration of Provider Perspectives on Antibiotic Prescribing in the Outpatient Setting. BMC Fam. Pract. 2018, 19, 96. [Google Scholar] [CrossRef]
  43. Linder, J.A. Moving the Mean with Feedback: Insights from Behavioural Science. npj Prim. Care Respir. Med. 2016, 26, 16018. [Google Scholar] [CrossRef]
  44. Perez Jolles, M.; Lengnick-Hall, R.; Mittman, B.S. Core Functions and Forms of Complex Health Interventions: A Patient-Centered Medical Home Illustration. J. Gen. Intern. Med. 2019, 34, 1032–1038. [Google Scholar] [CrossRef] [Green Version]
  45. Sterne, J.A.C.; Savović, J.; Page, M.J.; Elbers, R.G.; Blencowe, N.S.; Boutron, I.; Cates, C.J.; Cheng, H.-Y.; Corbett, M.S.; Eldridge, S.M.; et al. RoB 2: A Revised Tool for Assessing Risk of Bias in Randomised Trials. BMJ 2019, 366, l4898. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  46. Eldridge, S.; Campbell, M.K.; Campbell, M.J.; Drahota, A.K.; Giraudeau, B.; Reeves, B.C.; Siegfried, N.; Higgins, J.P.T. Revised Cochrane Risk of Bias Tool for Randomized Trials (RoB 2) Additional Considerations for Cluster-Randomized Trials (RoB 2 CRT) Cluster-Randomized Trials in the Context of the Risk of Bias Tool Bias Arising from the Randomization Process. 2021. Available online: (accessed on 15 February 2022).
  47. Higgins, J.; Li, T.; Sterne, J. Revised Cochrane Risk of Bias Tool for Randomized Trials (RoB 2) Additional Considerations for Crossover Trials. 2021. Available online: (accessed on 15 February 2022).
  48. Sterne, J.A.; Hernán, M.A.; Reeves, B.C.; Savović, J.; Berkman, N.D.; Viswanathan, M.; Henry, D.; Altman, D.G.; Ansari, M.T.; Boutron, I.; et al. ROBINS-I: A Tool for Assessing Risk of Bias in Non-Randomised Studies of Interventions. BMJ 2016, 355, i4919. [Google Scholar] [CrossRef] [PubMed]
  49. The Cochrane Collaboration Review Manager (RevMan); Version 5.4 2020; Cochrane: London, UK, 2020.
Figure 1. Flowchart of study selection. Source: The PRISMA 2020 statement: an updated guideline for reporting systematic reviews [30].
Figure 2. Forest plot of the antibiotic prescribing change outcome in odds ratios. Source: elaborated by the authors from information reported in the included studies. E+F: education plus individualized prescribing feedback dashboard; ARI: acute respiratory infection; ARI Quality Dashboard Use (line segment): intervention physicians who used the ARI Quality Dashboard at least once versus intervention clinicians who did not use it; JITIF: Just-in-Time Information and Feedback; APR: antibiotic prescription rate; URI: upper respiratory infection; RTIs: respiratory tract infections; GTIs: gastrointestinal tract infections; OR: odds ratio; CI: confidence interval.
Figure 3. Forest plot of the antibiotic prescribing change outcome as percentage change from baseline (between-group difference). Source: elaborated by the authors from information reported in the included studies. PPF: personalised prescription feedback; BCM: written behaviour change message; HB: health board; A&F: audit and feedback; BSA: broad-spectrum antibiotics (clindamycin, co-amoxiclav, clarithromycin, cefalexin, and cefradine); DDD: defined daily doses; CI: confidence interval. * Hemkens (2017) reported change from baseline (between-group difference) [28] and Elouafkaoui (2016) reported change from baseline with all percentages standardised using the control group baseline mean prescribing rate [25] (see Table S3).
Figure 4. Risk of bias (RoB) summary. Source: elaborated by the authors.
Table 1. Description of the dashboard or visualisation tools used in the intervention groups of the included studies.
Study ID: Du Yan 2021 [26]
Data summarized: Rate of antibiotic prescription (AP) and practice-wide prescribing rates for upper respiratory infection (URI), bronchitis, sinusitis, and pharyngitis.
Features: Personalised for each clinician, including a practice summary (the practice's antibiotic prescription rates for the target conditions), the individual clinician's prescriptions and the difference from their practice.
Development details: No detail.
Data extracted from: Electronic medical record, without a separate database.
Time period of report: Report from the previous month, starting May 2018.
Access: An online dashboard; the paper provided a sample in a figure (see Figure 2 of the original paper [26]).
Engagement and reminder strategies: No detail.

Study ID: Daneman 2021 [20,32]
Data summarized: Percentages of AP and prolonged antibiotic treatment (longer than seven days). Additionally, antipsychotic, benzodiazepine, and other neurotropic medication prescribing was reported.
Features: A home page (overview) with key messages from prescribing data, peer comparisons (a question mark icon indicating whether prescriptions were higher than, similar to or lower than their peers'), and two links (to view trend data and change ideas). The antibiotic page allowed prescribers to compare their overall rate with Ontario percentiles and showed key changes and answers to important questions (relating to resident characteristics, accurate data, how the rate was calculated, data limitations, and low AP that was reasonable and safe).
Development details: Input from infectious diseases, implementation science, information technology, and quality improvement specialists to improve its design through an iterative, user-centred design process.
Data extracted from: Administrative health databases linked with drug, hospitalization, and emergency department databases.
Time period of report: Four quarters of 2018 and four quarters of 2019.
Access: An online dashboard; the paper provided a screenshot of a sample in a supplement (see Supplement S1 of the original paper [20]).
Engagement and reminder strategies: No detail; however, the authors explored how the intervention was perceived by those who engaged with it in a qualitative study [32].

Study ID: Hemkens 2017 [28,33]
Data summarized: Antibiotic prescriptions per 100 consultations in the preceding months, displayed alongside the adjusted average of peer physicians, that is, the entire population of Swiss primary care physicians.
Features: Details on prescriptions per age group, sex or antibiotic type, and answers to frequently asked questions on antibiotic use.
Development details: No detail.
Data extracted from: Statutory health insurers' claims data for drug prescriptions and health care services.
Time period of report: Quarterly intervals (no further report details).
Access: An online dashboard; the paper provided a screenshot of a sample in a supplement (see the supplementary figure of the original paper [28]).
Engagement and reminder strategies: Physicians received quarterly updated personalised prescription feedback.

Study ID: Curtis 2021 [22]
Data summarized: Change in AP.
Features: No detail.
Development details: No detail.
Data extracted from: National datasets published monthly by NHS Digital (practice-level prescribing data).
Time period of report: No detail.
Access: An online dashboard with a single measure highlighted (a link to their practice dashboard on ). The study provided a sample image in a supplement (see supplement Figure S1 of the original paper [22]).
Engagement and reminder strategies: No detail; however, an update was sent at 5-week intervals.

Study ID: Linder 2010 [23]
Data summarized: The proportion of acute respiratory infection (ARI) visits with antibiotics, the proportion of individual ARI diagnoses (pneumonia, sinusitis, acute bronchitis) with antibiotics, the proportion of broad-spectrum AP, the distribution of ARI visits by evaluation and management billing codes, and individual patient visit details.
Features: Option to "drill down" to any patient's medical record directly from the dashboard to review patient details and export the report for additional follow-up or analysis.
Development details: Design based on the recommendations of the Centers for Disease Control and Prevention and the American College of Physicians; ASP.NET technology was used to build the dashboard. A pilot assessed user access, gauged whether the dashboard was useful for understanding antimicrobial prescribing patterns, and validated its reports against primary data from the EHR by drilling down to individual patient charts.
Data extracted from: Electronic health records (EHR).
Time period of report: The dashboard displayed visit and prescribing data for the previous year and was automatically updated monthly.
Access: Physicians accessed the dashboard from the EHR Reports Central area, which contained about 10 other reports on preventive and chronic disease management. The study provided a screenshot of the dashboard (see Figure 1 of the original paper [23]).
Engagement and reminder strategies: Monthly e-mails reminding physicians about the ARI Quality Dashboard.

Study ID: Shen 2018 [27]
Data summarized: Performance scores (PSs) and percentages of prescribed antibiotic use (ABU). The PS and ABU were presented in red, yellow, or green according to whether the value fell below (or above), within, or above (or below) the interquartile range of the same PS or ABU among peers.
Features: Relevant performance feedback, performance scores for the current doctor and their peers in total and by infection, public commitment, bulleted points of the commitment letter, and frequently asked questions.
Development details: No detail.
Data extracted from: Records of the doctors' management of symptomatic infection patients.
Time period of report: No detail.
Access: Web-based aid (WBA); a slide of the WBA is shown in a multimedia appendix (see Appendix A3 of the original paper [27]).
Engagement and reminder strategies: No detail.

Study ID: Elouafkaoui 2016 [25,34]
Data summarized: Prescribing rate (number of antibiotic items dispensed per 100 claims) and the health board rate (the overall ordinary list prescribing rate for current dentists in non-salaried practices in NHS Example Board).
Features: No detail.
Development details: No detail.
Data extracted from: Electronic healthcare datasets held centrally by the Information Services Division of NHS National Services Scotland.
Time period of report: Monthly.
Access: The audit and feedback included a visualisation (line graph) delivered by post. The study provided an example in a supplement (see supplement Figure S1 of the original paper [25]).
Engagement and reminder strategies: No detail.

Study ID: Chang 2020 [24]
Data summarized: An individual ranking score of AP (peer comparison) and statistical information about diagnoses and AP (total and type of antibiotics).
Features: Top of the screen: the top five diseases of patients seen by the physician over the previous 10 days, the start and stop times for the previous 10 days, the number of prescriptions given during this period, and the department ranking. Bottom of the screen: statistics on antibiotic frequency, the prescription rate of each antibiotic prescribed, and precautions and contraindications for the antibiotics being used.
Development details: No detail.
Data extracted from: Health information system (HIS).
Time period of report: Previous 10 days.
Access: A link on the HIS to see the feedback information at any time. The paper provided an example of the feedback information displayed on a physician's computer screen (see Figure 1 of the original paper [24]).
Engagement and reminder strategies: A pop-up window automatically prompted physicians to check the feedback information every 10 days.

Study ID: Jones 2021 [21]
Data summarized: Rate of inappropriate prescribing, stratified by diagnosis category.
Features: Peer comparison against the top 10% of performers (clinicians with the lowest prescribing rates), indicating whether or not the clinician was among the best 10% of performers. Rolling over each column shows the percentage for each provider and the number of encounters on which the rate of inappropriate prescribing is based. Filters allow the provider to compare data over different timelines and across departments.
Development details: No detail.
Data extracted from: Electronic health record system.
Time period of report: Unclear, but the paper mentioned that the dashboard was updated daily.
Access: Tableau dashboard; the paper provided a figure of the provider feedback dashboard in supplementary material (see supplementary Figure S2 of the original paper [21]).
Engagement and reminder strategies: Physicians' review of their personal data was structured to satisfy the American Board of Emergency Medicine Maintenance of Certification Improvement in Medical Practice requirements; physicians also received biannual e-mails.

Study ID: Davidson 2022 [29]
Data summarized: Prescribing rate, target rate, antimicrobial encounters, total encounters and antimicrobial prescribing rate.
Features: Comparison of AP behaviours among providers, practices and organisational groupings. Data viewable by indication, antibiotic class, and at the levels of provider, practice site, specialty medical director and administrator.
Development details: Developed in Microsoft Power BI, including coding, targeted indicators, an instructional webinar and on-site dashboard navigation education given upon request for practice sites and leaders. The dashboard remains part of a continuous, ongoing assessment of feedback from users and leadership.
Data extracted from: Electronic health record and administrative data sources.
Time period of report: Prescribing data compared year-to-year and over a rolling 12 months.
Access: Online dashboard; the paper provided a figure of the dashboard overview in supplementary material (see supplementary Figure S8 of the original paper [29]).
Engagement and reminder strategies: Antibiotic education campaign (provider-focused resources).
Source: elaborated by the authors with information reported in the included studies.
Share and Cite

Garzón-Orjuela, N.; Parveen, S.; Amin, D.; Vornhagen, H.; Blake, C.; Vellinga, A. The Effectiveness of Interactive Dashboards to Optimise Antibiotic Prescribing in Primary Care: A Systematic Review. Antibiotics 2023, 12, 136.

