Article

The Coronavirus Disease 2019 Spatial Care Path: Home, Community, and Emergency Diagnostic Portals

Fulbright Scholar 2020–2022, ASEAN Program, Point-of-Care Testing Center for Teaching and Research (POCT•CTR), Pathology and Laboratory Medicine, School of Medicine, University of California, Davis, CA 95616, USA
Diagnostics 2022, 12(5), 1216; https://doi.org/10.3390/diagnostics12051216
Submission received: 27 April 2022 / Revised: 10 May 2022 / Accepted: 10 May 2022 / Published: 12 May 2022

Abstract

This research uses mathematically derived visual logistics to interpret COVID-19 molecular and rapid antigen test (RAgT) performance, determine prevalence boundaries where risk exceeds expectations, and evaluate benefits of recursive testing along home, community, and emergency spatial care paths. Mathematica and open access software helped graph relationships, compare performance patterns, and perform recursive computations. Tiered sensitivity/specificity comprise: (T1) 90%/95%; (T2) 95%/97.5%; and (T3) 100%/≥99%, respectively. In emergency medicine, median RAgT performance peaks at 13.2% prevalence, then falls below T1, generating risky prevalence boundaries. RAgTs in pediatric ERs/EDs parallel this pattern with asymptomatic worse than symptomatic performance. In communities, RAgTs display large uncertainty with median prevalence boundary of 14.8% for 1/20 missed diagnoses, and at prevalence > 33.3–36.9% risk 10% false omissions for symptomatic subjects. Recursive testing improves home RAgT performance. Home molecular tests elevate performance above T1 but lack adequate validation. Widespread RAgT availability encourages self-testing. Asymptomatic RAgT and PCR-based saliva testing present the highest chance of missed diagnoses. Home testing twice, once just before mingling, and molecular-based self-testing, help avoid false omissions. Community and ER/ED RAgTs can identify contagiousness in low prevalence. Real-world trials of performance, cost-effectiveness, and public health impact could identify home molecular diagnostics as an optimal diagnostic portal.

1. Introduction

Now in the third year of the coronavirus disease 2019 (COVID-19) pandemic, we observe worldwide proliferation of novel COVID-19 diagnostic tests. Proliferation of COVID-19 commercial diagnostics authorized with limited clinical validation; emergence of highly contagious Omicron, BA.2, and other variants and sub-variants; distribution of one billion rapid antigen tests (RAgTs) [1,2]; the new White House “test (in pharmacies) and treat” program [3]; and relaxed preventative measures, such as safe spacing and masking, call for analysis of COVID-19 test performance along spatial care paths where people choose their own testing options—that is, select their own “diagnostic portals.” A spatial care path is the most efficient route taken by individuals and patients when receiving care in the healthcare small-world network of home, community, and emergency medicine settings [4,5,6].
Prevalence, the percentage of a population affected with COVID-19 at a given time, can vary unpredictably in different locales, because of severe acute respiratory syndrome Coronavirus 2 (SARS-CoV-2) variants, super spreaders, asymptomatic carriers, migrating hotspots, episodic re-openings, incomplete testing, delayed reporting, and other factors, such as erratic sampling and marginal reliability of rushed-to-market COVID-19 tests. Clinical performance depends on whether there are symptoms or not, the patient’s viral load, and the timing, location, and quality of the environment, specimen collection (e.g., saliva, and anterior nasal, mid-turbinate, or nasopharyngeal swab), and testing method.
Nonetheless, a large surge in clinical evaluations published recently paints a clear picture of what to expect. Expectations are at the heart of public acceptance and empowerment. Americans can choose from an array of self-testing options, use COVID-19 tests mailed to them at no cost by the government, and engage several different diagnostic portals. Future public health practices will hinge on fundamental understanding of how point-of-need testing meets or defeats attempts to keep families safe, temper contagiousness of new variants, safeguard schools and workplaces, and transition smoothly forward.
Therefore, the main goal of this article is to facilitate informed selection of diagnostic tests when faced with the multi-dimensional challenges of fluctuating endemic disease, newly emerging variants, increasing prevalence, and variably accessible testing options with complex performance patterns. Another goal is to minimize false omissions, that is, missed diagnoses that unknowingly elevate risk, create local recurrences, spread contagion, and adversely interrupt personal life, community activities, and work productivity.

2. Methods and Materials

2.1. Emergency Use Authorizations

Assays should be well balanced; that is, they should achieve both high sensitivity and high specificity. Adverse patterns identify tests that do not perform well but have received Food and Drug Administration (FDA) Emergency Use Authorization (EUA). Positive percent agreement (PPA) and negative percent agreement (NPA) data were extracted from FDA lists of EUAs [7] for home RAgTs and home LAMP (loop-mediated isothermal amplification) tests collated up to the beginning of January 2022. In the Supplementary Materials at the end of this article, Table S1 lists EUA PPA and NPA claims. Donato et al. [8] provided the only independent clinical evaluation of sensitivity and specificity for a LAMP molecular home self-test found during searches.

2.2. Clinical Evaluations

A total of 82 clinical studies were tabulated. Table 1 summarizes performance metrics for the primary study groups (left column) and lists supporting tabulations (right column) found in the Supplementary Materials. Papers generated by PubMed, other searches, and bibliographies of systematic reviews and meta-analyses comprised (a) 34 clinical evaluations [9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42] of the use of RAgTs in communities (see Table S2); (b) 30 clinical evaluations [43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72] of RAgTs applied in emergency medicine (emergency rooms and emergency departments), including 9 with results for pediatric patients [58,65,66,67,68,69,70,71,72] (Table S3); and (c) 18 clinical evaluations [73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90] that reported results for PCR-based testing of saliva in community groups of strictly asymptomatic subjects (Table S4). No human subjects were involved in this research.

2.3. Sensitivity and Specificity Metrics

Sensitivity and specificity metrics from community (Table S2) and emergency medicine evaluations (Table S3) were subdivided into symptomatic and asymptomatic groups (see Table 1). Merging raw data was not practical in light of heterogeneity, missing elements, unbalanced study designs, and inconsistent reporting in the clinical evaluations. Some studies were reported prior to peer review on medRxiv and preliminarily in various journals. In essence, each study contributed one set of results to the median sensitivity and specificity computed for each clinical setting.

2.4. Bayesian Mathematics and Performance Tiers

Please refer to open access papers by Kost [91,92,93] in the Archives of Pathology & Laboratory Medicine for descriptions of mathematical methods, visual logistics, computational design, and open access software. Table S5 lists Bayesian equations derived to generate the graphics displayed in this paper. Table 2 presents the mathematical design criteria for the three tiers, which are intended to systematically harmonize Bayesian post hoc performance criteria. The design criteria encompass and simultaneously integrate the performance level, sensitivity, specificity, target prevalence boundary, and false omission rate, RFO, which reflects risk of missed diagnoses.

2.5. Prevalence Boundaries

A prevalence boundary is defined as the prevalence at which the rate of false omissions, RFO, exceeds a specified risk tolerance, such as 5% (1 in 20 diagnoses missed) or 10% (1 in 10 missed). Please note that RFO = 1 − (Negative Predictive Value) [Equation (20)] in Supplementary Materials. Prevalence boundary is calculated using Equation (26) in Table S5 and is apparent where the RFO curve intersects the horizontal line demarcating risk tolerance. Users can select the level of risk based on COVID-19 management and mitigation objectives.
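The computation described above can be sketched numerically. The following Python snippet is a minimal illustration, assuming the textbook Bayesian form of NPV; the paper's exact Equations (20) and (26) appear only in Table S5, and the function names here are illustrative. Solving RFO = risk tolerance for prevalence yields a closed-form prevalence boundary.

```python
def rfo(se, sp, p):
    """False omission rate R_FO = 1 - NPV at prevalence p,
    assuming the standard Bayesian definition of NPV."""
    fn = (1.0 - se) * p   # false-negative probability mass
    tn = sp * (1.0 - p)   # true-negative probability mass
    return fn / (tn + fn)

def prevalence_boundary(se, sp, r):
    """Prevalence at which R_FO reaches risk tolerance r,
    obtained by solving rfo(se, sp, p) = r for p in closed form."""
    return r * sp / ((1.0 - se) * (1.0 - r) + r * sp)
```

With the median emergency medicine values reported in the Results (sensitivity 68.79%, specificity 99.5%), this sketch reproduces the 14.4% boundary at a 5% risk tolerance; the community medians likewise reproduce the 14.8% boundary.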
The sensitivity needed to achieve a desired prevalence boundary given the specificity, RFO, and prevailing prevalence can be calculated using Equation (27a,b) in Supplementary Materials (newly derived). For example, if the prevalence is 50.6%, and you do not want to miss more than 1 in 20 diagnoses of COVID-19 (RFO = 5%), then given a specificity of 97.5%, Equation (27a) in Supplementary Materials predicts you will need a test with at least 95% sensitivity, that is, Tier 2 performance (see Table 2).
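The worked example above can be checked by inverting the false omission rate for sensitivity. The Python sketch below is consistent with, but not copied from, Equation (27a), which appears only in the Supplementary Materials; it assumes the standard Bayesian NPV.

```python
def min_sensitivity(sp, r, p):
    """Minimum sensitivity so that R_FO = 1 - NPV stays at or below
    risk tolerance r at prevalence p (standard Bayesian NPV assumed).
    Derived by solving (1-Se)p / [Sp(1-p) + (1-Se)p] = r for Se."""
    return 1.0 - r * sp * (1.0 - p) / (p * (1.0 - r))
```

For the example given (specificity 97.5%, RFO 5%, prevalence 50.6%), the function returns approximately 0.95, matching the Tier 2 sensitivity requirement.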

2.6. Pattern Recognition

Visual logistics reveal performance patterns and diagnostic pitfalls over the entire range of prevalence. The approach to pattern recognition presented here, called “predictive value geometric mean-squared (PV GM2),” is strictly visual [91,92,93]. The PV GM2 curve represents a distinguishing “fingerprint” of performance, and thus far in this research, no two fingerprints have coincided. PV GM2 curves visualize how low (≤20%), moderate (20–70%), and high (≥70%) prevalence affect diagnostic performance in a single continuous graphic.
Point values of PV GM2 at fixed prevalence should not be compared, because the same value can recur at different levels of prevalence. Unrealistic comparisons (e.g., test sensitivity 10% and specificity 100%) should be avoided as they produce meaningless curves. The point of PV GM2 visualization is to differentiate performance patterns for tests achieving at least Tier 1 sensitivity and specificity criteria (see Table 2), or if below Tier 1 (“subtier”), then to understand why and where performance fails and crosses below the Tier 1 threshold.
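As an illustration only: if PV GM2 is taken to be the square of the geometric mean of the positive and negative predictive values, i.e., PPV × NPV (an assumption here; the paper's exact definition is in Table S5 and is not reproduced in this article), a performance curve over the full prevalence range can be traced as follows.

```python
def ppv(se, sp, p):
    """Positive predictive value (standard Bayesian form)."""
    tp = se * p
    fp = (1.0 - sp) * (1.0 - p)
    return tp / (tp + fp)

def npv(se, sp, p):
    """Negative predictive value (standard Bayesian form)."""
    tn = sp * (1.0 - p)
    fn = (1.0 - se) * p
    return tn / (tn + fn)

def pv_gm2(se, sp, p):
    # Assumed form: square of the geometric mean of PPV and NPV.
    return ppv(se, sp, p) * npv(se, sp, p)

# Trace a Tier 1 curve (sensitivity 90%, specificity 95%)
# across prevalence 1% to 99%.
curve = [pv_gm2(0.90, 0.95, p / 100.0) for p in range(1, 100)]
```

Under this assumed form, the curve vanishes at both prevalence extremes (PPV falls toward 0 as prevalence approaches 0, and NPV toward 0 as prevalence approaches 1) and peaks in between, consistent with the single continuous "fingerprint" behavior described above.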

2.7. Recursion

The recursive formulas for positive predictive value (PPV) [Equation (22a)] in Supplementary Materials and negative predictive value (NPV) [Equation (22b)] in Supplementary Materials allow calculations of predictive value geometric mean-squared (PV GM2) and RFO performance for repeat testing. When testing only twice with the same assay, single equations can be derived to graph recursive PV GM2 and RFO versus prevalence from 0 to 100% and to conveniently determine graphically the recursive prevalence boundaries at user-defined risk levels of 5% and 10% for missed diagnoses.
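A minimal numeric sketch of this recursion, assuming the standard Bayesian NPV and that each negative result becomes the pre-test probability for the next test; the paper's Equation (22b) appears only in the Supplementary Materials, and the names below are illustrative.

```python
def rfo(se, sp, p):
    """Single-test false omission rate, 1 - NPV (standard Bayesian form)."""
    fn = (1.0 - se) * p
    tn = sp * (1.0 - p)
    return fn / (tn + fn)

def rfo_recursive(se, sp, p, n=2):
    """R_FO after n successive negative results with the same assay:
    each negative updates the prior to the previous post-test probability."""
    post = p
    for _ in range(n):
        post = rfo(se, sp, post)
    return post
```

For instance, at 20% prevalence a hypothetical assay with 85% sensitivity and 99% specificity has a single-test RFO near 3.7%; a second negative result drops it below 0.6%, illustrating how repeat testing extends the prevalence boundary at a fixed risk tolerance.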

3. Results

Figure 1 illustrates patterns of low, high, and median performance documented by evaluations of RAgTs conducted in communities of several countries and states in the United States. RAgTs display large uncertainty with a median prevalence boundary of 14.8% for 1 in 20 missed diagnoses (RFO 5%). Median sensitivity of 69.85% (range 30.6–97.6%) explains the rapid fall-off of PV GM2 due to increasing false negatives as prevalence increases, while median specificity achieves Tier 3. Figure 2 compares performance for asymptomatic and symptomatic subjects, the latter showing peak performance at 9% prevalence and a prevalence boundary of 21.7% (at 5% RFO). Median sensitivity for symptomatic subjects was 81.0%; for automated instrument antigen tests, it was 73% (see Table 1 and Table S2).
Figure 3 presents patterns of RAgT performance for evaluations conducted in emergency medicine settings [emergency rooms (ERs) and emergency departments (EDs)] (see Table 1 and Table S3). Median sensitivity of 68.79% and specificity of 99.5% generate peak performance at 13.2% prevalence. The prevalence boundary for RFO of 5% was 14.4%, almost identical to that seen in Figure 1. Figure 4 compares performance for symptomatic and asymptomatic general populations and children seen in ERs and EDs, with the former marginally better than the latter. The right column of the inset table lists prevalence at peak performance.
There were no evaluations conducted directly in homes for EUA tests with real-world data generated by laypersons who perform the self-testing. Therefore, Figure 5 and Figure 6 illustrate performance based on manufacturer claims in instructions for use (IFU) documents. Figure 5 also illustrates the theoretical improvement in performance achievable by repeat testing, typically within three days, as described in the IFUs. Manufacturers made no claims regarding recursive testing. As can be seen in Figure 5, repeat testing pushes performance up to Tier 2. Figure 6 shows individual performance for three home molecular (LAMP) self-tests, as claimed by manufacturers. One independently conducted clinical evaluation, by Donato et al. [8], was found. Real-world evidence revealed Tier 1 performance (curve “CCE”) versus the claim of Tier 2.
Figure 7 summarizes the foregoing results from a risk perspective by plotting RFO versus prevalence with the threshold of a missed diagnosis at 1 in 10 (RFO 10%). Recall, RFO = 1 − NPV. Interestingly, the curves group into clusters for asymptomatic (purple) and symptomatic (red) subjects, while the Donato et al. [8] evaluation of home molecular testing (curve “HMDx”) demonstrated a Tier 1 prevalence boundary of 56.9%. Even at a relatively high 10% risk of missed diagnoses, prevalence boundaries for asymptomatic subjects are low (17.5–23.2%), including for self-collected saliva specimens obtained in community sites from asymptomatic subjects with PCR-based testing performed later, typically within 24–72 h in reference laboratories (see Table 1 and Table S4).

4. Discussion

4.1. Missed Diagnoses

SARS-CoV-2 prevalence in South African blood donors skyrocketed to 71% even before the Omicron-driven wave arrived [94]. This variant peaked in the United States, only to be followed by the more contagious BA.2 variant. Omicron is sweeping Southeast Asia. For example, Thailand has reported over 50,000 new cases per day, and Vietnam, 120,000. These outbreaks bump prevalence to levels requiring Tier 2 performance to avoid excessive false negatives.
Mathematical transformation of pre-test to post-test probability of COVID-19 [92,93] allows computation of RFO, the false omission rate [Equation (20), Table S5], and determination of the prevalence boundary (PB) [Equation (26)] in Supplementary Materials. Shallow PBs limit the clinical usefulness of RAgTs, because of excessive missed diagnoses. If one knows test specificity, sets the RFO threshold (e.g., 5 or 10%), and establishes the PB appropriate for local prevalence, then Equation (27a) in Supplementary Materials can be used to calculate the minimum test sensitivity required. Frequent false omissions result in stealth spread of disease.

4.2. Transparency

Unfortunately, there is no way of singling out infectious patients who have false negative test results without repeat testing or additional evaluation. Using quantitative high sensitivity synchrotron X-ray fluorescence imaging, Koller et al. [95] showed that qualitative visual read-outs miss immobilized antigen-antibody-labeled conjugate complexes of SARS-CoV-2 signals on lateral flow detection devices. On the other hand, one report showed that ~92% of patients infected with SARS-CoV-2, but missed by antigen testing, harbored no viable virus [96]. Even the performance of a so-called “ultra-sensitive” antigen test, after exclusion of samples with PCR cycle threshold >35, hovered around Tier 2 for PPA and below it for NPA [97].
There is no explanation for why the PPA and NPA of assays documented in FDA EUAs have not progressively improved. However, the FDA has not required improvement. Liberal FDA approval seems to have diminished competition. One wonders what will become of subtier rapid antigen tests in a competitive market following the end of the “EUA era.” People purchasing test kits or ordering them free from COVIDtests.gov should receive disclosure of specificity (to rule-in COVID-19) and sensitivity (to rule it out) documented in clinical evaluations, and may need interactive apps to access prevailing prevalence.
The “…successful implementation of rapid antigen testing protocols must closely consider technical, pre-analytical, analytical and clinical assay performance and interpret and verify test results depending on the pretest-probability of SARS-CoV-2 infection” [98]. With that advice, the International Federation of Clinical Chemistry (IFCC) COVID-19 Task Force underscored the need for careful analysis of how prevalence impacts antigen test results in different clinical settings. Consumer Reports and other public advocacy groups may eventually compare and rank commercially available point-of-care antigen and saliva tests for the public.

4.3. Public Health at Points of Need

The death toll of the pandemic, including excess deaths from neglect, is estimated to be as high as 18.2 million [99] or even higher. The COVID-19 crisis has confirmed and expanded worldwide what we learned during Ebola virus disease outbreaks in West Africa. There is unequivocal need for point-of-care testing [4,5,6,100,101,102,103,104,105]. People in the United States now have access to COVID-19 RAgTs and molecular diagnostics online, by mail, and in neighborhood stores. With ubiquitous access comes responsibility—on the part of academics, public health educational institutions, professional societies, governments, industry, and global organizations to promote high quality testing.
The CORONADx Project in the European Union [106] is planned to produce affordable “PATHAG,” ultra-rapid COVID-19 antigen test strips for first-line screening; “PATHPOD,” portable LAMP detection for mobile clinics and community health centers; and “PATHLOCK,” kit detection using CRISPR-Cas13 technology. This initiative will add to the massive expansion of point-of-care strategies and promote higher quality molecular self-testing [1,2,104,105]. Ready access to rapid testing along spatial care paths from home to hospital raises public expectations for controlling transmission, combatting COVID-19, and forestalling future pandemics.
Children, teens, and young adults, even if vaccinated, may quietly spread disease, especially now that the more infectious variants and sub-variants are elevating community prevalence. Complicating matters, hospitals in some limited resource countries administer fake vaccinations (injecting water) for monetary gain [107]. Nonetheless, the CDC views vaccination rates as indicators of pandemic status worldwide [108], a reasonable position confirmed by Omicron surges in Hong Kong and China where vaccination rates have been low and vaccines less effective.
Vaccination attenuates the severity of disease but does not necessarily eliminate SARS-CoV-2 infection. Thus, individuals and their medical providers face the daunting challenge of guessing local and regional prevalence in order to interpret test results. Public health officials can help by periodically documenting prevalence and by encouraging self-testing in communities and directly within homes. Pattern recognition by means of PV GM2 and RFO curves allows healthcare providers to quickly tailor the quality of testing to needs.

4.4. Focus, Standardization, and Risk Management

RAgTs now are ubiquitous worldwide. They enable people to test frequently and inexpensively wherever they wish. Progressive societal “normalization” increases demand for convenient, fast, and inexpensive test results for decision making in various settings comprising public gatherings, communities, homes, schools, workplaces, factories, convalescent care, prisons, university campuses, sports events, travel, airports, rural regions, and limited-resource settings abroad.
Figure 2, Figure 4 and Figure 7 show that RAgTs detect SARS-CoV-2 infections in symptomatic subjects more effectively than in asymptomatic subjects, for whom community screening using PCR-based saliva testing offers no significant diagnostic advantage (see Figure 7). Asymptomatic RAgT and PCR-based saliva testing present the highest risk of missing diagnoses when highly contagious new variants increase prevalence. Figure 7 provides an essential “bottom line” for risk assessment. It illustrates the advantages of using the false omission rate (RFO) and prevalence boundary (PB) as criteria for selecting an appropriate diagnostic portal in the context of local prevalence.
Repeat rapid antigen home testing with the second test just before mingling is theoretically sound (see Figure 5) and is encouraged in commercial products containing two tests for screening but has not been validated. Home molecular testing one time shows promise (see “HMDx” in Figure 7). Controlled studies using some of the nationally distributed free tests could determine the efficacy of home self-testing, and should encompass both RAgT and LAMP assays, as well as investigation of the effectiveness of repeat testing and whether the second test should be independent (orthogonal) [93].
Weaknesses in COVID-19 rapid antigen test performance, even for products introduced more than one year after the FDA first started granting COVID-19 EUAs, call for standardization, or at least a process for attaining consistency and improving sensitivity. The contrast in performance based on FDA EUA PPA and NPA metrics versus real-world sensitivity and specificity from clinical investigations (Figure 1, Figure 2, Figure 3 and Figure 4) demands multicenter studies with diverse populations, well defined clinical settings, different age groups, and large sample sizes.

5. Conclusions

Widespread availability of RAgTs encourages self-testing, but people receiving millions of free RAgTs for self-testing need guidance. If public health educational institutions and practitioners adapt, learn, teach, and incorporate proven point-of-care strategies, they will better mitigate variant surges and future outbreaks [105,109,110,111]. RAgTs facilitate transitioning risk avoidance to risk management, especially in limited-resource settings where people cannot afford extended lockdowns, expensive PCR tests with delayed results, and loss of employment. For such settings, we recommend mobile testing in vans and inexpensive sample collection kiosks at sites of need [112].
Subtier tests should be improved or retired from FDA EUA status, because of poor performance, uncertainty, and false omissions that increase exponentially with increasing prevalence. Every step beyond a prevalence boundary magnifies chances of missing a diagnosis (see Figure 7). Tier 3, with its 100% sensitivity, could eliminate false omissions and prevalence boundaries. However, Tier 3 appears out of reach for the current generations of RAgTs. The FDA should tighten authorization criteria and shift the evaluation paradigm from template-based claims to clinical proof.
Tier 2 performance represents an attainable, practical, and sustainable standard of excellence, in view of the fact that several EUA manufacturers have claimed that performance threshold (see Supplemental Figure S1). The scheme in Table 1 can be used to establish constraints on 95% confidence limits and reduce uncertainty. Supportive actions the FDA should take comprise (a) tightening authorization thresholds and integrating prevalence boundaries, (b) increasing sample sizes to generate robust confidence intervals, (c) requiring comparison of symptomatic versus asymptomatic patients, (d) validating environmental limits of reagents, and (e) publishing post-market follow-up of performance.
Testing in homes and communities makes sense. When acute symptoms are present, the analyses in this paper show that rapid response use of RAgTs in communities and ERs/EDs can mitigate spread by helping to immediately identify acute contagiousness in low to moderate prevalence. Real-world trials of repeat testing, diagnostic cost-effectiveness, and public health impact could identify home molecular diagnostics as an optimal diagnostic portal for self-testing and rapid decision making at most levels of prevalence. Point-of-care testing has provided, and will continue to provide, a valuable resource for crisis response in the current pandemic and whatever the future brings.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/diagnostics12051216/s1, Figure S1: Rate of False Omissions for FDA EUA COVID-19 Rapid Antigen Tests; Table S1: COVID-19 Tests with FDA Emergency Use Authorization for Home Self-testing; Table S2: COVID-19 Antigen Test Performance for Symptomatic and Asymptomatic Subjects in Community Settings; Table S3: COVID-19 Antigen Test Performance in Emergency Medicine; Table S4: COVID-19 Saliva Testing Sensitivity and Specificity Performance in Strictly Asymptomatic Subjects (no mixed populations)—Clinical Evidence; Table S5: Fundamental Definitions, Derived Equations, Ratios/Rates, Recursive Formulas, Predictive Value Geometric Mean-squared, and Prevalence Boundary; Table S6: Antigen Tests with FDA Emergency Use Authorization Ranked by Positive Percent Agreement.

Funding

This research was supported by the Point-of-Care Testing Center for Teaching and Research (POCT•CTR); by Kost, its Director; and by a Fulbright Scholar Award (to G.J.K.).

Data Availability Statement

All raw data are included in the paper and its Supplementary Materials.

Acknowledgments

The author thanks the creative students, research assistants, and foreign focus groups whose urgency of need inspired this paper. The author is grateful to have received a Fulbright Scholar Award 2020–2022, which supports analysis of COVID-19 diagnostics and strategic point-of-care testing field research in ASEAN Member States (mainly Cambodia, the Philippines, Thailand, and Vietnam), with the goal of improving standards of crisis care throughout Southeast Asia. Figures and tables are provided courtesy and with permission of Knowledge Optimization, Davis, California.

Conflicts of Interest

There are no conflicts of interest to declare.

References

  1. Kost, G.J. Home antigen test recall affects millions: Beware false positives, but also uncertainty and potential false negatives. Arch. Pathol. Lab. Med. 2022, 146, 403. [Google Scholar] [CrossRef] [PubMed]
  2. Kost, G.J. The Coronavirus Disease 2019 Grand Challenge: Setting expectations and future directions for community and home testing. Arch. Pathol. Lab. Med. 2022; (published online ahead of print 25 March 2022). [Google Scholar] [CrossRef] [PubMed]
  3. White House. Nationwide Test to Treat Initiative. National COVID-19 Preparedness Plan. Available online: https://www.whitehouse.gov/covidplan/#protect (accessed on 26 April 2022).
  4. Kost, G.J.; Ferguson, W.J.; Kost, L.E. Principles of point of care culture, the spatial care path, and enabling community and global resilience. J. Int. Fed. Clin. Chem. Lab. Med. 2014, 25, 4–23. [Google Scholar]
  5. Kost, G.J.; Ferguson, W.J.; Truong, A.-T.; Prom, D.; Hoe, J.; Banpavichit, A.; Kongpila, S. The Ebola Spatial Care PathTM: Point-of-care lessons learned for stopping outbreaks. Clin. Lab. Int. 2015, 39, 6–14. [Google Scholar]
  6. Kost, G.J.; Ferguson, W.J.; Hoe, J.; Truong, A.T.; Banpavichit, A.; Kongpila, S. The Ebola Spatial Care Path™: Accelerating point-of-care diagnosis, decision making, and community resilience in outbreaks. Am. J. Disaster Med. 2015, 10, 121–143. [Google Scholar] [CrossRef] [PubMed]
  7. Food and Drug Administration. Individual EUAs for Antigen Diagnostic Tests for SARS-CoV-2. Available online: https://www.fda.gov/medical-devices/coronavirus-disease-2019-covid-19-emergency-use-authorizations-medical-devices/in-vitro-diagnostics-euas-antigen-diagnostic-tests-sars-cov-2 (accessed on 26 April 2022).
  8. Donato, L.J.; Trivedi, V.A.; Stransky, A.M.; Misra, A.; Pritt, B.S.; Binnicker, M.J.; Karon, B.S. Evaluation of the Cue Health point-of-care COVID-19 (SARS-CoV-2 nucleic acid amplification) test at a community drive through collection center. Diagn. Microbiol. Infect. Dis. 2021, 100, 115307. [Google Scholar] [CrossRef]
  9. Alghounaim, M.; Bastaki, H.; Essa, F.; Motlagh, H.; Al-Sabah, S. The performance of two rapid antigen tests during population-level screening for SARS-CoV-2 infection. Front. Med. 2021, 8, 797109. [Google Scholar] [CrossRef]
  10. García-Fiñana, M.; Hughes, D.M.; Cheyne, C.P.; Burnside, G.; Stockbridge, M.; Fowler, T.A.; Buchan, I. Performance of the Innova SARS-CoV-2 antigen rapid lateral flow test in the Liverpool asymptomatic testing pilot: Population based cohort study. BMJ 2021, 374, n1637. [Google Scholar] [CrossRef]
  11. Allan-Blitz, L.T.; Klausner, J.D. A real-world comparison of SARS-CoV-2 rapid antigen testing versus PCR testing in Florida. J. Clin. Microbiol. 2021, 59, e01107-21. [Google Scholar] [CrossRef]
  12. Mungomklang, A.; Trichaisri, N.; Jirachewee, J.; Sukprasert, J.; Tulalamba, W.; Viprakasit, V. Limited sensitivity of a rapid SARS-CoV-2 antigen detection assay for surveillance of asymptomatic individuals in Thailand. Am. J. Trop. Med. Hyg. 2021, 105, 1505–1509. [Google Scholar] [CrossRef]
  13. Jakobsen, K.K.; Jensen, J.S.; Todsen, T.; Kirkby, N.; Lippert, F.; Vangsted, A.M.; von Buchwald, C. Accuracy of anterior nasal swab rapid antigen tests compared with RT-PCR for massive SARS-CoV-2 screening in low prevalence population. Acta Path. Microbiol. Immunol. Scand. 2021, 130, 95–100. [Google Scholar] [CrossRef] [PubMed]
  14. Prince-Guerra, J.L.; Almendares, O.; Nolen, L.D.; Gunn, J.K.; Dale, A.P.; Buono, S.A.; Bower, W.A. Evaluation of Abbott BinaxNOW rapid antigen test for SARS-CoV-2 infection at two community-based testing sites—Pima County, Arizona, November 3–17, 2020. MMWR 2021, 70, 100–105. [Google Scholar] [CrossRef] [PubMed]
  15. Almendares, O.; Prince-Guerra, J.L.; Nolen, L.D.; Gunn, J.K.; Dale, A.P.; Buono, S.A. Performance characteristics of the Abbott BinaxNOW SARS-CoV-2 antigen test in comparison with real-time RT-PCR and viral culture in community testing sites during November 2020. J. Clin. Microbiol. 2022, 60, e01742-21. [Google Scholar] [CrossRef] [PubMed]
  16. Stohr, J.J.J.M.; Zwart, V.F.; Goderski, G.; Meijer, A.; Nagel-Imming, C.R.S.; Kluytmans-van den Bergh, M.F.Q.; Pas, S.D.; van den Oetelaar, F.; Hellwich, M.; Gan, K.H.; et al. Self-testing for the detection of SARS-CoV-2 infection with rapid antigen tests for people with suspected COVID-19 in the community. Clin. Microbiol. Infect. 2021, 28, 695–700. [Google Scholar] [CrossRef]
  17. Frediani, J.K.; Levy, J.M.; Rao, A.; Bassit, L.; Figueroa, J.; Vos, M.B. Multidisciplinary assessment of the Abbott BinaxNOW SARS-CoV-2 point-of-care antigen test in the context of emerging viral variants and self-administration. Nat. Sci. Rep. 2021, 11, 14604. [Google Scholar] [CrossRef]
  18. Pollock, N.R.; Tran, K.; Jacobs, J.R.; Cranston, A.E.; Smith, S.; O’Kane, C.Y.; Smole, S.C. Performance and operational evaluation of the Access Bio CareStart Rapid Antigen Test in a high-throughput drive-through community testing site in Massachusetts. Open Forum Infect. Dis. 2021, 8, ofab243. [Google Scholar] [CrossRef]
  19. Pilarowski, G.; Lebel, P.; Sunshine, S.; Liu, J.; Crawford, E.; Marquez, C.; DeRisi, J. Performance characteristics of a rapid severe acute respiratory syndrome coronavirus 2 antigen detection assay at a public plaza testing site in San Francisco. J. Infect. Dis. 2021, 223, 1139–1144. [Google Scholar] [CrossRef]
  20. Boum, Y.; Fai, K.N.; Nikolay, B.; Mboringong, A.B.; Bebell, L.M.; Ndifon, M.; Mballa, G.A.E. Performance and operational feasibility of antigen and antibody rapid diagnostic tests for COVID-19 in symptomatic and asymptomatic patients in Cameroon: A clinical, prospective, diagnostic accuracy study. Lancet Infect. Dis. 2021, 21, 1089–1096. [Google Scholar] [CrossRef]
  21. Dřevínek, P.; Hurych, J.; Kepka, Z.; Briksi, A.; Kulich, M.; Zajac, M.; Hubacek, P. The sensitivity of SARS-CoV-2 antigen tests in the view of large-scale testing. Epidemiol. Mikrobiol. Imunol. 2021, 70, 156–160. [Google Scholar]
  22. Pollreis, R.E.; Roscoe, C.; Phinney, R.J.; Malesha, S.S.; Burns, M.C.; Ceniseros, A.; Ball, C.L. Evaluation of the Abbott BinaxNOW COVID-19 test Ag Card for rapid detection of SARS-CoV-2 infection by a local public health district with a rural population. PLoS ONE 2021, 16, e0260862. [Google Scholar] [CrossRef]
23. Jakobsen, K.K.; Jensen, J.S.; Todsen, T.; Lippert, F.; Martel, C.J.M.; Klokker, M.; von Buchwald, C. Detection of SARS-CoV-2 infection by rapid antigen test in comparison with RT-PCR in a public setting. medRxiv 2021. [Google Scholar] [CrossRef]
  24. Nalumansi, A.; Lutalo, T.; Kayiwa, J.; Watera, C.; Balinandi, S.; Kiconco, J.; Kaleebu, P. Field evaluation of the performance of a SARS-CoV-2 antigen rapid diagnostic test in Uganda using nasopharyngeal samples. Int. J. Infect. Dis. 2021, 104, 282–286. [Google Scholar] [CrossRef] [PubMed]
25. Fernandez-Montero, A.; Argemi, J.; Rodríguez, J.A.; Ariño, A.H.; Moreno-Galarraga, L. Validation of a rapid antigen test as a screening tool for SARS-CoV-2 infection in asymptomatic populations. Sensitivity, specificity and predictive values. EClinicalMedicine 2021, 37, 100954. [Google Scholar]
26. Gremmels, H.; Winkel, B.M.F.; Schuurman, R.; Rosingh, A.; Rigter, N.A.; Rodriguez, O.; Hofstra, L.M. Real-life validation of the Panbio™ COVID-19 antigen rapid test (Abbott) in community-dwelling subjects with symptoms of potential SARS-CoV-2 infection. EClinicalMedicine 2021, 31, 100677. [Google Scholar] [CrossRef] [PubMed]
27. Jian, M.J.; Perng, C.L.; Chung, H.Y.; Chang, C.K.; Lin, J.C.; Yeh, K.M.; Shang, H.S. Clinical assessment of SARS-CoV-2 antigen rapid detection compared with RT-PCR assay for emerging variants at a high-throughput community testing site in Taiwan. Int. J. Infect. Dis. 2021, 115, 30–34. [Google Scholar] [CrossRef]
  28. Shah, M.M.; Salvatore, P.P.; Ford, L.; Kamitani, E.; Whaley, M.J.; Mitchell, K.; Tate, J.E. Performance of repeat BinaxNOW severe acute respiratory syndrome Coronavirus 2 antigen testing in a community setting, Wisconsin, November 2020-December 2020. Clin. Infect. Dis. 2021, 73 (Suppl. 1), S54–S57. [Google Scholar] [CrossRef]
  29. Pollock, N.R.; Jacobs, J.R.; Tran, K.; Cranston, A.E.; Smith, S.; O’Kane, C.Y.; Smole, S.C. Performance and implementation evaluation of the Abbott BinaxNOW Rapid Antigen Test in a high-throughput drive-through community testing site in Massachusetts. J. Clin. Microbiol. 2021, 59, e00083-21. [Google Scholar] [CrossRef]
  30. Ford, L.; Whaley, M.J.; Shah, M.M.; Salvatore, P.P.; Segaloff, H.E.; Delaney, A.; Kirking, H.L. Antigen test performance among children and adults at a SARS-CoV-2 community testing site. J. Pediatr. Infect. Dis. Soc. 2021, 10, 1052–1061. [Google Scholar] [CrossRef]
  31. Nsoga, M.T.N.; Kronig, I.; Perez Rodriguez, F.J.; Sattonnet-Roche, P.; Da Silva, D.; Helbling, J.; Eckerle, I. Diagnostic accuracy of Panbio rapid antigen tests on oropharyngeal swabs for detection of SARS-CoV-2. PLoS ONE 2021, 16, e0253321. [Google Scholar] [CrossRef]
  32. Siddiqui, Z.K.; Chaudhary, M.; Robinson, M.L.; McCall, A.B.; Peralta, R.; Esteve, R.; Ficke, J.R. Implementation and accuracy of BinaxNOW Rapid Antigen COVID-19 Test in asymptomatic and symptomatic populations in a high-volume self-referred testing site. Microbiol. Spectr. 2021, 9, e0100821. [Google Scholar] [CrossRef]
  33. Drain, P.; Sulaiman, R.; Hoppers, M.; Lindner, N.M.; Lawson, V.; Ellis, J.E. Performance of the LumiraDx microfluidic immunofluorescence point-of-care SARS-CoV-2 antigen test in asymptomatic adults and children. Am. J. Clin. Pathol. 2021, 157, 602–607. [Google Scholar] [CrossRef] [PubMed]
  34. Shrestha, B.; Neupane, A.K.; Pant, S.; Shrestha, A.; Bastola, A.; Rajbhandari, B.; Singh, A. Sensitivity and specificity of lateral flow antigen test kits for COVID-19 in asymptomatic population of quarantine centre of Province 3. Kathmandu Univ. Med. J. 2020, 18, 36–39. [Google Scholar] [CrossRef]
  35. Chiu, R.Y.T.; Kojima, N.; Mosley, G.L.; Cheng, K.K.; Pereira, D.Y.; Brobeck, M.; Klausner, J.D. Evaluation of the INDICAID COVID-19 rapid antigen test in symptomatic populations and asymptomatic community testing. Microbiol. Spectr. 2021, 9, e0034221. [Google Scholar] [CrossRef]
  36. Stokes, W.; Berenger, B.M.; Portnoy, D.; Scott, B.; Szelewicki, J.; Singh, T.; Tipples, G. Clinical performance of the Abbott Panbio with nasopharyngeal, throat, and saliva swabs among symptomatic individuals with COVID-19. Eur. J. Clin. Microbiol. Infect. Dis. 2021, 40, 1721–1726. [Google Scholar] [CrossRef] [PubMed]
  37. Agarwal, J.; Das, A.; Pandey, P.; Sen, M.; Garg, J. “David vs. Goliath”: A simple antigen detection test with potential to change diagnostic strategy for SARS-CoV-2. J. Infect. Dev. Ctries. 2021, 15, 904–909. [Google Scholar] [CrossRef] [PubMed]
38. Kernéis, S.; Elie, C.; Fourgeaud, J.; Choupeaux, L.; Delarue, S.M.; Alby, M.L.; Le Goff, J. Accuracy of antigen and nucleic acid amplification testing on saliva and nasopharyngeal samples for detection of SARS-CoV-2 in ambulatory care: A multicentric cohort study. medRxiv 2021. [Google Scholar] [CrossRef]
  39. Van der Moeren, N.; Zwart, V.F.; Lodder, E.B.; Van den Bijllaardt, W.; Van Esch, H.R.; Stohr, J.J.; Kluytmans, J.A. Evaluation of the test accuracy of a SARS-CoV-2 rapid antigen test in symptomatic community dwelling individuals in the Netherlands. PLoS ONE 2021, 16, e0250886. [Google Scholar] [CrossRef]
  40. Drain, P.K.; Ampajwala, M.; Chappel, C.; Gvozden, A.B.; Hoppers, M.; Wang, M.; Montano, M. A rapid, high-sensitivity SARS-CoV-2 nucleocapsid immunoassay to aid diagnosis of acute COVID-19 at the point of care: A clinical performance study. Infect. Dis. Ther. 2021, 10, 753–761. [Google Scholar] [CrossRef] [PubMed]
  41. Van der Moeren, N.; Zwart, V.F.; Goderski, G.; Rijkers, G.T.; van den Bijllaardt, W.; Veenemans, J.; Stohr, J.J.J.M. Performance of the Diasorin SARS-CoV-2 antigen detection assay on the LIAISON XL. J. Clin. Virol. 2021, 141, 104909. [Google Scholar] [CrossRef]
  42. Gili, A.; Paggi, R.; Russo, C.; Cenci, E.; Pietrella, D.; Graziani, A.; Mencacci, A. Evaluation of Lumipulse® G SARS-CoV-2 antigen assay automated test for detecting SARS-CoV-2 nucleocapsid protein (NP) in nasopharyngeal swabs for community and population screening. Int. J. Infect. Dis. 2021, 105, 391–396. [Google Scholar] [CrossRef] [PubMed]
  43. Bianco, G.; Boattini, M.; Barbui, A.M.; Scozzari, G.; Riccardini, F.; Coggiola, M.; Costa, C. Evaluation of an antigen-based test for hospital point-of-care diagnosis of SARS-CoV-2 infection. J. Clin. Virol. 2021, 139, 104838. [Google Scholar] [CrossRef] [PubMed]
  44. Burdino, E.; Cerutti, F.; Panero, F.; Allice, T.; Gregori, G.; Milia, M.G.; Ghisetti, V. SARS-CoV-2 microfluidic antigen point-of-care testing in Emergency Room patients during COVID-19 pandemic. J. Virol. Methods 2022, 299, 114337. [Google Scholar] [CrossRef] [PubMed]
  45. Caruana, G.; Croxatto, A.; Kampouri, E.; Kritikos, A.; Opota, O.; Foerster, M.; Greub, G. Implementing SARS-CoV-2 rapid antigen testing in the emergency ward of a Swiss university hospital: The INCREASE study. Microorganisms 2021, 9, 798. [Google Scholar] [CrossRef] [PubMed]
  46. Caruana, G.; Lebrun, L.L.; Aebischer, O.; Opota, O.; Urbano, L.; de Rham, M.; Greub, G. The dark side of SARS-CoV-2 rapid antigen testing: Screening asymptomatic patients. New Microbes New Infect. 2021, 42, 100899. [Google Scholar] [CrossRef] [PubMed]
  47. Cento, V.; Renica, S.; Matarazzo, E.; Antonello, M.; Colagrossi, L.; Di Ruscio, F.; S. Co. Va Study Group. Frontline screening for SARS-CoV-2 infection at emergency department admission by third generation rapid antigen test: Can we spare RT-qPCR? Viruses 2021, 13, 818. [Google Scholar] [CrossRef]
  48. Cerutti, F.; Burdino, E.; Milia, M.G.; Allice, T.; Gregori, G.; Bruzzone, B.; Ghisetti, V. Urgent need of rapid tests for SARS CoV-2 antigen detection: Evaluation of the SD-Biosensor antigen test for SARS-CoV-2. J. Clin. Virol. 2020, 132, 104654. [Google Scholar] [CrossRef]
  49. Ciotti, M.; Maurici, M.; Pieri, M.; Andreoni, M.; Bernardini, S. Performance of a rapid antigen test in the diagnosis of SARS-CoV-2 infection. J. Med. Virol. 2021, 93, 2988–2991. [Google Scholar] [CrossRef]
  50. Holzner, C.; Pabst, D.; Anastasiou, O.E.; Dittmer, U.; Manegold, R.K.; Risse, J.; Falk, M. SARS-CoV-2 rapid antigen test: Fast-safe or dangerous? An analysis in the emergency department of an university hospital. J. Med. Virol. 2021, 93, 5323–5327. [Google Scholar] [CrossRef]
  51. Koeleman, J.G.M.; Brand, H.; de Man, S.J.; Ong, D.S.Y. Clinical evaluation of rapid point-of-care antigen tests for diagnosis of SARS-CoV-2 infection. Eur. J. Clin. Microbiol. Infect. Dis. 2021, 40, 1975–1981. [Google Scholar] [CrossRef]
  52. Leixner, G.; Voill-Glaniger, A.; Bonner, E.; Kreil, A.; Zadnikar, R.; Viveiros, A. Evaluation of the AMMP SARS-CoV-2 rapid antigen test in a hospital setting. Int. J. Infect. Dis. 2021, 108, 353–356. [Google Scholar] [CrossRef]
  53. Leli, C.; Di Matteo, L.; Gotta, F.; Cornaglia, E.; Vay, D.; Megna, I.; Rocchetti, A. Performance of a SARS-CoV-2 antigen rapid immunoassay in patients admitted to the emergency department. Int. J. Infect. Dis. 2021, 110, 135–140. [Google Scholar] [CrossRef] [PubMed]
54. Linares, M.; Perez-Tanoira, R.; Carrero, A.; Romanyk, J.; Pérez-García, F.; Gómez-Herruz, P.; Cuadros, J. Panbio antigen rapid test is reliable to diagnose SARS-CoV-2 infection in the first 7 days after the onset of symptoms. J. Clin. Virol. 2021, 133, 104659. [Google Scholar] [CrossRef] [PubMed]
  55. Loconsole, D.; Centrone, F.; Morcavallo, C.; Campanella, S.; Sallustio, A.; Casulli, D.; Chironna, M. The challenge of using an antigen test as a screening tool for SARS-CoV-2 infection in an emergency department: Experience of a tertiary care hospital in Southern Italy. BioMed Res. Int. 2021, 2021, 3893733. [Google Scholar] [CrossRef] [PubMed]
  56. Masiá, M.; Fernández-González, M.; Sánchez, M.; Carvajal, M.; García, J.A.; Gonzalo-Jiménez, N.; Gutiérrez, F. Nasopharyngeal Panbio COVID-19 antigen test performed at point-of-care has a high sensitivity in symptomatic and asymptomatic patients with higher risk for transmission and older age. Open Forum Infect. Dis. 2021, 8, ofab059. [Google Scholar] [CrossRef]
  57. Merrick, B.; Noronha, M.; Batra, R.; Douthwaite, S.; Nebbia, G.; Snell, L.B.; Harrison, H.L. Real-world deployment of lateral flow SARS-CoV-2 antigen detection in the emergency department to provide rapid, accurate and safe diagnosis of COVID-19. Infect. Prev. Pract. 2021, 3, 100186. [Google Scholar] [CrossRef]
  58. Möckel, M.; Corman, V.M.; Stegemann, M.S.; Hofmann, J.; Stein, A.; Jones, T.C.; Somasundaram, R. SARS-CoV-2 antigen rapid immunoassay for diagnosis of COVID-19 in the emergency department. Biomarkers 2021, 26, 213–220. [Google Scholar] [CrossRef]
  59. Oh, S.M.; Jeong, H.; Chang, E.; Choe, P.G.; Kang, C.K.; Park, W.B.; Kim, N.J. Clinical application of the standard Q COVID-19 Ag test for the detection of SARS-CoV-2 infection. J. Korean Med. Sci. 2021, 36, e101. [Google Scholar] [CrossRef]
  60. Orsi, A.; Pennati, B.M.; Bruzzone, B.; Ricucci, V.; Ferone, D.; Barbera, P.; Icardi, G. On-field evaluation of a ultra-rapid fluorescence immunoassay as a frontline test for SARS-CoV-2 diagnostic. J. Virol. Methods 2021, 295, 114201. [Google Scholar] [CrossRef]
  61. Osterman, A.; Baldauf, H.; Eletreby, M.; Wettengel, J.M.; Afridi, S.Q.; Fuchs, T.; Keppler, O.T. Evaluation of two rapid antigen tests to detect SARS-CoV-2 in a hospital setting. Med. Microbiol. Immunol. 2021, 210, 65–72. [Google Scholar] [CrossRef]
  62. Thell, R.; Kallab, V.; Weinhappel, W.; Mueckstein, W.; Heschl, L.; Heschl, M.; Szell, M. Evaluation of a novel, rapid antigen detection test for the diagnosis of SARS-CoV-2. PLoS ONE 2021, 16, e0259527. [Google Scholar] [CrossRef]
  63. Turcato, G.; Zaboli, A.; Pfeifer, N.; Ciccariello, L.; Sibilio, S.; Tezza, G.; Ausserhofer, D. Clinical application of a rapid antigen test for the detection of SARS-CoV-2 infection in symptomatic and asymptomatic patients evaluated in the emergency department: A preliminary report. J. Infect. 2021, 82, e14–e16. [Google Scholar] [CrossRef] [PubMed]
  64. Turcato, G.; Zaboli, A.; Pfeifer, N.; Sibilio, S.; Tezza, G.; Bonora, A.; Ausserhofer, D. Rapid antigen test to identify COVID-19 infected patients with and without symptoms admitted to the Emergency Department. Am. J. Emerg. Med. 2022, 51, 92–97. [Google Scholar] [CrossRef] [PubMed]
  65. Carbonell-Sahuquillo, S.; Lázaro-Carreño, M.I.; Camacho, J.; Barrés-Fernández, A.; Albert, E.; Torres, I.; Navarro, D. Evaluation of a rapid antigen detection test (Panbio™ COVID-19 Ag Rapid Test Device) as a point-of-care diagnostic tool for COVID-19 in a pediatric emergency department. J. Med. Virol. 2021, 93, 6803–6807. [Google Scholar] [CrossRef] [PubMed]
66. Denina, M.; Giannone, V.; Curtoni, A.; Zanotto, E.; Garazzino, S.; Urbino, A.F.; Bondone, C. Can we trust in SARS-CoV-2 rapid antigen testing? Preliminary results from a paediatric cohort in the emergency department. Ir. J. Med. Sci. 2021; Epub ahead of print. [Google Scholar] [CrossRef] [PubMed]
  67. Gonzalez-Donapetry, P.; Garcia-Clemente, P.; Bloise, I.; García-Sánchez, C.; Sánchez Castellano, M.Á.; Romero, M.P.; García-Rodriguez, J. Think of the children. Rapid antigen test in pediatric population. Pediatr. Infect. Dis. J. 2021, 40, 385–388. [Google Scholar] [CrossRef]
  68. Jung, C.; Levy, C.; Varon, E.; Biscardi, S.; Batard, C.; Wollner, A.; Cohen, R. Diagnostic accuracy of SARS-CoV-2 antigen detection test in children: A real-life study. Front. Pediatr. 2021, 9, 727. [Google Scholar] [CrossRef]
  69. Lanari, M.; Biserni, G.B.; Pavoni, M.; Borgatti, E.C.; Leone, M.; Corsini, I.; Lazzarotto, T. Feasibility and effectiveness assessment of SARS-CoV-2 antigenic tests in mass screening of a pediatric population and correlation with the kinetics of viral loads. Viruses 2021, 13, 2071. [Google Scholar] [CrossRef]
  70. Quentin, O.; Sylvie, P.; Olivier, M.; Julie, G.; Charlotte, T.; Nadine, A.; Aymeric, C. Prospective evaluation of the point-of-care use of a rapid antigenic SARS-CoV-2 immunochromatographic test in a pediatric emergency department. Clin. Microbiol. Infect. 2022, 28, 734.e1–734.e6. [Google Scholar] [CrossRef]
71. Reichert, F.; Enninger, A.; Plecko, T.; Zoller, W.G.; Paul, G. Pooled SARS-CoV-2 antigen tests in asymptomatic children and their caregivers: Screening for SARS-CoV-2 in a pediatric emergency department. Am. J. Infect. Control 2021, 49, 1242–1246. [Google Scholar] [CrossRef]
  72. Villaverde, S.; Dominquez-Rodriguez, S.; Sabrido, G.; Pérez-Jorge, C.; Plata, M.; Romero, M.P.; Segura, E. Diagnostic accuracy of the Panbio severe acute respiratory syndrome coronavirus 2 antigen rapid test compared with reverse-transcriptase polymerase chain reaction testing of nasopharyngeal samples in the pediatric population. J. Pediatr. 2021, 232, 287–289. [Google Scholar] [CrossRef]
  73. Nacher, M.; Mergeay-Fabre, M.; Blanchet, D.; Benois, O.; Pozl, T.; Mesphoule, P.; Demar, M. Diagnostic accuracy and acceptability of molecular diagnosis of COVID-19 on saliva samples relative to nasopharyngeal swabs in tropical hospital and extra-hospital contexts: The COVISAL study. PLoS ONE 2021, 16, e0257169. [Google Scholar] [CrossRef] [PubMed]
  74. Nacher, M.; Mergeay-Fabre, M.; Blanchet, D.; Benoit, O.; Pozl, T.; Mesphoule, P.; Demar, M. Prospective comparison of saliva and nasopharyngeal swab sampling for mass screening for COVID-19. Front. Med. 2021, 8, 621160. [Google Scholar] [CrossRef] [PubMed]
  75. Marx, G.E.; Biggerstaff, B.J.; Nawrocki, C.C.; Totten, S.E.; Travanty, E.A.; Burakoff, A.W. Detection of Severe Acute Respiratory Syndrome Coronavirus 2 on self-collected saliva or anterior nasal specimens compared with healthcare personnel-collected nasopharyngeal specimens. Clin. Infect. Dis. 2021, 73 (Suppl. 1), S65–S73. [Google Scholar] [CrossRef] [PubMed]
  76. Bosworth, A.; Whalley, C.; Poxon, C.; Wanigasooriya, K.; Pickles, O.; Aldera, E.L.; Beggs, A.D. Rapid implementation and validation of a cold-chain free SARS-CoV-2 diagnostic testing workflow to support surge capacity. J. Clin. Virol. 2020, 128, 104469. [Google Scholar] [CrossRef] [PubMed]
  77. Alkhateeb, K.J.; Cahill, M.N.; Ross, A.S.; Arnold, F.W.; Snyder, J.W. The reliability of saliva for the detection of SARS-CoV-2 in symptomatic and asymptomatic patients: Insights on the diagnostic performance and utility for COVID-19 screening. Diagn. Microbiol. Infect. Dis. 2021, 101, 115450. [Google Scholar] [CrossRef]
78. LeGoff, J.; Kerneis, S.; Elie, C.; Mercier-Delarue, S.; Gastli, N.; Choupeaux, L.; Delaugerre, C. Evaluation of a saliva molecular point of care for the detection of SARS-CoV-2 in ambulatory care. Sci. Rep. 2021, 11, 21126. [Google Scholar] [CrossRef]
  79. Igloi, Z.; Velzing, J.; Huisman, R.; Geurtsvankessel, C.; Comvalius, A.; IJpelaar, J.; Molenkamp, R. Clinical evaluation of the SD Biosensor SARS-CoV-2 saliva antigen rapid test with symptomatic and asymptomatic, non-hospitalized patients. PLoS ONE 2021, 16, e0260894. [Google Scholar] [CrossRef]
  80. Lopes, J.I.F.; da Costa Silva, C.A.; Cunha, R.G.; Soares, A.M.; Lopes, M.E.D.; da Conceição Neto, O.C.; Amado Leon, L.A. A large cohort study of SARS-CoV-2 detection in saliva: A non-invasive alternative diagnostic test for patients with bleeding disorders. Viruses 2021, 13, 2361. [Google Scholar] [CrossRef]
  81. Fernández-González, M.; Agulló, V.; de la Rica, A.; Infante, A.; Carvajal, M.; García, J.A.; Gutiérrez, F. Performance of saliva specimens for the molecular detection of SARS-CoV-2 in the community setting: Does sample collection method matter? J. Clin. Microbiol. 2021, 59, e03033-20. [Google Scholar] [CrossRef]
  82. Nagura-Ikeda, M.; Imai, K.; Tabata, S.; Miyoshi, K.; Murahara, N.; Mizuno, T.; Kato, Y. Clinical evaluation of self-collected saliva by quantitative reverse transcription-PCR (RT-qPCR), direct RT-qPCR, reverse transcription-Loop-Mediated Isothermal Amplification, and a rapid antigen test to diagnose COVID-19. J. Clin. Microbiol. 2020, 58, e01438-20. [Google Scholar] [CrossRef]
  83. Babady, N.E.; McMillen, T.; Jani, K.; Viale, A.; Robilotti, E.V.; Aslam, A.; Kamboj, M. Performance of severe acute respiratory syndrome coronavirus 2 real-time RT-PCR tests on oral rinses and saliva samples. J. Mol. Diagn. 2021, 23, 3–9. [Google Scholar] [CrossRef]
84. Chau, V.V.; Lam, V.T.; Dung, N.T.; Yen, L.M.; Minh, N.N.Q.; Hung, L.M.; Van Tan, L. The natural history and transmission potential of asymptomatic severe acute respiratory syndrome coronavirus 2 infection. Clin. Infect. Dis. 2020, 71, 2679–2687. [Google Scholar] [CrossRef] [PubMed]
  85. Herrera, L.A.; Hidalgo-Miranda, A.; Reynoso-Noverón, N.; Meneses-García, A.A.; Mendoza-Vargas, A.; Reyes-Grajeda, J.P.; Escobar-Escamilla, N. Saliva is a reliable and accessible source for the detection of SARS-CoV-2. Int. J. Infect. Dis. 2021, 105, 83–90. [Google Scholar] [CrossRef] [PubMed]
  86. Yokota, I.; Shane, P.Y.; Okada, K.; Unoki, Y.; Yang, Y.; Inao, T.; Teshima, T. Mass screening of asymptomatic persons for severe acute respiratory syndrome coronavirus 2 using saliva. Clin. Infect. Dis. 2021, 73, e559–e565. [Google Scholar] [CrossRef] [PubMed]
  87. Kernéis, S.; Elie, C.; Fourgeaud, J.; Choupeaux, L.; Delarue, S.M.; Alby, M.L.; LeGoff, J. Accuracy of saliva and nasopharyngeal sampling for detection of SARS-CoV-2 in community screening: A multicentric cohort study. Eur. J. Clin. Microbiol. Infect. Dis. 2021, 40, 2379–2388. [Google Scholar] [CrossRef] [PubMed]
  88. Vogels, C.B.F.; Watkins, A.E.; Harden, C.A.; Brackney, D.E.; Shafer, J.; Wang, J.; Grubaugh, N.D. SalivaDirect: A simplified and flexible platform to enhance SARS-CoV-2 testing capacity. Med 2021, 2, 263–289. [Google Scholar] [CrossRef]
  89. Balaska, S.; Pilalas, D.; Takardaki, A.; Koutra, P.; Parasidou, E.; Gkeka, I.; Skoura, L. Evaluation of the Advanta Dx SARS-CoV-2 RT-PCR Assay, a high-throughput extraction-free diagnostic test for the detection of SARS-CoV-2 in saliva: A diagnostic accuracy study. Diagnostics 2021, 11, 1766. [Google Scholar] [CrossRef]
  90. Rao, M.; Rashid, F.A.; Sabri, F.; Jamil, N.N.; Seradja, V.; Abdullah, N.A.; Ahmad, N. COVID-19 screening test by using random oropharyngeal saliva. J. Med. Virol. 2021, 93, 2461–2466. [Google Scholar] [CrossRef]
  91. Kost, G.J. Designing and interpreting COVID-19 diagnostics: Mathematics, visual logistics, and low prevalence. Arch. Pathol. Lab. Med. 2020, 145, 291–307. [Google Scholar] [CrossRef]
  92. Kost, G.J. The impact of increasing prevalence, false omissions, and diagnostic uncertainty on Coronavirus Disease 2019 (COVID-19) test performance. Arch. Pathol. Lab. Med. 2021, 145, 797–813. [Google Scholar] [CrossRef]
  93. Kost, G.J. Diagnostic strategies for endemic Coronavirus disease 2019 (COVID-19): Rapid antigen tests, repeat testing, and prevalence boundaries. Arch. Pathol. Lab. Med. 2022, 146, 16–25. [Google Scholar] [CrossRef] [PubMed]
  94. Cable, R.; Coleman, C.; Glatt, T.; Mhlanga, L.; Nyano, C.; Welte, A. Estimates of prevalence of anti-SARS-CoV-2 antibodies among blood donors in eight provinces of South Africa in November 2021. Res. Sq. 2022. [Google Scholar] [CrossRef]
  95. Koller, G.; Morrell, A.P.; Galão, R.P.; Pickering, S.; MacMahon, E.; Johnson, J.; Addison, O. More than the eye can see: Shedding new light on SARS-CoV-2 lateral flow device-based immunoassays. ACS Appl. Mater. Interfaces 2021, 13, 25694–25700. [Google Scholar] [CrossRef] [PubMed]
  96. Homza, M.; Zelena, H.; Janosek, J.; Tomaskova, H.; Jezo, E.; Kloudova, A.; Prymula, R. Covid-19 antigen testing: Better than we know? A test accuracy study. Infect. Dis. 2021, 53, 661–668. [Google Scholar] [CrossRef] [PubMed]
97. Wang, H.; Hogan, C.A.; Verghese, M.; Solis, D.; Sibai, M.; Huang, C.; Pinsky, B.A. Ultra-sensitive severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) antigen detection for the diagnosis of Coronavirus Disease 2019 (COVID-19) in upper respiratory samples. Clin. Infect. Dis. 2021, 73, 2326–2328. [Google Scholar] [CrossRef]
  98. Bohn, M.K.; Lippi, G.; Horvath, A.R.; Erasmus, R.; Grimmler, M.; Gramegna, M.; Adeli, K. IFCC interim guidelines on rapid point-of-care antigen testing for SARS-CoV-2 detection in asymptomatic and symptomatic individuals. Clin. Chem. Lab. Med. 2021, 59, 1507–1515. [Google Scholar] [CrossRef]
  99. Wang, H.; Paulson, K.R.; Pease, S.A.; Watson, S.; Comfort, H.; Zheng, P.; Murray, C.J. Estimating excess mortality due to the COVID-19 pandemic: A systematic analysis of COVID-19-related mortality, 2020–2021. Lancet 2022, 399, 1468. [Google Scholar] [CrossRef]
  100. Kost, G.J.; Ferguson, W.; Truong, A.-T.; Hoe, J.; Prom, D.; Banpavichit, A.; Kongpila, S. Molecular detection and point-of-care testing in Ebola virus disease and other threats: A new global public health framework to stop outbreaks. Expert Rev. Mol. Diagn. 2015, 15, 1245–1259. [Google Scholar] [CrossRef]
  101. Kost, G.J. Point-of-care testing for Ebola and other highly infectious threats: Principles, practice, and strategies for stopping outbreaks. In A Practical Guide to Global Point-Of-Care Testing; Shephard, M., Ed.; CSIRO (Commonwealth Scientific and Industrial Research Organization): Canberra, Australia, 2016; Chapter 24; pp. 291–305. [Google Scholar]
  102. Kost, G.J. Molecular and point-of-care diagnostics for Ebola and new threats: National POCT policy and guidelines will stop epidemics. Expert Rev. Mol. Diagn. 2018, 18, 657–673. [Google Scholar] [CrossRef]
103. Zadran, A.; Kost, G.J. Enabling rapid intervention and isolation for patients with highly infectious diseases. J. Hosp. Manag. Health Policy 2019, 3, 1–7. [Google Scholar] [CrossRef]
  104. Kost, G.J. Geospatial science and point-of-care testing: Creating solutions for population access, emergencies, outbreaks, and disasters. Front. Public Health 2020, 7, 329. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  105. Kost, G.J. Geospatial hotspots need point-of-care strategies to stop highly infectious outbreaks: Ebola and Coronavirus. Arch. Pathol. Lab. Med. 2020, 144, 1166–1190. [Google Scholar] [CrossRef]
106. CORONADx Consortium 2020–2023. Rapid, Portable, Affordable Testing for COVID-19. European Union Horizon 2020 Grant No. 101003562. Available online: https://coronadx-project.eu/project/ and https://coronadx-project.eu/wp-content/uploads/2021/05/CORONADXLeaflet.pdf (accessed on 26 April 2022).
  107. Ogao, E. Hospitals in Kenya have been injecting people with water instead of vaccine—The hospitals also charged people for the privilege. Vice World News. 8 June 2021. Available online: https://www.vice.com/en/article/qj8j4b/hospitals-in-kenya-have-been-injecting-people-with-water-instead-of-the-vaccine (accessed on 26 April 2022).
108. Whyte, J.; Walensky, R.P. Coronavirus in context—Interview with CDC Director. Medscape Critical Care. 8 June 2021. Available online: https://www.medscape.com/viewarticle/952645?src=mkm_covid_update_210608_MSCPEDIT&uac=372400HG&impID=3429136&faf=1 (accessed on 26 April 2022).
  109. Kost, G.J.; Zadran, A.; Zadran, L.; Ventura, I. Point-of-care testing curriculum and accreditation for public health—Enabling preparedness, response, and higher standards of care at points of need. Front. Public Health 2019, 8, 385. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  110. Kost, G.J.; Zadran, A. Schools of public health should be accredited for, and teach the principles and practice of point-of-care testing. J. Appl. Lab. Med. 2019, 4, 278–283. [Google Scholar] [CrossRef] [PubMed]
111. Kost, G.J. Public health education should include point-of-care testing: Lessons learned from the COVID-19 pandemic. J. Int. Fed. Clin. Chem. Lab. Med. 2021, 32, 311–327. [Google Scholar]
112. Eng, M.; Zadran, A.; Kost, G.J. COVID-19 risk avoidance and management in limited-resource countries—Point-of-care strategies for Cambodia. Omnia Digital Health. June–July 2021, pp. 112–115. Available online: https://secure.viewer.zmags.com/publication/44e87ae6?page=115&nocache=1625391874149&fbclid=IwAR2gIObRf6ThX15vlBpqMRYSC83hiZysJ8uMxEae7Em16bhOrnfEJTgNWuo#/44e87ae6/115 (accessed on 26 April 2022).
Figure 1. Performance of rapid antigen testing in community settings. Predictive value geometric mean-squared (PV GM2) curves reflect performance (left vertical axis) over the entire range of prevalence (horizontal axis). RFO, the false omission rate (right axis), indicates the frequency of missed diagnoses. PV GM2 and RFO curves are plotted for low, median, and high sensitivity/specificity documented in published clinical studies conducted in community settings (see Supplementary Materials). Performance Tiers 1 and 2 are plotted in green. The black horizontal line represents a risk level of 5% where 1 in 20 diagnoses will be missed. A prevalence boundary (PB) is the point at which the risk of a missed diagnosis exceeds the threshold of 1 in 20, that is, the point at which risk of contagion rises significantly and quickly.
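The prevalence-boundary geometry described in this caption follows from the standard predictive-value formulas. The sketch below is a reconstruction of that arithmetic, not the paper's code, and the sensitivity/specificity inputs are illustrative assumptions rather than the study medians (which are in the Supplementary Materials):

```python
# Reconstruction of the Figure 1 visual-logistics math (a sketch; the
# Se/Sp values below are illustrative assumptions, not the study medians).

def ppv(se, sp, p):
    # positive predictive value at prevalence p
    return se * p / (se * p + (1 - sp) * (1 - p))

def npv(se, sp, p):
    # negative predictive value at prevalence p
    return sp * (1 - p) / (sp * (1 - p) + (1 - se) * p)

def pv_gm2(se, sp, p):
    # predictive value geometric mean-squared: PPV x NPV
    return ppv(se, sp, p) * npv(se, sp, p)

def rfo(se, sp, p):
    # false omission rate (missed-diagnosis risk) = 1 - NPV
    return 1 - npv(se, sp, p)

def prevalence_boundary(se, sp, r=0.05):
    # Solve RFO(p) = r for prevalence p:
    # (1-se)p / ((1-se)p + sp(1-p)) = r  =>  p = r*sp / (r*sp + (1-se)(1-r))
    return r * sp / (r * sp + (1 - se) * (1 - r))

se, sp = 0.70, 0.995          # assumed community RAgT sensitivity/specificity
pb = prevalence_boundary(se, sp, r=0.05)
print(f"prevalence boundary for 1-in-20 missed diagnoses: {pb:.1%}")
```

With these assumed inputs the boundary lands near 15% prevalence, the same ballpark as the median community boundary reported in the abstract; above that prevalence more than 1 in 20 negative results are false omissions.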
Figure 2. Comparison of symptomatic versus asymptomatic community rapid antigen testing performance based on evidence.
Figure 3. Real-world performance of rapid antigen testing in emergency medicine.
Figure 4. Comparison of symptomatic versus asymptomatic rapid antigen test performance in emergency medicine. The blue PV GM2 curves illustrate RAgT performance for pediatric patients presenting to emergency rooms (ERs) and emergency departments (EDs). The curves show that RAgTs are “unbalanced” in contrast to the design of the performance tiers. Please see the legend in the inset table for prevalence at points of best performance. For asymptomatic patients, performance falls off quickly as prevalence increases, which limits use in emergency medicine settings where, through self-selection by sick patients, one would expect prevalence to be higher than in the community. COVID-19 detection is more likely in symptomatic patients, in part because of higher viral loads.
Figure 5. Performance of rapid antigen testing for home self-testing based on manufacturer claims in FDA EUA authorizations. Theoretical analysis of manufacturer claims shows that repeat testing yields higher performance and prevalence boundaries. Median recursive performance achieves Tier 2. The median recursive prevalence boundary of 74.3% reflects a reasonable minimum for Omicron, BA.2, and other emerging variants and sub-variants.
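The repeat-testing gain in this caption can be sketched under a simple assumption: two independent tests, with a person called positive if either test is positive. Using the median manufacturer EUA claims from Table 1 (PPA 86.6%, NPA 99.25%), this hedged reconstruction reproduces the caption's 74.3% recursive prevalence boundary:

```python
# Sketch of the recursive (repeat) testing arithmetic behind Figure 5,
# assuming two independent tests and a "positive if either is positive" rule.

def recursive_se(se, n=2):
    # at least one of n tests detects an infected person
    return 1 - (1 - se) ** n

def recursive_sp(sp, n=2):
    # all n tests must read negative in an uninfected person
    return sp ** n

def prevalence_boundary(se, sp, r=0.05):
    # prevalence at which the false omission rate (1 - NPV) reaches r
    return r * sp / (r * sp + (1 - se) * (1 - r))

se, sp = 0.866, 0.9925        # median FDA EUA claims (PPA/NPA) from Table 1
se2, sp2 = recursive_se(se), recursive_sp(sp)
print(f"single test:  Se {se:.1%}, Sp {sp:.2%}, "
      f"boundary {prevalence_boundary(se, sp):.1%}")
print(f"tested twice: Se {se2:.1%}, Sp {sp2:.2%}, "
      f"boundary {prevalence_boundary(se2, sp2):.1%}")
```

Testing twice lifts sensitivity to about 98.2% at a small cost in specificity (about 98.5%), which clears the Tier 2 thresholds and pushes the 1-in-20 prevalence boundary to roughly 74.3%, consistent with the caption.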
Figure 6. Performance of molecular diagnostics for home self-testing. In the case of home molecular diagnostics, one independent clinical evaluation, shown by the “CCE” curves, achieved Tier 1 performance.
Figure 7. False omission rate topology for asymptomatic and symptomatic subjects. The purple cluster shows performance for asymptomatic subjects in emergency medicine (EMA) and community (CA) settings, while the red cluster reflects results for symptomatic subjects. Saliva testing for asymptomatic subjects (SA, purple), with specimen collection at points of need and PCR analysis performed in laboratories, did not differ substantially from rapid antigen testing. “HMDx” represents one clinical evaluation of molecular self-testing, which achieved performance between Tier 1 and Tier 2. In this case the risk of a missed diagnosis (RFO) is 10%, shown by the red horizontal line.
Table 1. Performance Metrics for Home, Community, and Emergency Medicine COVID-19 Testing.
| Clinical Space | Median [N, Range] | Median [N, Range] | Data Source |
|---|---|---|---|
| Home Self-Tests | | | Supplemental Table S1 |
| Rapid antigen tests | PPA 86.6 [12, 83.5–95.3] | NPA 99.25 [12, 97–100] | Manufacturer FDA EUA claim (not substantiated) |
| Isothermal (LAMP) molecular tests | PPA 91.7 [3, 90.9–97.4] | NPA 98.2 [3, 97.5–99.1] | Manufacturer FDA EUA claim (not substantiated) |
| Isothermal (LAMP) molecular test | Sensitivity 91.7 [1, CI NR] | Specificity 98.4 [1, CI NR] | One independent clinical evaluation, see Donato et al. [8] |
| Community RAgTs | Sensitivity | Specificity | Supplemental Table S2 |
| Overall | 69.85 [24, 30.6–97.6] | 99.5 [24, 92–100] | Performance evaluations |
| Symptomatic | 81.0 [19, 47.7–96.5] | 99.85 [16, 85–100] | Symptomatic subjects |
| Asymptomatic | 55.75 [20, 37–88] | 99.70 [16, 97.8–100] | Asymptomatic subjects |
| Automated antigen tests, overall | 62.3 [3, 43.3–100] | 99.5 [3, 94.8–99.9] | Evaluations using automated laboratory instruments (small set) |
| Automated antigen tests, symptomatic | 73 [3, 68.5–88.9] | 100 [3, 100] | Symptomatic subjects for above |
| Emergency Medicine RAgTs | Sensitivity | Specificity | Supplemental Table S3 |
| EM overall | 68.79 [20, 17.5–94.9] | 99.5 [20, 92.1–100] | ER and ED evaluations |
| Symptomatic | 77.9 [15, 43.3–95.8] | 99.5 [14, 88.2–100] | Symptomatic EM subjects |
| Asymptomatic | 48.1 [11, 28.6–92.1] | 98.85 [10, 92.3–100] | Asymptomatic EM subjects |
| Pediatric EM | 71.3 [10, 42.9–94.1] | 99.55 [10, 91.9–100] | Pediatric ER/ED patients only |
| Ped. symptomatic | 74.89 [6, 45.4–87.9] | 99.79 [6, 98.5–100] | Symptomatic EM children |
| Ped. asymptomatic | 35.09 [2, 27.27–42.9] | 99.7 [2, 99.4–100] | Asymptomatic EM children |
| Saliva Testing | Sensitivity | Specificity | Supplemental Table S4 |
| Asymptomatic, molecular diagnostics | 63.6 [19, 16.8–95] | 98.85 [14, 95–100] | Community evaluations with strictly asymptomatic subjects |
Abbreviations: CI, 95% confidence interval; ED, emergency department; EM, emergency medicine; ER, emergency room; EUA, Emergency Use Authorization; FDA, Food and Drug Administration; LAMP, reverse transcription loop-mediated isothermal amplification; NR, not reported; PPA, positive percent agreement; NPA, negative percent agreement; Ped., pediatric; POC, point of care; and RAgTs, rapid antigen tests.
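The community prevalence boundaries quoted in the abstract follow directly from the medians in Table 1, assuming the standard definition of the false omission rate, FOR = FN / (FN + TN). A minimal sketch using the community RAgT medians reproduces the 14.8% boundary (overall, 1/20 missed diagnoses) and the 36.9% boundary (symptomatic, 1/10 missed diagnoses):

```python
def false_omission_rate(se, sp, p):
    """FOR = FN / (FN + TN) at prevalence p, given sensitivity se and specificity sp."""
    fn = (1 - se) * p
    tn = sp * (1 - p)
    return fn / (fn + tn)

def prevalence_boundary(se, sp, rfo):
    """Prevalence at which FOR reaches rfo (closed-form inversion of the above)."""
    return rfo * sp / ((1 - se) * (1 - rfo) + rfo * sp)

# Community RAgT medians from Table 1
print(round(prevalence_boundary(0.6985, 0.995, 0.05) * 100, 1))   # 14.8 (overall, 1/20 missed)
print(round(prevalence_boundary(0.810, 0.9985, 0.10) * 100, 1))   # 36.9 (symptomatic, 1/10 missed)
```

Below these prevalences a negative result carries less than the stated risk of a missed diagnosis; above them, the risk exceeds it.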
Table 2. Performance Tiers with Coordinated and Integrated False Omission Rates and Prevalence Boundaries Bracketing Community Immunity from 50% to 85%.
| Tier | Performance Level | Sensitivity, % | Specificity, % | Target [Actual] Boundary at RFO 5% | at RFO 10% | at RFO 20% |
|---|---|---|---|---|---|---|
| 1 | Low | 90 | 95 | 33% (33.3) | 50% (51.4) | 70% (70.3) |
| 2 | Marginal | 95 | 97.5 | 50% (50.6) | 70% (68.4) | 85% (83.0) |
| 3 | High | 100 | ≥99 | No Boundary | No Boundary | No Boundary |
Abbreviation: RFO, false omission rate.
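The bracketed "actual" boundaries in Table 2 can be recomputed from the tier sensitivities and specificities using the same false omission rate relation, FOR = (1 − Se)p / [(1 − Se)p + Sp(1 − p)]; solving FOR = RFO for prevalence p gives a closed form. The sketch below recovers the tabulated values to within rounding (Tier 3, with 100% sensitivity, produces no false negatives and hence no boundary):

```python
def prevalence_boundary(se, sp, rfo):
    """Prevalence p where (1-se)*p / ((1-se)*p + sp*(1-p)) equals rfo."""
    return rfo * sp / ((1 - se) * (1 - rfo) + rfo * sp)

tiers = {1: (0.90, 0.95), 2: (0.95, 0.975)}   # Tier 3 (Se = 100%) has no boundary
for tier, (se, sp) in tiers.items():
    bounds = [round(prevalence_boundary(se, sp, r) * 100, 1) for r in (0.05, 0.10, 0.20)]
    print(tier, bounds)
# Tier 1 -> [33.3, 51.4, 70.4]; Tier 2 -> [50.6, 68.4, 83.0]
# (Table 2 lists 70.3 for the last Tier 1 entry; the difference is rounding of 70.37.)
```

This makes the tier design concrete: the higher the sensitivity/specificity pair, the higher the prevalence a negative result can safely tolerate at a given RFO.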
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Kost, G.J. The Coronavirus Disease 2019 Spatial Care Path: Home, Community, and Emergency Diagnostic Portals. Diagnostics 2022, 12, 1216. https://doi.org/10.3390/diagnostics12051216
