Communication

Pathological Digital Biomarkers: Validation and Application

Youngjae Song, Kyungmin Kang, Inho Kim and Tae-Jung Kim
1 College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Korea
2 Department of Hospital Pathology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 07345, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(19), 9823; https://doi.org/10.3390/app12199823
Submission received: 17 August 2022 / Revised: 22 September 2022 / Accepted: 26 September 2022 / Published: 29 September 2022
(This article belongs to the Special Issue Digital Pathology: Current Issues and Trends)

Abstract

Digital pathology offers powerful tools for biomarker discovery, analysis, and translation. Despite its advantages, the clinical adoption of digital pathology has been slow. Novel digital pathological biomarkers require both clinical and methodological validation. Four steps are required to validate a novel pathological digital biomarker for clinical use: sample collection and processing, analytical validation, clinical validation, and clinical utility. Digital biomarkers and their diagnostic, monitoring, pharmacodynamic, predictive, prognostic, and safety and risk assessment applications are discussed. Pathological digital biomarkers can be used in conjunction with other diagnostic technologies to select the most appropriate treatment for each patient, thereby reducing patient suffering and healthcare costs.

1. Introduction

Pathology-based evaluation has been utilized to diagnose diseases and determine the efficacy of drugs in various areas [1]. Traditional pathology techniques have the advantage of being inexpensive and versatile, and they can be applied to formalin-fixed, paraffin-embedded tissue samples [2]. However, due to discrepancies in laboratory procedures and subjective interpretations, conventional pathology can lead to disagreements among observers, resulting in inconsistent diagnoses and, consequently, suboptimal treatment decisions [3].
Digital image analysis in pathology enables the quick and accurate identification and quantification of specific cell types and quantifies histological features, morphological patterns, and biologically relevant areas of interest (e.g., tumor or tumor-surrounding areas, various immune cell populations, and relationships between regions) [4]. It could improve study inclusion criteria and outcomes. Image analysis can extract relevant measurements and features, and automated methods have been successfully used to quantify immunohistochemistry [5]. Image analysis can also provide a more reproducible quantification of cell or gland morphology. Deep learning is increasingly replacing traditional image analysis algorithms: by training complex computational models directly from data, it can often surpass traditional image analysis methods in tasks such as PD-L1 scoring [6], the quantification of immune infiltrates to predict testicular cancer outcomes [7], sentinel lymph node detection [8], and colorectal cancer outcome prediction, where it outperforms morphological assessment [9]. The incorporation of digital pathology into the clinical environment enables a more accurate prediction of a patient’s response to treatment and prognosis than current pathology practices [10].
Despite these advantages, digital pathology is not easy to adopt in clinical practice. Before evaluations based on digital pathology can be introduced into routine care, diagnostic and classification algorithms must be shown to be clinically useful through validation procedures analogous to those required for existing biomarkers. This article discusses how quality control is implemented to achieve consistent results in current digital pathology, and how new digital pathology algorithms have been shown to have clinical usefulness.

2. Workflow of Digital Pathology

Most pathology laboratories have been using digital technologies for a long time. The laboratory information management system is used at every step of a specimen’s journey through the lab, from accessioning to the publication of the final report. Adding further digital technologies to this process can make it safer, better, and more efficient in the preanalytical, analytical, and postanalytical phases [11]. A laboratory system using end-to-end digital identification (barcodes or radio frequency ID tags), ideally starting at specimen retrieval, would eliminate error-prone manual identification steps [12]. Barcoding provides real-time information about laboratory specimens, assets, and processes. Barcodes can store specimen type information, which can be used to route a digital slide to a specific pathologist or to automatically add instructions for special stains or additional levels [13], and such real-world data make biomarker development easier.
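To make the routing idea concrete, the following is a minimal Python sketch of barcode-driven worklist assignment; the record fields, routing table, and function names (SlideRecord, route_slide) are hypothetical illustrations, not the API of any specific laboratory information system.

```python
# Minimal sketch of barcode-driven slide routing in a LIMS-style workflow.
# All field names and the routing table are hypothetical illustrations.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SlideRecord:
    barcode: str                      # e.g., "S22-01234"
    specimen_type: str                # e.g., "breast_core_biopsy"
    requested_stains: List[str] = field(default_factory=list)

# Hypothetical routing table: specimen type -> subspecialty worklist
ROUTING = {
    "breast_core_biopsy": "breast_pathology",
    "prostate_biopsy": "uropathology",
    "lymph_node": "hematopathology",
}

def route_slide(record: SlideRecord) -> dict:
    """Assign a digitized slide to a worklist and flag any non-H&E stains."""
    worklist = ROUTING.get(record.specimen_type, "general_pathology")
    special = [s for s in record.requested_stains if s != "H&E"]
    return {"barcode": record.barcode, "worklist": worklist, "special_stains": special}

print(route_slide(SlideRecord("S22-01234", "breast_core_biopsy", ["H&E", "ER"])))
```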

3. Validation of Pathological Digital Biomarker

Tumor biomarkers are used for the diagnosis and management of tumors in patients. Because so many different types of tumor biomarkers are now available, choosing the right one is crucial. This choice directly affects the patient’s treatment outcomes; hence, scientifically validated biomarker testing should be performed. Despite the enormous number of publications on tumor “biomarkers” in recent years, the lack of clinically usable biomarkers appears to be largely due to the absence of a well-defined validation method for bringing a newly found “biomarker” into the clinic [14].
Preanalytic, analytic, and clinical validation and a demonstration of clinical utility are the four important processes in moving a novel pathological digital biomarker from the laboratory into daily clinical use [15]. The validation process of a pathological digital biomarker is illustrated in Figure 1. The sample collection and processing step evaluates preanalytical factors that may alter biomarker measurement, and this step is essential for all biomarkers used clinically, not just cancer biomarkers. Analytical validity, clinical validity, and clinical utility are three crucial concepts proposed for biomarker research [16]. Analytic validity refers to the ability of a biomarker test to accurately and reliably quantify the biomarker in patient samples. Clinical validity refers to the ability of a tumor biomarker test to divide patients into two or more distinct groups, with statistical significance, based on biological or clinical outcomes. For tumor biomarker tests to be clinically useful, they must either improve clinical outcomes or demonstrate equivalent clinical outcomes at lower cost or toxicity.

3.1. Preanalytic Validation

Whole-slide imaging (WSI) is a critical procedure for digitizing data when pathological digital biomarkers are used in diagnostics [17]. As a result, to apply a pathological digital biomarker in clinical practice, the accuracy of the WSI must be guaranteed [18]. This presupposes that glass slides have been precisely scanned and reproduced by the scanner [19]. Atallah et al. gathered data on the failure rates of scanned digital images before and after the introduction of quality control (QC) measures for routine scanning. The QC failure rate of WSIs was initially 20%. Three months after glass slides and WSIs were subjected to strict quality control measures, the image failure rate had dropped dramatically to 2% [20].
WSI quality control consists of prescan, real-time, and postscan QC processes [20]. Before scanning, glass slides should be checked to confirm that they are undamaged, properly stained and dried, clean and free of ink markings, that the coverslips are correctly positioned, and that there are no air bubbles [21]. During scanning, the scanner checks for barcode detection failures, tissue detection failures, macro focus image failures, and image quality errors [20]. After scanning, image thumbnails are reviewed to ensure that all of the tissue on the glass slide has been digitized [22].
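As an illustration of a postscan image-quality check, the sketch below flags out-of-focus tiles using the variance of the Laplacian, a common blur heuristic; the threshold, the 5% rescan rule, and the flag_for_rescan call are illustrative assumptions that would need tuning per scanner and magnification.

```python
# Sketch of a simple postscan QC focus check. The variance of the Laplacian
# is a common blur heuristic; the threshold below is illustrative only.
import cv2
import numpy as np

def tile_is_in_focus(tile_bgr: np.ndarray, threshold: float = 100.0) -> bool:
    """Return True if the tile's Laplacian variance exceeds the blur threshold."""
    gray = cv2.cvtColor(tile_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= threshold

def fraction_out_of_focus(tiles) -> float:
    """Fraction of sampled WSI tiles flagged as blurred."""
    flags = [not tile_is_in_focus(t) for t in tiles]
    return sum(flags) / max(len(flags), 1)

# Usage sketch: sample tiles from the scanned slide (e.g., with OpenSlide),
# then flag the slide for rescanning if too many tiles are out of focus:
# if fraction_out_of_focus(sampled_tiles) > 0.05:
#     flag_for_rescan(slide_id)   # hypothetical LIMS call
```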
The College of American Pathologists has published a guideline for validating whole-slide imaging systems for diagnostic purposes in pathology, comprising three recommendations and nine statements of good practice. According to the guideline, the validation process should include at least 60 cases for each application or use case (e.g., hematoxylin–eosin-stained sections of fixed tissue, frozen sections, hematology) that reflect the spectrum and complexity of specimen types and diagnoses likely to be encountered in routine practice [23]. Current histology QC requirements allow for wide heterogeneity in preanalytical and analytical artifacts across and within laboratories [24,25]. Artifacts vary in type and severity: some affect the glass slide (pen markings, dirt, bubbles), while others affect the tissue (folds, thickness, stain intensity) or the scanning process (for example, differences in focus or the gridding effect of the WSI). Whereas pathologists learn to interpret using their usual staining methodology and to read around artifacts on a glass slide, tissue section, or digital image, computers must be made to adapt to such heterogeneity [26]. Computer-aided algorithms must be generalizable across datasets for digital pathology measurement, categorization, and prognostication [27]. Variability in WSI training sets may be an advantage for generalizability; however, an algorithm’s success is tied to analytical control and dataset homogeneity [28].
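The following sketch shows how intraobserver concordance between glass-slide and WSI diagnoses might be tabulated for a validation study of the kind the guideline describes; the case identifiers and diagnoses are placeholders, and the 60-case check simply mirrors the minimum case number quoted above.

```python
# Sketch of tabulating intraobserver concordance for a WSI validation study.
# Case IDs and diagnoses are placeholders.
def concordance_rate(glass: dict, digital: dict) -> float:
    """Fraction of paired cases where the WSI diagnosis matches the glass diagnosis."""
    shared = sorted(set(glass) & set(digital))
    if len(shared) < 60:
        print(f"Warning: only {len(shared)} paired cases; the guideline suggests at least 60.")
    agreements = sum(glass[c] == digital[c] for c in shared)
    return agreements / len(shared)

glass_dx = {f"case{i:03d}": "benign" for i in range(60)}
digital_dx = dict(glass_dx, case005="atypical")   # one discordant reread
print(f"Intraobserver concordance: {concordance_rate(glass_dx, digital_dx):.1%}")
```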

3.2. Analytical Validation

A new biomarker assay, regardless of format, must undergo analytical (technical) validation [29]. Analytic validity refers to the ability of a biomarker assay to accurately and reliably quantify the biomarker in patient samples. The initial development and validation of an assay is frequently done in a research laboratory (academic or industrial). However, such laboratories handle limited sample numbers, and the consequences of their results are less critical than in a clinical context [30]. To ensure that the assay is adequately validated, clinical laboratories should conduct additional validation using large datasets. This parallel validation by a research laboratory and a clinical laboratory can provide evidence of an assay’s practicability [29]. Such double validation is required because pathological digital biomarkers demand precision across large numbers of samples and must therefore rely on automated analytical methods; for automated testing, validation by both the research laboratory and the clinical laboratory in which the test is employed is required [4].
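A minimal sketch of such a cross-laboratory agreement check is shown below, assuming a continuous readout (here, a synthetic cell-density measurement) obtained on the same slides by a research laboratory and a clinical laboratory; Pearson correlation and Bland–Altman-style bias and limits of agreement are standard choices, but the data and any acceptance criteria are illustrative only.

```python
# Sketch of a cross-laboratory agreement check for an automated digital assay.
# The measurements are synthetic stand-ins for a readout such as cells/mm^2.
import numpy as np

rng = np.random.default_rng(0)
research_lab = rng.uniform(50, 500, size=40)            # synthetic reference measurements
clinical_lab = research_lab + rng.normal(0, 15, 40)     # same slides, re-measured

r = np.corrcoef(research_lab, clinical_lab)[0, 1]       # Pearson correlation
diff = clinical_lab - research_lab
bias = diff.mean()                                      # Bland-Altman bias
loa = 1.96 * diff.std(ddof=1)                           # limits of agreement
print(f"r = {r:.3f}, bias = {bias:.1f}, limits of agreement = +/-{loa:.1f}")
```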

3.3. Clinical Validation

Pathological digital biomarkers must be clinically validated after analytic validation. A biomarker should be able to divide patients into two or more groups. Such groups can include those with or without disease, those expected to have a good or poor outcome [31], and those at higher or lower risk of disease recurrence [32]. Central pathology review is a major benefit of digital pathology in clinical validation. Digital pathology allows simultaneous case review and quick access to international experts. Digital imaging eliminates the need to physically transfer slides and tissue blocks, preventing damage or loss [33]. Digital pathology also provides new information from histological samples. Digital image analysis can quickly and reproducibly quantify immune cell infiltrates [34,35]. Pathologists can mine novel digital morphometric signatures and link them to clinical outcomes [36]. However, manually assessing specimens to determine their clinical utility is an arduous process for pathologists. Methods based on artificial intelligence can provide accurate quantitative assessments of digital images at a level that may exceed that of a human observer [37]. Examples of the clinical validation of pathological digital biomarkers are given below.
Sobottka et al. developed a diagnostic algorithm using digital pathology to quantitatively evaluate tumor-infiltrating CD8+ T cell spatial densities, one of the most important predictive indicators in immuno-oncology [31]. The algorithm translated the spatial densities of tumor-infiltrating CD8+ T cells into the clinically relevant immunological diagnostic categories “inflamed”, “excluded”, and “desert” based on intratumoral and stromal CD8+ T cell densities in the tumor center compartment. Tumor classification in the desert category achieved a precision of 100 percent, with a few excluded tumors (sensitivity 83 percent, specificity 100 percent) allocated to the inflamed category (sensitivity 100 percent, specificity 62.5 percent).
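A minimal sketch of this style of rule-based phenotyping, together with the sensitivity and specificity calculation used to report such results, is given below; the density cutoffs and helper functions are illustrative assumptions and do not reproduce the published algorithm or its thresholds.

```python
# Hedged sketch of rule-based immune phenotyping from CD8+ T cell densities
# in the tumor center compartment. Cutoffs are illustrative placeholders.
def immune_phenotype(intratumoral_cd8: float, stromal_cd8: float,
                     desert_cutoff: float = 50.0, inflamed_cutoff: float = 250.0) -> str:
    """Map CD8+ densities (cells/mm^2) to 'desert', 'excluded', or 'inflamed'."""
    if intratumoral_cd8 < desert_cutoff and stromal_cd8 < desert_cutoff:
        return "desert"
    if intratumoral_cd8 >= inflamed_cutoff:
        return "inflamed"
    return "excluded"   # CD8+ cells largely confined to the stroma

def sensitivity_specificity(truth, predicted, positive_class):
    """Per-class sensitivity and specificity for categorical predictions."""
    tp = sum(t == positive_class and p == positive_class for t, p in zip(truth, predicted))
    fn = sum(t == positive_class and p != positive_class for t, p in zip(truth, predicted))
    tn = sum(t != positive_class and p != positive_class for t, p in zip(truth, predicted))
    fp = sum(t != positive_class and p == positive_class for t, p in zip(truth, predicted))
    return tp / (tp + fn), tn / (tn + fp)

print(immune_phenotype(300, 400), immune_phenotype(10, 200), immune_phenotype(5, 5))
```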
Surgically resected hepatocellular carcinoma (HCC) has a high recurrence rate, and recurrence is difficult to predict. Based on digital pathology images and machine learning using a support vector machine (SVM), Saito et al. predicted the early recurrence of HCC following resection [32]. For the prediction, three integrated SVM models were used: one based on the region of interest (ROI) of the HCC area, one based on the ROI of the non-HCC area, and one based on nuclear characteristics. The prediction accuracy was 89.9%. The potential clinical benefits of disease-specific digital biomarkers include more rapid and accurate disease diagnosis and prediction and a potential reduction in the size and duration of clinical trials. Continuous efforts and clinical trials (summarized in Table 1) are currently underway. Because the clinical validation of pathological digital biomarkers is only possible within digital pathology systems, slide scanners and image analysis algorithms intended for medical use and classified as medical devices, with or without the assistance of artificial intelligence, have been utilized [38]. Regulatory requirements for clinical performance studies of in vitro diagnostics (IVDs) and for the use of IVDs such as digital pathology in clinical trials of medicinal products are debated and evolving [39].
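For illustration, the sketch below trains a single SVM classifier on a synthetic feature matrix standing in for image-derived ROI and nuclear features; the published method integrates three such SVMs, so this is only a simplified, assumed setup using scikit-learn rather than the authors’ pipeline.

```python
# Simplified sketch of an SVM recurrence classifier on image-derived features.
# The feature matrix and labels are synthetic; the real method combines three SVMs.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(120, 30))                                          # 120 patients x 30 image features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 120) > 0).astype(int)   # early recurrence label

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"Cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```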

3.4. Clinical Utility

Pathological digital biomarkers must be shown to be therapeutically beneficial after analytical and clinical validity has been established. Biomarkers must either improve clinical outcomes or show that they can achieve the same results at a lower cost or with less toxicity [40]. The clinical utility of biomarkers can be demonstrated in two ways: by directly demonstrating the utility of novel biomarkers or by demonstrating the correlation between existing and new biomarkers [41]. Among published studies, the following are examples of establishing utility in clinical practice employing digital pathology technology.
Several studies have utilized pathological digital biomarkers to predict tumor response to immune checkpoint inhibitors in advanced non-small-cell lung cancer (NSCLC). When the programmed death ligand-1 (PD-L1) tumor proportion score (TPS) was 50% or greater, previous trials demonstrated that pembrolizumab achieved a higher tumor response rate (TRR) and better survival in advanced NSCLC than chemotherapy [42]. When the PD-L1 TPS was 1–49%, however, the effects of pembrolizumab and chemotherapy were comparable [43], indicating the need for a novel biomarker to predict tumor response in these patients. Park et al. used an artificial-intelligence-powered spatial analysis of tumor-infiltrating lymphocytes (TILs) as a complementary biomarker for immune checkpoint inhibitor (ICI) therapy in advanced NSCLC to achieve this goal [44]. Patients with advanced NSCLC who had received ICI monotherapy were studied retrospectively. Compared with patients with immune-excluded or immune-desert phenotypes, patients with the inflamed immune phenotype had greater local immune cytolytic activity, a higher TRR, and longer progression-free survival.
There is disagreement over the regulation and inspection of clinical trials that use these technologies as medical devices. Although some AI algorithms in clinical practice have received regulatory clearance, constantly evolving AI applications pose additional difficulties [45]. To the best of our knowledge, there are no regulations governing the application of image analysis or digital pathology in clinical trials. The development of the REMARK recommendations for biomarker studies was prompted by the low yield of clinically actionable biomarkers from a large number of research studies involving significant resource expenditure [46]. Additionally, there is no consensus on the best way for pathologists to integrate computational pathology systems into their daily operations [47]. To address these problems, explainable artificial intelligence (xAI) methods can be used to build computational pathology systems as a powerful and effective alternative to opaque AI models [48]. Developing similar recommendations for digital pathology applications would reduce resource waste in biomarker studies. The inefficient development and adoption of clinically relevant biomarkers has been linked to a lack of reproducibility caused by inadequate experimental reporting [49]. To avoid repeating these failures of research practice, it is important to report how digital pathology applications were validated and to consider adopting existing biomarker guidelines. Digital pathology parameters meet the definition of a biomarker as an indicator of normal biological processes, pathogenic processes, or responses to an exposure or intervention, including therapeutic interventions [50].

4. Clinical Application of a Pathological Digital Biomarker

Pathological digital biomarkers have been subdivided by application. One biomarker may meet multiple criteria for different uses, but each definition needs evidence. While definitions may overlap, they have distinguishing features that specify their uses. Figure 2 provides a summary of clinical applications of pathological digital biomarkers.

4.1. Diagnostic Biomarker

A diagnostic biomarker detects or confirms the presence of a disease or condition of interest, or identifies an individual with a disease subtype [51]. A previous study built a supervised AI model based on ten cellular features defined by an experienced breast pathologist to distinguish between malignant and benign breast tumors using images of fine-needle aspiration biopsy specimens [52]. A tissue microarray study of male breast cancers revealed the prognostic value of nuclear shape and texture features [53]. In pathology images of prostate tissue, a unique gland angularity feature correlated with the degree of abnormality of the glandular architecture was identified; this feature appeared more frequently in aggressive than in indolent prostate cancers, alongside characteristics related to abnormalities in nuclear shape, orientation, and architecture in tumor and tumor-associated benign areas [54]. Immune phenotyping can also be performed with pathological digital biomarkers. In metastatic melanoma, immune-desert, excluded, and inflamed phenotypes could be identified, and the immunological diagnoses coincided with a pathologist’s assessment [31]. Furthermore, in advanced melanoma, the response to immunotherapy was predicted using a machine learning system [55]. When the PD-L1 TPS was less than 50%, a pathological digital biomarker could help determine whether NSCLC patients should receive chemotherapy or an ICI [44]. As a result, pathological digital biomarkers can be used as a complement in determining the optimal treatment for a patient. Immune phenotyping also aids in assessing the disease’s prognosis. A quantitative examination of inflammatory infiltrates in kidney transplant biopsies can predict interstitial fibrosis and tubular atrophy, a considerable risk factor for delayed graft function [56].
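As a worked illustration of a diagnostic classifier built on cell-level features, the sketch below fits a logistic regression to the scikit-learn Wisconsin breast cancer dataset, whose features are derived from fine-needle aspiration images; this dataset stands in for the ten expert-defined features described above rather than reproducing that study.

```python
# Sketch of a supervised diagnostic model on FNA-derived nuclear features.
# The scikit-learn breast cancer dataset is used as a stand-in for the
# ten expert-defined cellular features described in the text.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)            # labels: 0 = malignant, 1 = benign
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), target_names=["malignant", "benign"]))
```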

4.2. Monitoring Biomarker

A monitoring biomarker is one that can be measured serially to assess the status of a disease or medical condition, to provide evidence of exposure to a medical product or environmental agent, or to detect the effect of a medical product or biological agent [57]. Monitoring is a broad concept, so this category overlaps with other biomarker categories [57]. A quantitative digital pathology study revealed the potential role of recurrence probability in patient follow-up decisions [58]. Digital pathology biomarkers have been used to predict disease recurrence in prostate cancer [58], hepatocellular carcinoma [32], colorectal cancer [59], and nonalcoholic fatty liver disease [60]. Such biomarkers can inform decisions about the disease monitoring schedule.

4.3. Pharmacodynamic Biomarker

A pharmacodynamic biomarker is a biomarker whose level changes when a person is exposed to a drug or an environmental agent. This kind of biomarker is very helpful both in clinical practice and in the early stages of drug development [57]. In recent years, AI-based drug discovery and development techniques have gained popularity. A large proportion of patients receiving specific therapeutic modalities, such as cytotoxic drugs or immune checkpoint inhibitors, do not respond to treatment, prompting interest in merging AI with digital pathology to identify those most likely to benefit [61]. A quantitative approach can help the researcher determine whether there is a cutoff or expression level at which the medicine reaches the target population, leading to pathway inhibition and tumor cell death. This cutoff assessment could serve as a patient selection criterion for a clinical study [62].
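The sketch below illustrates one common way such a cutoff might be assessed: choosing the expression threshold that maximizes Youden’s J statistic on an ROC curve separating responders from non-responders; the data are synthetic placeholders, not results from any study cited here.

```python
# Sketch of ROC-based cutoff assessment for a pharmacodynamic/expression readout.
# Expression values and response labels are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
expression = np.concatenate([rng.normal(30, 10, 80),    # non-responders, % positive cells
                             rng.normal(55, 12, 40)])   # responders
responder = np.concatenate([np.zeros(80, dtype=int), np.ones(40, dtype=int)])

fpr, tpr, thresholds = roc_curve(responder, expression)
best = int(np.argmax(tpr - fpr))                        # Youden's J = sensitivity + specificity - 1
print(f"AUC = {roc_auc_score(responder, expression):.2f}, "
      f"candidate cutoff = {thresholds[best]:.1f}% positive cells")
```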

4.4. Predictive Biomarker

A predictive biomarker is characterized by the finding that the presence of, or a change in, the biomarker predicts that an individual or group is more likely to experience a favorable or unfavorable outcome from exposure to a medicinal product or environmental agent [57,63]. Pathological digital biomarkers can be used in clinical practice as an aid for predicting the patient’s optimal treatment. Digital pathology can be used to stratify different cancers. On a computerized platform, the grading of noninvasive urothelial carcinomas was found to be noninferior to that of uropathologists [64]. The hormone receptor status of breast cancer can be determined via digital pathology [65]. By stratifying these cancers, digital pathology can also predict prognosis. A novel follow-up strategy for HCC patients after resection was suggested using a machine-learning-based HCC recurrence prediction method [32]. In locally advanced rectal cancer (LARC), a pathological digital biomarker has been shown to predict the response to chemoradiation therapy, creating a new criterion for sparing patients undue suffering and expense [66]. Image analysis can help measure a threshold-based metric of target expression to assess the effectiveness of a targeted therapy and confirm that the target has been blocked [62]. Target expression is crucial in antibody–drug conjugate projects to deliver the therapeutic payload to the appropriate target-expressing (tumor) cells.
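One simple, widely used threshold-based readout of target expression is the immunohistochemistry H-score (the intensity-weighted percentage of positive tumor cells, ranging from 0 to 300); the sketch below computes it and applies an eligibility cutoff that is purely an illustrative assumption, not a validated threshold from the studies cited above.

```python
# Sketch of a threshold-based target-expression readout using the IHC H-score.
# The eligibility cutoff is an illustrative placeholder, not a validated value.
def h_score(pct_1plus: float, pct_2plus: float, pct_3plus: float) -> float:
    """H-score = 1*%weak + 2*%moderate + 3*%strong staining (range 0-300)."""
    return 1 * pct_1plus + 2 * pct_2plus + 3 * pct_3plus

def eligible_for_targeted_therapy(score: float, cutoff: float = 150.0) -> bool:
    """Hypothetical eligibility rule based on an H-score cutoff."""
    return score >= cutoff

score = h_score(pct_1plus=20, pct_2plus=30, pct_3plus=10)
print(score, eligible_for_targeted_therapy(score))
```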

4.5. Prognostic Biomarker

A prognostic biomarker provides information on the likely cancer outcome (e.g., disease recurrence, disease progression, and death) regardless of the treatment received [63]. One proposed method models and analyzes the spatial distribution of lymphocytes among tumor cells on triple-negative breast cancer WSIs. Three classes of lymphocytes were identified based on their proximity to cancer cells. Tissue microarray gene expression profiling showed that the ratio of lymphocytes to cancer cells inside a tumor was an independent predictor of survival and was linked to the expression of cytotoxic T-lymphocyte-associated protein 4 [67]. Stromal characteristics also affect cancer prognosis. To identify tumor-infiltrating lymphocytes and stroma-specific characteristics, automated algorithms are being developed to delineate epithelial and stromal tissue regions [36].
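To show how such a spatial metric might be tested as a prognostic biomarker, the sketch below fits a Cox proportional hazards model to synthetic survival data, with a hypothetical intratumoral lymphocyte-to-cancer-cell ratio as the covariate; it assumes the lifelines package and does not use data from the cited study.

```python
# Sketch of a Cox model testing a spatial metric as a prognostic biomarker.
# Survival data are synthetic; assumes the lifelines package is installed.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 200
ratio = rng.uniform(0, 1, n)                              # lymphocyte/cancer-cell ratio
time = rng.exponential(scale=36 * (0.5 + ratio))          # months; higher ratio -> longer survival
event = (rng.uniform(size=n) < 0.7).astype(int)           # ~70% observed events

df = pd.DataFrame({"ratio": ratio, "time": time, "event": event})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])
```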

4.6. Safety and Risk Biomarker

Clinical uses of digital pathology include pathological diagnosis, assessment of immunohistochemistry, tumor board review, frozen section diagnosis, receiving and requesting second opinions, remote working, and insourcing or outsourcing diagnostic work [68]. All of these image-analysis-based approaches can be utilized to quantify biomarkers for therapeutic or care-schedule decisions, including safety and risk assessment. Image analysis platforms can classify risk in colon cancer patients [69]. However, such studies are mostly available only as single-site, stand-alone tests, which makes them less useful to the pathology community as a whole. Studies examining the use of full digital pathology workflows have shown that they make operations more efficient and useful [70]. Manually assessing hundreds of specimens for safety investigations is tedious, time-consuming, and expensive. Quantitative image analysis in digital pathology helps scientists evaluate such specimens more efficiently. Traditionally, safety studies evaluate changes based on histochemical stains (such as H&E and Masson’s trichrome). Several studies have shown the feasibility of automated scoring in murine studies of liver fibrosis [71,72], the quantification of hepatic lipid droplets and steatosis [73], cardiac ischemic injury [74], lung fibrosis [75,76], kidney injury [77], and pancreatic toxicity [78].
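As an illustration of the simplest kind of automated safety readout, the sketch below estimates the collagen-positive (blue) area fraction on a Masson’s trichrome image by naive RGB thresholding; real pipelines typically use stain deconvolution and calibrated thresholds, so the values here are assumptions for demonstration only.

```python
# Sketch of a naive collagen area-fraction readout from a Masson's trichrome image.
# Thresholds are illustrative; production pipelines usually use stain deconvolution.
import numpy as np

def collagen_area_fraction(rgb: np.ndarray, background_threshold: int = 220) -> float:
    """Fraction of tissue pixels where the blue channel clearly dominates red and green."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    tissue = (r < background_threshold) | (g < background_threshold) | (b < background_threshold)
    collagen = tissue & (b > r + 20) & (b > g + 20)
    return collagen.sum() / max(int(tissue.sum()), 1)

# Usage sketch: load an RGB tile (e.g., with tifffile or OpenSlide) and compare
# area fractions across treatment groups in a safety study.
```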

5. Discussion

Digital pathology is being actively developed for numerous reasons, including the increasing volume and complexity of histopathological workloads, pressure on turnaround times, staff shortages, the need to expand capacity, the drive toward networks and service mergers, and the drive toward healthcare digitalization [68]. Digital pathology has a number of possible clinical applications, including the primary diagnosis of pathology samples, immunohistochemistry evaluation, multidisciplinary team (MDT)/tumor board review, and frozen section diagnosis [68]. Digital pathology can also improve diagnostic and predictive clinical decision-making in cancer treatment, since AI and machine learning can reveal clinical and pathological characteristics across many clinical samples [10].
Despite these advantages, digital pathology is not easily adopted in clinical practice. Pathologists are working on a number of projects to better understand AI-based predictions. However, so many variables influence these judgments that humans are unable to fully comprehend them [79]. Because of this lack of understanding, AI-based predictions are prone to bias and misdiagnosis, making AI challenging to apply in clinical practice. As a result, a novel pathological digital biomarker must be rigorously validated. However, because digital pathology is still being developed, validation approaches are currently being debated. This review article discusses validation methods and examples for novel pathological digital biomarkers, based on the established validation pathway that biomarkers must follow for clinical application. Another new technological advance in digital pathology is 3D pathology [80]. Three-dimensional pathology, aided by computational analysis and deep learning methods such as generative adversarial networks, can provide multiscale features such as nuclei [81,82], collagen fibers [83], and glands [84]. Three-dimensional pathology with deep-learning-assisted analysis of prostate gland biopsies has opened the possibility of nondestructive, superior prognostic stratification of patient disease, demonstrating the value of computational 3D pathology for clinical decision support, especially for low- to intermediate-risk prostate cancer [84].
In terms of reimbursement, ensuring the clinical utility of pathological digital biomarkers is critical. Difficulty of use, evidence of effectiveness, and financial return on investment are all factors that contribute to AI deployment failures in health care [85]. As previously noted, clinical usefulness is a critical consideration when determining whether pathological digital biomarkers may be employed in clinical practice. It will be difficult to employ pathological digital biomarkers clinically until the technology’s clinical value is demonstrated through clinical studies, because clinical usefulness is intimately linked to the question of reimbursement. In digital pathology, there is currently no specific procedural code for applying AI techniques for diagnostic or prognostic purposes [17]. Approval from the US Food and Drug Administration (FDA) will be required for digital biomarkers to demonstrate their clinical utility and be used in clinics. A medical image management and processing system is defined by the FDA as a device that provides one or more capabilities related to the review and digital processing of medical images for the purpose of interpretation by a trained practitioner for disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in medical image interpretation and analysis. Image segmentation, multimodal image registration, and 3D visualization are examples of advanced image manipulation functions, and complex quantitative functions may include semiautomated or time-series measurements [86]. Indeed, the FDA has allowed the use of AI in ophthalmology and radiography, which suggests that pathologists using such instruments will have more personal availability, a higher likelihood of reimbursement, and a lower self-validation burden on the laboratory [85].
Recently, suitable regulation has been called for in order to deploy AI-based algorithms safely and effectively [79]. The FDA is working on developing appropriate regulations for AI-assisted digital pathology and has issued the “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device Action Plan” in this regard [87]. Although the details have not yet been finalized, the FDA is focusing on the safety and efficacy of digital pathology.
One of the primary difficulties that must be overcome before pathological digital biomarkers can be used for medical decision-making in routine clinical practice is the reimbursement of costs [85]. Traditional pathologic biomarkers are examined using reagents. In contrast, a pathological digital biomarker employs WSI technology, so no further wet-lab testing is required to use the marker. As a result, medical institutions may be able to lower the actual cost of testing. However, adopting the technology requires an up-front investment by the institution, which again raises the issue of reimbursement. In this regard, if the cost is estimated from the reduction in the actual cost of traditional biomarker tests achieved by a digital biomarker, the new biomarker test could be adopted more widely.

6. Conclusions

We discussed the current limitations but tremendous potential of digital technologies in pathology, as well as how digital pathology could provide immediate solutions to the search for new therapeutic and prognostic biomarkers. The future direction of combining digital pathology and artificial intelligence will lead to new therapeutic stratification and advanced diagnostics, allowing researchers and clinical teams to share and use computational algorithms to assess and contribute valuable insights, eventually leading to a more informed and detailed multimodal diagnosis.

Author Contributions

Conceptualization, T.-J.K. and Y.S.; writing—original draft preparation, Y.S.; writing—review and editing, K.K. and I.K.; supervision, T.-J.K.; funding acquisition, T.-J.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) through a grant funded by the Korean government (MSIT) (grant number 2017R1E1A1A01078335 and 2022R1A2C1092956) and the Institute of Clinical Medicine Research in the Yeouido St. Mary’s Hospital.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of Yeouido St. Mary’s Hospital (SC18RNSI0005, approved on 22 January 2018).

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jubb, A.M.; Koeppen, H.; Reis-Filho, J.S. Pathology in drug discovery and development. J. Pathol. 2014, 232, 99–102. [Google Scholar] [CrossRef] [PubMed]
  2. Glaser, A.K.; Reder, N.P.; Chen, Y.; McCarty, E.F.; Yin, C.; Wei, L.; Wang, Y.; True, L.D.; Liu, J.T. Light-sheet microscopy for slide-free non-destructive pathology of large clinical specimens. Nat. Biomed. Eng. 2017, 1, 1–10. [Google Scholar] [CrossRef] [PubMed]
  3. Raab, S.S. Variability of practice in anatomic pathology and its effect on patient outcomes. In Seminars in Diagnostic Pathology; Elsevier: Amsterdam, The Netherlands, 2005; Volume 22, pp. 177–185. [Google Scholar]
  4. Robertson, S.; Azizpour, H.; Smith, K.; Hartman, J. Digital image analysis in breast pathology—From image processing techniques to artificial intelligence. Transl. Res. 2018, 194, 19–35. [Google Scholar] [CrossRef] [PubMed]
  5. Koelzer, V.H.; Sirinukunwattana, K.; Rittscher, J.; Mertz, K.D. Precision immunoprofiling by image analysis and artificial intelligence. Virchows Arch. 2019, 474, 511–522. [Google Scholar] [CrossRef] [PubMed]
  6. Kapil, A.; Meier, A.; Zuraw, A.; Steele, K.E.; Rebelatto, M.C.; Schmidt, G.; Brieu, N. Deep semi supervised generative learning for automated tumor proportion scoring on NSCLC tissue needle biopsies. Sci. Rep. 2018, 8, 17343. [Google Scholar] [CrossRef] [PubMed]
  7. Linder, N.; Taylor, J.C.; Colling, R.; Pell, R.; Alveyn, E.; Joseph, J.; Protheroe, A.; Lundin, M.; Lundin, J.; Verrill, C. Deep learning for detecting tumour-infiltrating lymphocytes in testicular germ cell tumours. J. Clin. Pathol. 2019, 72, 157–164. [Google Scholar] [CrossRef]
  8. Bejnordi, B.E.; Veta, M.; Van Diest, P.J.; Van Ginneken, B.; Karssemeijer, N.; Litjens, G.; Van Der Laak, J.A.; Hermsen, M.; Manson, Q.F.; Balkenhol, M.; et al. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA 2017, 318, 2199–2210. [Google Scholar] [CrossRef]
  9. Bychkov, D.; Linder, N.; Turkki, R.; Nordling, S.; Kovanen, P.E.; Verrill, C.; Walliander, M.; Lundin, M.; Haglund, C.; Lundin, J. Deep learning based tissue analysis predicts outcome in colorectal cancer. Sci. Rep. 2018, 8, 3395. [Google Scholar] [CrossRef]
  10. Baxi, V.; Edwards, R.; Montalto, M.; Saha, S. Digital pathology and artificial intelligence in translational medicine and clinical practice. Mod. Pathol. 2022, 35, 23–32. [Google Scholar] [CrossRef]
  11. Fraggetta, F.; Caputo, A.; Guglielmino, R.; Pellegrino, M.G.; Runza, G.; L’Imperio, V. A Survival Guide for the Rapid Transition to a Fully Digital Workflow: The “Caltagirone Example”. Diagnostics 2021, 11, 1916. [Google Scholar] [CrossRef]
  12. Zarbo, R.J.; D’Angelo, R. The Henry Ford Production System: Effective reduction of process defects and waste in surgical pathology. Am. J. Clin. Pathol. 2007, 128, 1015–1022. [Google Scholar] [CrossRef] [PubMed]
  13. Nakhleh, R.E. Role of informatics in patient safety and quality assurance. Surg. Pathol. Clin. 2015, 8, 301–307. [Google Scholar] [CrossRef] [PubMed]
  14. Hayes, D.F.; Allen, J.; Compton, C.; Gustavsen, G.; Leonard, D.G.; McCormack, R.; Newcomer, L.; Pothier, K.; Ransohoff, D.; Schilsky, R.L.; et al. Breaking a vicious cycle. Sci. Transl. Med. 2013, 5, 196cm6. [Google Scholar] [CrossRef] [PubMed]
  15. Madabhushi, A.; Lee, G. Image analysis and machine learning in digital pathology: Challenges and opportunities. Med Image Anal. 2016, 33, 170–175. [Google Scholar] [CrossRef] [PubMed]
  16. Teutsch, S.M.; Bradley, L.A.; Palomaki, G.E.; Haddow, J.E.; Piper, M.; Calonge, N.; Dotson, W.D.; Douglas, M.P.; Berg, A.O. The evaluation of genomic applications in practice and prevention (EGAPP) initiative: Methods of the EGAPP working group. Genet. Med. 2009, 11, 3–14. [Google Scholar] [CrossRef] [PubMed]
  17. Bera, K.; Schalper, K.A.; Rimm, D.L.; Velcheti, V.; Madabhushi, A. Artificial intelligence in digital pathology—New tools for diagnosis and precision oncology. Nat. Rev. Clin. Oncol. 2019, 16, 703–715. [Google Scholar] [CrossRef]
  18. Jahn, S.W.; Plass, M.; Moinfar, F. Digital pathology: Advantages, limitations and emerging perspectives. J. Clin. Med. 2020, 9, 3697. [Google Scholar] [CrossRef]
  19. Pantanowitz, L.; Valenstein, P.N.; Evans, A.J.; Kaplan, K.J.; Pfeifer, J.D.; Wilbur, D.C.; Collins, L.C.; Colgan, T.J. Review of the current state of whole slide imaging in pathology. J. Pathol. Inform. 2011, 2, 36. [Google Scholar] [CrossRef]
  20. Atallah, N.M.; Toss, M.S.; Verrill, C.; Salto-Tellez, M.; Snead, D.; Rakha, E.A. Potential quality pitfalls of digitalized whole slide image of breast pathology in routine practice. Mod. Pathol. 2022, 35, 903–910. [Google Scholar] [CrossRef]
  21. Hanna, M.G.; Ardon, O.; Reuter, V.E.; Sirintrapun, S.J.; England, C.; Klimstra, D.S.; Hameed, M.R. Integrating digital pathology into clinical practice. Mod. Pathol. 2022, 35, 152–164. [Google Scholar] [CrossRef]
  22. Fraggetta, F.; Garozzo, S.; Zannoni, G.F.; Pantanowitz, L.; Rossi, E.D. Routine digital pathology workflow: The Catania experience. J. Pathol. Inform. 2017, 8, 51. [Google Scholar] [CrossRef] [PubMed]
  23. Evans, A.J.; Brown, R.W.; Bui, M.M.; Chlipala, E.A.; Lacchetti, C.; Milner, D.A.; Pantanowitz, L.; Parwani, A.V.; Reid, K.; Riben, M.W.; et al. Validating whole slide imaging systems for diagnostic purposes in pathology: Guideline update from the College of American Pathologists in collaboration with the American Society for Clinical Pathology and the Association for Pathology Informatics. Arch. Pathol. Lab. Med. 2022, 146, 440–450. [Google Scholar] [CrossRef] [PubMed]
  24. Masucci, G.V.; Cesano, A.; Hawtin, R.; Janetzki, S.; Zhang, J.; Kirsch, I.; Dobbin, K.K.; Alvarez, J.; Robbins, P.B.; Selvan, S.R.; et al. Validation of biomarkers to predict response to immunotherapy in cancer: Volume I—Pre-analytical and analytical validation. J. Immunother. Cancer 2016, 4, 76. [Google Scholar] [CrossRef] [PubMed]
  25. Koyuncu, C.F.; Lu, C.; Bera, K.; Zhang, Z.; Xu, J.; Toro, P.; Corredor, G.; Chute, D.; Fu, P.; Thorstad, W.L.; et al. Computerized tumor multinucleation index (MuNI) is prognostic in p16+ oropharyngeal carcinoma. J. Clin. Investig. 2021, 131, e145488. [Google Scholar] [CrossRef]
  26. Barisoni, L.; Lafata, K.J.; Hewitt, S.M.; Madabhushi, A.; Balis, U.G. Digital pathology and computational image analysis in nephropathology. Nat. Rev. Nephrol. 2020, 16, 669–685. [Google Scholar] [CrossRef] [PubMed]
  27. Niazi, M.K.K.; Parwani, A.V.; Gurcan, M.N. Digital pathology and artificial intelligence. Lancet Oncol. 2019, 20, e253–e261. [Google Scholar] [CrossRef]
  28. Yagi, Y. Color standardization and optimization in whole slide imaging. In Diagnostic Pathology; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6, pp. 1–12. [Google Scholar]
  29. Duffy, M.J.; Sturgeon, C.M.; Sölétormos, G.; Barak, V.; Molina, R.; Hayes, D.F.; Diamandis, E.P.; Bossuyt, P.M. Validation of new cancer biomarkers: A position statement from the European group on tumor markers. Clin. Chem. 2015, 61, 809–820. [Google Scholar] [CrossRef]
  30. Sturgeon, C.; Hill, R.; Hortin, G.L.; Thompson, D. Taking a new biomarker into routine use–a perspective from the routine clinical biochemistry laboratory. PROTEOMICS- Appl. 2010, 4, 892–903. [Google Scholar] [CrossRef]
  31. Sobottka, B.; Nowak, M.; Frei, A.L.; Haberecker, M.; Merki, S.; Levesque, M.P.; Dummer, R.; Moch, H.; Koelzer, V.H. Establishing standardized immune phenotyping of metastatic melanoma by digital pathology. Lab. Investig. 2021, 101, 1561–1570. [Google Scholar] [CrossRef]
  32. Saito, A.; Toyoda, H.; Kobayashi, M.; Koiwa, Y.; Fujii, H.; Fujita, K.; Maeda, A.; Kaneoka, Y.; Hazama, S.; Nagano, H.; et al. Prediction of early recurrence of hepatocellular carcinoma after resection using digital pathology images assessed by machine learning. Mod. Pathol. 2021, 34, 417–425. [Google Scholar] [CrossRef]
  33. Mroz, P.; Parwani, A.V.; Kulesza, P. Central pathology review for phase III clinical trials: The enabling effect of virtual microscopy. Arch. Pathol. Lab. Med. 2013, 137, 492–495. [Google Scholar] [CrossRef] [PubMed]
  34. Pagès, F.; Berger, A.; Camus, M.; Sanchez-Cabo, F.; Costes, A.; Molidor, R.; Mlecnik, B.; Kirilovsky, A.; Nilsson, M.; Damotte, D.; et al. Effector memory T cells, early metastasis, and survival in colorectal cancer. N. Engl. J. Med. 2005, 353, 2654–2666. [Google Scholar] [CrossRef] [PubMed]
  35. Pagès, F.; Kirilovsky, A.; Mlecnik, B.; Asslaber, M.; Tosolini, M.; Bindea, G.; Lagorce, C.; Wind, P.; Marliot, F.; Bruneval, P.; et al. In situ cytotoxic and memory T cells predict outcome in patients with early-stage colorectal cancer. J. Clin. Oncol. 2009, 27, 5944–5951. [Google Scholar] [CrossRef]
  36. Beck, A.H.; Sangoi, A.R.; Leung, S.; Marinelli, R.J.; Nielsen, T.O.; Van De Vijver, M.J.; West, R.B.; Van De Rijn, M.; Koller, D. Systematic analysis of breast cancer morphology uncovers stromal features associated with survival. Sci. Transl. Med. 2011, 3, 108ra113. [Google Scholar] [CrossRef] [PubMed]
  37. Sirinukunwattana, K.; Raza, S.E.A.; Tsang, Y.W.; Snead, D.R.; Cree, I.A.; Rajpoot, N.M. Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images. IEEE Trans. Med. Imaging 2016, 35, 1196–1206. [Google Scholar] [CrossRef] [PubMed]
  38. Pell, R.; Oien, K.; Robinson, M.; Pitman, H.; Rajpoot, N.; Rittscher, J.; Snead, D.; Verrill, C.; UK National Cancer Research Institute (NCRI) Cellular-Molecular Pathology (CM-Path) Quality Assurance Working Group; Driskell, O.J.; et al. The use of digital pathology and image analysis in clinical trials. J. Pathol. Clin. Res. 2019, 5, 81–90. [Google Scholar] [CrossRef] [PubMed]
  39. Lujan, G.; Quigley, J.C.; Hartman, D.; Parwani, A.; Roehmholdt, B.; Van Meter, B.; Ardon, O.; Hanna, M.G.; Kelly, D.; Sowards, C.; et al. Dissecting the business case for adoption and implementation of digital pathology: A white paper from the digital pathology association. J. Pathol. Inform. 2021, 12, 17. [Google Scholar] [CrossRef] [PubMed]
  40. Wang, B.; Canestaro, W.J.; Choudhry, N.K. Clinical evidence supporting pharmacogenomic biomarker testing provided in US Food and Drug Administration drug labels. JAMA Intern. Med. 2014, 174, 1938–1944. [Google Scholar] [CrossRef]
  41. Morgenthaler, N.G.; Struck, J.; Jochberger, S.; Dünser, M.W. Copeptin: Clinical use of a new biomarker. Trends Endocrinol. Metab. 2008, 19, 43–49. [Google Scholar] [CrossRef]
  42. Reck, M.; Rodríguez-Abreu, D.; Robinson, A.G.; Hui, R.; Csőszi, T.; Fülöp, A.; Gottfried, M.; Peled, N.; Tafreshi, A.; Cuffe, S.; et al. Pembrolizumab versus chemotherapy for PD-L1–positive non–small-cell lung cancer. N. Engl. J. Med. 2016, 375, 1823–1833. [Google Scholar] [CrossRef] [Green Version]
  43. Mok, T.S.; Wu, Y.L.; Kudaba, I.; Kowalski, D.M.; Cho, B.C.; Turna, H.Z.; Castro, G., Jr.; Srimuninnimit, V.; Laktionov, K.K.; Bondarenko, I.; et al. Pembrolizumab versus chemotherapy for previously untreated, PD-L1-expressing, locally advanced or metastatic non-small-cell lung cancer (KEYNOTE-042): A randomised, open-label, controlled, phase 3 trial. Lancet 2019, 393, 1819–1830. [Google Scholar] [CrossRef]
  44. Park, S.; Ock, C.Y.; Kim, H.; Pereira, S.; Park, S.; Ma, M.; Choi, S.; Kim, S.; Shin, S.; Aum, B.J.; et al. Artificial Intelligence–Powered Spatial Analysis of Tumor-Infiltrating Lymphocytes as Complementary Biomarker for Immune Checkpoint Inhibition in Non–Small-Cell Lung Cancer. J. Clin. Oncol. 2022, 40, 1916–1928. [Google Scholar] [CrossRef] [PubMed]
  45. Amann, J.; Blasimme, A.; Vayena, E.; Frey, D.; Madai, V.I. Explainability for artificial intelligence in healthcare: A multidisciplinary perspective. BMC Med. Inform. Decis. Mak. 2020, 20, 310. [Google Scholar] [CrossRef] [PubMed]
  46. Altman, D.G.; McShane, L.M.; Sauerbrei, W.; Taube, S.E.; Cavenagh, M.M. REMARK (REporting recommendations for tumor MARKer prognostic studies). In Guidelines for Reporting Health Research: A User’s Manual; Wiley: Hoboken, NJ, USA, 2014; pp. 241–249. [Google Scholar]
  47. Parwani, A.V. Next generation diagnostic pathology: Use of digital pathology and artificial intelligence tools to augment a pathological diagnosis. Diagn. Pathol. 2019, 14, 138. [Google Scholar] [CrossRef] [PubMed]
  48. Tosun, A.B.; Pullara, F.; Becich, M.J.; Taylor, D.; Fine, J.L.; Chennubhotla, S.C. Explainable AI (xAI) for anatomic pathology. Adv. Anat. Pathol. 2020, 27, 241–250. [Google Scholar] [CrossRef] [PubMed]
  49. Bustin, S.A.; Huggett, J.F. Reproducibility of biomedical research–the importance of editorial vigilance. Biomol. Detect. Quantif. 2017, 11, 1–3. [Google Scholar] [CrossRef] [PubMed]
  50. Group, B.D.W.; Atkinson, A.J., Jr.; Colburn, W.A.; DeGruttola, V.G.; DeMets, D.L.; Downing, G.J.; Hoth, D.F.; Oates, J.A.; Peck, C.C.; Schooley, R.T.; et al. Biomarkers and surrogate endpoints: Preferred definitions and conceptual framework. Clin. Pharmacol. Ther. 2001, 69, 89–95. [Google Scholar]
  51. FDA-NIH Biomarker Working Group. BEST (Biomarkers, Endpoints, and Other Tools) Resource [Internet]; Food and Drug Administration (US): Silver Spring, MD, USA, 2016.
  52. Osareh, A.; Shadgar, B. Machine learning techniques to diagnose breast cancer. In Proceedings of the 2010 5th International Symposium on Health Informatics and Bioinformatics, Ankara, Turkey, 20–22 April 2010; pp. 114–120. [Google Scholar]
  53. Veta, M.; Kornegoor, R.; Huisman, A.; Verschuur-Maes, A.H.; Viergever, M.A.; Pluim, J.P.; Van Diest, P.J. Prognostic value of automatically extracted nuclear morphometric features in whole slide images of male breast cancer. Mod. Pathol. 2012, 25, 1559–1565. [Google Scholar] [CrossRef]
  54. Lee, G.; Sparks, R.; Ali, S.; Shih, N.N.; Feldman, M.D.; Spangler, E.; Rebbeck, T.; Tomaszewski, J.E.; Madabhushi, A. Co-occurring gland angularity in localized subgraphs: Predicting biochemical recurrence in intermediate-risk prostate cancer patients. PLoS ONE 2014, 9, e97954. [Google Scholar] [CrossRef]
  55. Johannet, P.; Coudray, N.; Donnelly, D.M.; Jour, G.; Illa-Bochaca, I.; Xia, Y.; Johnson, D.B.; Wheless, L.; Patrinely, J.R.; Nomikou, S.; et al. Using machine learning algorithms to predict immunotherapy response in patients with advanced melanoma. Clin. Cancer Res. 2021, 27, 131–140. [Google Scholar] [CrossRef]
  56. Hermsen, M.; Volk, V.; Bräsen, J.H.; Geijs, D.J.; Gwinner, W.; Kers, J.; Linmans, J.; Schaadt, N.S.; Schmitz, J.; Steenbergen, E.J.; et al. Quantitative assessment of inflammatory infiltrates in kidney transplant biopsies using multiplex tyramide signal amplification and deep learning. Lab. Investig. 2021, 101, 970–982. [Google Scholar] [CrossRef] [PubMed]
  57. Califf, R.M. Biomarker definitions and their applications. Exp. Biol. Med. 2018, 243, 213–221. [Google Scholar] [CrossRef] [PubMed]
  58. Lo, A.; MacAulay, C.; Keyes, M.; Bristow, R.; Collins, C.; Fazli, L.; Gleave, M.; Guillaud, M.; Hayes, M.; Korbelik, J.; et al. Evaluation of a Novel Quantitative Digital Pathology Technique as a Tool for Predicting Prostate Cancer Recurrence. Int. J. Radiat. Oncol. Biol. Phys. 2015, 93, E223. [Google Scholar] [CrossRef]
  59. Caie, P.D. Discovery of Novel Prognostic Tools to Stratify High Risk Stage II Colorectal Cancer Patients Utilising Digital Pathology. 2015. Available online: http://hdl.handle.net/1842/19527 (accessed on 25 September 2022).
  60. Naoumov, N.V.; Brees, D.; Loeffler, J.; Chng, E.; Ren, Y.; Lopez, P.; Tai, D.; Lamle, S.; Sanyal, A.J. Digital pathology with artificial intelligence analyses provides greater insights into treatment-induced fibrosis regression in NASH. J. Hepatol. 2022; in press. [Google Scholar]
  61. Vamathevan, J.; Clark, D.; Czodrowski, P.; Dunham, I.; Ferran, E.; Lee, G.; Li, B.; Madabhushi, A.; Shah, P.; Spitzer, M.; et al. Applications of machine learning in drug discovery and development. Nat. Rev. Drug Discov. 2019, 18, 463–477. [Google Scholar] [CrossRef] [PubMed]
  62. Lara, H.; Li, Z.; Abels, E.; Aeffner, F.; Bui, M.M.; ElGabry, E.A.; Kozlowski, C.; Montalto, M.C.; Parwani, A.V.; Zarella, M.D.; et al. Quantitative image analysis for tissue biomarker use: A white paper from the digital pathology association. Appl. Immunohistochem. Mol. Morphol. 2021, 29, 479. [Google Scholar] [CrossRef]
  63. Ballman, K.V. Biomarker: Predictive or prognostic? J. Clin. Oncol. Off. J. Am. Soc. Clin. Oncol. 2015, 33, 3968–3971. [Google Scholar] [CrossRef] [PubMed]
  64. Colling, R.; Colling, H.; Browning, L.; Verrill, C. Validation of grading of non-invasive urothelial carcinoma by digital pathology for routine diagnosis. BMC Cancer 2021, 21, 1–7. [Google Scholar] [CrossRef]
  65. Naik, N.; Madani, A.; Esteva, A.; Keskar, N.S.; Press, M.F.; Ruderman, D.; Agus, D.B.; Socher, R. Deep learning-enabled breast cancer hormonal receptor status determination from base-level H&E stains. Nat. Commun. 2020, 11, 5727. [Google Scholar]
  66. Zhang, F.; Yao, S.; Li, Z.; Liang, C.; Zhao, K.; Huang, Y.; Gao, Y.; Qu, J.; Li, Z.; Liu, Z. Predicting treatment response to neoadjuvant chemoradiotherapy in local advanced rectal cancer by biopsy digital pathology image features. Clin. Transl. Med. 2020, 10, e110. [Google Scholar] [CrossRef]
  67. Yuan, Y. Modelling the spatial heterogeneity and molecular correlates of lymphocytic infiltration in triple-negative breast cancer. J. R. Soc. Interface 2015, 12, 20141153. [Google Scholar] [CrossRef]
  68. Williams, B.J.; Bottoms, D.; Treanor, D. Future-proofing pathology: The case for clinical adoption of digital pathology. J. Clin. Pathol. 2017, 70, 1010–1018. [Google Scholar] [CrossRef] [PubMed]
  69. Pagès, F.; Mlecnik, B.; Marliot, F.; Bindea, G.; Ou, F.S.; Bifulco, C.; Lugli, A.; Zlobec, I.; Rau, T.T.; Berger, M.D.; et al. International validation of the consensus Immunoscore for the classification of colon cancer: A prognostic and accuracy study. Lancet 2018, 391, 2128–2139. [Google Scholar] [CrossRef]
  70. Retamero, J.A.; Aneiros-Fernandez, J.; Del Moral, R.G. Complete digital pathology for routine histopathology diagnosis in a multicenter hospital network. Arch. Pathol. Lab. Med. 2020, 144, 221–228. [Google Scholar] [CrossRef] [PubMed]
  71. Xu, S.; Wang, Y.; Tai, D.C.; Wang, S.; Cheng, C.L.; Peng, Q.; Yan, J.; Chen, Y.; Sun, J.; Liang, X.; et al. qFibrosis: A fully-quantitative innovative method incorporating histological features to facilitate accurate fibrosis scoring in animal model and chronic hepatitis B patients. J. Hepatol. 2014, 61, 260–269. [Google Scholar] [CrossRef] [PubMed]
  72. Nault, R.; Colbry, D.; Brandenberger, C.; Harkema, J.R.; Zacharewski, T.R. Development of a computational high-throughput tool for the quantitative examination of dose-dependent histological features. Toxicol. Pathol. 2015, 43, 366–375. [Google Scholar] [CrossRef]
  73. Sethunath, D.; Morusu, S.; Tuceryan, M.; Cummings, O.W.; Zhang, H.; Yin, X.M.; Vanderbeck, S.; Chalasani, N.; Gawrieh, S. Automated assessment of steatosis in murine fatty liver. PLoS ONE 2018, 13, e0197242. [Google Scholar] [CrossRef]
  74. Nascimento, D.S.; Valente, M.; Esteves, T.; De Pina, M.D.F.; Guedes, J.G.; Freire, A.; Quelhas, P.; Pinto-Do-Ó, P. MIQuant–semi-automation of infarct size assessment in models of cardiac ischemic injury. PLoS ONE 2011, 6, e25045. [Google Scholar] [CrossRef]
  75. Seger, S.; Stritt, M.; Vezzali, E.; Nayler, O.; Hess, P.; Groenen, P.M.; Stalder, A.K. A fully automated image analysis method to quantify lung fibrosis in the bleomycin-induced rat model. PLoS ONE 2018, 13, e0193057. [Google Scholar] [CrossRef]
  76. Gilhodes, J.C.; Julé, Y.; Kreuz, S.; Stierstorfer, B.; Stiller, D.; Wollin, L. Quantification of pulmonary fibrosis in a bleomycin mouse model using automated histological image analysis. PLoS ONE 2017, 12, e0170561. [Google Scholar] [CrossRef]
  77. Macconi, D.; Bonomelli, M.; Benigni, A.; Plati, T.; Sangalli, F.; Longaretti, L.; Conti, S.; Kawachi, H.; Hill, P.; Remuzzi, G.; et al. Pathophysiologic implications of reduced podocyte number in a rat model of progressive glomerular injury. Am. J. Pathol. 2006, 168, 42–54. [Google Scholar] [CrossRef]
  78. Brenneman, K.A.; Ramaiah, S.K.; Rohde, C.M.; Messing, D.M.; O’Neil, S.P.; Gauthier, L.M.; Stewart, Z.S.; Mantena, S.R.; Shevlin, K.M.; Leonard, C.G.; et al. Mechanistic investigations of test article–induced pancreatic toxicity at the endocrine–exocrine interface in the rat. Toxicol. Pathol. 2014, 42, 229–242. [Google Scholar] [CrossRef] [PubMed]
  79. Cheng, J.Y.; Abel, J.T.; Balis, U.G.; McClintock, D.S.; Pantanowitz, L. Challenges in the development, deployment, and regulation of artificial intelligence in anatomic pathology. Am. J. Pathol. 2021, 191, 1684–1692. [Google Scholar] [CrossRef] [PubMed]
  80. Liu, J.T.; Glaser, A.K.; Bera, K.; True, L.D.; Reder, N.P.; Eliceiri, K.W.; Madabhushi, A. Harnessing non-destructive 3D pathology. Nat. Biomed. Eng. 2021, 5, 203–218. [Google Scholar] [CrossRef] [PubMed]
  81. Chandramouli, S.; Leo, P.; Lee, G.; Elliott, R.; Davis, C.; Zhu, G.; Fu, P.; Epstein, J.I.; Veltri, R.; Madabhushi, A. Computer extracted features from initial H&E tissue biopsies predict disease progression for prostate cancer patients on active surveillance. Cancers 2020, 12, 2708. [Google Scholar]
  82. Lefebvre, A.E.; Ma, D.; Kessenbrock, K.; Lawson, D.A.; Digman, M.A. Automated segmentation and tracking of mitochondria in live-cell time-lapse images. Nat. Methods 2021, 18, 1091–1102. [Google Scholar] [CrossRef]
  83. Bhargava, H.K.; Leo, P.; Elliott, R.; Janowczyk, A.; Whitney, J.; Gupta, S.; Fu, P.; Yamoah, K.; Khani, F.; Robinson, B.D.; et al. Computationally Derived Image Signature of Stromal Morphology Is Prognostic of Prostate Cancer Recurrence Following Prostatectomy in African American PatientsStroma Predicts Prostate Cancer Outcome in African Americans. Clin. Cancer Res. 2020, 26, 1915–1923. [Google Scholar] [CrossRef]
  84. Xie, W.; Reder, N.P.; Koyuncu, C.; Leo, P.; Hawley, S.; Huang, H.; Mao, C.; Postupna, N.; Kang, S.; Serafin, R.; et al. Prostate cancer risk stratification via non-destructive 3D pathology with deep learning-assisted gland analysis. Cancer Res. 2022, 82, 334. [Google Scholar] [CrossRef]
  85. Tizhoosh, H.R.; Pantanowitz, L. Artificial intelligence and digital pathology: Challenges and opportunities. J. Pathol. Inform. 2018, 9, 38. [Google Scholar] [CrossRef]
  86. Food and Drug Administration. CFR—Code of Federal Regulations Title 21—Food and Drugs Chapter I—Food and Drug Administration Department of Health and Human Services Subchapter B—Food for Human Consumption [Internet]; Food and Drug Administration: Silver Spring, MD, USA, 2017.
  87. Food and Drug Administration. Artificial Intelligence and Machine Learning (AI/ML) Software as a Medical Device Action Plan; Food and Drug Administration: Silver Spring, MD, USA, 2021; 1.
Figure 1. Validation of pathological digital biomarker.
Figure 2. Clinical applications of a pathological digital biomarker.
Table 1. Currently registered clinical trials with digital pathology.

NCT No.     | Specimen               | Primary Outcome                    | Source              | Intervention             | Country
NCT02470572 | Surgical specimen      | Diagnosis                          | WSI                 | Omnyx IDP system         | N/A
NCT05447221 | Gastric biopsy         | Diagnosis of intestinal metaplasia | WSI                 | DPAIDS                   | China
NCT04846933 | Ovarian cancer tissue  | Diagnosis, therapeutic prediction  | WSI                 | Multilayer data analysis | Finland
NCT05046366 | Not specified          | Diagnosis, therapeutic prediction  | Digital image       | Multimodal AI            | China
NCT04879056 | Not specified          | Diagnosis                          | WSI                 | PIPS 5.1 scanner         | USA
NCT05221814 | Lung surgical specimen | Diagnosis                          | Gross photo image   | Deep learning model      | USA
NCT04695015 | Ocular tumor tissue    | Diagnosis                          | Digital image       | Multimodal AI            | China
NCT04425941 | Colon polyp            | Diagnosis                          | Colonoscopic image  | AI-aided diagnosis       | Hungary
NCT04840056 | Gastric biopsy         | Diagnosis                          | WSI                 | Machine learning         | China
NCT04217044 | Brain tumor tissue     | Molecular profile prediction       | Histopathology image| Deep learning algorithm  | China
NCT05300113 | Brain tumor tissue     | Diagnosis                          | WSI                 | AI-aided diagnosis       | China

Search date: 18 September 2022; ClinicalTrials.gov.
