Article

Inter- and Intra-Observer Agreement of PD-L1 SP142 Scoring in Breast Carcinoma—A Large Multi-Institutional International Study

1 Institute of Cancer and Genomic Sciences, University of Birmingham, Birmingham B15 2TT, UK
2 Cancer Pathology, National Cancer Institute, Cairo University, Cairo 12613, Egypt
3 Department of Pathology, Cliniques Universitaires Saint-Luc Bruxelles, 1200 Brussels, Belgium
4 Institut de Recherche Expérimentale et Clinique, Université Catholique de Louvain, 1348 Brussels, Belgium
5 Discipline of Pathology, School of Medicine, Lambe Institute for Translational Research, University of Galway, H91 TK33 Galway, Ireland
6 NIHR Cambridge Biomedical Research Centre, Cambridge CB2 0QQ, UK
7 Addenbrooke's Hospital, Cambridge CB2 0QQ, UK
8 Department of Histopathology, Cambridge University NHS Foundation Trust, Cambridge CB2 0QQ, UK
9 Department of Histopathology, Wythenshawe Hospital, Manchester M23 9LT, UK
10 Poundbury Cancer Institute, Dorchester DT1 3BJ, UK
11 Department of Pathology, Faculty of Medicine, Menoufia University, Shebin El-Kom 32952, Egypt
12 Cellular Pathology, Queen Elizabeth Hospital Birmingham, Birmingham B15 2GW, UK
13 Cellular Pathology, Heart of England NHS Foundation Trust, Birmingham B9 5ST, UK
14 Pathology, Royal Liverpool and Broadgreen University Hospitals, Liverpool L7 8YE, UK
15 Medical School, Swansea University, Singleton Park, Swansea SA2 8PP, UK
* Author to whom correspondence should be addressed.
Cancers 2023, 15(5), 1511; https://doi.org/10.3390/cancers15051511
Submission received: 1 November 2022 / Revised: 15 February 2023 / Accepted: 24 February 2023 / Published: 28 February 2023
(This article belongs to the Section Cancer Biomarkers)


Simple Summary

PD-L1 analysis in TNBC is essential for selecting patients eligible for immunotherapy. Limited data are available on pathologists’ concordance in PD-L1 assessment. Twelve pathologists of varying expertise from three European countries digitally analysed 100 breast cancer core biopsies stained using the SP142 PD-L1 assay in two rounds. The overall inter-observer agreement among the pathologists was substantial. The intra-observer agreement was substantial to almost perfect. The expert scorers were more concordant than the non-experts in evaluating staining percentage. Challenging cases around the 1% cut-off value for positivity were identified and represented a small (6–8%) proportion of all cases. The experts were more concordant in scoring those cases. The study shows reassuringly strong inter- and intra-observer concordance among pathologists in PD-L1 scoring. A proportion of low-expressors remain challenging to assess, and these would benefit from addressing the technical issues, testing a different sample and/or referring for expert opinions.

Abstract

The assessment of PD-L1 expression in TNBC is a prerequisite for selecting patients for immunotherapy. The accurate assessment of PD-L1 is pivotal, but the data suggest poor reproducibility. A total of 100 core biopsies were stained using the VENTANA Roche SP142 assay, scanned and scored by 12 pathologists. Absolute agreement, consensus scoring, Cohen’s Kappa and the intraclass correlation coefficient (ICC) were assessed. A second scoring round was carried out after a washout period to assess intra-observer agreement. Absolute agreement occurred in 52% and 60% of cases in the first and second rounds, respectively. Overall agreement was substantial (Kappa 0.654–0.655) and higher for expert pathologists, particularly when scoring TNBC (0.600 vs. 0.568 in the second round). The intra-observer agreement was substantial to almost perfect (Kappa: 0.667–0.956), regardless of PD-L1 scoring experience. The expert scorers were more concordant than the non-experienced scorers in evaluating staining percentage (R² = 0.920 vs. 0.890). Discordance predominantly occurred in low-expressing cases around the 1% value. Technical issues contributed to some of the discordance. The study shows reassuringly strong inter- and intra-observer concordance among pathologists in PD-L1 scoring. A proportion of low-expressors remain challenging to assess, and these would benefit from addressing the technical issues, testing a different sample and/or referring for expert opinions.

1. Introduction

Advances in biomarker assessment, companion diagnostics and genomics have revolutionised the way breast cancer is currently classified and managed [1,2]. The immune microenvironment of solid tumours, including in breast cancer, plays a pivotal role in tumour development and progression [3,4,5]. Cancer cells can evade the regulatory pathways of Programmed death-1 (PD-1) and its ligand (PD-L1), thus overcoming the cytotoxic effect of T cells. Immune checkpoint blockades using anti-PD-L1 inhibitors have been investigated in various trials in lung, melanoma and, more recently, breast cancer, with confirmed efficacy [6,7,8]. This has led to the approval of immune modulators for the treatment of PD-L1-positive breast cancer, and this is currently being incorporated in various guidelines [9]. The first-approved and most established immune checkpoint inhibitor in breast cancer is atezolizumab, for which a companion diagnostic assay (the VENTANA SP142) is required for selecting patients eligible for this drug.
The limited data available in the literature on non-breast cancers suggest poor reproducibility of PD-L1 SP142 scoring [10]. Some studies compared the performance of various PD-L1 assays [11,12], and only a few analysed pathologist concordance in the scoring of breast cancer [13,14,15]. Those latter studies were small and heterogeneous, with some including training sets [14]. Furthermore, the nature of discordant cases was not analysed, nor was there an assessment of the intra-observer agreement or the effect of the pathologist’s experience. In addition, all previous studies focused on TNBC; therefore, information on pathologist concordance in the scoring of PD-L1 in HER2-positive and/or luminal breast cancer does not exist. Emerging data suggest cross-talk between HER2 and PD-L1 and potentially support the use of immunotherapy in HER2-positive breast cancer [16]. PD-L1 expression is correlated with the response to neoadjuvant chemotherapy in HER2-positive breast cancer [17].
We therefore aimed to assess the inter- and intra-observer concordance of breast pathologists of various expertise and geographical locations in reporting a large cohort of PD-L1 SP142-stained invasive breast carcinomas of various molecular subtypes to assess if particular molecular subtypes would be more or less prone to poor inter-observer concordance. We also sought to analyse discordant cases in detail to gain insight into the reasons for discrepancies in PD-L1 results, allowing for a subsequent search for strategies on how to tackle them.

2. Materials and Methods

Core biopsies from a total of 100 cases of primary breast cancers were included in the study. Cases were selected retrospectively from the files of a single large UK institution (Queen Elizabeth Hospital Birmingham) to include all molecular subtypes with enrichment for the TNBC group.
First, 4 μm sections of formalin-fixed, paraffin-embedded tumour blocks were cut and stained using the VENTANA SP142 anti-PD-L1 rabbit monoclonal primary antibody on a VENTANA BenchMark ULTRA automated staining platform, according to the manufacturer’s protocol. A section from a cell block containing three cell lines with various staining intensities and a section of normal tonsil were included as on-slide controls. Paired H&E sections and PD-L1-stained immunohistochemistry slides were digitally scanned at ×40 using a Leica Aperio AT2 slide scanner and uploaded to the University of Birmingham digital platform via a secure link: https://eslidepath.bham.ac.uk (last accessed 23 February 2023). Each participant was provided with a unique username and password for access to the digital platform for whole-slide scoring. Twelve pathologists from eight institutions in three European countries (United Kingdom, Republic of Ireland and Belgium) evaluated all cases in round one, of whom 10 re-scored the same cases in round two, after a washout period of at least 3 months designed to assess intra-observer variability. All pathologists had previously received Roche training for SP142 PD-L1 scoring in TNBC and passed a proficiency test.
PD-L1 SP142 Immune cell (IC) scoring was conducted according to the recommended scoring algorithm [18], using a cut-off value of ≥1% to indicate positivity. In addition, the pathologists were asked to provide their percentage of immune cells with positive staining for each case, including those cases scored as negative. All scorers completed a survey assessing their experience in breast pathology reporting as well as their training and real-life reporting of PD-L1.

Statistical Analysis

The data were tabulated and statistically analysed using SPSS (IBM) software, version 28. We used standard statistical analyses for assessing intra-/inter-rater concordance and agreement, as previously described [19]. The intraclass correlation coefficient (ICC), a measure of the reliability of ratings (here computed on median percentage scores), was used to determine whether cases can be rated reliably by different raters. The ICC is a descriptive statistic that assesses the consistency or reproducibility of quantitative measurements made by different observers measuring the same quantity. Its value ranges from 0, indicating no reliability among raters, to 1, indicating perfect reliability. The ICC results are interpreted as follows: values < 0.5 indicate poor reliability, values from 0.5 to 0.75 indicate moderate reliability, values from 0.75 to 0.9 indicate good reliability and values greater than 0.9 indicate excellent reliability [20]. In our study, we used a two-way random-effects model, testing both consistency and absolute agreement, with the mean of ratings as the unit of measurement.
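For illustration, a two-way random-effects, absolute-agreement, average-measures ICC (ICC(2,k)) can be computed directly from the classical ANOVA mean squares. The sketch below is a minimal Python illustration on hypothetical percentage scores, not the study’s SPSS workflow:

```python
import numpy as np

def icc_2k(ratings):
    """Two-way random-effects, absolute-agreement, average-measures ICC
    (ICC(2,k)), computed from the classical ANOVA mean squares.
    `ratings` is an (n subjects x k raters) array of scores."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    # Mean squares for subjects (rows), raters (columns) and residual error
    msr = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)
    msc = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)
    sse = np.sum((x - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (msc - mse) / n)

# Three hypothetical raters scoring five hypothetical cases (% immune cells)
scores = [[0, 1, 0], [5, 5, 4], [1, 1, 1], [10, 12, 10], [0, 0, 1]]
print(round(icc_2k(scores), 3))  # → 0.992, i.e. "excellent" reliability
```

Because close raters and widely spread cases dominate the subject variance, this toy matrix lands in the “excellent” band; identical raters would give exactly 1.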
Fleiss’ multiple-rater Kappa statistics for inter- and intra-observer agreement in designating cases as PD-L1-positive versus -negative, using a cut-off value of 1%, were calculated. Fleiss’ Kappa (κ) is a measure of inter-rater agreement used to determine the level of agreement between two or more raters when the response variable is measured on a categorical scale. The Kappa results are interpreted as follows: values ≤ 0 indicate no agreement, values from 0.01 to 0.20 indicate none to slight agreement, values from 0.21 to 0.40 indicate fair agreement, values from 0.41 to 0.60 indicate moderate agreement, values from 0.61 to 0.80 indicate substantial agreement and values from 0.81 to 1.00 indicate almost perfect agreement.
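As an illustration of the calculation, the sketch below computes Fleiss’ kappa for binary positive/negative calls; the rater counts are hypothetical, invented for demonstration, and are not study data:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa from an (N cases x C categories) matrix, where each
    row holds the number of raters assigning that case to each category."""
    counts = np.asarray(counts, dtype=float)
    n_cases, _ = counts.shape
    n_raters = counts[0].sum()          # assumes every case has all raters
    # Per-case observed agreement P_i, averaged over cases
    p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # Chance agreement from the marginal category proportions
    p_j = counts.sum(axis=0) / (n_cases * n_raters)
    p_e = np.sum(p_j ** 2)
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical: 12 raters calling 4 cases as PD-L1 [positive, negative]
calls = [[12, 0], [0, 12], [11, 1], [7, 5]]
print(round(fleiss_kappa(calls), 3))  # → 0.628, "substantial" agreement
```

Note how a single split case (7 vs. 5) pulls the value down sharply, which is exactly the behaviour seen with low-expressing cases near the 1% cut-off.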
A case was regarded as PD-L1-positive or -negative if more than 50% of the participants designated it as positive or negative, respectively. The consensus score was considered a majority score if 67% or more of the participants agreed on the categorisation. If all participants agreed (100%), this was regarded as absolute agreement (AA). Cases with agreement above 50% but below 67% were considered challenging. In cases of no agreement (50% or less), a case was classified as PD-L1-positive or -negative based on the consensus of the experienced pathologists only.
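The categorisation rules above can be sketched as a small function; the thresholds follow the text, and the example calls use hypothetical rater counts:

```python
def consensus_category(positive_calls, total_raters):
    """Classify a case by the level of rater agreement, following the
    thresholds described above (fractions are of the majority side)."""
    frac = positive_calls / total_raters
    majority = max(frac, 1 - frac)      # agreement on the winning side
    if majority == 1.0:
        return "absolute agreement"
    if majority >= 0.67:
        return "majority agreement"
    if majority > 0.5:
        return "challenging"
    return "no agreement"               # defer to expert consensus

print(consensus_category(12, 12))  # → absolute agreement
print(consensus_category(9, 12))   # → majority agreement (75%)
print(consensus_category(7, 12))   # → challenging (58%)
print(consensus_category(6, 12))   # → no agreement (50-50 split)
```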
Scatter plots were used to visualise percentage PD-L1 scores, and the strength of the relationship between scores was expressed as a squared correlation coefficient (R²). All analyses were supervised by an expert in pathology informatics (PL).
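As a minimal illustration of this measure, R² is simply the square of the Pearson correlation between two sets of percentage scores; the scores below are hypothetical, not study data:

```python
import numpy as np

# Hypothetical % immune-cell scores from two raters on the same six cases
rater_a = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 40.0])
rater_b = np.array([0.0, 1.0, 3.0, 4.0, 12.0, 38.0])

r = np.corrcoef(rater_a, rater_b)[0, 1]  # Pearson correlation coefficient
r_squared = r ** 2
print(round(r_squared, 3))
```

A caveat of R² on raw percentages is that a few high-expressing cases dominate the fit, so strong R² can coexist with disagreement at the 1% cut-off.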
An outline of the study methodology is shown in Figure 1.

3. Results

3.1. Cohort Characteristics

A total of 100 breast cancers were assessed, comprising 29/93 (33.3%) grade 2 and 62/93 (66%) grade 3 cases, while 7 cases had missing grades. The patient ages ranged from 42 to 59 years, with a median of 49 years. Fifty-eight carcinomas were triple-negative, 28 were luminal and 14 were HER2-positive. All cases were evaluated independently by twelve pathologists, including nine specialist consultant breast pathologists, of whom six had 1–3 years’ experience in PD-L1 scoring, as per the survey responses. All scorers had significant experience in breast pathology reporting, and six had experience in scoring PD-L1 SP142 in TNBC in routine practice. A consultant Biomedical Scientist and two trainee pathologists were among the scorers. Ten pathologists completed round two, following a washout period. The overall percentage of PD-L1 positivity for all of the breast cancer molecular subtypes was 36–38%, and the highest was for TNBC (55%) (Table 1).

3.2. Inter-Observer Agreement and Pathologist Experience

The Kappa of the inter-observer agreement between the participants in classifying cases as PD-L1-positive vs. -negative and the number of cases with absolute agreement (AA) for rounds one and two were calculated. The overall agreement was substantial (Kappa 0.654 and 0.655 for the first and second rounds, respectively) (Table 2).
There was absolute agreement on scoring cases as either positive or negative in 52 cases in the first round (Figure 2A–D), increasing to 60 cases in the second round (Table 3). A further 42 and 32 cases achieved majority agreement in the first and second rounds, respectively. Overall, the Kappa value for all cases was similar between experienced pathologists and those without considerable experience in PD-L1 reporting. However, it was higher for expert pathologists scoring PD-L1 in the TNBC group (0.600 vs. 0.568 in the second round) (Table 4).

3.3. Concordance of PD-L1 Percentage Expression

When the median percentage of PD-L1 expression was considered, the expert pathologists showed higher and tighter concordance than the non-experts (R² = 0.920 vs. 0.890). The overall concordance was excellent (R² = 0.935). The distribution of the percentage scoring among all raters (those with and without experience in routine PD-L1 reporting) in both rounds is shown in Figure 3A–E.

3.4. Reasons for Discordance

Ten cases were regarded as challenging, with low (>50% but <67%) or no agreement (≤50%), most of which (8/10) were of the TNBC phenotype (Table 5). All ten cases had a low PD-L1 score, with a median range of 0.5–1%, highlighting the difficulty of classifying cases close to the 1% cut-off value. Four cases were challenging in both rounds, indicating the innate difficulty of these cases, regardless of the pathologists’ expertise in PD-L1 scoring. We analysed the reasons for the difficulties in scoring those cases by reviewing the digital images and referring to the pathologists’ comments on scoring. Those cases were, reassuringly, recognised as difficult by most scorers owing to the nature of the tumour and/or technical issues. Reasons for discordance included uncertainty as to the presence/extent of in situ carcinoma, a small amount of invasive carcinoma, positive staining around the normal mammary epithelium, very focal staining, background staining and staining within areas of necrosis (Figure 2E–H).
It is of note that the consensus in scoring those challenging cases among the expert pathologists ranged from no agreement (50%) to absolute agreement (100%), with a higher percentage of the former in the first round (3/10; 30%) compared to the second (1/10; 10%). For the experts, concordance improved in the second round, with all but one case showing absolute agreement. Strong to almost perfect agreement among the experts was seen in 6/10 of those challenging cases, in both rounds. For the non-experts, the proportion of cases with low or no agreement was higher than that for the experts and increased from 30% in the first round to 50% in the second round (Table 5).

3.5. Inter- and Intra-Observer Agreement

Cohen’s Kappa was calculated to assess the inter-observer agreement between each pair of scoring pathologists and the intra-observer agreement for each scorer across the two rounds. Inter-observer agreement was, overall, moderate to substantial (Table 6). The highest Kappa values for inter-observer agreement were 0.871 (in the first round) and 0.88 (in the second round), while the lowest values were moderate: 0.475 (in the first round) and 0.498 (in the second round). The intra-observer agreement was substantial to almost perfect, ranging from 0.667 to 0.956 (Table 6).
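For pairwise binary calls, Cohen’s kappa corrects the observed agreement for the agreement expected by chance from each rater’s marginal positive/negative rates. A minimal sketch on hypothetical calls (not study data):

```python
import numpy as np

def cohens_kappa(calls_a, calls_b):
    """Cohen's kappa between two raters' binary PD-L1 calls (1 = positive)."""
    a, b = np.asarray(calls_a), np.asarray(calls_b)
    po = np.mean(a == b)                          # observed agreement
    # Chance agreement from each rater's marginal positive/negative rates
    pe = np.mean(a == 1) * np.mean(b == 1) + np.mean(a == 0) * np.mean(b == 0)
    return (po - pe) / (1 - pe)

# Two hypothetical raters' calls on eight cases; they differ on one case
a = [1, 1, 0, 0, 1, 0, 1, 0]
b = [1, 1, 0, 1, 1, 0, 1, 0]
print(round(cohens_kappa(a, b), 3))  # → 0.75, "substantial" agreement
```

Here 7/8 raw agreement (87.5%) shrinks to κ = 0.75 once the 50% chance-agreement floor is removed, which is why kappa values run lower than percentage agreement.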

3.6. Intraclass Correlation Coefficient (ICC)

ICC was used to assess the reliability of scoring between different groups of raters using the median percentage expression. The ICC for different groups (all scorers, experienced scorers and non-experienced scorers) ranged from moderate (0.5–0.75) to excellent (>0.9), with the predominance of the latter (Table 7). The highest ICC was 0.974 (between all scorers in first round and experienced ones in the second round), while the lowest value was 0.619 (between the non-experienced in the first round and the experienced in the second round).

3.7. Intra-Observer Agreement and Scoring Reliability in Relation to Pathologists’ Experience

All scorers had significant experience in breast pathology reporting, but only six scored PD-L1 SP142 in breast cancer in routine practice. The experience in PD-L1 reporting did not appear to affect the intra-observer agreement, with all scorers showing substantial or almost perfect agreement. On the other hand, the intra-observer reliability in the percentage assessment of PD-L1 expression was higher for experienced pathologists compared with non-experienced pathologists (Table 8).

4. Discussion

We present comprehensive data of a large PD-L1 concordance cohort, scored twice by pathologists from eight institutions, representing three countries. Our data show reassuring inter- and intra-observer agreements, which were the highest among experts, and highlight cancers with low levels of PD-L1 expression as the most challenging in classifying as either PD-L1-positive or -negative.
Unlike standard diagnostic and prognostic markers for breast cancer, SP142 PD-L1 immunohistochemistry is assessed in the immune micro-environment of breast cancer and not in the neoplastic cells themselves. PD-L1 expression in foci of ductal carcinoma in situ (DCIS), necrotic debris, normal mammary tissue and normal nodal tissue is excluded. Therefore, experience in both tumour morphology and PD-L1 assessment is required and may affect the reproducibility of scoring.
Few studies, summarised in Table 9, have addressed the consistency of PD-L1 reporting among pathologists. A prospective multi-institutional study showed poor reproducibility of PD-L1 scoring, with pathologists disagreeing on the classification of cases as PD-L1-positive or -negative in over half of the scored cases, and with complete agreement on SP142 scoring in only 38% of cases [21]. In a cohort of 426 tumours of Chinese women, the concordance between two pathologists in PD-L1 scoring was 78.2% (Kappa 0.567) in primary tumours and 61.4% in nodal metastases, indicating moderate agreement [22]. Using “Observers Needed to Evaluate Subjective Tests” (ONEST), Reisenbichler et al. [21] reported a decreased overall percentage agreement with the increase in the number of pathologists assessing each case, with the lowest concordance at eight pathologists or more. Another study of 79 PD-L1 SP142-stained breast cancers scored by experienced breast pathologists at the Memorial Sloan Kettering Cancer Center revealed strong agreement [23]. Our data, based on a larger cohort of TNBC cases, confirm the substantial agreement and show that concordance was higher among experts than among those with no experience in reporting PD-L1. More importantly, the agreement among experts was substantial to perfect in the challenging cases, and the experts showed a much higher consistency in reporting challenging, low-expressing TNBC, a finding that is relevant to clinical practice. This is in accordance with findings for other biomarkers [24] and reflects the importance of testing at regional institutions with quality-assured protocols and experienced scorers, as well as the value of discussing or referring difficult/equivocal cases to expert pathologists for their opinions.
While several antibodies/assays for PD-L1 assessment are available (e.g., 22C3, 28-8, SP142, SP263 and 73-10), the VENTANA Roche SP142 assay is the only companion FDA- and CE-IVD (European Commission in vitro diagnostics)-approved test for atezolizumab therapy. An expert round table in 2019 [25] recommended the assay as the only approved companion diagnostic for selecting patients for immunotherapy and recommended using the primary tumour samples, where available, over metastases for assessment. In the UK, atezolizumab plus chemotherapy, and its companion diagnostic assay, were granted approval by the National Institute of Health and Care Excellence (NICE) for the treatment of locally advanced/metastatic PD-L1-positive TNBC. More recently, pembrolizumab plus chemotherapy has been approved for the same indication for PD-L1-positive TNBC using the companion diagnostic Agilent 22C3 assay.
In this study, we assessed both the inter- and intra-observer concordance among the participating pathologists. Notably, the intra-observer concordance was high (0.667 to 0.956) among both expert and non-expert pathologists in PD-L1 scoring, indicating that pathologists tend to apply their scoring criteria consistently. When the median percentage of PD-L1 expression was compared among the raters, the highest ICC (0.974) was achieved among experienced raters in the second round. We observed the lowest concordance value, 0.619, when comparing non-experienced to experienced scorers. Similarly, a higher concordance among those experienced in PD-L1 scoring (93.3%) compared with non-experts (81.5%) was previously reported by Pang et al. [26].
While, overall, there was a high concordance among pathologists in PD-L1 SP142 scoring, some cases were challenging to score. Those cases comprised 6–8% of all cases and generally showed very low levels of expression spanning the threshold for positivity. These may represent a so called “borderline category” where expression cannot readily be designated into a clear-cut positive or negative status. Ideally, information on the tumour response to immunotherapy should determine how those cases should be classified. It is of interest that expert pathologists, who routinely reported PD-L1 in breast cancer, showed substantial concordance in scoring those difficult cases. We therefore recommend that those cases of very low expression (i.e., close to the 1% cut-off value) are scored by an expert pathologist either via double-reporting or via a second opinion referral.
Table 9. Summary of studies evaluating the SP142 PD-L1 concordance of scoring.
| Reference | Number of Cases (Type) | Clone(s) | SP142 Scoring Method | Scorers | Inter-Observer Agreement | Intra-Observer Agreement |
|---|---|---|---|---|---|---|
| Downes et al. 2020 [19] | 30 surgical excisions; TMAs | 22C3, SP142, E1L3N | IC ≥ 1% | 3 pathologists | Kappa for IC 1%: 0.668 | 1-month washout period; Kappa = 0.798 |
| Noske et al. [13] | 30 (resections) | SP263, SP142, 22C3, 28-8 | IC ≥ 1% | 7 trained, plus one Ventana SP142 expert for SP142 only | ICC for SP142: 0.805 (0.710–0.887) | Not tested |
| Dennis et al. (abstract) [14] | 28 test sets through the Roche International Training Programme | SP142 | IC ≥ 1% | 432 trained scorers from multiple institutions in several countries | OPA 98.2%, with PPA of 99.4% and NPA of 96.6% | Not tested |
| Hoda et al. [23] | 75 (cores and excisions), primary and metastases | SP142 | IC ≥ 1% | 8 experienced (single institution) | Kappa 0.727 | Not tested |
| Reisenbichler et al. 2021 [21] | 68 cases for SP142 and 67 cases for SP263 | SP142, SP263 | IC ≥ 1%; % expression for cases scored as positive only | 19 randomly selected pathologists from 14 US institutions; mostly breast pathologists, with a few non-breast pathologists; experience in reporting PD-L1 not stated | Complete agreement for SP142 categorisation into positive vs. negative in 38%; agreement decreased with the increasing number of scorers, reaching a low plateau of 0.41 at eight scorers or more | Not tested |
| Pang et al. [26] | 60 TNBC; TMAs | VENTANA SP142, DAKO 22C3 | IC ≥ 1% | 10 pathologists: 5 PD-L1-naïve and 5 who passed a proficiency test | 93.3% for experts; 81.5% for non-experts | Tested after a 1 h training video and an overnight washout period; OPA increased from 81.5% to 85.7% for non-experts after video training; OPA was 96.3% for experts |
| Van Bockstal et al. 2021 [15] | 49 metastatic TNBC (biopsies and resections) | VENTANA SP142 | IC ≥ 1% | 10 pathologists; all passed a proficiency test | Substantial variability at the individual patient level; in 20% of cases, the chance of allocation to treatment was random, with a 50–50 split among pathologists in designating cases as PD-L1-positive or -negative | Not tested |
| Ahn et al. 2021 [27] | 30 surgical excisions | SP142, SP263, 22C3 and E1L3N | ICs and TCs scored both as continuous scores (0–100%) and in five categories (<1%, 1–4%, 5–9%, 10–49% and ≥50%) | 10 pathologists with no special training, of whom 6 underwent Ventana Roche training | 80.7% inter-observer agreement at a 1% cut-off value | Proportion of cases with identical scoring at a 1% IC cut-off value increased from 40% to 70.0% after training |
| Abreu et al. 2022 (conference abstract) [28] | 168 in tissue microarrays | 22C3 and SP142 | Not stated | 4 pathologists: 2 breast pathologists and 2 surgical pathologists with no specific PD-L1 training | Overall concordance for SP142 was 64.8%; overall κ = 0.331, with κ = 0.420 for breast pathologists and κ = 0.285 for general pathologists | Not tested |
| Chen et al. 2022 [22] | 426 primary and metastatic surgical excisions | SP142 | IC ≥ 1% | Two experienced pathologists | 78.2% concordance; κ = 0.567 | Not tested |
| Current study | 100 (cores), primary breast cancer | SP142 | IC ≥ 1%; % expression for all cases; two rounds of scoring separated by a 3-month washout period | 12 experienced breast pathologists from 8 institutions in the UK, Ireland and Belgium; all passed a proficiency test | Absolute agreement in 52% and 60% of cases in the first and second rounds, with Kappa values of 0.654 and 0.655, respectively; higher concordance among experts, particularly in TNBC and challenging cases | Tested after a 3-month washout period; almost perfect agreement regardless of pathologists’ PD-L1 experience |
Similar challenges in PD-L1 scoring have been highlighted for carcinomas of other tissues. For example, the concordance between the assays used for PD-L1 assessment in head and neck squamous cell carcinoma (HNSCC) was fair to moderate, with a tendency for the SP142 assay to better stain the immune cells [29]. Furthermore, using three PD-L1 tests on HNSCC tissue microarrays (standard SP263, standard 22C3 and in-house-developed 22C3), significant differences were found among the three tests at clinically relevant cut-off values, i.e., ≥20 and ≥50%, for the combined positive score (CPS) and tumour proportion score (TPS). Intra-tumour heterogeneity was generally higher when CPS was used [30]. On the other hand, Cerbelli et al. showed a high concordance between the 22C3 PharmDx assay and the SP263 assay on 43 whole sections of HNSCC [31]. Collectively, these data highlight the challenges in PD-L1 assessment across cancers, including the differences in results between the available antibody clones and staining platforms.
Our data also confirm previous studies showing the highest proportion of PD-L1 positivity in TNBC [32]. PD-L1 was previously shown to be associated with higher tumour grades and higher pCR rates. Low levels of expression were associated with shorter recurrence-free survival (RFS), including following subtype adjustment [32].
The current study and previous lessons from the IMpassion trial [33] shed some light on issues related to the immunohistochemical assessment of PD-L1 in breast cancer tissue. The strengths of the study include the large cohort of cases, the inclusion of 12 pathologists from three countries, the inclusion of both expert and non-expert assessors, the robust design, with the assessment of inter- and intra-observer concordance in two rounds, and the detailed statistical analysis. The digital analysis of whole slide images, rather than scoring glass slides, may be a weakness for pathologists who are not used to digital reporting. More recently, the use of digital image analysis algorithms and/or artificial intelligence (AI) has been proposed for PD-L1 scoring in various solid tumours [34]. Going forward, this is an exciting and promising endeavour that requires thorough validation in comparison to the gold-standard pathologist scoring before implementation and the determination of whether those algorithms are superior to manual scoring in identifying responders to immune therapy. Currently, PD-L1 Artificial Intelligence (AI) scoring in breast cancer is limited to research studies and has not been validated for routine clinical use.

5. Conclusions

In summary, we present a detailed analysis of 12 pathologists who scored 100 digitally scanned breast cancer slides for PD-L1 using the Ventana SP142 assay in two rounds separated by a washout period. Absolute (100%) agreement was achieved in 52% and 60% of cases in the first and second rounds, with Kappa values of 0.654 and 0.655 for rounds one and two, respectively. We provide reassuring evidence of a high concordance of PD-L1 reporting among pathologists, the highest being among experts and in reporting challenging, low-expressing TNBC. The intra-observer agreement was substantial for all raters. Despite experience and adherence to current reporting guidelines, there remains a minority of tumours (6–8%) that are challenging to assign to either a positive or negative category. These are PD-L1 low-expressing and/or heterogeneous tumours that suffer from the least concordance among pathologists. Consensus scoring and referrals for expert opinions should be considered in those cases. If uncertainty persists, this should be recognised and clearly communicated to clinicians in the context of a multidisciplinary approach. For inconclusive cases, testing on another tumour sample and/or using another assay (e.g., the DAKO 22C3 assay for selecting patients for pembrolizumab therapy) could be performed.
Pathologists’ training and experience are paramount in evaluating PD-L1 expression and selecting patients for immune checkpoint anti-PD-L1 inhibitors. Further work on refining the criteria for scoring, pathologists’ training and assessing pathologist concordance is needed. This will ensure the accurate classification of tumours into a positive or negative category and, hence, the accurate selection of patients for atezolizumab therapy.
This study also shows that digital pathology is a useful tool that allows for the instantaneous sharing of high-quality whole slide scans with colleagues. This is particularly helpful for consensus scoring and/or seeking expert opinions.

Author Contributions

M.Z.: Virtual slide scoring, data curation, statistical analysis, results interpretation, designing tables and figures, writing; M.V.B., C.G., G.C., E.P., R.H., C.D., N.M.B., J.S., B.T. and Y.M.: Virtual slide scoring, editing the manuscript; B.O.: Slide staining; P.L.: Statistical analysis, statistical writeup, results interpretation; A.M.S.: Conceptualisation, methodology, virtual slide scoring, statistical analysis, writing, overall project overview. All authors have read and agreed to the published version of the manuscript.

Funding

M.Z. was funded by a grant from the Egyptian Cultural and Educational Bureau and sponsored by the Ministry of Higher Education and Scientific Research, General Mission Sector (Egypt). A.M.S. is supported by a Birmingham CR-UK Centre Grant, CA17422/A25154.

Institutional Review Board Statement

This study did not require ethical approval, as it was conducted as an audit of the consistency of scoring of anonymous breast cancer images.

Informed Consent Statement

Not applicable.

Data Availability Statement

Full data are available from the corresponding author upon reasonable request. Digital slides are password-protected and available on the University of Birmingham platform: https://eslidepath.bham.ac.uk (last accessed on 23 February 2023).

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. A flowchart showing the outline of the study.
Figure 2. Examples of breast cancer PD-L1 scores using the Ventana SP142 assay and challenging cases: (A) H&E staining of three cores of invasive no-special-type carcinoma (×50); (B) Only focal PD-L1 staining is noted (<1%), and the case was classified as PD-L1-negative (×100); (C) H&E staining of one core of invasive no-special-type carcinoma (×100); (D) Higher magnification of PD-L1 immunohistochemistry shows strong positivity with absolute agreement among all scorers in both rounds (×100); (E) Challenging case due to uncertainty as to whether the PD-L1 staining observed is associated with in situ or invasive carcinoma (×100); (F) Challenging case showing low levels of expression; the experts’ consensus was to designate the case as PD-L1-negative (×100); (G) A case with no consensus in either round: low-power view showing areas of tumour necrosis and background staining (×15); (H) Higher magnification showing focal expression in tumour stroma and adjacent to an area of necrosis (×50). This case was also challenging for experts: in the first round, it showed low agreement (60% scoring it negative), and in the second round, no agreement (50%).
Figure 3. Scatter plots showing the distribution of the median percentage PD-L1 scores across the scorer groups: experienced consultants (Exp; red line), non-experienced consultants (Con; green line) and all participants (All; blue line). (A) The distribution of percentage scores among all scorers in both rounds; (B) All pathologists’ (including non-experts and trainees) percentage scores in both rounds (R2 = 0.935); (C) All consultants’ (including experts and non-experts) percentage scores in both rounds (R2 = 0.902); (D) Experienced consultants’ percentage scores in both rounds (R2 = 0.920); (E) Non-experienced pathologists’ (including trainees) percentage scores in both rounds (R2 = 0.89).
Table 1. Frequency of PD-L1 positivity in breast cancer in both rounds, stratified by the molecular type.
| Type | No. | First Round Positive (%) | First Round Negative (%) | Second Round Positive (%) | Second Round Negative (%) |
| TNBC | 58 | 32 (55%) | 26 (45%) | 32 (55%) | 26 (45%) |
| Median (range) | | 4 (0.75–30) | 0 (0–1) | 5 (0.5–30) | 0 (0–1) |
| Luminal | 28 | 4 (14%) | 24 (86%) | 4 (14%) | 24 (86%) |
| Median (range) | | 2 (1–4) | 0 (0–0.75) | 3.5 (1.5–5) | 0 (0–0.5) |
| Her2-positive | 14 | 2 (14%) | 12 (86%) | 1 (7%) | 13 (93%) |
| Median (range) | | 5.5 (1–10) | 0 (0–0.5) | 10 | 0 (0–0.5) |
| Total | 100 | 38 (38%) | 62 (62%) | 36 (36%) | 64 (64%) |
| Median (range) | | 2 (0.75–30) | 0 (0–1) | 5 (0.5–30) | 0 (0–1) |
Table 2. Absolute agreement in scoring among raters in the two rounds.
| Round | Raters | P1 | P2 | P3 | P4 c | P5 c | P6 c | P7 e | P8 e | P9 e | P10 e | P11 e | P12 e |
| First | Neg | 62 | 64 | 61 | 58 | 63 | 68 | 75 | 51 | 64 | 67 | 63 | 63 |
| First | Pos | 38 | 35 | 31 | 41 | 37 | 32 | 25 | 49 | 36 | 33 | 37 | 34 |
| First | Total | 100 | 99 | 92 | 99 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 97 |
First round Kappa: 0.654; AA: 52/100 cases (36 scored negative and 16 scored positive).
| Second | Neg | 60 | 64 | – | 64 | 46 | 62 | – | 52 | 72 | 69 | 69 | 66 |
| Second | Pos | 40 | 35 | – | 35 | 54 | 37 | – | 48 | 28 | 31 | 31 | 30 |
| Second | Total | 100 | 99 | – | 99 | 100 | 100 | – | 100 | 100 | 100 | 100 | 97 |
Second round Kappa: 0.655; AA: 60/100 cases (40 scored negative and 20 scored positive).
e Experienced consultant, c Consultant, (AA) Absolute agreement. P3 and P7 did not score the second round.
Table 3. Agreement categories in the first and second scoring rounds.
| Round | Category | Consensus: 100% (AA) | Consensus: Majority (67–99%) | Challenging/Low Agreement (<67–>50%) | No Agreement (≤50%) |
| First | Negative | 36 | 24 | 2 | 0 |
| First | Positive | 16 | 18 | 4 | 0 |
| First | Total | 52 | 42 | 6 | 0 |
| First | Consensus total (AA + majority) | 94 | | 6 | 0 |
| First | Any agreement total | 100 | | | 0 |
| Second | Negative | 40 | 20 | 2 | 2 |
| Second | Positive | 20 | 12 | 4 | 0 |
| Second | Total | 60 | 32 | 6 | 2 |
| Second | Consensus total (AA + majority) | 92 | | 6 | 2 |
| Second | Any agreement total | 98 | | | 2 |
Table 4. Fleiss Kappa of agreement between the pathologists in both rounds.
| Group | First Round Overall (TNBC) | First Round NEG | First Round POS | Second Round Overall (TNBC) | Second Round NEG | Second Round POS |
| All | 0.654 (0.61) | 0.660 | 0.678 | 0.655 (0.602) | 0.656 | 0.669 |
| Consultants | 0.663 (0.616) | 0.664 | 0.673 | 0.633 (0.568) | 0.636 | 0.650 |
| Experienced | 0.659 (0.642) | 0.661 | 0.672 | 0.674 (0.600) | 0.677 | 0.695 |
Table 5. Distribution of ten challenging (low concordance and no agreement) cases in both rounds.
| Type | ALL (12) | Status | M | NON (6) | Status | M | EXP (6) | Status | M | ALL (10) | Status | M | NON (5) | Status | M | EXP (5) | Status | M |
| TNBC | 6/11 (55%) | − | 0.5 | 3/6 (50%) | | 0.75 | 3/5 (60%) | − | 0.5 | 6/9 (67%) | − | 0.5 | 4/5 (80%) | − | 0 | 2/4 (50%) | | 1 |
| Her2 | 7/12 (58%) | − | 1 | 4/6 (67%) | − | 1 | 3/6 (50%) | | 0.75 | 7/10 (70%) | − | 0.5 | 3/5 (60%) | + | 1 | 5/5 (100%) | − | 0.5 |
| TNBC | 7/12 (58%) | − | 0.5 | 4/6 (67%) | − | 0.25 | 3/6 (50%) | | 0.75 | 6/10 (60%) | + | 1 | 4/5 (80%) | + | 1 | 3/5 (60%) | − | 0.5 |
| TNBC | 7/12 (58%) | − | 1 | 3/6 (50%) | | 1 | 4/6 (67%) | − | 1 | 6/10 (60%) | − | 1 | 3/5 (60%) | + | 1 | 4/5 (80%) | − | 0.75 |
| TNBC | 7/12 (58%) | + | 1 | 4/6 (67%) | + | 1 | 3/6 (50%) | | 0.75 | 5/9 (56%) | + | 1 | 2/4 (50%) | | 2 | 3/5 (60%) | + | 0.75 |
| TNBC | 7/12 (58%) | − | 1 | 4/6 (67%) | + | 1 | 5/6 (83%) | − | 0.5 | 5/10 (50%) | | 1 | 4/5 (80%) | + | 1.5 | 4/5 (80%) | − | 0.75 |
| TNBC | 9/12 (75%) | + | 1 | 4/6 (67%) | + | 1 | 5/6 (83%) | + | 1 | 6/10 (60%) | + | 2 | 4/5 (80%) | + | 2.5 | 3/5 (60%) | − | 0.5 |
| TNBC | 8/12 (67%) | + | 1 | 3/6 (50%) | | 0.5 | 5/6 (83%) | + | 1 | 6/10 (60%) | − | 1 | 3/5 (60%) | − | 0.5 | 4/5 (80%) | + | 1.5 |
| TNBC | 11/12 (83%) | − | 0.5 | 6/6 (100%) | − | 0.5 | 5/6 (83%) | − | 0.5 | 6/10 (60%) | − | 0.5 | 3/5 (60%) | + | 1 | 4/5 (80%) | − | 0.5 |
| Lum | 8/12 (75%) | + | 1 | 4/6 (67%) | + | 1 | 4/6 (67%) | + | 1 | 5/10 (50%) | | 1.5 | 4/5 (80%) | + | 3.5 | 4/5 (80%) | − | 0.5 |
| AGREEMENT: No | 0 | | | 3/10; 30% | | | 3/10; 30% | | | 2/10; 20% | | | 1/10; 10% | | | 1/10; 10% | | |
| AGREEMENT: Low | 6/10; 60% | | | 0 | | | 1/10; 10% | | | 6/10; 60% | | | 4/10; 40% | | | 3/10; 30% | | |
| AGREEMENT: High | 4/10; 40% | | | 7/10; 70% | | | 6/10; 60% | | | 2/10; 20% | | | 5/10; 50% | | | 6/10; 60% | | |
PD-L1 status: negative (−) or positive (+); (M) Median percentage score; (ALL) refers to all scorers, subdivided into (NON) non-expert participants in PD-L1 scoring and (EXP) expert pathologists. Cases in green represent challenging cases in the first round, those in orange represent challenging cases in both rounds and those in grey represent challenging and no-agreement cases in the second round only. The yellow cases represent contradictory agreement of non-experts relative to the experts’ agreement, and the blue cases represent cases with no agreement (50%). Agreement levels are categorised as no agreement (50%), low agreement (>50% to <67%) and high agreement (≥67%).
Table 6. Inter- and intra-observer agreement for all scoring pathologists.
| | Consensus 1 | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 | P12 |
| Consensus 2 | 0.912 | 0.851 | 0.78 | – | 0.823 | 0.629 | 0.715 | – | 0.737 | 0.747 | 0.819 | 0.865 | 0.884 |
| P1 | 0.892 | 0.832 | 0.766 | – | 0.679 | 0.724 | 0.787 | – | 0.758 | 0.649 | 0.719 | 0.762 | 0.733 |
| P2 | 0.765 | 0.747 | 0.722 | – | 0.735 | 0.551 | 0.695 | – | 0.575 | 0.562 | 0.729 | 0.774 | 0.745 |
| P3 | 0.786 | 0.768 | 0.607 | – | – | – | – | – | – | – | – | – | – |
| P4 | 0.823 | 0.762 | 0.654 | 0.641 | 0.956 | 0.587 | 0.669 | – | 0.631 | 0.606 | 0.637 | 0.728 | 0.699 |
| P5 | 0.74 | 0.723 | 0.64 | 0.696 | 0.607 | 0.667 | 0.682 | – | 0.801 | 0.498 | 0.554 | 0.515 | 0.539 |
| P6 | 0.798 | 0.737 | 0.741 | 0.632 | 0.617 | 0.713 | 0.732 | – | 0.632 | 0.635 | 0.688 | 0.643 | 0.656 |
| P7 | 0.718 | 0.659 | 0.579 | 0.717 | 0.669 | 0.54 | 0.634 | – | – | – | – | – | – |
| P8 | 0.678 | 0.658 | 0.533 | 0.543 | 0.613 | 0.678 | 0.577 | 0.475 | 0.94 | 0.552 | 0.574 | 0.614 | 0.661 |
| P9 | 0.891 | 0.871 | 0.745 | 0.764 | 0.715 | 0.72 | 0.733 | 0.744 | 0.658 | 0.772 | 0.784 | 0.64 | 0.826 |
| P10 | 0.822 | 0.76 | 0.634 | 0.831 | 0.687 | 0.649 | 0.704 | 0.711 | 0.597 | 0.801 | 0.862 | 0.762 | 0.88 |
| P11 | 0.87 | 0.808 | 0.724 | 0.649 | 0.738 | 0.743 | 0.801 | 0.586 | 0.638 | 0.763 | 0.737 | 0.778 | 0.784 |
| P12 | 0.843 | 0.735 | 0.646 | 0.758 | 0.708 | 0.643 | 0.7 | 0.687 | 0.563 | 0.732 | 0.749 | 0.732 | 0.906 |
Figures in italics below the diagonal represent the values of the first round, while those in bold above the diagonal represent the second round. The diagonal cells (in bold red font) represent the intra-observer agreement for each participant across the two rounds. Cell shading colours reflect the level of agreement as follows: light green for almost perfect agreement (0.81–1), orange for substantial agreement (0.61–0.8) and light red for moderate agreement (0.41–0.6).
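The intra-observer (diagonal) values above are Cohen’s kappa statistics comparing each pathologist’s first-round calls with their own second-round calls. A minimal sketch of a two-rating Cohen’s kappa (illustrative only, not the study’s actual analysis code; the list-of-labels input format is an assumption for the example):

```python
def cohen_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two sets of categorical calls on the same cases."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)

    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Chance agreement from each rating set's marginal frequencies
    categories = set(ratings_a) | set(ratings_b)
    p_e = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
              for c in categories)

    if p_e == 1:  # degenerate case: both sets use one identical category
        return 1.0
    return (p_o - p_e) / (1 - p_e)
```

For intra-observer agreement this would be called as, e.g., `cohen_kappa(round1_calls, round2_calls)` with two lists of "pos"/"neg" strings for the same 100 cases.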
Table 7. Intraclass correlation coefficient for all groups of pathologists.
| | ALL-1 | EXP-1 | NON-1 | ALL-2 | EXP-2 | NON-2 |
| ALL-1 | – | 0.907 | 0.931 | 0.906 | 0.768 | 0.932 |
| EXP-1 | 0.915 | – | 0.772 | 0.974 | 0.913 | 0.919 |
| NON-1 | 0.933 | 0.788 | – | 0.781 | 0.619 | 0.876 |
| ALL-2 | 0.919 | 0.974 | 0.804 | – | 0.911 | 0.946 |
| EXP-2 | 0.798 | 0.923 | 0.655 | 0.919 | – | 0.792 |
| NON-2 | 0.936 | 0.920 | 0.891 | 0.949 | 0.808 | – |
Figures in bold/italics below the diagonal represent values of ICC calculated according to the consistency of assessment, while values above the diagonal represent ICC calculated according to absolute agreement. (ALL): All scorers’ median percentage; (EXP): Experienced scorers’ median percentage; (NON): Non-experienced scorers’ median percentage. Cell shading colours reflect the level of reliability as follows: red for moderate reliability (0.5–0.75); orange for good reliability (0.75–0.9); green for excellent reliability (greater than 0.9).
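The distinction between the two ICC variants in the footnote is that the consistency form ignores systematic offsets between raters, while the absolute-agreement form penalises them. A minimal two-way, single-measure ICC sketch (an illustration only, not the study’s actual analysis code; it assumes a complete subjects-by-raters score matrix):

```python
def icc_single(scores):
    """ICC(consistency) and ICC(absolute agreement), single-measure forms,
    for an n-subjects x k-raters score matrix with no missing cells."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]

    # Two-way ANOVA mean squares: subjects (rows), raters (columns), error
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))

    consistency = (msr - mse) / (msr + (k - 1) * mse)
    absolute = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    return consistency, absolute
```

A rater who scores every case a fixed amount higher than a colleague still reaches a consistency ICC of 1.0, while the absolute-agreement ICC drops; for the same pair of score sets, the consistency ICC is never smaller than the absolute-agreement ICC.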
Table 8. Intra-observer concordance based on pathologists’ experience in scoring PD-L1.
| Rater | Position | Experience as a Breast Reporting Pathologist (years) | Experience in SP142 PD-L1 Reporting (years) | Previous Training in SP142 PD-L1 Reporting (Provider) | Intra-Observer Agreement (Cohen’s Kappa/Level of Agreement) | Intra-Observer Reliability (ICC/Level of Reliability) |
| P1 | Trainee Pathologist | 12 | 0 | Roche | 0.832/Almost perfect | 0.826/Good |
| P2 | | 12 | 0 | Roche | 0.722/Substantial | 0.525/Moderate |
| P3 | Consultant Scientist | N/A | 0 | Roche | N/A/N/A | N/A/N/A |
| P4 | Consultant Pathologist | 20 | 0 | N/S | 0.956/Almost perfect | 0.852/Good |
| P5 | | 21 | 0 | Roche | 0.667/Substantial | N/A/N/A |
| P6 | | 25 | 0 | None | 0.732/Substantial | 0.770/Good |
| P7 | | 25 | 3 | Roche | N/A/N/A | N/A/N/A |
| P8 | | 29 | 1 | Roche | 0.94/Almost perfect | 0.935/Excellent |
| P9 | | 10 | 2 | Roche | 0.772/Substantial | 0.933/Excellent |
| P10 | | 25 | 2 | Roche | 0.862/Almost perfect | 0.920/Excellent |
| P11 | | 30 | 3 | Local | 0.778/Substantial | 0.756/Good |
| P12 | | 22 | 2 | Roche | 0.906/Almost perfect | 0.929/Excellent |
(N/S) Not stated; (N/A) Not applicable.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
