# Health Economic Decision Tree Models of Diagnostics for Dummies: A Pictorial Primer


## Abstract


## 1. Introduction

## 2. Two Important Measures of Diagnostic Test Accuracy for Decision Tree Modelling

## 3. Decision Tree Model Structures for Diagnostics

## 4. Putting Diagnostic Test Accuracy Data and Decision Tree Structures Together: A Worked Example

### 4.1. Identifying DTA Data for the Model

### 4.2. Parameterizing the Decision Tree Model

### 4.3. Linking the Information in the 2 × 2 Table to the Decision Tree

## 5. Sequential Diagnostics (Modelling Diagnostics as Triage or Add-on)

## 6. Contextualizing the Information in This Paper

## 7. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## References


**Figure 1.** Decision-analytic model structure of disease-based and test-based approaches to modelling diagnostics. D+ = disease positive; D− = disease negative; T+ = test positive; T− = test negative; TP = true positive; FP = false positive; TN = true negative; FN = false negative.

**Figure 2.** Values derived in Table 3 and Table 5 are used to parameterize the disease-based and test-based decision tree structures. D+ = disease positive; D− = disease negative; T+ = test positive; T− = test negative; sens = sensitivity; sens’ = complement of sensitivity, which is 1 − sens; spec = specificity; spec’ = complement of specificity, which is 1 − spec; PPV = positive predictive value; PPV’ = complement of positive predictive value, which is 1 − PPV; NPV = negative predictive value; NPV’ = complement of negative predictive value, which is 1 − NPV; all T+ = all test positives; all T+’ = complement of all test positives, which is 1 − all T+; TP = true positive; FP = false positive; TN = true negative; FN = false negative. The colors in all tables are used to identify types of data, and the corresponding data and colors are shown in all figures.

**Figure 3.** Inserting the probabilities into the model: disease-based and test-based approaches. D+ = disease positive; D− = disease negative; T+ = test positive; T− = test negative; sens = sensitivity; sens’ = complement of sensitivity, which is 1 − sens; spec = specificity; spec’ = complement of specificity, which is 1 − spec; PPV = positive predictive value; PPV’ = complement of positive predictive value, which is 1 − PPV; NPV = negative predictive value; NPV’ = complement of negative predictive value, which is 1 − NPV; all T+ = all test positives; all T+’ = complement of all test positives, which is 1 − all T+; TP = true positive; FP = false positive; TN = true negative; FN = false negative. The colors in all tables are used to identify types of data, and the corresponding data and colors are shown in all figures.

**Figure 4.** Decision tree populated with probabilities, with the number of patients calculated for a cohort of 100 to correspond to the 2 × 2 table, and outcomes shown for disease-based and test-based approaches. The current and new diagnostics are modelled in parallel. D+ = disease positive; D− = disease negative; T+ = test positive; T− = test negative; sens = sensitivity; sens’ = complement of sensitivity, which is 1 − sens; spec = specificity; spec’ = complement of specificity, which is 1 − spec; PPV = positive predictive value; PPV’ = complement of positive predictive value, which is 1 − PPV; NPV = negative predictive value; NPV’ = complement of negative predictive value, which is 1 − NPV; all T+ = all test positives; all T+’ = complement of all test positives, which is 1 − all T+; TP = true positive; FP = false positive; TN = true negative; FN = false negative; n = number (cohort). The colors in all tables are used to identify types of data, and the corresponding data and colors are shown in all figures.

**Figure 5.** Decision analysis: disease-based and test-based approaches for sequential diagnostics. D+ = disease positive; D− = disease negative; T+ = test positive; T− = test negative; sens = sensitivity; sens’ = complement of sensitivity, which is 1 − sens; spec = specificity; spec’ = complement of specificity, which is 1 − spec; PPV = positive predictive value; PPV’ = complement of positive predictive value, which is 1 − PPV; NPV = negative predictive value; NPV’ = complement of negative predictive value, which is 1 − NPV; all T+ = all test positives; all T+’ = complement of all test positives, which is 1 − all T+; TP = true positive; FP = false positive; TN = true negative; FN = false negative; n = number (cohort). The colors in all tables are used to identify types of data, and the corresponding data and colors are shown in all figures.

| Data | Value | Comment |
|---|---|---|
| Prevalence | 0.400 | reported as 40% |
| Sensitivity | 0.840 | reported as 84% |
| Specificity | 0.700 | reported as 70% |

| Step | Objective | Instruction | Cell Notation | Formula |
|---|---|---|---|---|
| 1 | Define cohort | Insert a cohort of 100 ^{a} | (A+B) + (C+D) | na |
| 2 | Find the total D+ | Multiply the prevalence by the cohort to find the number of D+ patients | (A+C) | = 0.400 × 100 = 40 |
| 3 | Find the total D− | Subtract the D+ from the total cohort to find D− | (B+D) | = 100 − 40 = 60 |
| 4 | Find D+T+ (TP) | Multiply the sensitivity of the test by the total D+ patients to get the D+T+ | A | = 0.840 × 40 = 33.6 (≈34) |
| 5 | Find D+T− (FN) | Subtract the D+T+ from the total D+ | C | = 40 − 33.6 = 6.4 (≈6) |
| 6 | Find D−T− (TN) | Multiply the specificity of the test by the total D− patients to get the D−T− | D | = 0.700 × 60 = 42 |
| 7 | Find D−T+ (FP) | Subtract the D−T− from the total D− | B | = 60 − 42 = 18 |
| 8 | Find total T+ | Add D+T+ and D−T+ | (A+B) | = 33.6 + 18 = 51.6 (≈52) |
| 9 | Find total T− | Add D+T− and D−T− | (C+D) | = 6.4 + 42 = 48.4 (≈48) |

^{a} A cohort of one hundred has been used so that the numbers in the 2 × 2 table and the decision trees are easily recognized; counts are rounded to whole patients for display, but subsequent calculations use the unrounded values. D+ = disease positive; D− = disease negative; T+ = test positive; T− = test negative; TP = true positive; FP = false positive; TN = true negative; FN = false negative; na = not applicable. Red letters are cell notation; ^{1–9} refer to the superscript numbers in Table 3. The colors in all tables are used to identify types of data, and the corresponding data and colors are shown in all figures.
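The steps above can be sketched in Python. This is a minimal illustration under our own naming (the paper prescribes the arithmetic, not any code); counts are kept unrounded, which is how the paper's derived values are reproduced exactly.

```python
def populate_2x2(prevalence, sensitivity, specificity, cohort=100):
    """Steps 1-9: derive the 2 x 2 table counts from prevalence,
    sensitivity and specificity. Counts are left unrounded; the paper
    rounds them to whole patients for display only."""
    d_pos = prevalence * cohort      # step 2: total D+ (A+C)
    d_neg = cohort - d_pos           # step 3: total D- (B+D)
    tp = sensitivity * d_pos         # step 4: D+T+ (A), true positives
    fn = d_pos - tp                  # step 5: D+T- (C), false negatives
    tn = specificity * d_neg         # step 6: D-T- (D), true negatives
    fp = d_neg - tn                  # step 7: D-T+ (B), false positives
    return {"TP": tp, "FP": fp, "FN": fn, "TN": tn,
            "total_T+": tp + fp,     # step 8
            "total_T-": fn + tn}     # step 9

# Worked example inputs: prevalence 40%, sensitivity 84%, specificity 70%
table = populate_2x2(prevalence=0.400, sensitivity=0.840, specificity=0.700)
```

With these inputs, `table["TP"]` is 33.6 (displayed as 34) and `table["total_T+"]` is 51.6 (displayed as 52).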

| | D+ | D− | Total |
|---|---|---|---|
| T+ | 34 ^{4} (A) TP | 18 ^{7} (B) FP | 52 ^{8} (A+B) total T+ |
| T− | 6 ^{5} (C) FN | 42 ^{6} (D) TN | 48 ^{9} (C+D) total T− |
| Total | 40 ^{2} (A+C) total D+ | 60 ^{3} (B+D) total D− | 100 ^{1} (A+B) + (C+D) cohort |

| Step | Objective | Instruction | Formula |
|---|---|---|---|
| 10 | Check the sensitivity using the 2 × 2 table (checking your work) | Divide A by (A+C) | = 33.6/40 = 0.840 |
| 11 | Check the specificity using the 2 × 2 table (checking your work) | Divide D by (B+D) | = 42/60 = 0.700 |

**Table 5.** Using the 2 × 2 table to calculate positive predictive value (PPV), negative predictive value (NPV) and the proportions of the cohort who test positive (T+) and test negative (T−), respectively.

| Step | Objective | Cell | Instruction | Formula |
|---|---|---|---|---|
| 12 | Calculate PPV | N/A | Divide A by (A+B) | = 33.6/51.6 = 0.651 |
| 13 | Calculate NPV | N/A | Divide D by (C+D) | = 42/48.4 = 0.868 |
| 14 | Calculate proportion of cohort who T+ | N/A | Divide total T+ by cohort | = 51.6/100 = 0.516 |
| 15 | Calculate proportion of cohort who T− | N/A | Divide total T− by cohort | = 48.4/100 = 0.484 |
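The checking steps (10–11) and the predictive-value steps above can be sketched together in Python; the function name is ours, and the unrounded counts are used so the reported values (PPV = 0.651, NPV = 0.868, proportion T+ = 0.516) are recovered exactly.

```python
def accuracy_measures(tp, fp, fn, tn):
    """Recover accuracy and predictive values from (unrounded) 2 x 2 counts."""
    cohort = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),       # step 10: A / (A+C)
        "specificity": tn / (fp + tn),       # step 11: D / (B+D)
        "PPV": tp / (tp + fp),               # step 12: A / (A+B)
        "NPV": tn / (fn + tn),               # step 13: D / (C+D)
        "prop_T+": (tp + fp) / cohort,       # step 14: total T+ / cohort
        "prop_T-": (fn + tn) / cohort,       # complementary T- proportion
    }

# Unrounded counts from the worked example (displayed as 34, 18, 6, 42)
m = accuracy_measures(tp=33.6, fp=18.0, fn=6.4, tn=42.0)
# round(m["PPV"], 3) -> 0.651; round(m["NPV"], 3) -> 0.868
```

Note that using the rounded display counts instead (34/52) would give a slightly different PPV of 0.654, which is why the unrounded values matter here.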

| Data | Value | Source | Comment |
|---|---|---|---|
| Sensitivity | 0.910 | Meta-analysis | reported as 91% |
| Specificity | 0.910 | Meta-analysis | reported as 91% |

**Table 7.** Calculation of the 2 × 2 table for diagnostic 2, based on diagnostic 1 being performed first, for sequential diagnostics.

| Step | Instruction | Value | Instruction | Value |
|---|---|---|---|---|
| 1 | Start by taking the data from the diagnostic 1 T+ row and inserting them into the 2 × 2 table for diagnostic 2 | | | |
| 2 | So the D+T+ for diagnostic 1 | 34 | becomes the D+ for diagnostic 2 | 34 |
| | The D−T+ for diagnostic 1 | 18 | becomes the D− for diagnostic 2 | 18 |
| 3 | The total T+ for diagnostic 1 | 52 | becomes the cohort for diagnostic 2 | 52 |
| | Based on the total D+ (A+C) and the cohort (A+B) + (C+D), calculate the new prevalence (unrounded: 33.6/51.6) | | | |
| | New prevalence | | | 0.651 ^{1} |

^{1} Note that the prevalence is now higher than before, because diagnostic 1 has “filtered out” most of those without disease (a greater share of this new cohort is D+ than before). Note that the cohort is now 52 and the new prevalence is based on this new cohort (unrounded, 33.6/51.6 = 0.651, which is 65.1%). The colors in all tables are used to identify types of data, and the corresponding data and colors are shown in all figures.
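The triage chaining in Table 7 can be sketched in Python. This is a minimal illustration with our own names; only patients testing positive on diagnostic 1 go on to diagnostic 2, so diagnostic 1's TP and FP become diagnostic 2's D+ and D−, and the prevalence facing diagnostic 2 rises.

```python
def sequential_2x2(first, sensitivity2, specificity2):
    """Chain diagnostic 2 after diagnostic 1's test-positive arm
    (Table 7, steps 1-3), then populate diagnostic 2's 2 x 2 table."""
    d_pos = first["TP"]              # step 2: diag 1 D+T+ becomes diag 2 D+
    d_neg = first["FP"]              # step 2: diag 1 D-T+ becomes diag 2 D-
    cohort = d_pos + d_neg           # step 3: diag 1 total T+ becomes cohort
    prevalence2 = d_pos / cohort     # new, higher prevalence
    tp = sensitivity2 * d_pos
    fn = d_pos - tp
    tn = specificity2 * d_neg
    fp = d_neg - tn
    return prevalence2, {"TP": tp, "FP": fp, "FN": fn, "TN": tn}

# Unrounded diagnostic 1 counts (displayed as 34 and 18) feed diagnostic 2
prev2, t2 = sequential_2x2({"TP": 33.6, "FP": 18.0},
                           sensitivity2=0.910, specificity2=0.910)
# round(prev2, 3) -> 0.651
```

Rounding `t2` to whole patients reproduces the diagnostic 2 table below (31, 2, 3, 16).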

| | D+ | D− | Total |
|---|---|---|---|
| T+ | 31 ^{4} (A) TP | 2 ^{7} (B) FP | 32 ^{8} (A+B) total T+ |
| T− | 3 ^{5} (C) FN | 16 ^{6} (D) TN | 19 ^{9} (C+D) total T− |
| Total | 34 ^{1} (A+C) total D+ | 18 ^{2} (B+D) total D− | 52 ^{3} (A+B) + (C+D) cohort |

**Table 9.** Using the 2 × 2 table to calculate PPV, NPV and the proportions of the cohort who test positive (T+) and test negative (T−), respectively, for diagnostic 2.

| Step | Objective | Cell | Instruction |
|---|---|---|---|
| 12 | Calculate PPV | N/A | Divide A by (A+B) |
| 13 | Calculate NPV | N/A | Divide D by (C+D) |
| 14 | Calculate proportion of cohort who T+ | N/A | Divide total T+ by cohort |
| 15 | Calculate proportion of cohort who T− | N/A | Divide total T− by cohort |
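Table 9 gives the instructions without the resulting values. As a sketch, the steps can be applied to the unrounded diagnostic 2 counts (our derivation from sensitivity = specificity = 0.910 applied to D+ = 33.6 and D− = 18.0; these figures are not reported in the paper):

```python
# Unrounded diagnostic 2 counts (our derivation; displayed as 31, 2, 3, 16)
tp2, fn2 = 30.576, 3.024   # 0.910 * 33.6, and 33.6 - 30.576
tn2, fp2 = 16.38, 1.62     # 0.910 * 18.0, and 18.0 - 16.38
cohort2 = tp2 + fp2 + fn2 + tn2    # 51.6 patients enter diagnostic 2

ppv2 = tp2 / (tp2 + fp2)           # step 12: A / (A+B)
npv2 = tn2 / (fn2 + tn2)           # step 13: D / (C+D)
prop_pos2 = (tp2 + fp2) / cohort2  # step 14: total T+ / cohort
prop_neg2 = (fn2 + tn2) / cohort2  # step 15: total T- / cohort
```

Under these assumptions, `round(ppv2, 3)` gives 0.950 and `round(npv2, 3)` gives 0.844: the second test's positive predictive value is much higher than diagnostic 1's 0.651, illustrating the value of triage.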

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Rautenberg, T.; Gerritsen, A.; Downes, M.
Health Economic Decision Tree Models of Diagnostics for Dummies: A Pictorial Primer. *Diagnostics* **2020**, *10*, 158.
https://doi.org/10.3390/diagnostics10030158
