Artificial Intelligence Versus Human Dental Expertise in Diagnosing Periapical Pathosis on Periapical Radiographs: A Multicenter Study
Abstract
1. Introduction
2. Materials and Methods
2.1. Study Design and Reporting Framework
2.2. Study Setting and Data Sources
2.3. Sample Size Calculation
2.4. Participants and Eligibility Criteria
2.4.1. Inclusion Criteria
2.4.2. Exclusion Criteria
2.5. Index Test (ChatGPT Assessment)
2.6. Prompt Used for ChatGPT-5 Assessment
- Periapical radiolucency (normal vs. abnormal)
- Arch (upper or lower)
- Position (anterior or posterior)
- Tooth involved
- Lesion borders (ill-defined, well-defined corticated, well-defined non-corticated)
- Lesion shape (irregular, round, or oval)
- Lamina dura integrity (lost or intact)
- Root resorption (present or absent)
- Cortical expansion (present or absent)
- Tooth displacement (present or absent)
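The checklist above maps naturally onto a structured response format. The sketch below is a hypothetical Python schema for validating such a response; the field names and value sets are our own labels, not the study's exact output format, and the involved tooth is free text, so it is excluded from categorical validation.

```python
# Hypothetical schema mirroring the prompt checklist above.
# Field names and allowed values are illustrative labels, not the
# study's actual output format. "Tooth involved" is free text and
# is therefore not validated against a fixed category set.

FEATURE_SCHEMA = {
    "periapical_radiolucency": {"normal", "abnormal"},
    "arch": {"upper", "lower"},
    "position": {"anterior", "posterior"},
    "lesion_borders": {"ill-defined", "well-defined corticated",
                       "well-defined non-corticated"},
    "lesion_shape": {"irregular", "round", "oval"},
    "lamina_dura": {"lost", "intact"},
    "root_resorption": {"present", "absent"},
    "cortical_expansion": {"present", "absent"},
    "tooth_displacement": {"present", "absent"},
}

def validate_response(response: dict) -> list:
    """Return the fields whose values fall outside the allowed categories."""
    return [field for field, allowed in FEATURE_SCHEMA.items()
            if response.get(field) not in allowed]
```

A response missing any checklist field, or using an out-of-vocabulary value, would be flagged for re-prompting rather than scored.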
2.7. Reference Standard (Expert Consensus)
2.8. Outcome Assessment
- Sensitivity (true-positive rate)—proportion of abnormal radiographs correctly identified by ChatGPT-5.
- Specificity (true-negative rate)—proportion of normal radiographs correctly identified.
- Positive Predictive Value (precision)—proportion of AI-identified abnormalities that were true positives.
- F1 score—harmonic mean of sensitivity and precision.
- Balanced accuracy—average of sensitivity and specificity to correct for class imbalance.
- Overall accuracy—proportion of correct classifications.
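As a minimal illustration, all six metrics above follow from the four confusion-matrix counts. The sketch below is generic, assuming binary classification with "abnormal" as the positive class:

```python
# Minimal sketch of the diagnostic-accuracy metrics defined above,
# computed from the four confusion-matrix counts of a binary
# classifier ("abnormal" = positive class).

def diagnostic_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    sensitivity = tp / (tp + fn)              # true-positive rate
    specificity = tn / (tn + fp)              # true-negative rate
    precision = tp / (tp + fp)                # positive predictive value
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    balanced_accuracy = (sensitivity + specificity) / 2
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "precision": precision,
        "f1": f1,
        "balanced_accuracy": balanced_accuracy,
        "accuracy": accuracy,
    }
```

Note that balanced accuracy, unlike overall accuracy, is unaffected by the ratio of abnormal to normal radiographs in the sample.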
2.9. Statistical Analysis
3. Results
3.1. Study Sample and Image Characteristics
3.2. Descriptive Distribution of Radiographic Features
3.3. Diagnostic Accuracy Metrics
3.4. Agreement Between ChatGPT-5 and Expert Consensus
3.5. Comparative Tests of Radiographic Parameters
3.6. Predictors of Correct Classification (Regression Analysis)
4. Discussion
4.1. Limitations
4.2. Practical and Clinical Implications
4.3. Future Directions
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
References
- Tibúrcio-Machado, C.S.; Michelon, C.; Zanatta, F.B.; Gomes, M.S.; Marin, J.A.; Bier, C.A. The global prevalence of apical periodontitis: A systematic review and meta-analysis. Int. Endod. J. 2021, 54, 712–735.
- Kruse, C.; Spin-Neto, R.; Reibel, J.; Wenzel, A.; Kirkevang, L.L. Diagnostic validity of periapical radiography and CBCT for assessing periapical lesions that persist after endodontic surgery. Dentomaxillofac. Radiol. 2017, 46, 20170210.
- Patel, S.; Brown, J.; Pimentel, T.; Kelly, R.D.; Abella, F.; Durack, C. Cone beam computed tomography in Endodontics—A review of the literature. Int. Endod. J. 2019, 52, 1138–1152.
- Essam, O.; Umerji, S.; Blundell, K. Endodontic assessment, complexity, diagnosis and treatment planning. Br. Dent. J. 2025, 238, 441–447.
- DeVine, T.M.; Paisner, N.L.; Dayo, A.F. Prevention, Diagnostic Challenges, and Management of Endodontic Perforations: A Narrative Review. Complications 2025, 2, 17.
- Patel, S.; Durack, C.; Abella, F.; Shemesh, H.; Roig, M.; Lemberg, K. Cone beam computed tomography in Endodontics—A review. Int. Endod. J. 2015, 48, 3–15.
- Azaryan, E.; Karbasi, S.; Saharkhiz, M.; Hanafi-Bojd, M.Y.; Zarban, A.; Emadian Razavi, F.; Naseri, M. Effect of HM-Exos on the migration and inflammatory response of LPS-exposed dental pulp stem cells. BMC Oral Health 2023, 23, 95.
- AAE and AAOMR Joint Position Statement: Use of Cone Beam Computed Tomography in Endodontics 2015 Update. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2015, 120, 508–512.
- Pul, U.; Schwendicke, F. Artificial intelligence for detecting periapical radiolucencies: A systematic review and meta-analysis. J. Dent. 2024, 147, 105104.
- Szabó, V.; Orhan, K.; Dobó-Nagy, C.; Veres, D.S.; Manulis, D.; Ezhov, M.; Sanders, A.; Szabó, B.T. Deep Learning-Based Periapical Lesion Detection on Panoramic Radiographs. Diagnostics 2025, 15, 510.
- Tuzoff, D.V.; Tuzova, L.N.; Bornstein, M.M.; Krasnov, A.S.; Kharchenko, M.A.; Nikolenko, S.I.; Sveshnikov, M.M.; Bednenko, G.B. Tooth detection and numbering in panoramic radiographs using convolutional neural networks. Dentomaxillofac. Radiol. 2019, 48, 20180051.
- Cantu, A.G.; Gehrung, S.; Krois, J.; Chaurasia, A.; Rossi, J.G.; Gaudin, R.; Elhennawy, K.; Schwendicke, F. Detecting caries lesions of different radiographic extension on bitewings using deep learning. J. Dent. 2020, 100, 103425.
- Hassanein, F.E.A.; Ahmed, Y.; Maher, S.; Barbary, A.E.; Abou-Bakr, A. Prompt-dependent performance of multimodal AI model in oral diagnosis: A comprehensive analysis of accuracy, narrative quality, calibration, and latency versus human experts. Sci. Rep. 2025, 15, 37932.
- Hassanein, F.E.A.; Hussein, R.R.; Almalahy, H.G.; Sarhan, S.; Ahmed, Y.; Abou-Bakr, A. Vision-based diagnostic gain of ChatGPT-5 and gemini 2.5 pro compared with human experts in oral lesion assessment. Sci. Rep. 2025, 15, 43279.
- Abou-Bakr, A.; El Barbary, A.; Hassanein, F.E.A. ChatGPT-5 vs oral medicine experts for rank-based differential diagnosis of oral lesions: A prospective, biopsy-validated comparison. Odontology 2025, online ahead of print.
- Hassanein, F.E.A.; El-Guindy, J.; Ahmed, Y.; Abou-Bakr, A. Evaluating Multimodal Large Language Models for Clinical Diagnosis of Oral Lesions: A Biomedical Informatics Perspective. In Proceedings of the 2025 Twelfth International Conference on Intelligent Computing and Information Systems (ICICIS), Cairo, Egypt, 25–27 November 2025; pp. 552–559.
- Robaian, A.; Hassanein, F.E.A.; Hassan, M.T.; Alqahtani, A.S.; Abou-Bakr, A. A Multimodal Large Language Model Framework for Clinical Subtyping and Malignant Transformation Risk Prediction in Oral Lichen Planus: A Paired Comparison With Expert Clinicians. Int. Dent. J. 2026, 76, 109357.
- Hassanein, F.E.A.; Hussein, R.R.; Ahmed, Y.; El-Guindy, J.; Ahmed, D.E.; Abou-Bakr, A. Calibration of AI large language models with human subject matter experts for grading of clinical short-answer responses in dental education. BMC Oral Health 2026, 26, 286.
- Orhan, K.; Bayrakdar, I.S.; Ezhov, M.; Kravtsov, A.; Özyürek, T. Evaluation of artificial intelligence for detecting periapical pathosis on cone-beam computed tomography scans. Int. Endod. J. 2020, 53, 680–689.
- Llorente de Pedro, M.; Suárez, A.; Algar, J.; Díaz-Flores García, V.; Andreu-Vázquez, C.; Freire, Y. Assessing ChatGPT’s Reliability in Endodontics: Implications for AI-Enhanced Clinical Learning. Appl. Sci. 2025, 15, 5231.
- Ali, M.; Irfan, M.; Ali, T.; Wei, C.R.; Akilimali, A. Artificial intelligence in dental radiology: A narrative review. Ann. Med. Surg. 2025, 87, 2212–2217.
- Suh, P.S.; Shim, W.H.; Suh, C.H.; Heo, H.; Park, C.R.; Eom, H.J.; Park, K.J.; Choe, J.; Kim, P.H.; Park, H.J.; et al. Comparing Diagnostic Accuracy of Radiologists versus GPT-4V and Gemini Pro Vision Using Image Inputs from Diagnosis Please Cases. Radiology 2024, 312, e240273.
- Zhang, Y.; Pan, Y.; Zhong, T.; Dong, P.; Xie, K.; Liu, Y.; Jiang, H.; Wu, Z.; Liu, Z.; Zhao, W.; et al. Potential of multimodal large language models for data mining of medical images and free-text reports. Meta-Radiology 2024, 2, 100103.
- Sounderajah, V.; Ashrafian, H.; Aggarwal, R.; De Fauw, J.; Denniston, A.K.; Greaves, F.; Karthikesalingam, A.; King, D.; Liu, X.; Markar, S.R.; et al. Developing specific reporting guidelines for diagnostic accuracy studies assessing AI interventions: The STARD-AI Steering Group. Nat. Med. 2020, 26, 807–808.
- Buderer, N.M. Statistical methodology: I. Incorporating the prevalence of disease into the sample size calculation for sensitivity and specificity. Acad. Emerg. Med. 1996, 3, 895–900.
- Ekert, T.; Krois, J.; Meinhold, L.; Elhennawy, K.; Emara, R.; Golla, T.; Schwendicke, F. Deep Learning for the Radiographic Detection of Apical Lesions. J. Endod. 2019, 45, 917–922.e15.
- Landis, J.R.; Koch, G.G. The measurement of observer agreement for categorical data. Biometrics 1977, 33, 159–174.
- Bhayana, R.; Krishna, S.; Bleakney, R.R. Performance of ChatGPT on a Radiology Board-style Examination: Insights into Current Strengths and Limitations. Radiology 2023, 307, e230582.
- Qutieshat, A.; Al Rusheidi, A.; Al Ghammari, S.; Alarabi, A.; Salem, A.; Zelihic, M. Comparative analysis of diagnostic accuracy in endodontic assessments: Dental students vs. artificial intelligence. Diagnosis 2024, 11, 259–265.
- Gokdeniz, S.T.; Kamburoğlu, K. Artificial intelligence in dentomaxillofacial radiology. World J. Radiol. 2022, 14, 55–59.
- Akinci D’Antonoli, T.; Stanzione, A.; Bluethgen, C.; Vernuccio, F.; Ugga, L.; Klontzas, M.E.; Cuocolo, R.; Cannella, R.; Koçak, B. Large language models in radiology: Fundamentals, applications, ethical considerations, risks, and future directions. Diagn. Interv. Radiol. 2024, 30, 80–90.
- Lacaita, P.G.; Galijasevic, M.; Swoboda, M.; Gruber, L.; Scharll, Y.; Barbieri, F.; Widmann, G.; Feuchtner, G.M. The Accuracy of ChatGPT-4o in Interpreting Chest and Abdominal X-Ray Images. J. Pers. Med. 2025, 15, 194.
- Arruzza, E.S.; Evangelista, C.M.; Chau, M. The performance of ChatGPT-4.0o in medical imaging evaluation: A cross-sectional study. J. Educ. Eval. Health Prof. 2024, 21, 29.
- Endres, M.G.; Hillen, F.; Salloumis, M.; Sedaghat, A.R.; Niehues, S.M.; Quatela, O.; Hanken, H.; Smeets, R.; Beck-Broichsitter, B.; Rendenbach, C. Development of a deep learning algorithm for periapical disease detection in dental radiographs. Diagnostics 2020, 10, 430.
- Krois, J.; Garcia Cantu, A.; Chaurasia, A.; Patil, R.; Chaudhari, P.K.; Gaudin, R.; Gehrung, S.; Schwendicke, F. Generalizability of deep learning models for dental image analysis. Sci. Rep. 2021, 11, 6102.
- Allihaibi, M.; Koller, G.; Mannocci, F. Diagnostic Accuracy of a Commercial AI-based Platform in Evaluating Endodontic Treatment Outcomes on Periapical Radiographs Using CBCT as the Reference Standard. J. Endod. 2025, 51, 898–908.e898.
- Ezhov, M.; Gusarev, M.; Golitsyna, M.; Yates, J.M.; Kushnerev, E.; Tamimi, D.; Aksoy, S.; Shumilov, E.; Sanders, A.; Orhan, K. Clinically applicable artificial intelligence system for dental diagnosis with CBCT. Sci. Rep. 2021, 11, 15006.
- Issa, J.; Jaber, M.; Rifai, I.; Mozdziak, P.; Kempisty, B.; Dyszkiewicz-Konwińska, M. Diagnostic Test Accuracy of Artificial Intelligence in Detecting Periapical Periodontitis on Two-Dimensional Radiographs: A Retrospective Study and Literature Review. Medicina 2023, 59, 768.
- Ren, Y.; Guo, Y.; He, Q.; Cheng, Z.; Huang, Q.; Yang, L. Exploring whether ChatGPT-4 with image analysis capabilities can diagnose osteosarcoma from X-ray images. Exp. Hematol. Oncol. 2024, 13, 71.
- Suárez, A.; Arena, S.; Herranz Calzada, A.; Castillo Varón, A.I.; Diaz-Flores García, V.; Freire, Y. Decoding wisdom: Evaluating ChatGPT’s accuracy and reproducibility in analyzing orthopantomographic images for third molar assessment. Comput. Struct. Biotechnol. J. 2025, 28, 141–147.
- Stephan, D.; Bertsch, A.; Burwinkel, M.; Vinayahalingam, S.; Al-Nawas, B.; Kämmerer, P.W.; Thiem, D.G.E. AI in Dental Radiology—Improving the Efficiency of Reporting With ChatGPT: Comparative Study. J. Med. Internet Res. 2024, 26, e60684.
- Mago, J.; Sharma, M. The Potential Usefulness of ChatGPT in Oral and Maxillofacial Radiology. Cureus 2023, 15, e42133.
- Schwendicke, F.; Samek, W.; Krois, J. Artificial Intelligence in Dentistry: Chances and Challenges. J. Dent. Res. 2020, 99, 769–774.




| Parameter | Category | Expert Consensus n (%) | ChatGPT n (%) |
|---|---|---|---|
| Arch | Upper | 58 (22.7%) | 130 (50.8%) |
|  | Lower | 70 (27.3%) | 94 (36.7%) |
| Position | Anterior | 17 (6.6%) | 30 (11.7%) |
|  | Posterior | 111 (43.4%) | 194 (75.8%) |
| Tooth Involved | First Molar | 49 (19.1%) | 72 (28.1%) |
|  | Second Molar | 17 (6.6%) | 16 (6.2%) |
|  | Second Premolar | 18 (7.0%) | 26 (10.2%) |
|  | Lateral Incisor | 9 (3.5%) | 28 (10.9%) |
|  | Central Incisors | 14 (5.5%) | 14 (5.5%) |
| Borders | Ill-defined | 103 (40.2%) | 168 (65.6%) |
|  | Well-defined, corticated | 2 (0.8%) | 24 (9.4%) |
|  | Well-defined, non-corticated | 23 (9.0%) | 32 (12.5%) |
| Shape | Irregular | 109 (42.6%) | 187 (73.0%) |
|  | Round | 12 (4.7%) | 26 (10.2%) |
|  | Oval | 7 (2.7%) | 9 (3.5%) |
| Lamina Dura | Lost | 254 (99.2%) | 240 (93.8%) |
|  | Intact | 2 (0.8%) | 16 (6.2%) |
| Root Resorption | Present | 106 (41.4%) | 68 (26.6%) |
|  | Absent | 150 (58.6%) | 188 (73.4%) |
| Cortical Expansion | Present | 42 (16.4%) | 58 (22.7%) |
|  | Absent | 214 (83.6%) | 198 (77.3%) |
| Tooth Displacement | Present | 36 (14.1%) | 56 (21.9%) |
|  | Absent | 220 (85.9%) | 200 (78.1%) |
| Metric | Value |
|---|---|
| True Positives (TP) | 112 |
| True Negatives (TN) | 16 |
| False Positives (FP) | 112 |
| False Negatives (FN) | 16 |
| Accuracy | 50.0% |
| Sensitivity (Recall) | 87.5% |
| Specificity | 12.5% |
| Precision (PPV) | 50.0% |
| F1 Score | 63.6% |
| Balanced Accuracy | 50.0% |
| Parameter | Observed Agreement (%) | Cohen’s κ | Strength of Agreement |
|---|---|---|---|
| Detection (Normal/Abnormal) | 50.0 | 0.000 | Poor |
| Arch | 92.9 | 0.857 | Almost perfect |
| Position | 90.2 | 0.589 | Moderate |
| Tooth involved | 56.2 | 0.464 | Moderate |
| Borders | 52.7 | 0.127 | Slight |
| Shape | 76.8 | 0.359 | Fair |
| Lamina Dura | 97.3 | 0.000 | None |
| Root Resorption | 45.5 | 0.014 | Slight |
| Cortical Expansion | 51.8 | 0.000 | Poor |
| Tooth Displacement | 95.5 | −0.014 | None |
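The detection row illustrates how κ can be 0.000 despite 50% observed agreement: the agreement is exactly what chance alone predicts given each rater's marginal distribution. A minimal sketch, using the detection counts reported in the diagnostic-accuracy table (TP = 112, FN = 16, FP = 112, TN = 16):

```python
# Sketch of Cohen's kappa for a square agreement table between two
# raters: kappa = (p_observed - p_expected) / (1 - p_expected).

def cohens_kappa(table):
    """table[i][j] = count where rater A chose category i, rater B chose j."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_observed = sum(table[i][i] for i in range(k)) / n
    row_totals = [sum(row) for row in table]
    col_totals = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_expected = sum(r * c for r, c in zip(row_totals, col_totals)) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

# Detection: rows = expert consensus (abnormal, normal),
# columns = ChatGPT-5 (abnormal, normal).
detection = [[112, 16],   # expert abnormal: 112 agreed, 16 missed
             [112, 16]]   # expert normal: 112 called abnormal, 16 agreed
```

Here `cohens_kappa(detection)` returns 0.0: observed agreement is 128/256 = 0.50, and chance-expected agreement, given that ChatGPT-5 labeled 224 of 256 images abnormal, is also 0.50.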
| Parameter | Category | Expert n (%) | ChatGPT n (%) | Test | χ² | p-Value |
|---|---|---|---|---|---|---|
| Arch | Lower | 70 (54.7%) | 54 (48.2%) | McNemar | 1.12 | 0.289 |
|  | Upper | 58 (45.3%) | 58 (51.8%) |  |  |  |
| Position | Anterior | 17 (13.3%) | 14 (12.5%) | McNemar | 0.36 | 0.546 |
|  | Posterior | 111 (86.7%) | 98 (87.5%) |  |  |  |
| Tooth Involved | First Molar a | 51 (39.8%) | 46 (41.1%) | Bowker | 12.7 | 0.078 |
|  | Second Molar a | 19 (14.8%) | 8 (7.1%) |  |  |  |
|  | First Premolar a | 8 (6.2%) | 2 (1.8%) |  |  |  |
|  | Second Premolar a | 18 (14.1%) | 18 (16.1%) |  |  |  |
|  | Canine a | 1 (0.8%) | 0 (0.0%) |  |  |  |
|  | Lateral Incisor a | 13 (10.2%) | 16 (14.3%) |  |  |  |
|  | Central Incisor a | 14 (10.9%) | 12 (10.7%) |  |  |  |
|  | Multiple Teeth a | 4 (3.1%) | 4 (3.6%) |  |  |  |
|  | Premolar (unspecified) a | 0 (0.0%) | 4 (3.6%) |  |  |  |
| Borders | Ill-defined a | 103 (80.5%) | 83 (74.1%) | Bowker | 26.3 | <0.001 * |
|  | Well-defined, corticated b | 2 (1.6%) | 9 (8.0%) |  |  |  |
|  | Well-defined, non-corticated ab | 23 (18.0%) | 18 (16.1%) |  |  |  |
| Shape | Irregular a | 109 (85.2%) | 83 (74.1%) | Bowker | 2.67 | 0.264 |
|  | Round b | 9 (7.0%) | 9 (8.0%) |  |  |  |
|  | Oval ab | 10 (7.8%) | 18 (16.1%) |  |  |  |
| Lamina Dura | Lost | 110 (85.9%) | 110 (98.2%) | McNemar | 2.0 | 0.156 |
|  | Intact | 18 (14.1%) | 2 (1.8%) |  |  |  |
| Root Resorption | Present | 58 (45.3%) | 58 (51.8%) | McNemar | 3.98 | 0.046 * |
|  | Absent | 70 (54.7%) | 54 (48.2%) |  |  |  |
| Cortical Expansion | Present | 108 (84.4%) | 108 (96.4%) | McNemar | 1.22 | 0.270 |
|  | Absent | 20 (15.6%) | 4 (3.6%) |  |  |  |
| Tooth Displacement | Present | 111 (86.7%) | 111 (99.1%) | McNemar | 0.34 | 0.560 |
|  | Absent | 17 (13.3%) | 1 (0.9%) |  |  |  |
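McNemar's test operates only on the discordant pairs of a paired 2×2 table, which the marginal counts above do not reveal. The sketch below therefore uses illustrative discordant counts `b` and `c`, not values from the study:

```python
# Sketch of McNemar's chi-square statistic for paired binary ratings.
# b = cases rated positive by rater A but negative by rater B;
# c = the reverse. Concordant pairs do not enter the statistic.
# The example counts below are illustrative, not from the study.

def mcnemar_chi2(b: int, c: int, continuity_correction: bool = True) -> float:
    if continuity_correction:  # Edwards correction for small counts
        return (abs(b - c) - 1) ** 2 / (b + c)
    return (b - c) ** 2 / (b + c)
```

With, say, b = 10 and c = 5, the uncorrected statistic is (10 − 5)² / 15 ≈ 1.67, compared against a χ² distribution with one degree of freedom.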
| Predictor | Level (vs. Reference) | OR | 95% CI | p-Value |
|---|---|---|---|---|
| Borders | Well-defined, corticated vs. Ill-defined | 60.25 | 3.62–1002.64 | <0.001 * |
| Borders | Well-defined, non-corticated vs. Ill-defined | 3.46 | 1.49–8.04 | 0.004 * |
| Shape | Oval vs. Irregular | 20.43 | 1.18–354.87 | 0.003 * |
| Shape | Round vs. Irregular | 2.45 | 1.03–5.87 | 0.061 |
| Lamina Dura | Intact vs. Lost | 0.03 | 0.00–0.45 | <0.001 * |
| Cortical Expansion | Present vs. Absent | 2.10 | 1.02–4.31 | 0.046 * |
| Root Resorption | Yes vs. No | 0.72 | 0.31–1.65 | 0.408 |
| Tooth Displacement | Yes vs. No | 0.89 | 0.45–1.78 | 0.725 |
| Arch | Lower vs. Upper | 1.61 | 0.96–2.68 | 0.092 |
| Position | Anterior vs. Posterior | 0.86 | 0.40–1.84 | 0.846 |
| Tooth | Other vs. First Molar | 32.55 | 1.92–551.74 | <0.001 * |
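The odds ratios and confidence intervals above follow the standard logistic-regression transformation OR = exp(β) with 95% CI = exp(β ± 1.96·SE). A minimal sketch; the β and SE in the example are back-derived from the reported "well-defined, non-corticated vs. ill-defined" row for illustration, not taken from the study's model output:

```python
import math

# Sketch of how a logistic-regression coefficient (beta) and its
# standard error (se) yield the odds ratio and 95% CI reported in
# the table: OR = exp(beta), CI = exp(beta +/- z * se).

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    return (math.exp(beta),
            math.exp(beta - z * se),   # lower 95% bound
            math.exp(beta + z * se))   # upper 95% bound

# Illustrative: beta = ln(3.46) with se ~= 0.43 approximately
# reproduces the borders row (OR 3.46, 95% CI 1.49-8.04).
```

The very wide intervals for some predictors (e.g. 3.62–1002.64 for corticated borders) reflect sparse cells, since experts recorded only 2 corticated lesions.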
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Hassanein, F.E.A.; Hussein, R.R.; Elgarhy, M.R.; Maher, S.M.; Hassen, A.; Heidar, S.; Ezz El Arab, M.; Edress, A.; Abou-Bakr, A.; Mekhemar, M. Artificial Intelligence Versus Human Dental Expertise in Diagnosing Periapical Pathosis on Periapical Radiographs: A Multicenter Study. Bioengineering 2026, 13, 232. https://doi.org/10.3390/bioengineering13020232

