Search Results (12)

Search Parameters:
Keywords = spectrum aided vision enhancer

19 pages, 2304 KiB  
Article
Integrating AI with Advanced Hyperspectral Imaging for Enhanced Classification of Selected Gastrointestinal Diseases
by Chu-Kuang Chou, Kun-Hua Lee, Riya Karmakar, Arvind Mukundan, Tsung-Hsien Chen, Ashok Kumar, Danat Gutema, Po-Chun Yang, Chien-Wei Huang and Hsiang-Chen Wang
Bioengineering 2025, 12(8), 852; https://doi.org/10.3390/bioengineering12080852 - 8 Aug 2025
Viewed by 331
Abstract
Ulcerative colitis, polyps, esophagitis, and other gastrointestinal (GI) diseases significantly impact health, making early detection crucial for reducing mortality rates and improving patient outcomes. Traditional white light imaging (WLI) is commonly used during endoscopy to identify abnormalities in the gastrointestinal tract. However, insufficient contrast often limits its effectiveness, making it challenging to distinguish between healthy and unhealthy tissues, particularly when identifying subtle mucosal and vascular abnormalities. These limitations have prompted the need for more advanced imaging techniques that enhance pathological visualization and facilitate early diagnosis. Therefore, this study investigates the integration of the Spectrum-Aided Vision Enhancer (SAVE) mechanism to improve WLI images and increase disease detection accuracy. This approach transforms standard WLI images into hyperspectral imaging (HSI) representations, creating narrow-band imaging (NBI-like) visuals with enhanced contrast and tissue differentiation, thereby improving the visualization of vascular and mucosal structures critical for diagnosing GI disorders. This transformation allows for a clearer representation of blood vessels and membrane formations, which is essential for determining the presence of GI diseases. The dataset for this study comprises WLI images alongside SAVE-enhanced images, including four categories: ulcerative colitis, polyps, esophagitis, and healthy GI tissue. These images are organized into training, validation, and test sets to develop a deep learning-based classification model. Utilizing principal component analysis (PCA) and multiple regression analysis for spectral standardization ensures that the improved images retain spectral characteristics that are vital for clinical applications. 
By merging deep learning techniques with advanced imaging enhancements, this study aims to create an artificial intelligence (AI)-driven diagnostic system capable of early and accurate detection of GI diseases. InceptionV3 attained an overall accuracy of 94% in both scenarios; SAVE produced a modest enhancement in the ulcerative colitis F1-score from 92% to 93%, while the F1-scores for other classes exceeded 96%. SAVE resulted in a 10% increase in YOLOv8x accuracy, reaching 89%, with ulcerative colitis F1 improving to 82% and polyp F1 rising to 76%. VGG16 enhanced accuracy from 85% to 91%, and the F1-score for polyps improved from 68% to 81%. These findings confirm that SAVE enhancement consistently improves disease classification across diverse architectures, offers a practical, hardware-independent route to hyperspectral-quality images, and enhances the accuracy of gastrointestinal screening. Furthermore, this research seeks to provide a practical and effective solution for clinical applications, improving diagnostic accuracy and facilitating superior patient care.

17 pages, 920 KiB  
Article
Enhancing Early GI Disease Detection with Spectral Visualization and Deep Learning
by Tsung-Jung Tsai, Kun-Hua Lee, Chu-Kuang Chou, Riya Karmakar, Arvind Mukundan, Tsung-Hsien Chen, Devansh Gupta, Gargi Ghosh, Tao-Yuan Liu and Hsiang-Chen Wang
Bioengineering 2025, 12(8), 828; https://doi.org/10.3390/bioengineering12080828 - 30 Jul 2025
Viewed by 544
Abstract
Timely and accurate diagnosis of gastrointestinal diseases (GIDs) remains a critical bottleneck in clinical endoscopy, particularly due to the limited contrast and sensitivity of conventional white light imaging (WLI) in detecting early-stage mucosal abnormalities. To overcome this, this research presents the Spectrum Aided Vision Enhancer (SAVE), an innovative, software-driven framework that transforms standard WLI into high-fidelity hyperspectral imaging (HSI) and simulated narrow-band imaging (NBI) without any hardware modification. SAVE leverages advanced spectral reconstruction techniques, including Macbeth ColorChecker-based calibration, principal component analysis (PCA), and multivariate polynomial regression, achieving a root mean square error (RMSE) of 0.056 and a structural similarity index (SSIM) exceeding 90%. Deep learning models (ResNet-50, ResNet-101, EfficientNet-B2, EfficientNet-B5, and EfficientNetV2-B0) were trained and validated on the Kvasir v2 dataset (n = 6490) to assess diagnostic performance across six key GI conditions. Results demonstrated that SAVE-enhanced imagery consistently outperformed raw WLI across precision, recall, and F1-score metrics, with EfficientNet-B2 and EfficientNetV2-B0 achieving the highest classification accuracy. Notably, this performance gain was achieved without the need for specialized imaging hardware. These findings highlight SAVE as a transformative solution for augmenting GI diagnostics, with the potential to significantly improve early detection, streamline clinical workflows, and broaden access to advanced imaging, especially in resource-constrained settings.
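The calibration recipe this abstract names (ColorChecker patches, PCA compression, multivariate polynomial regression) can be sketched in a few lines. This is an illustrative stand-in, not the authors' code: the patch spectra, band count, and component count below are synthetic assumptions.

```python
import numpy as np

# Sketch of ColorChecker-style spectral reconstruction: fit a polynomial
# regression from RGB to a PCA-compressed spectrum over calibration patches.
# All data here are synthetic stand-ins, not values from the paper.
rng = np.random.default_rng(0)
n_patches, n_bands, n_pc = 24, 61, 6           # e.g. 380-680 nm at 5 nm steps

# Synthetic low-rank "measured" patch spectra and their camera RGB responses.
spectra = rng.random((n_patches, 3)) @ rng.random((3, n_bands))
rgb = spectra @ rng.random((n_bands, 3))       # toy linear camera model

# 1) PCA: represent each spectrum by a few principal-component scores.
mean = spectra.mean(axis=0)
_, _, vt = np.linalg.svd(spectra - mean, full_matrices=False)
basis = vt[:n_pc]                              # (n_pc, n_bands)
scores = (spectra - mean) @ basis.T            # (n_patches, n_pc)

# 2) Second-order multivariate polynomial expansion of RGB.
def poly2(c):
    r, g, b = c.T
    return np.stack([np.ones_like(r), r, g, b, r * g, r * b, g * b,
                     r ** 2, g ** 2, b ** 2], axis=1)

# 3) Least-squares fit: polynomial RGB features -> PCA scores.
coef, *_ = np.linalg.lstsq(poly2(rgb), scores, rcond=None)

# 4) Reconstruct spectra from RGB and measure calibration RMSE on the patches.
recon = poly2(rgb) @ coef @ basis + mean
rmse = np.sqrt(np.mean((recon - spectra) ** 2))
print(f"patch-set RMSE: {rmse:.4f}")
```

In a full pipeline the fitted `coef` would be applied per pixel of an endoscopic frame to produce HSI-like output; the reported RMSE of 0.056 refers to the authors' real calibration data, not this toy fit.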

19 pages, 1442 KiB  
Article
Hyperspectral Imaging for Enhanced Skin Cancer Classification Using Machine Learning
by Teng-Li Lin, Arvind Mukundan, Riya Karmakar, Praveen Avala, Wen-Yen Chang and Hsiang-Chen Wang
Bioengineering 2025, 12(7), 755; https://doi.org/10.3390/bioengineering12070755 - 11 Jul 2025
Viewed by 560
Abstract
Objective: The classification of skin cancer is very helpful in its early diagnosis and treatment, considering the complexity involved in differentiating AK from BCC and SK. These conditions are generally not easily detectable due to their comparable clinical presentations. Method: This paper presents a new approach to hyperspectral imaging for enhancing the visualization of skin lesions, called the Spectrum-Aided Vision Enhancer (SAVE), which can convert any RGB image into a narrow-band image (NBI) by leveraging hyperspectral imaging (HSI) to increase the contrast between cancerous lesions and normal tissue, thereby increasing classification accuracy. The current study investigates the use of ten different machine learning algorithms for the classification of AK, BCC, and SK: convolutional neural network (CNN), random forest (RF), You Only Look Once (YOLO) version 8, support vector machine (SVM), ResNet50, MobileNetV2, logistic regression, SVM with stochastic gradient descent (SGD) classifier, SVM with logarithmic (LOG) classifier, and SVM with polynomial classifier, assessing the capability of the system to differentiate AK from BCC and SK with heightened accuracy. Results: The results demonstrated that SAVE enhanced classification performance, with higher accuracy, sensitivity, and specificity than a traditional RGB imaging approach. Conclusions: This advanced method offers dermatologists a tool for early and accurate diagnosis, reducing the likelihood of misclassification and improving patient outcomes.

21 pages, 1611 KiB  
Article
Novel Snapshot-Based Hyperspectral Conversion for Dermatological Lesion Detection via YOLO Object Detection Models
by Nan-Chieh Huang, Arvind Mukundan, Riya Karmakar, Syna Syna, Wen-Yen Chang and Hsiang-Chen Wang
Bioengineering 2025, 12(7), 714; https://doi.org/10.3390/bioengineering12070714 - 30 Jun 2025
Viewed by 473
Abstract
Objective: Skin lesions, including dermatofibroma, lichenoid lesions, and acrochordons, are increasingly prevalent worldwide and often require timely identification for effective clinical management. However, conventional RGB-based imaging can overlook subtle vascular characteristics, potentially delaying diagnosis. Methods: A novel spectrum-aided vision enhancer (SAVE) that transforms standard RGB images into simulated narrowband imaging representations in a single step was proposed. The performances of five cutting-edge object detectors, based on You Only Look Once (YOLOv11, YOLOv10, YOLOv9, YOLOv8, and YOLOv5) models, were assessed across three lesion categories using white-light imaging (WLI) and SAVE modalities. Each YOLO model was trained separately on SAVE and WLI images, and performance was measured using precision, recall, and F1-score. Results: Among all tested configurations, YOLOv10 attained the highest overall performance, particularly under the SAVE modality, demonstrating superior precision and recall across the majority of lesion types. YOLOv9 exhibited robust performance, especially for dermatofibroma detection under SAVE, albeit slightly lagging behind YOLOv10. Conversely, YOLOv11 underperformed on acrochordon detection (cumulative F1 = 65.73%), and YOLOv8 and YOLOv5 displayed lower accuracy and higher false-positive rates, especially in WLI mode. Although SAVE improved the performance of YOLOv8 and YOLOv5, their results remained below those of YOLOv10 and YOLOv9. Conclusions: Combining the SAVE modality with advanced YOLO-based object detectors, specifically YOLOv10 and YOLOv9, markedly enhances the accuracy of lesion detection compared to conventional WLI, facilitating expedited real-time dermatological screening. These findings indicate that integrating snapshot-based narrowband imaging with deep learning object detection models can improve early diagnosis and has potential applications in broader clinical contexts.
(This article belongs to the Special Issue Medical Artificial Intelligence and Data Analysis)

19 pages, 620 KiB  
Article
Software-Based Transformation of White Light Endoscopy Images to Hyperspectral Images for Improved Gastrointestinal Disease Detection
by Chien-Wei Huang, Chang-Chao Su, Chu-Kuang Chou, Arvind Mukundan, Riya Karmakar, Tsung-Hsien Chen, Pranav Shukla, Devansh Gupta and Hsiang-Chen Wang
Diagnostics 2025, 15(13), 1664; https://doi.org/10.3390/diagnostics15131664 - 30 Jun 2025
Viewed by 544
Abstract
Background/Objectives: Gastrointestinal diseases (GID), such as oesophagitis, polyps, and ulcerative colitis, contribute significantly to global morbidity and mortality. Conventional diagnostic methods employing white light imaging (WLI) in wireless capsule endoscopy (WCE) provide limited spectral information, thereby constraining classification performance. Methods: A new technique called the Spectrum Aided Vision Enhancer (SAVE) is proposed, which converts traditional WLI images into hyperspectral imaging (HSI)-like representations, hence improving diagnostic accuracy. HSI involves the acquisition of image data across numerous wavelengths of light, extending beyond the visible spectrum, to deliver comprehensive information regarding the material composition and attributes of the imaged objects. This technique facilitates improved tissue characterisation, rendering it especially effective for identifying abnormalities in medical imaging. Using a carefully selected dataset consisting of 6000 annotated images taken from the KVASIR and ETIS-Larib Polyp Database, this work classifies normal tissue, ulcers, polyps, and oesophagitis. The performance of both the original WLI and SAVE-transformed images was assessed using advanced deep learning architectures. The principal outcome was the overall classification accuracy for the normal, ulcer, polyp, and oesophagitis categories, contrasting SAVE-enhanced images with standard WLI across five deep learning models. Results: The principal outcome of this study was the enhancement of diagnostic accuracy for gastrointestinal disease classification, assessed through classification accuracy, precision, recall, and F1-score. The findings illustrate the efficacy of the SAVE method in improving diagnostic performance without requiring specialised equipment. With the best accuracy of 98% attained using EfficientNetB7, compared to 97% with WLI, experimental data show that SAVE greatly increases classification metrics across all models.
VGG16 showed the largest relative improvement, from 85% (WLI) to 92% (SAVE). Conclusions: These results confirm that the SAVE algorithm significantly improves the early identification and classification of GID, thereby representing a promising step towards more accurate, non-invasive GID diagnostics with WCE.

22 pages, 3052 KiB  
Article
Evaluation of Spectral Imaging for Early Esophageal Cancer Detection
by Li-Jen Chang, Chu-Kuang Chou, Arvind Mukundan, Riya Karmakar, Tsung-Hsien Chen, Syna Syna, Chou-Yuan Ko and Hsiang-Chen Wang
Cancers 2025, 17(12), 2049; https://doi.org/10.3390/cancers17122049 - 19 Jun 2025
Viewed by 615
Abstract
Objective: Esophageal carcinoma (EC) is the eighth most prevalent cancer and the sixth leading cause of cancer-related mortality worldwide. Early detection is vital for improving prognosis, particularly for dysplasia and squamous cell carcinoma (SCC). Methods: This study evaluates a hyperspectral imaging conversion method, the Spectrum-Aided Vision Enhancer (SAVE), for its efficacy in enhancing esophageal cancer detection compared to conventional white-light imaging (WLI). Five deep learning models (YOLOv9, YOLOv10, YOLO-NAS, RT-DETR, and Roboflow 3.0) were trained and evaluated on a dataset comprising labeled endoscopic images, including normal, dysplasia, and SCC classes. Results: Across all five evaluated deep learning models, the SAVE consistently outperformed conventional WLI in detecting esophageal cancer lesions. For SCC, the F1-score improved from 84.3% to 90.4% with the YOLOv9 model and from 87.3% to 90.3% with the Roboflow 3.0 model when using the SAVE. Dysplasia detection also improved, with precision increasing from 72.4% (WLI) to 76.5% (SAVE) with the YOLOv9 model. Roboflow 3.0 achieved the highest F1-score for dysplasia, at 64.7%. YOLO-NAS exhibited balanced performance across all lesion types, with dysplasia precision rising from 75.1% to 79.8%. Roboflow 3.0 also recorded the highest SCC sensitivity, at 85.7%. For SCC detection with YOLOv9, the WLI F1-score was 84.3% (95% CI: 71.7–96.9%) compared to 90.4% (95% CI: 80.2–100%) with the SAVE (p = 0.03). For dysplasia detection, the F1-score increased from 60.3% (95% CI: 51.5–69.1%) using WLI to 65.5% (95% CI: 57.0–73.8%) with the SAVE (p = 0.04). These findings demonstrate that the SAVE enhances lesion detectability and diagnostic performance across different deep learning models.
Conclusions: Combining the SAVE with deep learning algorithms markedly enhances the detection of esophageal cancer lesions, especially squamous cell carcinoma and dysplasia, compared with traditional white-light imaging. This underscores the SAVE’s potential as a clinical instrument for the early detection and diagnosis of cancer.

16 pages, 2427 KiB  
Article
Assessing the Efficacy of the Spectrum-Aided Vision Enhancer (SAVE) to Detect Acral Lentiginous Melanoma, Melanoma In Situ, Nodular Melanoma, and Superficial Spreading Melanoma: Part II
by Teng-Li Lin, Riya Karmakar, Arvind Mukundan, Sakshi Chaudhari, Yu-Ping Hsiao, Shang-Chin Hsieh and Hsiang-Chen Wang
Diagnostics 2025, 15(6), 714; https://doi.org/10.3390/diagnostics15060714 - 13 Mar 2025
Cited by 3 | Viewed by 889
Abstract
Background: Melanoma, a highly aggressive form of skin cancer, necessitates early detection to significantly improve survival rates. Traditional diagnostic techniques, such as white-light imaging (WLI), are effective but often struggle to differentiate between melanoma subtypes in their early stages. Methods: The emergence of the Spectrum-Aided Vision Enhancer (SAVE) offers a promising alternative by utilizing specific wavelength bands to enhance visual contrast in melanoma lesions. This technique facilitates greater differentiation between malignant and benign tissues, particularly in challenging cases. In this study, the efficacy of the SAVE is evaluated in detecting melanoma subtypes including acral lentiginous melanoma (ALM), melanoma in situ (MIS), nodular melanoma (NM), and superficial spreading melanoma (SSM) compared to WLI. Results: The findings demonstrated that the SAVE consistently outperforms WLI across various key metrics, including precision, recall, F1-score, and mAP, making it a more reliable tool for early melanoma detection using four different machine learning methods: YOLOv10, Faster RCNN, Scaled YOLOv4, and YOLOv7. Conclusions: The ability of the SAVE to capture subtle spectral differences offers clinicians a new avenue for improving diagnostic accuracy and patient outcomes.

18 pages, 3428 KiB  
Article
Assessing the Efficacy of the Spectrum-Aided Vision Enhancer (SAVE) to Detect Acral Lentiginous Melanoma, Melanoma In Situ, Nodular Melanoma, and Superficial Spreading Melanoma
by Teng-Li Lin, Chun-Te Lu, Riya Karmakar, Kalpana Nampalley, Arvind Mukundan, Yu-Ping Hsiao, Shang-Chin Hsieh and Hsiang-Chen Wang
Diagnostics 2024, 14(15), 1672; https://doi.org/10.3390/diagnostics14151672 - 1 Aug 2024
Cited by 19 | Viewed by 2440
Abstract
Skin cancer is the predominant form of cancer worldwide, accounting for 75% of all cancer cases. This study aims to evaluate the effectiveness of the spectrum-aided visual enhancer (SAVE) in detecting skin cancer. This paper presents the development of a novel algorithm for snapshot hyperspectral conversion, capable of converting RGB images into hyperspectral images (HSI). The integration of band selection with HSI has facilitated the identification of a set of narrow-band images (NBI) from the RGB images. This study utilizes various iterations of the You Only Look Once (YOLO) machine learning (ML) framework to assess the precision, recall, and mean average precision in the detection of skin cancer. YOLO is commonly preferred in medical diagnostics due to its real-time processing speed and accuracy, which are essential for delivering effective and efficient patient care. The precision, recall, and mean average precision (mAP) of the SAVE images show a notable enhancement in comparison to the RGB images. This work has the potential to greatly enhance the efficiency of skin cancer detection, improve early detection rates, and increase diagnostic accuracy. Consequently, it may lead to a reduction in both morbidity and mortality rates.

13 pages, 1906 KiB  
Article
Evaluation of Spectrum-Aided Visual Enhancer (SAVE) in Esophageal Cancer Detection Using YOLO Frameworks
by Chu-Kuang Chou, Riya Karmakar, Yu-Ming Tsao, Lim Wei Jie, Arvind Mukundan, Chien-Wei Huang, Tsung-Hsien Chen, Chau-Yuan Ko and Hsiang-Chen Wang
Diagnostics 2024, 14(11), 1129; https://doi.org/10.3390/diagnostics14111129 - 29 May 2024
Cited by 9 | Viewed by 1908
Abstract
The early detection of esophageal cancer (EC) presents a substantial difficulty, which contributes to its status as a primary cause of cancer-related fatalities. This study used You Only Look Once (YOLO) frameworks, specifically YOLOv5 and YOLOv8, to predict and detect early-stage EC by using a dataset sourced from the Division of Gastroenterology and Hepatology, Ditmanson Medical Foundation, Chia-Yi Christian Hospital. The dataset comprised 2741 white-light images (WLI) and 2741 hyperspectral narrowband images (HSI-NBI). They were divided into 60% training, 20% validation, and 20% test sets to facilitate robust detection. The images were produced using a conversion method called the spectrum-aided vision enhancer (SAVE). This algorithm can transform a WLI into an NBI without requiring a spectrometer or spectral head. The main goal was to identify dysplasia and squamous cell carcinoma (SCC). The model’s performance was evaluated using five essential metrics: precision, recall, F1-score, mAP, and the confusion matrix. The experimental results demonstrated that the HSI model exhibited improved learning capabilities for SCC characteristics compared with the original RGB images. Within the YOLO framework, YOLOv5 outperformed YOLOv8, indicating that YOLOv5’s design possessed superior feature-learning skills. The YOLOv5 model, when used in conjunction with HSI-NBI, demonstrated the best performance. It achieved a precision rate of 85.1% (CI95: 83.2–87.0%, p < 0.01) in diagnosing SCC and an F1-score of 52.5% (CI95: 50.1–54.9%, p < 0.01) in detecting dysplasia. These figures were much better than those of YOLOv8, which achieved a precision rate of 81.7% (CI95: 79.6–83.8%, p < 0.01) and an F1-score of 49.4% (CI95: 47.0–51.8%, p < 0.05). The YOLOv5 model with HSI demonstrated greater performance than the other models in multiple scenarios.
This difference was statistically significant, indicating that the YOLOv5 model with HSI substantially improved detection capabilities.
(This article belongs to the Special Issue Advancements in Diagnosis and Prognosis of Gastrointestinal Diseases)
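Several of the abstracts above report precision, recall, and F1-score together. For reference, the F1-score is simply the harmonic mean of precision and recall, which a few lines make concrete; the numbers below are illustrative, not taken from any listed study:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (both given as fractions)."""
    return 2 * precision * recall / (precision + recall)

# A detector can pair high precision with mediocre recall and still land at a
# modest F1, since the harmonic mean sits nearer the lower value:
print(round(f1_score(0.85, 0.40), 3))  # 85% precision, 40% recall -> 0.544
```

This is why, for example, a model can report 85.1% precision for SCC yet a much lower F1-score for a class where recall lags.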

21 pages, 405 KiB  
Review
Promoting Artificial Intelligence for Global Breast Cancer Risk Prediction and Screening in Adult Women: A Scoping Review
by Lea Sacca, Diana Lobaina, Sara Burgoa, Kathryn Lotharius, Elijah Moothedan, Nathan Gilmore, Justin Xie, Ryan Mohler, Gabriel Scharf, Michelle Knecht and Panagiota Kitsantas
J. Clin. Med. 2024, 13(9), 2525; https://doi.org/10.3390/jcm13092525 - 25 Apr 2024
Cited by 4 | Viewed by 4630
Abstract
Background: Artificial intelligence (AI) algorithms can be applied in breast cancer risk prediction and prevention by using patient history, scans, imaging information, and analysis of specific genes for cancer classification to reduce overdiagnosis and overtreatment. This scoping review aimed to identify the barriers encountered in applying innovative AI techniques and models in developing breast cancer risk prediction scores and promoting screening behaviors among adult females. Findings may inform and guide future global recommendations for AI application in breast cancer prevention and care for female populations. Methods: The PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) was used as a reference checklist throughout this study. The Arksey and O’Malley methodology was used as a framework to guide this review. The framework methodology consisted of five steps: (1) identify research questions; (2) search for relevant studies; (3) select studies relevant to the research questions; (4) chart the data; (5) collate, summarize, and report the results. Results: In the field of breast cancer risk detection and prevention, the following AI techniques and models have been applied: machine and deep learning model (ML-DL model) (n = 1), academic algorithms (n = 2), Breast Cancer Surveillance Consortium (BCSC) Clinical 5-Year Risk Prediction Model (n = 2), deep-learning computer vision AI algorithms (n = 2), AI-based thermal imaging solution (Thermalytix) (n = 1), RealRisks (n = 2), Breast Cancer Risk NAVIgation (n = 1), MammoRisk (ML-based tool) (n = 1), various ML models (n = 1), and various machine/deep learning, decision aids, and commercial algorithms (n = 7). In the 11 included studies, a total of 39 barriers to AI applications in breast cancer risk prediction and screening efforts were identified.
The most common barriers in the application of innovative AI tools for breast cancer prediction and improved screening rates included lack of external validity and limited generalizability (n = 6), as AI was used in studies with either a small sample size or datasets with missing data. Many studies (n = 5) also encountered selection bias due to exclusion of certain populations based on characteristics such as race/ethnicity, family history, or past medical history. Several recommendations for future research should be considered. AI models need to include a broader spectrum and more complete predictive variables for risk assessment. Investigating long-term outcomes with improved follow-up periods is critical to assess the impacts of AI on clinical decisions beyond just the immediate outcomes. Utilizing AI to improve communication strategies at both a local and organizational level can assist in informed decision-making and compliance, especially in populations with limited literacy levels. Conclusions: The use of AI in patient education and as an adjunctive tool for providers is still early in its incorporation, and future research should explore the implementation of AI-driven resources to enhance understanding and decision-making regarding breast cancer screening, especially in vulnerable populations with limited literacy.

26 pages, 1287 KiB  
Systematic Review
A State-of-the-Art of Exoskeletons in Line with the WHO’s Vision on Healthy Aging: From Rehabilitation of Intrinsic Capacities to Augmentation of Functional Abilities
by Rebeca Alejandra Gavrila Laic, Mahyar Firouzi, Reinhard Claeys, Ivan Bautmans, Eva Swinnen and David Beckwée
Sensors 2024, 24(7), 2230; https://doi.org/10.3390/s24072230 - 30 Mar 2024
Cited by 8 | Viewed by 5436
Abstract
The global aging population faces significant health challenges, including an increasing vulnerability to disability due to natural aging processes. Wearable lower limb exoskeletons (LLEs) have emerged as a promising solution to enhance physical function in older individuals. This systematic review synthesizes the use of LLEs in alignment with the WHO’s healthy aging vision, examining their impact on intrinsic capacities and functional abilities. We conducted a comprehensive literature search in six databases, yielding 36 relevant articles covering older adults (65+) with various health conditions, including sarcopenia, stroke, Parkinson’s Disease, osteoarthritis, and more. The interventions, spanning one to forty sessions, utilized a range of LLE technologies such as Ekso®, HAL®, Stride Management Assist®, Honda Walking Assist®, Lokomat®, Walkbot®, Healbot®, Keeogo Rehab®, EX1®, overground wearable exoskeletons, Eksoband®, powered ankle–foot orthoses, HAL® lumbar type, Human Body Posturizer®, Gait Enhancing and Motivation System®, soft robotic suits, and active pelvis orthoses. The findings revealed substantial positive outcomes across diverse health conditions. LLE training led to improvements in key performance indicators, such as the 10 Meter Walk Test, Five Times Sit-to-Stand test, Timed Up and Go test, and more. Additionally, enhancements were observed in gait quality, joint mobility, muscle strength, and balance. These improvements were accompanied by reductions in sedentary behavior, pain perception, muscle exertion, and metabolic cost while walking. While longer intervention durations can aid in the rehabilitation of intrinsic capacities, even the instantaneous augmentation of functional abilities can be observed in a single session. In summary, this review demonstrates consistent and significant enhancements in critical parameters across a broad spectrum of health conditions following LLE interventions in older adults. 
These findings underscore the potential of LLEs in promoting healthy aging and enhancing the well-being of older adults.
(This article belongs to the Special Issue Intelligent Sensors and Robots for Ambient Assisted Living)

10 pages, 237 KiB  
Review
Navigating the Usher Syndrome Genetic Landscape: An Evaluation of the Associations between Specific Genes and Quality Categories of Cochlear Implant Outcomes
by Micol Busi and Alessandro Castiglione
Audiol. Res. 2024, 14(2), 254-263; https://doi.org/10.3390/audiolres14020023 - 26 Feb 2024
Cited by 2 | Viewed by 3075
Abstract
Usher syndrome (US) is a clinically and genetically heterogeneous disorder that involves three main features: sensorineural hearing loss, retinitis pigmentosa (RP), and vestibular impairment. With a prevalence of 4–17/100,000, it is the most common cause of deaf-blindness worldwide. Genetic research has provided crucial insights into the complexity of US. Among nine confirmed causative genes, MYO7A and USH2A are major players in US types 1 and 2, respectively, whereas CLRN1 is the sole confirmed gene associated with type 3. Variants in these genes also contribute to isolated forms of hearing loss and RP, indicating intersecting molecular pathways. While hearing loss can be adequately managed with hearing aids or cochlear implants (CIs), approved RP treatment modalities are lacking. Gene replacement and editing, antisense oligonucleotides, and small-molecule drugs hold promise for halting RP progression and restoring vision, enhancing patients’ quality of life. Massively parallel sequencing has identified gene variants (e.g., in PCDH15) that influence CI results. Accordingly, preoperative genetic examination appears valuable for predicting CI success. To explore genetic mutations in CI recipients and establish correlations between implant outcomes and involved genes, we comprehensively reviewed the literature to gather data covering a broad spectrum of CI outcomes across all known US-causative genes. Implant outcomes were categorized as excellent or very good, good, poor or fair, and very poor. Our review of 95 cochlear-implant patients with US, along with their CI outcomes, revealed the importance of presurgical genetic testing to elucidate potential challenges and provide tailored counseling to improve auditory outcomes. The multifaceted nature of US demands a comprehensive understanding and innovative interventions. Genetic insights drive therapeutic advancements, offering potential remedies for the retinal component of US.
The synergy between genetics and therapeutics holds promise for individuals with US and may enhance their sensory experiences through customized interventions.
(This article belongs to the Special Issue Genetics of Hearing Loss—Volume II)