Advancements in Medical and Assistive Technologies Using Artificial Intelligence and Deep Learning Techniques

A special issue of Technologies (ISSN 2227-7080). This special issue belongs to the section "Assistive Technologies".

Deadline for manuscript submissions: closed (31 March 2026) | Viewed by 48807

Special Issue Editors


Dr. Fabrizio Stasolla
Guest Editor
1. Developmental Psychology, "Giustino Fortunato" University of Benevento, 82100 Benevento, Italy
2. Faculty of Law, Giustino Fortunato University, 82100 Benevento, Italy
Interests: virtual reality; assistive technology; cognitive-behavioral approach; ADHD (attention deficit hyperactivity disorder); ASD (autism spectrum disorder); single-subject design; rare diseases; augmentative and alternative communication and related technologies; Alzheimer’s disease (AD); multiple sclerosis; multiple disabilities; clinical rehabilitation; neurodegenerative diseases; neurodevelopmental disorders; telerehabilitation
Special Issues, Collections and Topics in MDPI journals

Dr. Everardo Inzunza-González
Guest Editor
Faculty of Engineering, Architecture and Design, Universidad Autónoma de Baja California, Ensenada 22860, BCN, México
Interests: artificial intelligence; data science; medical imaging; biomedical signal processing; machine learning; deep learning; IoT; H-IoT; network security; wearable devices; embedded systems
Special Issues, Collections and Topics in MDPI journals

Special Issue Information

Dear Colleagues,

Integrating artificial intelligence (AI) and deep learning (DL) techniques into medical and assistive technology (AT) is revolutionizing the healthcare landscape, offering unprecedented precision and efficiency in diagnosing, monitoring, and treating various conditions. As the demand for personalized and accessible healthcare grows, these technologies are crucial in overcoming the limitations of traditional methods, providing new avenues for innovation in patient care. AI and DL allow healthcare systems to handle vast amounts of data, enabling more accurate and timely interventions, which are vital in developing assistive technologies that enhance the quality of life for individuals with disabilities. These advancements not only improve existing medical practices but also drive the creation of novel tools and systems that will reshape the future of healthcare.

For this Special Issue, we will gather new research at the intersection of AI, DL, and biomedical engineering that is redefining modern healthcare by showcasing innovative methods for diagnosing, monitoring, and treating various medical conditions. The focus will be on cutting-edge developments in AI and DL applications within medical technologies and assistive devices, addressing challenges in designing and deploying AI-driven solutions across healthcare domains.

Contributions are invited on topics such as the following:

  • AI and DL in medical diagnostics for early disease detection through medical imaging or signal processing;
  • the development of adaptive assistive technologies and robotics to support individuals with disabilities;
  • advancements in healthcare monitoring systems powered by AI for real-time analysis;
  • AI-driven biomedical signal processing;
  • the application of deep learning in biomedical imaging and signal interpretation;
  • smart medical device innovation for improved patient care, including security for telemedicine technologies;
  • the use of AI in personalized medicine for tailored treatment plans;
  • the integration of AI into IoT in healthcare environments to optimize patient outcomes;
  • the ethical and security challenges in AI-driven healthcare systems.

To conclude, we welcome submissions exploring the integration of reinforcement learning principles and new technologies (e.g., augmented reality, virtual reality, serious games, and telerehabilitation) into AI-based programs and AT tools or devices, for both assessment and recovery purposes, providing participants with highly customized technological solutions. This Special Issue will feature work presenting novel systems, approaches, frameworks, methods, algorithms, or applications that push the boundaries of what AI and DL can achieve in the healthcare sector.

Dr. Fabrizio Stasolla
Dr. Everardo Inzunza-González
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Technologies is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • medical imaging
  • machine learning
  • medical image classification
  • deep learning
  • H-IoT
  • biomedical signal processing
  • deep neural networks
  • CNNs
  • health informatics
  • computer-aided diagnosis
  • data science

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (15 papers)


Research


19 pages, 1792 KB  
Article
Assessing EEG Channel Similarity and Informational Relevance for Motor Tasks
by Julio C. Gonzalez-Morales, Marcos Aviles, José R. García-Martínez and Juvenal Rodríguez-Reséndiz
Technologies 2026, 14(3), 163; https://doi.org/10.3390/technologies14030163 - 5 Mar 2026
Viewed by 369
Abstract
This study investigates whether inter-channel similarity, quantified using Pearson’s correlation, can be used as an indicator of electrode relevance in electroencephalography (EEG)-based motor imagery (MI) classification and compares this approach with a genetic algorithm (GA)-based electrode selection strategy. Electrode subsets were obtained using Pearson correlation ranking, a GA optimizing classification accuracy, and the reference-study electrode subset reported in prior work. All subsets were evaluated on the BCI Competition IV Dataset 2a using a unified classifier architecture, and the sensitivity to classifier hyperparameter configuration was analyzed. Pearson-based selection achieved accuracies of 75.8% (8 channels), 78.1% (10 channels), and 81.5% (12 channels), while the GA achieved 75.9% (8 channels), 78.8% (10 channels), and 80.0% (13 channels). The reference-study electrode subset reached 75.0% (8 channels) and 76.7% (10 channels). Although correlation-based selection yielded competitive performance, no consistent relationship was observed between inter-channel similarity and discriminative relevance, and classification performance showed notable sensitivity to hyperparameter settings. These findings indicate that inter-channel similarity alone is not sufficient to determine electrode importance in MI classification and support the use of data-driven, model-aware selection strategies for the design of efficient low-channel-count brain–computer interface systems. Full article
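As a rough sketch of the correlation-based electrode selection the abstract describes (the function name, toy signals, and the "keep the most correlated" rule are illustrative assumptions of ours, not the paper's actual pipeline), channels can be ranked by their mean absolute Pearson correlation with all other channels:

```python
import numpy as np

def rank_channels_by_similarity(eeg, keep=8):
    """Rank EEG channels by mean absolute Pearson correlation with all
    other channels and keep the top `keep`. Whether high similarity
    actually marks a channel as relevant is exactly the hypothesis the
    paper tests (and finds unreliable on its own).

    eeg: array of shape (n_channels, n_samples).
    """
    corr = np.corrcoef(eeg)            # (n_channels, n_channels) Pearson matrix
    np.fill_diagonal(corr, 0.0)        # ignore self-correlation
    score = np.abs(corr).mean(axis=1)  # mean |r| against every other channel
    order = np.argsort(score)[::-1]    # most correlated first
    return order[:keep], score

# Toy demo: 4 "channels", two of them nearly identical.
rng = np.random.default_rng(0)
base = rng.standard_normal(1000)
eeg = np.stack([base,
                base + 0.05 * rng.standard_normal(1000),
                rng.standard_normal(1000),
                rng.standard_normal(1000)])
selected, scores = rank_channels_by_similarity(eeg, keep=2)
```

On this toy input, the two correlated channels receive the highest scores; a genetic-algorithm alternative, as compared in the paper, would instead search channel subsets directly against classification accuracy.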

33 pages, 4995 KB  
Article
Multi-Scale ConvNeXt for Robust Brain Tumor Segmentation in Multimodal MRI
by Jose Luis Lopez-Ramirez, Fernando Daniel Hernandez-Gutierrez, Jose Ramon Avina-Ortiz, Paula Dalida Bravo-Aguilar, Eli Gabriel Avina-Bravo, Jose Ruiz-Pinales and Juan Gabriel Avina-Cervantes
Technologies 2026, 14(1), 34; https://doi.org/10.3390/technologies14010034 - 4 Jan 2026
Viewed by 1179
Abstract
Vision Transformer (ViT) models are well known for effectively capturing global contextual information through self-attention. In contrast, ConvNeXt’s hierarchical convolutional inductive bias enables the extraction of robust multi-scale features at lower computational and memory cost, making it suitable for deployment in systems with limited annotation and constrained resources. Accordingly, a multi-scale UNet architecture based on a ConvNeXt backbone is proposed for brain tumor segmentation; it is equipped with a spatial latent module and Reverse Attention (RA)-guided skip connections. This framework jointly models long-range context and delineates reliable boundaries. Magnetic resonance images drawn from the BraTS 2021, 2023, and 2024 datasets serve as case studies for evaluating brain tumor segmentation performance. The incorporated multi-scale features notably improve the segmentation of small enhancing regions and peripheral tumor boundaries, which are frequently missed by single-scale baselines. On BraTS 2021, the model achieves a Dice similarity coefficient (DSC) of 0.8956 and a mean intersection over union (IoU) of 0.8122, with a sensitivity of 0.8761, a specificity of 0.9964, and an accuracy of 0.9878. On BraTS 2023, it attains a DSC of 0.9235 and an IoU of 0.8592, with a sensitivity of 0.9037, a specificity of 0.9977, and an accuracy of 0.9904. On BraTS 2024, it yields a DSC of 0.9225 and an IoU of 0.8575, with a sensitivity of 0.8989, a specificity of 0.9979, and an accuracy of 0.9903. Overall, the segmentation results provide spatially explicit contours that support lesion-area estimation, precise boundary delineation, and slice-wise longitudinal assessment. Full article
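The Dice similarity coefficient and intersection over union quoted above are standard segmentation overlap metrics; a minimal numpy sketch (our own, not the authors' code) shows how they are computed from binary masks:

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """Dice similarity coefficient and intersection over union for
    binary segmentation masks (arrays of 0/1). `eps` guards against
    division by zero when both masks are empty."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, target).sum() + eps)
    return dice, iou

# Toy 1-D example: 3 of 4 predicted pixels overlap the 4 target pixels.
pred = np.array([1, 1, 1, 1, 0, 0])
target = np.array([0, 1, 1, 1, 1, 0])
dice, iou = dice_and_iou(pred, target)  # dice = 2*3/(4+4) = 0.75, iou = 3/5 = 0.6
```

Dice weights the intersection twice, so it is always at least as large as IoU; this is why the paper reports both for each BraTS benchmark.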

26 pages, 1227 KB  
Article
Automated Sleep Spindle Analysis in Epilepsy EEG Using Deep Learning
by Nikolay V. Gromov, Albina V. Lebedeva, Artem A. Sharkov, Anna D. Grebenyukova, Anton E. Malkov, Svetlana A. Gerasimova, Lev A. Smirnov, Tatiana A. Levanova and Alexander N. Pisarchik
Technologies 2025, 13(11), 524; https://doi.org/10.3390/technologies13110524 - 13 Nov 2025
Cited by 1 | Viewed by 1624
Abstract
Sleep spindles, together with K-complexes, are the distinctive patterns of neuronal activity in EEG recordings during stage 2 sleep. When the mechanisms of sleep spindle generation are impaired, e.g., in epilepsy, their quantitative parameters change. The analysis of these changes can provide valuable insights into the formation of epileptiform activity patterns and help to develop an additional tool for more accurate medical diagnosis. Despite the central role of EEG in the diagnosis of epilepsy, disorders of consciousness, and neurological research, resources specifically dedicated to large-scale EEG data analysis are under-represented. In our study, we collect a specialized database of clinical EEG recordings from epilepsy patients and controls during N2 sleep, characterized by rhythmic spindle activity in frontocentral and vertex regions, and manually annotate them. We then quantify four key sleep spindle characteristics using a comparison of manual annotation by a clinician and artificial intelligence technologies. A thorough evaluation of state-of-the-art deep learning architectures for detecting and characterizing sleep spindles in EEG recordings from epilepsy patients is conducted. The results show that the 1D U-Net and SEED architectures achieve competitive overall performance, but their precision-to-recall ratios differ markedly in clinical settings. This suggests that different approaches may be appropriate for each clinical situation. Furthermore, our results demonstrate that epilepsy is associated with significant and quantifiable changes in sleep spindle morphology and frequency. Automated analysis of these characteristics using artificial intelligence provides a reliable biomarker that provides a detailed picture of thalamocortical dysfunction in epilepsy. This approach has great potential for accelerated diagnosis and the development of targeted therapeutic strategies for epilepsy. Full article

24 pages, 2879 KB  
Article
Skeleton-Based Real-Time Hand Gesture Recognition Using Data Fusion and Ensemble Multi-Stream CNN Architecture
by Maki K. Habib, Oluwaleke Yusuf and Mohamed Moustafa
Technologies 2025, 13(11), 484; https://doi.org/10.3390/technologies13110484 - 26 Oct 2025
Cited by 1 | Viewed by 2050
Abstract
Hand Gesture Recognition (HGR) is a vital technology that enables intuitive human–computer interaction in various domains, including augmented reality, smart environments, and assistive systems. Achieving both high accuracy and real-time performance remains challenging due to the complexity of hand dynamics, individual morphological variations, and computational limitations. This paper presents a lightweight and efficient skeleton-based HGR framework that addresses these challenges through an optimized multi-stream Convolutional Neural Network (CNN) architecture and a trainable ensemble tuner. Dynamic 3D gestures are transformed into structured, noise-minimized 2D spatiotemporal representations via enhanced data-level fusion, supporting robust classification across diverse spatial perspectives. The ensemble tuner strengthens semantic relationships between streams and improves recognition accuracy. Unlike existing solutions that rely on high-end hardware, the proposed framework achieves real-time inference on consumer-grade devices without compromising accuracy. Experimental validation across five benchmark datasets (SHREC2017, DHG1428, FPHA, LMDHG, and CNR) confirms consistent or superior performance with reduced computational overhead. Additional validation on the SBU Kinect Interaction Dataset highlights generalization potential for broader Human Action Recognition (HAR) tasks. This advancement bridges the gap between efficiency and accuracy, supporting scalable deployment in AR/VR, mobile computing, interactive gaming, and resource-constrained environments. Full article

20 pages, 3294 KB  
Article
Non-Intrusive Infant Body Position Detection for Sudden Infant Death Syndrome Prevention Using Pressure Mats
by Antonio Garcia-Herraiz, Susana Nunez-Nagy, Luis Cruz-Piris and Bernardo Alarcos
Technologies 2025, 13(10), 427; https://doi.org/10.3390/technologies13100427 - 23 Sep 2025
Viewed by 1205
Abstract
Sudden Infant Death Syndrome (SIDS) is one of the leading causes of postnatal mortality, with the prone sleeping position identified as a critical risk factor. This article presents the design, implementation, and validation of a low-cost embedded system for unobtrusive, real-time monitoring of infant posture. The system acquires data from a pressure mat on which the infant rests, converting the pressure matrix into an image representing the postural imprint. A Convolutional Neural Network (CNN) has been trained to classify these images and distinguish between prone and supine positions with high accuracy. The trained model was optimized and deployed in a data acquisition and processing system (DAQ) based on the Raspberry Pi platform, enabling local and autonomous inference. To prevent false positives, the system activates a visual and audible alarm upon detection of a sustained risk position, alongside remote notifications via the MQTT protocol. The results demonstrate that the prototype is capable of reliably and continuously identifying the infant’s posture when used by people who are not technology experts. We conclude that it is feasible to develop an autonomous, accessible, and effective monitoring system that can serve as a support tool for caregivers and as a technological basis for new strategies in SIDS prevention. Full article

15 pages, 4611 KB  
Article
Real-Time Prediction of Foot Placement and Step Height Using Stereo Vision Enhanced by Ground Object Awareness
by Chulyong Lim, Jaewon Baek, Junhee Han, Giuk Lee and Woochul Nam
Technologies 2025, 13(9), 399; https://doi.org/10.3390/technologies13090399 - 3 Sep 2025
Viewed by 1327
Abstract
Foot placement position (FP) and step height (SH) are needed to control walking-assistive systems on uneven terrain. This study proposes a novel model that predicts FP and SH before a user takes a step. The model uses a stereo vision system mounted on the upper body and adapts to various terrains by incorporating foot motions and terrain object information. First, FP was predicted by visually tracking foot positions and was corrected based on the types and locations of objects on the ground. Then, SH was estimated using depth maps captured by an RGB-D stereo camera. To predict SH, several RGB-D frames were considered with homography, feature matching, and image transformation. The results show that the heatmap trajectory improved FP prediction on the flat-walking dataset, reducing the root mean square error of FP from 20.89 to 17.70 cm. Furthermore, incorporating object preference significantly improved FP prediction, resulting in an accuracy improvement from 52.57% to 78.01% in identifying the object a user stepped on. The mean absolute error of SH was calculated to be 7.65 cm in scenes containing rocks and puddles. The proposed model can enhance the control of walking-assistive systems in complex environments. Full article

33 pages, 8494 KB  
Article
Enhanced Multi-Class Brain Tumor Classification in MRI Using Pre-Trained CNNs and Transformer Architectures
by Marco Antonio Gómez-Guzmán, Laura Jiménez-Beristain, Enrique Efren García-Guerrero, Oscar Adrian Aguirre-Castro, José Jaime Esqueda-Elizondo, Edgar Rene Ramos-Acosta, Gilberto Manuel Galindo-Aldana, Cynthia Torres-Gonzalez and Everardo Inzunza-Gonzalez
Technologies 2025, 13(9), 379; https://doi.org/10.3390/technologies13090379 - 22 Aug 2025
Cited by 6 | Viewed by 4389
Abstract
Early and accurate identification of brain tumors is essential for determining effective treatment strategies and improving patient outcomes. Artificial intelligence (AI) and deep learning (DL) techniques have shown promise in automating diagnostic tasks based on magnetic resonance imaging (MRI). This study evaluates the performance of four pre-trained deep convolutional neural network (CNN) architectures for the automatic multi-class classification of brain tumors into four categories: Glioma, Meningioma, Pituitary, and No Tumor. The proposed approach utilizes the publicly accessible Brain Tumor MRI Msoud dataset, consisting of 7023 images, with 5712 provided for training and 1311 for testing. To assess the impact of data availability, subsets containing 25%, 50%, 75%, and 100% of the training data were used. A stratified five-fold cross-validation technique was applied. The CNN architectures evaluated include DeiT3_base_patch16_224, Xception41, Inception_v4, and Swin_Tiny_Patch4_Window7_224, all fine-tuned using transfer learning. The training pipeline incorporated advanced preprocessing and image data augmentation techniques to enhance robustness and mitigate overfitting. Among the models tested, Swin_Tiny_Patch4_Window7_224 achieved the highest classification Accuracy of 99.24% on the test set using 75% of the training data. This model demonstrated superior generalization across all tumor classes and effectively addressed class imbalance issues. Furthermore, we deployed and benchmarked the best-performing DL model on embedded AI platforms (Jetson AGX Xavier and Orin Nano), demonstrating their capability for real-time inference and highlighting their feasibility for edge-based clinical deployment. The results highlight the strong potential of pre-trained deep CNN and transformer-based architectures in medical image analysis. 
The proposed approach provides a scalable and energy-efficient solution for automated brain tumor diagnosis, facilitating the integration of AI into clinical workflows. Full article

22 pages, 7716 KB  
Article
A Deep-Learning Approach to Heart Sound Classification Based on Combined Time-Frequency Representations
by Leonel Orozco-Reyes, Miguel A. Alonso-Arévalo, Eloísa García-Canseco, Roilhi F. Ibarra-Hernández and Roberto Conte-Galván
Technologies 2025, 13(4), 147; https://doi.org/10.3390/technologies13040147 - 7 Apr 2025
Cited by 6 | Viewed by 6524
Abstract
Worldwide, heart disease is the leading cause of mortality. Cardiac auscultation, when conducted by a trained professional, is a non-invasive, cost-effective, and readily available method for the initial assessment of cardiac health. Automated heart sound analysis offers a promising and accessible approach to supporting cardiac diagnosis. This work introduces a novel method for classifying heart sounds as normal or abnormal by leveraging time-frequency representations. Our approach combines three distinct time-frequency representations—short-time Fourier transform (STFT), mel-scale spectrogram, and wavelet synchrosqueezed transform (WSST)—to create images that enhance classification performance. These images are used to train five convolutional neural networks (CNNs): AlexNet, VGG-16, ResNet50, a CNN specialized in STFT images, and our proposed CNN model. The method was trained and tested using three public heart sound datasets: PhysioNet/CinC Challenge 2016, CirCor DigiScope Phonocardiogram Dataset 2022, and another open database. While individual representations achieve maximum accuracy of ≈85.9%, combining STFT, mel, and WSST boosts accuracy to ≈99%. By integrating complementary time-frequency features, our approach demonstrates robust heart sound analysis, achieving consistent classification performance across diverse CNN architectures, thus ensuring reliability and generalizability. Full article
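To illustrate the idea of combining several time-frequency views of one signal into a single CNN input image, here is a minimal sketch (our own simplification: a hand-rolled Hann-windowed STFT, with the mel and WSST channels faked by reusing the STFT purely to show the stacking step; the paper computes each transform properly):

```python
import numpy as np

def stft_mag(x, win=64, hop=32):
    """Magnitude short-time Fourier transform of a 1-D signal using a
    Hann window: a minimal stand-in for the STFT branch above."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T  # (freq_bins, n_frames)

# A synthetic one-second "heart sound" segment at 2 kHz.
fs = 2000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)

stft = stft_mag(x)
# Stack the three representations as channels of one image, as the
# abstract describes; a real pipeline would put the mel spectrogram
# and WSST in the second and third channels instead.
image = np.stack([stft, stft, stft], axis=-1)  # (freq, time, 3) input for a CNN
```

The resulting three-channel image slots directly into RGB-pretrained CNNs such as the AlexNet, VGG-16, and ResNet50 backbones the study fine-tunes.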

28 pages, 3613 KB  
Article
Chatbot Based on Large Language Model to Improve Adherence to Exercise-Based Treatment in People with Knee Osteoarthritis: System Development
by Humberto Farías, Joaquín González Aroca and Daniel Ortiz
Technologies 2025, 13(4), 140; https://doi.org/10.3390/technologies13040140 - 4 Apr 2025
Cited by 4 | Viewed by 3149
Abstract
Knee osteoarthritis (KOA) is a prevalent condition globally, leading to significant pain and disability, particularly in individuals over the age of 40. While exercise has been shown to reduce symptoms and improve physical function and quality of life in patients with KOA, long-term adherence to exercise programs remains a challenge due to the lack of ongoing support. To address this, a chatbot was developed using large language models (LLMs) to provide evidence-based guidance and promote adherence to treatment. A systematic review conducted under the PRISMA framework identified relevant clinical guidelines that served as the foundational knowledge base for the chatbot. The Mistral 7B model, optimized with Parameter-Efficient Fine-Tuning (PEFT) and Mixture-of-Experts (MoE) techniques, was integrated to ensure computational efficiency and mitigate hallucinations, a critical concern in medical applications. Additionally, the chatbot employs Self-Reflective Retrieval-Augmented Generation (SELF-RAG) combined with Chain of Thought (CoT) reasoning, enabling dynamic query reformulation and the generation of accurate, evidence-based responses tailored to patient needs. The chatbot was evaluated by comparing pre- and post-improvement versions and against a reference model (ChatGPT), using metrics of accuracy, relevance, and consistency. The results demonstrated significant improvements in response quality and conversational coherence, emphasizing the potential of integrating advanced LLMs with retrieval and reasoning methods to address critical challenges in healthcare. This approach not only enhances treatment adherence but also strengthens patient–provider interactions in managing chronic conditions like KOA. Full article

17 pages, 1167 KB  
Article
Preprocessing-Free Convolutional Neural Network Model for Arrhythmia Classification Using ECG Images
by Chotirose Prathom, Ryuhi Fukuda, Yuto Yokoyanagi and Yoshifumi Okada
Technologies 2025, 13(4), 128; https://doi.org/10.3390/technologies13040128 - 26 Mar 2025
Cited by 3 | Viewed by 2283
Abstract
Arrhythmia, which is characterized by irregular heart rhythms, can lead to life-threatening conditions by disrupting the circulatory system. Thus, early arrhythmia detection is crucial for timely and appropriate patient treatment. Machine learning models have been developed to classify arrhythmia using electrocardiogram (ECG) data, which effectively capture the patterns associated with different abnormalities and achieve high classification performance. However, these models face challenges in terms of input coverage and robustness against data imbalance issues. Typically, existing methods employ a single cardiac cycle as the input, possibly overlooking the intervals between cycles, potentially resulting in the loss of critical temporal information. In addition, limited samples for rare arrhythmia types restrict the involved model’s ability to effectively learn, frequently resulting in low classification accuracy. Furthermore, the classification performance of existing methods on unseen data is not satisfactory owing to insufficient generalizability. To address these limitations, this research proposes a convolutional neural network (CNN) model for arrhythmia classification that incorporates two specialized modules. First, the proposed model utilizes images of three consecutive cardiac cycles as the input to expand the learning scope. Second, we implement a focal loss (FL) function during model training to prioritize minority classes. The experimental results demonstrate that the proposed model outperforms existing methods without requiring data preprocessing. The integration of multicycle ECG images and the FL function substantially enhances the model’s ability to capture ECG patterns, particularly for minority classes. In addition, our model exhibits satisfactory classification performance on unseen data from new patients. These findings suggest that the proposed model is a promising tool for practical application in arrhythmia classification tasks. Full article
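The focal loss (FL) mentioned above is the standard Lin et al. formulation; a binary numpy sketch (illustrative only; the paper applies it in multi-class training) shows how the modulating factor down-weights easy examples so rare arrhythmia classes dominate the gradient:

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0, alpha=0.25):
    """Binary focal loss: cross-entropy scaled by (1 - p_t)^gamma so
    confidently correct (easy) samples contribute almost nothing,
    which is why it helps with class imbalance.

    probs: predicted probability of the positive class; labels: 0/1.
    """
    p_t = np.where(labels == 1, probs, 1.0 - probs)  # prob of the true class
    a_t = np.where(labels == 1, alpha, 1.0 - alpha)  # per-class weighting
    return -a_t * (1.0 - p_t) ** gamma * np.log(np.clip(p_t, 1e-12, 1.0))

# A confident correct prediction is penalized far less than a
# confident mistake on the same positive label.
easy = focal_loss(np.array([0.95]), np.array([1]))[0]
hard = focal_loss(np.array([0.05]), np.array([1]))[0]
```

With gamma = 0 and alpha = 0.5 this reduces to (half of) ordinary binary cross-entropy, which makes the two hyperparameters easy to ablate.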

28 pages, 3337 KB  
Article
Lung and Colon Cancer Classification Using Multiscale Deep Features Integration of Compact Convolutional Neural Networks and Feature Selection
by Omneya Attallah
Technologies 2025, 13(2), 54; https://doi.org/10.3390/technologies13020054 - 1 Feb 2025
Cited by 19 | Viewed by 4396
Abstract
The automated and precise classification of lung and colon cancer from histopathological photos continues to pose a significant challenge in medical diagnosis, as current computer-aided diagnosis (CAD) systems are frequently constrained by their dependence on singular deep learning architectures, elevated computational complexity, and their ineffectiveness in utilising multiscale features. To this end, the present research introduces a CAD system that integrates several lightweight convolutional neural networks (CNNs) with dual-layer feature extraction and feature selection to overcome the aforementioned constraints. Initially, it extracts deep attributes from two separate layers (pooling and fully connected) of three pre-trained CNNs (MobileNet, ResNet-18, and EfficientNetB0). Second, the system uses the benefits of canonical correlation analysis for dimensionality reduction in pooling layer attributes to reduce complexity. In addition, it integrates the dual-layer features to encapsulate both high- and low-level representations. Finally, to benefit from multiple deep network architectures while reducing classification complexity, the proposed CAD merges dual deep layer variables of the three CNNs and then applies the analysis of variance (ANOVA) and Chi-Squared for the selection of the most discriminative features from the integrated CNN architectures. The CAD is assessed on the LC25000 dataset leveraging eight distinct classifiers, encompassing various Support Vector Machine (SVM) variants, Decision Trees, Linear Discriminant Analysis, and k-nearest neighbours. The experimental results exhibited outstanding performance, attaining 99.8% classification accuracy with cubic SVM classifiers employing merely 50 ANOVA-selected features, exceeding the performance of individual CNNs while markedly diminishing computational complexity. 
The framework’s capacity to sustain exceptional accuracy with a limited feature set renders it especially advantageous for clinical applications where diagnostic precision and efficiency are critical. These findings confirm the efficacy of the multi-CNN, multi-layer methodology in enhancing cancer classification precision while mitigating the computational constraints of current systems. Full article

20 pages, 3968 KB  
Article
HybridFusionNet: Deep Learning for Multi-Stage Diabetic Retinopathy Detection
by Amar Shukla, Shamik Tiwari and Anurag Jain
Technologies 2024, 12(12), 256; https://doi.org/10.3390/technologies12120256 - 11 Dec 2024
Cited by 7 | Viewed by 3733
Abstract
Diabetic retinopathy (DR) is one of the most common causes of visual impairment worldwide and requires reliable automated detection methods. Numerous research efforts have developed various conventional methods for the early detection of DR, yet research in this field remains insufficient, indicating the potential for advances in diagnosis. In this paper, a hybrid model (HybridFusionNet) that integrates a vision transformer (ViT) and attention mechanisms is presented. It improves classification in the binary (Bcl) and multi-class (Mcl) stages by utilizing deep features from the DR stages, and both the SAN and ViT models improve the recognition accuracy (Acc) in both stages. The HybridFusionNet mechanism achieves a competitive improvement in both stages, with an Acc of 91% in Bcl and 99% in Mcl. This illustrates that the model is suitable for a better diagnosis of DR. Full article
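The abstract does not detail the hybrid architecture, but the attention mechanism at the heart of ViT-based fusion models like this one reduces to scaled dot-product attention over patch embeddings. A minimal NumPy sketch, in which the patch count and embedding size are illustrative assumptions:

```python
# Scaled dot-product attention, the core operation of ViT/attention blocks
# used in hybrid fusion models such as the one described above.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attend over value vectors V using query/key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # pairwise patch similarities
    weights = softmax(scores, axis=-1)     # rows sum to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
# Hypothetical setup: 16 retinal image patches with 64-d embeddings
# (in a real ViT these come from a learned patch-embedding layer).
tokens = rng.normal(size=(16, 64))
out, w = scaled_dot_product_attention(tokens, tokens, tokens)
print(out.shape)   # each patch is re-expressed as a weighted mix of patches
```

A hybrid model would typically interleave such self-attention blocks with convolutional feature extraction before a final classification head for the Bcl and Mcl stages.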

Review

Jump to: Research, Other

22 pages, 1130 KB  
Review
Artificial Intelligence in the Diagnosis and Prognosis of Osteosarcoma: A Decade of Progress and Future Directions
by Ralph Abou Ghayda, Karim Kalout, Joudy Eter, Mario Abdelnour, Hilda E. Ghadieh, Sami Azar and Frederic Harb
Technologies 2026, 14(3), 184; https://doi.org/10.3390/technologies14030184 - 19 Mar 2026
Viewed by 623
Abstract
Osteosarcoma is the most frequent primary malignant bone tumor in childhood and adolescence. It is aggressive and may be associated with early metastasis, making patient management difficult. In this research, modern AI models for the diagnosis and prognosis of osteosarcoma were screened and analyzed. Our review searched for articles that applied AI to the diagnosis and prognosis of osteosarcoma over the past 10 years, including AI in predicting tumor staging, predicting chemotherapy response, identifying prognostic biomarkers, and assessing the risk of metastasis. The models performed well based on AUC and C-index, with considerable discriminatory power, and were superior to the classical clinical methods analyzed. By identifying existing deficiencies in the literature, this review pointed out the need for future research on prospective validation, multimodal data fusion, and the translation of AI tools into clinical routine. Full article

Other

Jump to: Research, Review

21 pages, 1343 KB  
Systematic Review
The Role of Artificial Intelligence in the Detection and Diagnosis of Neurocognitive Disorders: A Systematic Review
by Pasqualina Perna, Alessandra Claudi, Fabrizio Stasolla and Raffaele Nappo
Technologies 2026, 14(3), 183; https://doi.org/10.3390/technologies14030183 - 18 Mar 2026
Viewed by 450
Abstract
Dementia represents a major healthcare challenge, as pathological changes often occur years before overt symptoms. Early manifestations such as mild cognitive impairment (MCI) and subjective cognitive decline (SCD) represent critical transitional stages between normal aging and dementia. Thus, distinguishing these conditions (i.e., MCI and SCD) and determining their potential evolution into dementia remains crucial. However, current clinical tools, mainly neuroimaging and neuropsychological assessments, are not always clearly interpretable and are often resource-intensive. In recent years, artificial intelligence (AI), including machine learning (ML) and deep learning (DL), has demonstrated promising potential in early detection, progression prediction, and differential diagnosis of neurocognitive disorders. This systematic review aims to synthesize current evidence on the application of AI-based approaches to improve diagnostic accuracy and prognostic assessments in dementia. A comprehensive literature search of studies published between 2015 and 2025 was conducted across PubMed/MEDLINE, Scopus, and Web of Science, following PRISMA 2020 guidelines. Studies were evaluated for data modality, methodological rigor, performance metrics, and clinical applicability. Seventeen (17) studies, of which twelve (12) are primary studies and five (5) are secondary studies, examining AI applications in detecting and diagnosing neurocognitive disorders (NCDs) in adults with dementia, MCI, or SCD were included. Results indicate that AI models, particularly DL applied to neuroimaging, electrophysiological data, speech and language features, biomarkers, and digital behavioral data, achieve high diagnostic accuracy in distinguishing MCI, Alzheimer’s disease, and healthy aging. Predictive models also show potential in forecasting conversion from MCI to dementia and monitoring cognitive trajectories via wearable or smart-home technologies. 
Nonetheless, heterogeneity, limited external validation, and methodological inconsistencies hinder clinical translation. In conclusion, AI represents a rapidly evolving and promising tool for early detection and monitoring of neurocognitive disorders. Collectively, the reviewed studies underscore the need for standardized pipelines, larger multicenter datasets, and explainable AI frameworks to enable effective clinical implementation. Full article

28 pages, 676 KB  
Systematic Review
Challenges and Ethical Considerations in Implementing Assistive Technologies in Healthcare
by Eleni Gkiolnta, Debopriyo Roy and George F. Fragulis
Technologies 2025, 13(2), 48; https://doi.org/10.3390/technologies13020048 - 27 Jan 2025
Cited by 17 | Viewed by 13288
Abstract
Assistive technologies are becoming an increasingly important aspect of healthcare, particularly for people with physical or cognitive impairments. While earlier research has investigated the ethical, legal, and societal implications of AI and assistive technologies, many studies have failed to address real-world obstacles such as data privacy, algorithmic bias, and regulatory issues. To better understand these issues, we conducted a thorough review of the current literature and examined real-world case studies. As AI-powered solutions become more widely used, we found that stronger legal frameworks and robust data security standards are required. Furthermore, privacy-preserving procedures and transparent accountability are critical for retaining patient trust and guaranteeing the effective use of these technologies in healthcare. This research provides important insights into the ethical and practical challenges that must be tackled for the successful integration of assistive technologies. Full article
