Search Results (154)

Search Parameters:
Keywords = machine–human hybrid approach

44 pages, 5889 KB  
Article
A Multi-Stage Hybrid Learning Model with Advanced Feature Fusion for Enhanced Prostate Cancer Classification
by Sameh Abd El-Ghany and A. A. Abd El-Aziz
Diagnostics 2025, 15(24), 3235; https://doi.org/10.3390/diagnostics15243235 - 17 Dec 2025
Abstract
Background: Cancer poses a significant health risk to humans, with prostate cancer (PCa) being the second most common and deadly form among men, following lung cancer. Each year, it affects over a million individuals and presents substantial diagnostic challenges due to variations in tissue appearance and imaging quality. In recent decades, various techniques utilizing Magnetic Resonance Imaging (MRI) have been developed for identifying and classifying PCa. Accurate classification in MRI typically requires the integration of complementary feature types, such as deep semantic representations from Convolutional Neural Networks (CNNs) and handcrafted descriptors like Histogram of Oriented Gradients (HOG). Therefore, a more robust and discriminative feature integration strategy is crucial for enhancing computer-aided diagnosis performance. Objectives: This study aims to develop a multi-stage hybrid learning model that combines deep and handcrafted features, investigates various feature reduction and classification techniques, and improves diagnostic accuracy for prostate cancer using magnetic resonance imaging. Methods: The proposed framework integrates deep features extracted from convolutional architectures with handcrafted texture descriptors to capture both semantic and structural information. Multiple dimensionality reduction methods, including singular value decomposition (SVD), were evaluated to optimize the fused feature space. Several machine learning (ML) classifiers were benchmarked to identify the most effective diagnostic configuration. The overall framework was validated using k-fold cross-validation to ensure reliability and minimize evaluation bias. 
Results: Experimental results on the Transverse Plane Prostate (TPP) dataset for binary classification tasks showed that the hybrid model significantly outperformed individual deep or handcrafted approaches, achieving superior accuracy of 99.74%, specificity of 99.87%, precision of 99.87%, sensitivity of 99.61%, and F1-score of 99.74%. Conclusions: By combining complementary feature extraction, dimensionality reduction, and optimized classification, the proposed model offers a reliable and generalizable solution for prostate cancer diagnosis and demonstrates strong potential for integration into intelligent clinical decision-support systems. Full article
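The fusion-then-reduction strategy this abstract describes (concatenating deep CNN features with handcrafted HOG descriptors, then compressing via SVD) can be sketched in a few lines of NumPy. This is a toy illustration, not the authors' implementation; the array sizes, the centring step, and the plain truncated-SVD projection are assumptions:

```python
import numpy as np

def fuse_and_reduce(deep_feats, hog_feats, k):
    """Concatenate deep and handcrafted features, then project
    onto the top-k right singular vectors (truncated SVD)."""
    fused = np.concatenate([deep_feats, hog_feats], axis=1)  # (n, d1+d2)
    # Centre before SVD so the projection captures variance, not the mean.
    centred = fused - fused.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:k].T  # (n, k) reduced representation

rng = np.random.default_rng(0)
deep = rng.normal(size=(8, 32))  # stand-in for CNN embeddings
hog = rng.normal(size=(8, 16))   # stand-in for HOG descriptors
reduced = fuse_and_reduce(deep, hog, k=4)
print(reduced.shape)  # (8, 4)
```

In the paper's pipeline, such reduced vectors would then feed the benchmarked ML classifiers under k-fold cross-validation.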

33 pages, 2685 KB  
Review
Predicting Coastal Flooding and Overtopping with Machine Learning: Review and Future Prospects
by Moeketsi L. Duiker, Victor Ramos, Francisco Taveira-Pinto and Paulo Rosa-Santos
J. Mar. Sci. Eng. 2025, 13(12), 2384; https://doi.org/10.3390/jmse13122384 - 16 Dec 2025
Abstract
Flooding and overtopping are major concerns in coastal areas due to their potential to cause severe damage to infrastructure, economic activities, and human lives. Traditional methods for predicting these phenomena include numerical and physical models, as well as empirical formulations. However, these methods have limitations, such as the high computational costs, reliance on extensive field data, and reduced accuracy under complex conditions. Recent advances in machine learning (ML) offer new opportunities to improve predictive capabilities in coastal engineering. This paper reviews ML applications for coastal flooding and overtopping prediction, analyzing commonly used models, data sources, and preprocessing techniques. Several studies report that ML models can match or exceed the performance of traditional approaches, such as empirical EurOtop formulas or high-fidelity numerical models, particularly in controlled laboratory datasets where numerical models are computationally intensive and empirical methods show larger estimation errors. However, their advantages remain task- and data-dependent, and their generalization and interpretability may lag behind physics-based methods. This review also examines recent developments, such as hybrid approaches, real-time monitoring, and explainable artificial intelligence, which show promise in addressing these limitations and advancing the operational use of ML in coastal flooding and overtopping prediction. Full article
(This article belongs to the Special Issue Coastal Disaster Assessment and Response—2nd Edition)

20 pages, 6385 KB  
Article
Molecular Remodeling of Milk Fat Globules Induced by Centrifugation: Insights from Deep Learning-Based Detection of Milk Adulteration
by Grzegorz Gwardys, Grzegorz Grodkowski, Piotr Kostusiak, Wojciech Mendelowski, Jan Slósarz, Michał Satława, Bartłomiej Śmietanka, Krzysztof Gwardys, Marcin Gołębiewski and Kamila Puppel
Int. J. Mol. Sci. 2025, 26(24), 11919; https://doi.org/10.3390/ijms262411919 - 10 Dec 2025
Abstract
Milk adulteration through centrifugation, which artificially reduces the somatic cell count (SCC), represents a significant challenge to food authenticity and public health. This fraudulent practice alters the native molecular architecture of milk, masking inflammatory conditions such as subclinical mastitis and distorting product quality. Conventional analytical and microscopic techniques remain insufficiently sensitive to detect the subtle physicochemical changes associated with centrifugation, highlighting the need for molecular-level, data-driven diagnostics. The dataset included 128 paired raw milk samples and approximately 25,000 bright-field micrographs acquired across multiple microscopes, of which 95% were confirmed to be of high quality. In this study, advanced machine learning (ML) and deep learning (DL) approaches were applied to identify centrifugation-induced alterations in raw milk microstructure. Bright-field micrographs (pixel size 0.27 µm) of paired unprocessed and centrifuged samples were obtained under standardized optical conditions and analyzed using convolutional neural networks (ResNet-18/50, Inception-v3, Xception, NasNet-Mobile) and hybrid attention architectures (MaxViT, CoAtNet). Model performance was evaluated using the harmonic average of recalls across five micrographs per sample (HAR5). Human microscopy experts (n = 4) achieved only 18% classification accuracy—below the random baseline (25%)—confirming that centrifugation-induced modifications are not visually discernible. In contrast, DL architectures reached up to 97% accuracy (HAR5, Xception), successfully identifying subtle molecular cues. Class activation and sensitivity analyses indicated that models focused not on milk fat globule (MFG) boundaries but on high-frequency nanoscale variations related to the reorganization of casein micelles and solid non-fat fractions. 
The findings strongly suggest that centrifugation adulteration constitutes a molecular reorganization event rather than a morphological alteration. The integration of optical microscopy with AI-driven molecular analytics establishes deep learning as a precise and objective tool for detecting fraudulent milk processing and improving food integrity diagnostics. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Molecular Sciences)
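The HAR5 metric above is described as a harmonic average of recalls over five micrographs per sample. The abstract does not give its exact formulation, so the following is a generic harmonic-mean sketch under that reading; the recall values are invented for illustration:

```python
def harmonic_mean(values):
    """Harmonic mean of a list of positive scores.
    Collapses to 0 if any score is 0, which makes it stricter
    than the arithmetic mean for uneven per-micrograph recalls."""
    if any(v == 0 for v in values):
        return 0.0
    return len(values) / sum(1.0 / v for v in values)

# Hypothetical recalls from five micrographs of one milk sample:
recalls = [0.95, 0.90, 1.00, 0.85, 0.92]
print(round(harmonic_mean(recalls), 4))
```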

49 pages, 969 KB  
Article
Evolution and Key Differences in Maturity Models for Industrial Digital Transformation: Focus on Industry 4.0 and 5.0
by Dayron Reyes Domínguez, Marta Beatriz Infante Abreu and Aurica Luminita Parv
Sustainability 2025, 17(24), 11042; https://doi.org/10.3390/su172411042 - 10 Dec 2025
Abstract
This study conducts an Academic Literature Analysis of 75 maturity models to clarify how Industry 4.0 and Industry 5.0 are being conceptualized and assessed. We map model scope, level structures, evaluated dimensions, and enabling technologies and complement descriptive statistics with exploratory non-parametric tests on the relationship between level structure and dimensional breadth. Results show a persistent dominance of Industry 4.0 models (≈92%), alongside a recent but steady emergence of Industry 5.0 and hybrid approaches in the latest models. Structurally, five-level schemes prevail, balancing diagnostic granularity and comparability. Content-wise, Technology and Digitalization, Processes and Operations, and Management and Strategy remain core, while People and Competencies and Innovation gain relevance; Sustainability and Social Responsibility and Human–Machine Interaction appear with the rise of Industry 5.0. We contribute (i) an operational definition of “hybrid” maturity models to make the I4.0→I5.0 transition measurable, (ii) a meta-typology of maturity levels explaining the five-level preference, and (iii) an evidence-based technology cartography across models. The findings suggest that future designs should retain the digital backbone of I4.0 while integrating explicit indicators for human-centricity, sustainability, and resilience with transparent weighting and scenario-based validation. Full article
(This article belongs to the Special Issue Sustainable Intelligent Manufacturing Systems in Industry 4.0 and 5.0)

23 pages, 3559 KB  
Article
From Static Prediction to Mindful Machines: A Paradigm Shift in Distributed AI Systems
by Rao Mikkilineni and W. Patrick Kelly
Computers 2025, 14(12), 541; https://doi.org/10.3390/computers14120541 - 10 Dec 2025
Abstract
A special class of complex adaptive systems—biological and social—thrive not by passively accumulating patterns, but by engineering coherence, i.e., the deliberate alignment of prior knowledge, real-time updates, and teleonomic purposes. By contrast, today’s AI stacks—Large Language Models (LLMs) wrapped in agentic toolchains—remain rooted in a Turing-paradigm architecture: statistical world models (opaque weights) bolted onto brittle, imperative workflows. They excel at pattern completion, but they externalize governance, memory, and purpose, thereby accumulating coherence debt—a structural fragility manifested as hallucinations, shallow and siloed memory, ad hoc guardrails, and costly human oversight. The shortcoming of current AI relative to human-like intelligence is therefore less about raw performance or scaling, and more about an architectural limitation: knowledge is treated as an after-the-fact annotation on computation, rather than as an organizing substrate that shapes computation. This paper introduces Mindful Machines, a computational paradigm that operationalizes coherence as an architectural property rather than an emergent afterthought. A Mindful Machine is specified by a Digital Genome (encoding purposes, constraints, and knowledge structures) and orchestrated by an Autopoietic and Meta-Cognitive Operating System (AMOS) that runs a continuous Discover–Reflect–Apply–Share (D-R-A-S) loop. Instead of a static model embedded in a one-shot ML pipeline or deep learning neural network, the architecture separates (1) a structural knowledge layer (Digital Genome and knowledge graphs), (2) an autopoietic control plane (health checks, rollback, and self-repair), and (3) meta-cognitive governance (critique-then-commit gates, audit trails, and policy enforcement). 
We validate this approach on the classic Credit Default Prediction problem by comparing a traditional, static Logistic Regression pipeline (monolithic training, fixed features, external scripting for deployment) with a distributed Mindful Machine implementation whose components can reconfigure logic, update rules, and migrate workloads at runtime. The Mindful Machine not only matches the predictive task, but also achieves autopoiesis (self-healing services and live schema evolution), explainability (causal, event-driven audit trails), and dynamic adaptation (real-time logic and threshold switching driven by knowledge constraints), thereby reducing the coherence debt that characterizes contemporary ML- and LLM-centric AI architectures. The case study demonstrates “a hybrid, runtime-switchable combination of machine learning and rule-based simulation, orchestrated by AMOS under knowledge and policy constraints”. Full article
(This article belongs to the Special Issue Cloud Computing and Big Data Mining)
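The critique-then-commit gating and audit trails described for AMOS can be illustrated with a minimal stand-alone sketch. All names here (`MindfulService`, `propose_threshold`) are hypothetical, not the authors' API; the point is only that a proposed runtime update passes a policy critique before being committed, with every decision logged:

```python
from dataclasses import dataclass, field

@dataclass
class MindfulService:
    """Toy critique-then-commit gate: a proposed rule change is
    applied only if it passes a policy check, and every proposal
    (accepted or not) is appended to an audit trail."""
    threshold: float = 0.5
    audit: list = field(default_factory=list)

    def propose_threshold(self, new_value, policy=lambda v: 0.0 < v < 1.0):
        verdict = policy(new_value)              # "critique" step
        self.audit.append((new_value, verdict))  # audit trail entry
        if verdict:
            self.threshold = new_value           # "commit" step
        return verdict

svc = MindfulService()
svc.propose_threshold(0.7)  # accepted: threshold switches at runtime
svc.propose_threshold(1.5)  # rejected by policy: state unchanged
print(svc.threshold, len(svc.audit))  # 0.7 2
```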

29 pages, 3769 KB  
Systematic Review
Illuminating Industry Evolution: Reframing Artificial Intelligence Through Transparent Machine Reasoning
by Albérico Travassos Rosário and Joana Carmo Dias
Information 2025, 16(12), 1044; https://doi.org/10.3390/info16121044 - 1 Dec 2025
Abstract
As intelligent systems become increasingly embedded in industrial ecosystems, the demand for transparency, reliability, and interpretability has intensified. This study investigates how explainable artificial intelligence (XAI) contributes to enhancing accountability, trust, and human–machine collaboration across industrial contexts transitioning from Industry 4.0 to Industry 5.0. To achieve this objective, a systematic bibliometric literature review (LRSB) was conducted following the PRISMA framework, analysing 98 peer-reviewed publications indexed in Scopus. This methodological approach enabled the identification of major research trends, theoretical foundations, and technical strategies that shape the development and implementation of XAI within industrial settings. The findings reveal that explainability is evolving from a purely technical requirement to a multidimensional construct integrating ethical, social, and regulatory dimensions. Techniques such as counterfactual reasoning, causal modelling, and hybrid neuro-symbolic frameworks are shown to improve interpretability and trust while aligning AI systems with human-centric and legal principles, notably those outlined in the EU AI Act. The bibliometric analysis further highlights the increasing maturity of XAI research, with strong scholarly convergence around transparency, fairness, and collaborative intelligence. By reframing artificial intelligence through the lens of transparent machine reasoning, this study contributes to both theory and practice. It advances a conceptual model linking explainability with measurable indicators of trustworthiness and accountability, and it offers a roadmap for developing responsible, human-aligned AI systems in the era of Industry 5.0. Ultimately, the study underscores that fostering explainability not only enhances functional integrity but also strengthens the ethical and societal legitimacy of AI in industrial transformation. Full article
(This article belongs to the Special Issue Advances in Information Studies)

25 pages, 1910 KB  
Review
Natural Language Processing in Generating Industrial Documentation Within Industry 4.0/5.0
by Izabela Rojek, Olga Małolepsza, Mirosław Kozielski and Dariusz Mikołajewski
Appl. Sci. 2025, 15(23), 12662; https://doi.org/10.3390/app152312662 - 29 Nov 2025
Abstract
Deep learning (DL) methods have revolutionized natural language processing (NLP), enabling industrial documentation systems to process and generate text with high accuracy and fluency. Modern deep learning models, such as transformers and recurrent neural networks (RNNs), learn contextual relationships in text, making them ideal for analyzing and creating complex industrial documentation. Transformer-based architectures, such as BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer), are ideally suited for tasks such as text summarization, content generation, and question answering, which are crucial for documentation systems. Pre-trained language models, tuned to specific industrial datasets, support domain-specific vocabulary, ensuring the generated documentation complies with industry standards. Deep learning-based systems can use sequential models, such as those used in machine translation, to generate documentation in multiple languages, promoting accessibility, and global collaboration. Using attention mechanisms, these models identify and highlight critical sections of input data, resulting in the generation of accurate and concise documentation. Integration with optical character recognition (OCR) tools enables DL-based NLP systems to digitize and interpret legacy documents, streamlining the transition to automated workflows. Reinforcement learning and human feedback loops can enhance a system’s ability to generate consistent and contextually relevant text over time. These approaches are particularly effective in creating dynamic documentation that is automatically updated based on data from sensors, registers, or other sources in real time. The scalability of DL techniques enables industrial organizations to efficiently produce massive amounts of documentation, reducing manual effort and improving overall efficiency. 
NLP has become a fundamental technology for automating the generation, maintenance, and personalization of industrial documentation within the Industry 4.0, 5.0, and emerging Industry 6.0 paradigms. Recent advances in large language models, search-assisted generation, and multimodal architectures have significantly improved the accuracy and contextualization of technical manuals, maintenance reports, and compliance documents. However, persistent challenges such as domain-specific terminology, data scarcity, and the risk of hallucinations highlight the limitations of current approaches in safety-critical manufacturing environments. This review synthesizes state-of-the-art methods, comparing rule-based, neural, and hybrid systems while assessing their effectiveness in addressing industrial requirements for reliability, traceability, and real-time adaptation. Human–AI collaboration and the integration of knowledge graphs are transforming documentation workflows as factories evolve toward cognitive and autonomous systems. The review included 32 articles published between 2018 and 2025. The implications of these bibliometric findings suggest that a high percentage of conference papers (69.6%) may indicate a field still in its conceptual phase, which contextualizes the article’s emphasis on proposed architecture rather than their industrial validation. Most research was conducted in computer science, suggesting early stages of technological maturity. The leading countries were China and India, but these countries did not have large publication counts, nor were leading researchers or affiliations observed, suggesting significant research dispersion. However, the most frequently observed SDGs indicate a clear health context, focusing on “industry innovation and infrastructure” and “good health and well-being”. Full article
(This article belongs to the Special Issue Emerging and Exponential Technologies in Industry 4.0)

17 pages, 3038 KB  
Article
Research on Deep Learning-Based Human–Robot Static/Dynamic Gesture-Driven Control Framework
by Gong Zhang, Jiahong Su, Shuzhong Zhang, Jianzheng Qi, Zhicheng Hou and Qunxu Lin
Sensors 2025, 25(23), 7203; https://doi.org/10.3390/s25237203 - 25 Nov 2025
Abstract
For human–robot gesture-driven control, this paper proposes a deep learning-based approach that employs both static and dynamic gestures to drive and control robots for object-grasping and delivery tasks. The method utilizes two-dimensional Convolutional Neural Networks (2D-CNNs) for static gesture recognition and a hybrid architecture combining three-dimensional Convolutional Neural Networks (3D-CNNs) and Long Short-Term Memory networks (3D-CNN+LSTM) for dynamic gesture recognition. Results on a custom gesture dataset demonstrate validation accuracies of 95.38% for static gestures and 93.18% for dynamic gestures, respectively. Then, in order to control and drive the robot to perform corresponding tasks, hand pose estimation was performed. The MediaPipe machine learning framework was first employed to extract hand feature points. These 2D feature points were then converted into 3D coordinates using a depth camera-based pose estimation method, followed by coordinate system transformation to obtain hand poses relative to the robot’s base coordinate system. Finally, an experimental platform for human–robot gesture-driven interaction was established, deploying both gesture recognition models. Four participants were invited to perform 100 trials each of gesture-driven object-grasping and delivery tasks under three lighting conditions: natural light, low light, and strong light. Experimental results show that the average success rates for completing tasks via static and dynamic gestures are no less than 96.88% and 94.63%, respectively, with task completion times consistently within 20 s. These findings demonstrate that the proposed approach enables robust vision-based robotic control through natural hand gestures, showing great prospects for human–robot collaboration applications. Full article
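The depth-camera step described above (converting 2D MediaPipe landmarks into 3D coordinates) is typically pinhole back-projection with the camera intrinsics. A minimal sketch, with illustrative intrinsic values rather than the authors' calibration, and omitting the subsequent transform into the robot's base frame:

```python
def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with measured depth (metres)
    into 3D camera coordinates using a pinhole intrinsic model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Illustrative intrinsics; real values come from camera calibration.
fx = fy = 600.0
cx, cy = 320.0, 240.0
# A hand landmark detected at pixel (420, 300) with 0.8 m depth:
print(pixel_to_camera(420, 300, 0.8, fx, fy, cx, cy))
```

The resulting camera-frame point would then be mapped into the robot base frame by the hand–eye calibration transform.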

13 pages, 2928 KB  
Article
Application Research on General Technology for Safety Appraisal of Existing Buildings Based on Unmanned Aerial Vehicles and Stair-Climbing Robots
by Zizhen Shen, Rui Wang, Lianbo Wang, Wenhao Lu and Wei Wang
Buildings 2025, 15(22), 4145; https://doi.org/10.3390/buildings15224145 - 17 Nov 2025
Abstract
Structure detection (SD) has emerged as a critical technology for ensuring the safety and longevity of infrastructure, particularly in housing and civil engineering. Traditional SD methods often rely on manual inspections, which are time-consuming, labor-intensive, and prone to human error, especially in complex environments such as dense urban settings or aging buildings with deteriorated materials. Recent advances in autonomous systems—such as Unmanned Aerial Vehicles (UAVs) and climbing robots—have shown promise in addressing these limitations by enabling efficient, real-time data collection. However, challenges persist in accurately detecting and analyzing structural defects (e.g., masonry cracks, concrete spalling) amidst cluttered backgrounds, hardware constraints, and the need for multi-scale feature integration. The integration of machine learning (ML) and deep learning (DL) has revolutionized SD by enabling automated feature extraction and robust defect recognition. For instance, RepConv architectures have been widely adopted for multi-scale object detection, while attention mechanisms like TAM (Technology Acceptance Model) have improved spatial feature fusion in complex scenes. Nevertheless, existing works often focus on singular sensing modalities (e.g., UAVs alone) or neglect the fusion of complementary data streams (e.g., ground-based robot imagery) to enhance detection accuracy. Furthermore, computational redundancy in multi-scale processing and inconsistent bounding box regression in detection frameworks remain underexplored. This study addresses these gaps by proposing a generalized safety inspection system that synergizes UAV and stair-climbing robot data. We introduce a novel multi-scale targeted feature extraction path (Rep-FasterNet TAM block) to unify automated RepConv-based feature refinement with dynamic-scale fusion, reducing computational overhead while preserving critical structural details. 
For detection, we combine traditional methods with remote sensor fusion to mitigate feature loss during image upsampling/downsampling, supported by a structural model GIOU [Mathematical Definition: GIOU = IOU − (C − U)/C] that enhances bounding box regression through shape/scale-aware constraints and real-time analysis. By situating our work within the context of recent reviews on ML/DL for SD, we demonstrate how our hybrid approach bridges the gap between autonomous inspection hardware and AI-driven defect analysis, offering a scalable solution for large-scale housing safety assessments. In response to challenges in detecting objects accurately during housing safety assessments—including large/dense objects, complex backgrounds, and hardware limitations—we propose a generalized inspection system leveraging data from UAVs and stair-climbing robots. To address multi-scale feature extraction inefficiencies, we design a Rep-FasterNet TAM block that integrates RepConv for automated feature refinement and a multi-scale attention module to enhance spatial feature consistency. For detection, we combine dynamic-scale remote feature fusion with traditional methods, supported by a structural GIOU model that improves bounding box regression through shape/scale constraints and real-time analysis. Experiments demonstrate that our system increases masonry/concrete assessment accuracy by 11.6% and 20.9%, respectively, while reducing manual drawing restoration workload by 16.54%. This validates the effectiveness of our hybrid approach in unifying autonomous inspection hardware with AI-driven analysis, offering a scalable solution for SD in housing infrastructure. Full article
(This article belongs to the Special Issue AI-Powered Structural Health Monitoring: Innovations and Applications)
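The GIOU definition quoted in the abstract, GIOU = IOU − (C − U)/C (where U is the union area and C the area of the smallest enclosing box), can be computed directly for axis-aligned boxes. A self-contained sketch; the corner-format boxes and example values are illustrative, not from the paper:

```python
def giou(box_a, box_b):
    """Generalized IoU for axis-aligned boxes given as (x1, y1, x2, y2):
    GIOU = IOU - (C - U) / C, where U is the union area and C is the
    area of the smallest box enclosing both inputs."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection area
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest enclosing box area
    c = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return iou - (c - union) / c

print(giou((0, 0, 2, 2), (1, 1, 3, 3)))  # partially overlapping boxes
print(giou((0, 0, 1, 1), (2, 0, 3, 1)))  # disjoint boxes: negative GIOU
```

Unlike plain IoU, which is 0 for all disjoint boxes, the enclosing-box penalty keeps the regression gradient informative when boxes do not overlap.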

37 pages, 2180 KB  
Review
Recent Advances and Unaddressed Challenges in Biomimetic Olfactory- and Taste-Based Biosensors: Moving Towards Integrated, AI-Powered, and Market-Ready Sensing Systems
by Zunaira Khalid, Yuqi Chen, Xinyi Liu, Beenish Noureen, Yating Chen, Miaomiao Wang, Yao Ma, Liping Du and Chunsheng Wu
Sensors 2025, 25(22), 7000; https://doi.org/10.3390/s25227000 - 16 Nov 2025
Abstract
Biomimetic olfactory and taste biosensors replicate human sensory functions by coupling selective biological recognition elements (such as receptors, binding proteins, or synthetic mimics) with highly sensitive transducers (including electrochemical, transistor, optical, and mechanical types). This review summarizes recent progress in olfactory and taste biosensors focusing on three key areas: (i) materials and device design, (ii) artificial intelligence (AI) and data fusion for real-time decision-making, and (iii) pathways for practical application, including hybrid platforms, Internet of Things (IoT) connectivity, and regulatory considerations. We provide a comparative analysis of smell and taste sensing methods, emphasizing cases where integrating both modalities enhances sensitivity, selectivity, detection limits, and reliability in complex environments like food, environmental monitoring, healthcare, and security. Ongoing challenges are addressed with emerging solutions such as antifouling/self-healing interfaces, modular cartridges, machine learning (ML)-assisted calibration, and manufacturing-friendly approaches using scalable microfabrication and sustainable materials. The review concludes with a practical roadmap advocating for the joint development of receptors, materials, and algorithms; establishment of open standards for long-term stability; implementation of explainable/edge AI with privacy-focused analytics; and proactive collaboration with regulatory bodies. Collectively, these strategies aim to advance biomimetic smell and taste biosensors from experimental prototypes to dependable, commercially viable tools for continuous chemical sensing in real-world applications. Full article
(This article belongs to the Special Issue Nature Inspired Engineering: Biomimetic Sensors (2nd Edition))

21 pages, 368 KB  
Systematic Review
Integrating Multi-Omics and Medical Imaging in Artificial Intelligence-Based Cancer Research: An Umbrella Review of Fusion Strategies and Applications
by Ahmed Al Marouf, Jon George Rokne and Reda Alhajj
Cancers 2025, 17(22), 3638; https://doi.org/10.3390/cancers17223638 - 13 Nov 2025
Abstract
Background: The combination of multi-omics data, including genomics, transcriptomics, and epigenomics, with medical imaging modalities (PET, CT, MRI, histopathology) has emerged in recent years as a promising direction for the advancement of precision oncology. Many researchers have contributed to this domain, exploring the multi-modal use of both multi-omics and imaging data for better cancer identification, subtype classification, and cancer prognosis. Methods: We present an umbrella review summarizing the state of the art in fusing imaging modalities with omics and artificial intelligence, focusing on existing reviews and meta-analyses. The analysis highlights early, late, and hybrid fusion strategies and their advantages and disadvantages, mainly in tumor classification, prognosis, and treatment prediction. We searched for review articles published up to 25 May 2025 across multiple databases following PRISMA guidelines, with registration on PROSPERO (CRD420251062147). Results: After identifying 56 articles from different databases (i.e., PubMed, Scopus, Web of Science, and Dimensions.ai), 35 articles were excluded based on the inclusion and exclusion criteria, leaving 21 studies for the umbrella review. Discussion: We investigated prominent fusion techniques across various cancer types and the role of machine learning in enhancing model performance. We address the tension between model generalizability and interpretability in the clinical context and discuss how resolving these multi-modal issues can facilitate translating research into actual clinical scenarios. Conclusions: Lastly, we recommend that future work define clearer and more reliable validation criteria, address the integration of human clinicians with AI systems, and tackle the issue of trust in AI for cancer care, which requires more standardized approaches. Full article
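The early and late fusion strategies surveyed in this review can be illustrated with a minimal sketch. The synthetic feature matrices, sample size, and logistic-regression classifier below are illustrative assumptions, not details taken from any of the reviewed studies; the point is only the structural difference between fusing before versus after model fitting.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic stand-ins for an omics feature matrix and an imaging feature
# matrix over the same 100 samples (purely illustrative data).
omics = rng.normal(size=(100, 20))
imaging = rng.normal(size=(100, 10))
labels = (omics[:, 0] + imaging[:, 0] > 0).astype(int)

# Early fusion: concatenate modality features, then fit a single model.
early = LogisticRegression(max_iter=1000).fit(np.hstack([omics, imaging]), labels)

# Late fusion: fit one model per modality, then average their predicted
# class probabilities at decision time.
omics_model = LogisticRegression(max_iter=1000).fit(omics, labels)
imaging_model = LogisticRegression(max_iter=1000).fit(imaging, labels)
late_prob = (omics_model.predict_proba(omics)[:, 1]
             + imaging_model.predict_proba(imaging)[:, 1]) / 2
late_pred = (late_prob > 0.5).astype(int)
```

Hybrid fusion, the third strategy discussed in the review, would sit between these two, e.g., combining intermediate learned representations rather than raw features or final probabilities.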

19 pages, 2716 KB  
Article
Analysis of a Hybrid Intrabody Communications Scheme for Wireless Cortical Implants
by Assefa K. Teshome and Daniel T. H. Lai
Electronics 2025, 14(22), 4410; https://doi.org/10.3390/electronics14224410 - 12 Nov 2025
Viewed by 282
Abstract
Implantable technologies targeting the cerebral cortex and deeper brain structures are increasingly utilised in human–machine interfacing, advanced neuroprosthetics, and clinical interventions for neurological conditions. These systems require highly efficient and low-power methods for exchanging information between the implant and external electronics. Traditional approaches often rely on inductively coupled data transfer (ic-DT), where the same coils used for wireless power are modulated for communication. Other designs use high-frequency antenna-based radio systems, typically operating in the 401–406 MHz MedRadio band or the 2.4 GHz ISM band. A promising alternative is intrabody communication (IBC), which leverages the bioelectrical characteristics of body tissue to enable signal propagation. This work presents a theoretical investigation into two schemes—inductive coupling and galvanically coupled IBC (gc-IBC)—as applied to cortical data links, considering frequencies from 1 to 10 MHz and implant depths of up to 7 cm. We propose a hybrid solution in which gc-IBC supports data transmission and inductive coupling facilitates wireless power delivery. Our findings indicate that gc-IBC can accommodate wider bandwidths than ic-DT and offers significantly reduced path loss, approximately 20 dB lower than that of conventional RF-based antenna systems. Full article
(This article belongs to the Special Issue Applications of Sensor Networks and Wireless Communications)

14 pages, 738 KB  
Opinion
Envisioning the Future of Machine Learning in the Early Detection of Neurodevelopmental and Neurodegenerative Disorders via Speech and Language Biomarkers
by Georgios P. Georgiou
Acoustics 2025, 7(4), 72; https://doi.org/10.3390/acoustics7040072 - 10 Nov 2025
Viewed by 852
Abstract
Speech and language offer a rich, non-invasive window into brain health. Advances in machine learning (ML) have enabled increasingly accurate detection of neurodevelopmental and neurodegenerative disorders through these modalities. This paper envisions the future of ML in the early detection of neurodevelopmental disorders like autism spectrum disorder and attention-deficit/hyperactivity disorder, and neurodegenerative disorders, such as Parkinson’s disease and Alzheimer’s disease, through speech and language biomarkers. We explore the current landscape of ML techniques, including deep learning and multimodal approaches, and review their applications across various conditions, highlighting both successes and inherent limitations. Our core contribution lies in outlining future trends across several critical dimensions. These include the enhancement of data availability and quality, the evolution of models, the development of multilingual and cross-cultural models, the establishment of regulatory and clinical translation frameworks, and the creation of hybrid systems enabling human–artificial intelligence (AI) collaboration. Finally, we conclude with a vision for future directions, emphasizing the potential integration of ML-driven speech diagnostics into public health infrastructure, the development of patient-specific explainable AI, and its synergistic combination with genomics and brain imaging for holistic brain health assessment. Overcoming substantial hurdles in validation, generalization, and clinical adoption, the field is poised to shift toward ubiquitous, accessible, and highly personalized tools for early diagnosis. Full article
(This article belongs to the Special Issue Artificial Intelligence in Acoustic Phonetics)

23 pages, 2577 KB  
Article
A Hybrid STL-Based Ensemble Model for PM2.5 Forecasting in Pakistani Cities
by Moiz Qureshi, Atef F. Hashem, Hasnain Iftikhar and Paulo Canas Rodrigues
Symmetry 2025, 17(11), 1827; https://doi.org/10.3390/sym17111827 - 31 Oct 2025
Viewed by 488
Abstract
Air pollution, particularly particulate matter (PM2.5), poses severe risks to human health and the environment in densely populated urban areas. Accurate short-term forecasting of PM2.5 concentrations is therefore crucial for timely public health advisories and effective mitigation strategies. This work proposes a hybrid approach that combines machine learning models with STL decomposition to provide precise short-term PM2.5 predictions. Daily PM2.5 series from four major Pakistani cities—Islamabad, Lahore, Karachi, and Peshawar—are first pre-processed to handle missing values, outliers, and variance instability. The data are then decomposed via seasonal-trend decomposition using Loess (STL), which explicitly exploits the symmetric and recurrent structure of seasonal patterns. Each decomposed component (trend, seasonality, and remainder) is modeled independently using an ensemble of statistical and machine learning approaches. Forecasts are combined through a weighted aggregation scheme that balances bias–variance trade-offs and preserves distributional consistency. The final recombined forecasts provide one-day-ahead PM2.5 predictions with associated uncertainty measures. The model evaluation employs multiple statistical accuracy metrics, distributional diagnostics, and out-of-sample validation to assess performance. The results demonstrate that the proposed framework consistently outperforms conventional benchmark models, yielding robust, interpretable, and probabilistically coherent forecasts. This study demonstrates how decomposition of periodic and recurrent seasonal structure and probabilistic ensemble methods enhance the statistical modeling of environmental time series, offering actionable insights for urban air quality management. Full article
(This article belongs to the Special Issue Unlocking the Power of Probability and Statistics for Symmetry)

24 pages, 1962 KB  
Systematic Review
Autonomous Hazardous Gas Detection Systems: A Systematic Review
by Boon-Keat Chew, Azwan Mahmud and Harjit Singh
Sensors 2025, 25(21), 6618; https://doi.org/10.3390/s25216618 - 28 Oct 2025
Cited by 1 | Viewed by 1235
Abstract
Gas Detection Systems (GDSs) are critical safety technologies deployed in semiconductor wafer fabrication facilities to monitor the presence of hazardous gases. A GDS receives input from gas detectors equipped with consumable gas sensors, such as electrochemical (EC) and metal oxide semiconductor (MOS) types, which are used to detect toxic, flammable, or reactive gases. However, over time, sensor degradation, accuracy drift, and cross-sensitivity to interference gases compromise their intended performance. To maintain sensor accuracy and reliability, routine manual calibration is required—an approach that is resource-intensive, time-consuming, and prone to human error, especially in facilities with extensive networks of gas detectors. This systematic review (registered with PROSPERO on 11 October 2025; registration number 1166004) explored approaches to minimizing or eliminating the dependency on manual calibration. Findings indicate that properly calibrated gas sensor data can support advanced data analytics and machine learning algorithms to correct accuracy drift and improve gas selectivity. Techniques such as Principal Component Analysis (PCA), Support Vector Machines (SVMs), multivariate regression, and calibration transfer have been effectively applied to differentiate target gases from interferences and to compensate for sensor aging and environmental variability. The paper also explores the emerging potential for integrating calibration-free or self-correcting gas sensor systems into existing GDS infrastructures. Despite significant progress, key research challenges persist. These include understanding the dynamics of sensor response drift due to prolonged gas exposure, synchronizing multi-sensor data collection to minimize time-related drift, and aligning ambient sensor signals with gas analytical references. Future research should prioritize the development of application-specific datasets, adaptive environmental compensation models, and hybrid validation frameworks. These advancements will contribute to the realization of intelligent, autonomous, and data-driven gas detection solutions that are robust, scalable, and well-suited to the operational complexities of modern industrial environments. Full article
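The PCA-plus-SVM pattern mentioned in the abstract for separating a target gas from interferents can be sketched as a small scikit-learn pipeline. The synthetic eight-channel sensor responses, class offsets, and shared baseline drift below are illustrative assumptions, not real sensor data or the reviewed studies' models.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic responses from an 8-sensor array for two gas classes; both
# classes share the same slow baseline drift term (illustrative only).
rng = np.random.default_rng(2)
n = 200
drift = np.linspace(0.0, 0.5, n)[:, None]
target = rng.normal(1.0, 0.2, size=(n, 8)) + drift
interferent = rng.normal(0.4, 0.2, size=(n, 8)) + drift
X = np.vstack([target, interferent])
y = np.array([1] * n + [0] * n)

# PCA compresses the correlated sensor channels into a few components;
# the SVM then separates target gas from interferent in that reduced space.
clf = make_pipeline(StandardScaler(), PCA(n_components=3), SVC())
clf.fit(X, y)
```

Calibration transfer and drift compensation, as described in the review, would build on the same idea: learn a low-dimensional mapping from one sensor's (or one point in time's) response space to a reference, rather than a fixed classifier on raw channels.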
(This article belongs to the Section Physical Sensors)
