Search Results (7,245)

Search Parameters:
Keywords = neural network architecture

30 pages, 5650 KB  
Article
An Intelligent Multi-Task Supply Chain Model Based on Bio-Inspired Networks
by Mehdi Khaleghi, Sobhan Sheykhivand, Nastaran Khaleghi and Sebelan Danishvar
Biomimetics 2026, 11(2), 123; https://doi.org/10.3390/biomimetics11020123 (registering DOI) - 6 Feb 2026
Abstract
Acknowledging recent breakthroughs in deep bio-inspired neural networks, several deep architectural options have been deployed to create intelligent systems. The foundations of convolutional neural networks are influenced by hierarchical processing in the visual cortex, while graph neural networks mimic the communication of biological neurons. Combining these two computation methods, a novel deep ensemble network is used to propose a bio-inspired deep graph network for an intelligent supply chain model. An automated smart supply chain helps to create a more agile, resilient and sustainable system, and improving the sustainability of the network plays a key role in the efficiency of the supply chain’s performance. The proposed bio-inspired Chebyshev ensemble graph network (Ch-EGN) is a hybrid learning approach for creating an intelligent supply chain. The functionality of the proposed deep network is assessed on two databases, SupplyGraph and DataCo, for risk administration, enhancing supply chain sustainability, identifying hidden risks, and increasing the supply chain’s transparency. An average accuracy of 98.95% is obtained using the proposed network for automatic delivery-status prediction. The performance metrics in the multi-class categorization scenarios of the intelligent supply chain confirm the efficiency of the proposed bio-inspired approach for sustainability and risk management. Full article
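As a companion to this entry, the Chebyshev polynomial graph filter that gives the Ch-EGN its name can be sketched in a few lines. This is the standard ChebNet recurrence, not code from the paper; the graph, node features, and filter coefficients below are illustrative.

```python
import numpy as np

def normalized_laplacian(A):
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    return np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt

def cheb_conv(A, X, thetas):
    """K-order Chebyshev graph convolution: sum_k theta_k * T_k(L_hat) @ X,
    with T_0 = I, T_1 = L_hat, T_k = 2*L_hat*T_{k-1} - T_{k-2}."""
    L = normalized_laplacian(A)
    lmax = np.linalg.eigvalsh(L).max()
    L_hat = 2.0 * L / lmax - np.eye(len(A))   # rescale spectrum into [-1, 1]
    Tk_prev, Tk = np.eye(len(A)), L_hat
    out = thetas[0] * (Tk_prev @ X)
    for k in range(1, len(thetas)):
        out += thetas[k] * (Tk @ X)
        Tk_prev, Tk = Tk, 2.0 * L_hat @ Tk - Tk_prev
    return out

# toy 4-node path graph with 2-dimensional node features
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
X = np.random.default_rng(0).normal(size=(4, 2))
H = cheb_conv(A, X, thetas=[0.5, 0.3, 0.2])
print(H.shape)  # (4, 2)
```

With a single coefficient the filter degenerates to a scaled identity, which is a convenient sanity check on the recurrence.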

26 pages, 44951 KB  
Article
Advanced Deep Learning Models for Classifying Dental Diseases from Panoramic Radiographs
by Deema M. Alnasser, Reema M. Alnasser, Wareef M. Alolayan, Shihanah S. Albadi, Haifa F. Alhasson, Amani A. Alkhamees and Shuaa S. Alharbi
Diagnostics 2026, 16(3), 503; https://doi.org/10.3390/diagnostics16030503 (registering DOI) - 6 Feb 2026
Abstract
Background/Objectives: Dental diseases pose a major challenge for oral health care, and early diagnosis is essential to reduce the risk of complications. Panoramic radiographs provide a detailed perspective of dental structures that is suitable for automated diagnostic methods. This paper investigates the use of advanced deep learning (DL) models for the multiclass classification of diseases at the sub-diagnosis level using panoramic radiographs, addressing the inconsistencies and skewed classes in the dataset. Methods: To train and test the models, a rich dataset of 10,580 high-quality panoramic radiographs, initially annotated in 93 classes and subsequently consolidated into 35 classes, was used. We applied extensive preprocessing techniques such as class consolidation, mislabeled-entry correction, redundancy removal, and augmentation to reduce the class-imbalance ratio from 2560:1 to 61:1. Five modern convolutional neural network (CNN) architectures (InceptionV3, EfficientNetV2, DenseNet121, ResNet50, and VGG16) were assessed with respect to five metrics: accuracy, mean average precision (mAP), precision, recall, and F1-score. Results: InceptionV3 achieved the best performance, with a 97.51% accuracy rate and a mAP of 96.61%, confirming its superior ability for diagnosing a wide range of dental conditions. The EfficientNetV2 and DenseNet121 models achieved accuracies of 97.04% and 96.70%, respectively, indicating strong classification performance. ResNet50 and VGG16 also yielded competitive accuracy values comparable to these models. Conclusions: Overall, the results show that deep learning models are successful in dental disease classification, especially the model with the highest accuracy, InceptionV3. Further study of dataset expansion, ensemble learning strategies, and explainable artificial intelligence techniques will yield new insights and clinical applications. The findings provide a starting point for implementing automated diagnostic systems for dental diagnosis with greater efficiency, accuracy, and clinical utility in oral healthcare. Full article
(This article belongs to the Special Issue Advances in Dental Diagnostics)
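The class-imbalance handling described in this entry reduces a 2560:1 ratio to 61:1; the bookkeeping behind such ratios, and the inverse-frequency class weights often paired with them, is simple to sketch. The class names and counts below are hypothetical, chosen only to reproduce a 61:1 ratio.

```python
from collections import Counter

def imbalance_ratio(labels):
    """Ratio between the largest and smallest class counts."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

def class_weights(labels):
    """Inverse-frequency class weights, normalized so the largest class has weight 1."""
    counts = Counter(labels)
    n_max = max(counts.values())
    return {c: n_max / n for c, n in counts.items()}

# hypothetical dental-diagnosis labels (not from the paper's dataset)
labels = ["caries"] * 610 + ["impacted"] * 305 + ["cyst"] * 10
print(imbalance_ratio(labels))            # 61.0
print(class_weights(labels)["cyst"])      # 61.0
```

Such weights are typically passed to the loss function during training so that rare classes contribute proportionally more to the gradient.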
30 pages, 4048 KB  
Review
Artificial Intelligence as a Catalyst for Antimicrobial Discovery: From Predictive Models to De Novo Design
by Romaisaa Boudza, Salim Bounou, Jaume Segura-Garcia, Ismail Moukadiri and Sergi Maicas
Microorganisms 2026, 14(2), 394; https://doi.org/10.3390/microorganisms14020394 (registering DOI) - 6 Feb 2026
Abstract
Antimicrobial resistance represents one of the most critical global health challenges of the 21st century, urgently demanding innovative strategies for antimicrobial discovery. Traditional antibiotic development pipelines are slow, costly, and increasingly ineffective against multidrug-resistant pathogens. In this context, recent advances in artificial intelligence have emerged as transformative tools capable of accelerating antimicrobial discovery and expanding accessible chemical and biological space. This comprehensive review critically synthesizes recent progress in AI-driven approaches applied to the discovery and design of both small-molecule antibiotics and antimicrobial peptides. We examine how machine learning, deep learning, and generative models are being leveraged for virtual screening, activity prediction, mechanism-informed prioritization, and de novo antimicrobial design. Particular emphasis is placed on graph-based neural networks, attention-based and transformer architectures, and generative frameworks such as variational autoencoders and large language model-based generators. Across these approaches, AI has enabled the identification of structurally novel compounds, facilitated narrow-spectrum antimicrobial strategies, and improved interpretability in peptide prediction. However, significant challenges remain, including data scarcity and imbalance, limited experimental validation, and barriers to clinical translation. By integrating methodological advances with a critical analysis of the current limitations, this review highlights emerging trends and outlines future directions aimed at bridging the gap between in silico discovery and real-world therapeutic development. Full article

42 pages, 2797 KB  
Review
Decoding Technical Diagrams: A Survey of AI Methods for Image Content Extraction and Understanding
by Nick Bray, Michael Hempel, Matthew Boeding and Hamid Sharif
Information 2026, 17(2), 165; https://doi.org/10.3390/info17020165 - 6 Feb 2026
Abstract
With artificial intelligence (AI) rapidly increasing in popularity and presence in everyday life, new applications utilizing AI are being explored across virtually all domains, from banking and healthcare to cybersecurity to generative AI for images, voice, and video content creation. With that trend comes an inherent need for increased AI capabilities. One cornerstone of AI applications is the ability of generative AI to consume documents and utilize their content to answer questions, generate new content, correlate it with other data sources, and more. No longer constrained to text alone, we now leverage multimodal AI models to help us understand visual elements within documents, such as images, tables, figures, and charts. Within this realm, capabilities have expanded exponentially from traditional Optical Character Recognition (OCR) approaches towards increasingly complex AI models for visual content analysis and understanding. Modern approaches, especially those leveraging AI, now focus on interpreting more complex diagrams such as flowcharts, block diagrams, Unified Modeling Language (UML) diagrams, electrical schematics, and timing diagrams. These diagram types combine text, symbols, and structured layout, making them challenging to parse and comprehend using conventional techniques. This paper presents a historical analysis and comprehensive survey of the scientific literature on visual understanding of complex technical illustrations and diagrams. We explore the use of deep learning models, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformer-based architectures. These models, along with OCR, enable the extraction of both textual and structural information from visually complex sources. Despite these advancements, numerous challenges remain. These range from hallucinations, where the content extraction system produces outputs not grounded in the source image, leading to misinterpretations, to a lack of contextual understanding of diagrammatic elements such as arrows, grouping, and spatial hierarchy. This survey focuses on five key diagram types: flowcharts, block diagrams, UML diagrams, electrical schematics, and timing diagrams. It evaluates the effectiveness, limitations, and practical solutions, both traditional and AI-driven, that aim to enable the extraction of accurate and meaningful information from complex diagrams in a way that is trustworthy and suitable for real-world, high-accuracy AI applications. The survey reveals that virtually all approaches struggle to extract technical diagram information accurately, but it also illustrates a path forward: further research to improve their accuracy is crucial for supporting applications such as complex document question answering and Retrieval Augmented Generation (RAG), document-driven AI agents, accessibility, and automation. Full article
(This article belongs to the Special Issue Intelligent Image Processing by Deep Learning, 2nd Edition)

25 pages, 2214 KB  
Article
Spectrum Sensing in Cognitive Radio Internet of Things Networks: A Comparative Analysis of Machine and Deep Learning Techniques
by Akeem Abimbola Raji and Thomas Otieno Olwal
Telecom 2026, 7(1), 20; https://doi.org/10.3390/telecom7010020 - 6 Feb 2026
Abstract
The proliferation of data-intensive IoT applications has created unprecedented demand for wireless spectrum, necessitating more efficient bandwidth management. Spectrum sensing allows unlicensed secondary users to dynamically access idle channels assigned to primary users. However, traditional sensing techniques are hindered by their sensitivity to noise and reliance on prior knowledge of primary user signals. This limitation has propelled research into machine learning (ML) and deep learning (DL) solutions, which operate without such constraints. This study presents a comprehensive performance assessment of prominent ML models, namely random forest (RF), K-nearest neighbor (KNN), and support vector machine (SVM), against two DL architectures, a convolutional neural network (CNN) and an autoencoder. Evaluated using a robust suite of metrics (probability of detection, false alarm, missed detection, accuracy, and F1-score), the results reveal the clear and consistent superiority of RF. Notably, RF achieved a probability of detection of 95.7%, an accuracy of 97.17%, and an F1-score of 96.93%, while maintaining excellent performance in low signal-to-noise ratio (SNR) conditions, even surpassing existing hybrid DL models. These findings underscore RF’s exceptional noise resilience and establish it as an ideal, high-performance candidate for practical spectrum sensing in wireless networks. Full article
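The metric suite used in this comparison derives directly from confusion counts. A minimal sketch follows; the counts are illustrative only, chosen to echo the reported 95.7% probability of detection.

```python
def sensing_metrics(tp, fp, tn, fn):
    """Spectrum-sensing metrics from confusion counts
    (positive class = primary user present)."""
    pd = tp / (tp + fn)                  # probability of detection
    pfa = fp / (fp + tn)                 # probability of false alarm
    pmd = 1.0 - pd                       # probability of missed detection
    acc = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * pd / (precision + pd)
    return {"Pd": pd, "Pfa": pfa, "Pmd": pmd, "accuracy": acc, "F1": f1}

# illustrative counts: 1000 occupied-channel samples, 1000 idle
m = sensing_metrics(tp=957, fp=18, tn=982, fn=43)
print(round(m["Pd"], 3), round(m["Pfa"], 3))  # 0.957 0.018
```

In a real evaluation these counts would be accumulated per SNR level, which is how curves like Pd versus SNR are produced.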

24 pages, 12659 KB  
Article
Design of Multi-Legged Locomotion Control System for Reconfigurable Robots Integrating Decoupled Virtual Model Control with BP Neural Network
by Congnan Yang, Jianwen Liu, Tong Cai, Yijie Zhao, Wenhao Wang, Bolong Liu and Xiaojun Xu
Machines 2026, 14(2), 184; https://doi.org/10.3390/machines14020184 - 6 Feb 2026
Abstract
Modular reconfigurable robots exhibit significant potential in adapting to complex terrains through cooperative multi-robot formations. However, current control systems often struggle to maintain consistent performance when the number of modules varies due to a lack of unified and adaptive control frameworks. Existing Virtual Model Control (VMC) methods, while effective for fixed-configuration legged robots, are limited in their ability to dynamically adjust control parameters in reconfigurable multi-legged systems. To address this gap, this study proposes a parallel multi-legged control system that integrates a Backpropagation Neural Network (BPNN) with a decoupled VMC framework. The BPNN enables adaptive tuning of motion parameters under varying modular configurations, while the decoupled VMC ensures stable gait control under force feedback. Simulation and physical experiments demonstrate that the proposed system achieves a unified control architecture across quadrupedal and multi-legged configurations, with improved tracking accuracy, stability, and adaptability compared to traditional VMC methods. Full article
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
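At its core, the virtual model control half of this design is a per-axis spring-damper law, and "decoupled" means each Cartesian axis is treated independently. The sketch below uses made-up gains and target positions; it does not reproduce the paper's BPNN-based parameter tuning.

```python
def vmc_leg_force(x_des, x, v, k=800.0, b=60.0):
    """Decoupled virtual model control: each Cartesian axis gets an
    independent virtual spring-damper pulling the foot toward x_des.
    Gains k (N/m) and b (N*s/m) are illustrative, not from the paper."""
    return [k * (xd - xi) - b * vi for xd, xi, vi in zip(x_des, x, v)]

# foot 2 cm below its target and moving downward at 0.1 m/s (z axis only)
F = vmc_leg_force(x_des=[0.0, 0.0, -0.30],
                  x=[0.0, 0.0, -0.32],
                  v=[0.0, 0.0, -0.1])
print(F[2])  # 800*0.02 + 60*0.1 = 22.0 N upward
```

The resulting Cartesian force is then mapped to joint torques through the leg Jacobian, a step omitted here for brevity.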

22 pages, 1612 KB  
Article
Lightweight 1D-CNN-Based Battery State-of-Charge Estimation and Hardware Development
by Seungbum Kang, Yoonjae Lee, Gahyeon Jang and Seongsoo Lee
Electronics 2026, 15(3), 704; https://doi.org/10.3390/electronics15030704 - 6 Feb 2026
Abstract
This paper presents the FPGA implementation and verification of a lightweight one-dimensional convolutional neural network (1D-CNN) pipeline for real-time battery state-of-charge (SoC) estimation in automotive battery management systems. The proposed model employs separable 1D convolution and global average pooling, and applies aggressive structured pruning to reduce the number of parameters from 3121 to 358, representing an 88.5% reduction, without significant accuracy loss. Using quantization-aware training (QAT), the network is trained and executed in INT8, which reduces weight storage to one-quarter of the 32-bit baseline while maintaining high estimation accuracy with a Mean Absolute Error (MAE) of 0.0172. The hardware adopts a time-multiplexed single MAC architecture with FSM control, occupying 98,410 gates under a 28 nm process. Evaluations on an FPGA testbed with representative drive-cycle inputs show that the proposed INT8 pipeline achieves performance comparable to the floating-point reference with negligible precision drop, demonstrating its suitability for in-vehicle BMS deployment. Full article
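The INT8 execution described in this entry rests on mapping float weights to a shared scale and small integers. Below is a minimal symmetric per-tensor sketch, which may differ from the paper's exact quantization-aware training scheme; the weight values are arbitrary examples.

```python
def quantize_int8(weights):
    """Symmetric per-tensor INT8 quantization: w ~= scale * q, q in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [scale * qi for qi in q]

w = [0.5, -1.27, 0.0, 0.81]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print(q[1], round(s, 4))  # largest-magnitude weight maps to -127; scale = 0.01
```

Each INT8 value occupies one byte instead of four, which is where the four-fold weight-storage reduction cited in the abstract comes from; the rounding error is bounded by half the scale.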

14 pages, 463 KB  
Article
MoE Based Consistency and Complementarity Mining for Multi-View Clustering
by Xiaoping Wang, Yang Cao, Yifan Zhang, Hanlu Ren and Qiyue Yin
Algorithms 2026, 19(2), 132; https://doi.org/10.3390/a19020132 - 6 Feb 2026
Abstract
Multi-view clustering, which improves clustering performance by using the complementary and consistent information from multiple diverse feature sets, has been attracting increasing research attention owing to its broad applicability in real-world scenarios. Conventional approaches typically leverage this complementarity by projecting different views into a common embedding space using view-specific or shared non-linear neural networks. This unified embedding is then fed into standard single-view clustering algorithms to obtain the final clustering results. However, a single common embedding may be insufficient to capture the distinct or even contradictory characteristics of multi-view data, due to the divergent representational capacities of different views. To address this issue, we propose a mixture-of-experts (MoE) based embedding learning method that adaptively models inter-view relationships. This architecture employs a typical MoE module as a projection layer across all views, which uses a shared expert and several groups of experts for consistency and complementarity mining. Furthermore, a Kullback-Leibler divergence-based objective with over-clustering is designed for clustering-oriented embedding learning. Extensive experiments on six benchmark datasets confirm that our method achieves superior performance compared to a number of state-of-the-art approaches. Full article
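The MoE projection layer described here, a shared expert that is always applied plus gated experts, can be sketched in a few lines. The shapes and weights below are random stand-ins, not the paper's architecture.

```python
import numpy as np

def moe_projection(x, shared_W, expert_Ws, gate_W):
    """Mixture-of-experts projection: the shared expert is always applied,
    while the remaining experts are combined by a softmax gate."""
    logits = x @ gate_W
    g = np.exp(logits - logits.max())
    g /= g.sum()                          # gate weights sum to 1
    z = x @ shared_W                      # shared-expert output
    for w_k, W_k in zip(g, expert_Ws):    # gated expert outputs
        z = z + w_k * (x @ W_k)
    return z, g

rng = np.random.default_rng(1)
x = rng.normal(size=4)                    # one view's 4-dim feature vector
z, g = moe_projection(x,
                      shared_W=rng.normal(size=(4, 3)),
                      expert_Ws=[rng.normal(size=(4, 3)) for _ in range(2)],
                      gate_W=rng.normal(size=(4, 2)))
print(z.shape)  # (3,)
```

Because the gate is input-dependent, different views can lean on different expert groups, which is the mechanism the abstract credits for capturing contradictory view characteristics.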

18 pages, 298 KB  
Review
A Survey on Quantum Machine Learning Applications in Medicine and Healthcare
by Radosław Idzikowski, Mateusz A. Kucharski, Konrad Pempera and Michał Jaroszczuk
Appl. Sci. 2026, 16(3), 1630; https://doi.org/10.3390/app16031630 - 5 Feb 2026
Abstract
Quantum machine learning (QML) is an emerging field combining quantum computing and artificial intelligence, with promising applications in medicine and healthcare. This survey reviews more than 60 studies published between 2018 and 2025, highlighting a sharp increase in research activity, especially in the last three years. We address seven core research questions covering publication trends, the use of real quantum hardware versus simulators, an overview of quantum architectures, dataset types, medical domains, algorithmic frameworks, and reported results. Our analysis shows that most QML research in healthcare is conducted on simulators due to limited hardware access, and that it relies on small datasets. Quantum convolutional neural network (QCNN) architectures dominate image-based medical tasks such as tumor detection, pneumonia diagnosis, and ECG interpretation, while feature-based datasets are mainly analyzed with variational quantum classifiers and quantum support vector machines. Despite hardware constraints, QML models often match or surpass classical machine learning approaches in accuracy, frequently reaching 95–99%. However, these performance claims should be qualified in light of experimental limitations and should not be interpreted as definitive proof of quantum superiority at this stage. Additionally, issues with reproducibility and the reporting of hardware details persist, which is a significant research gap. This review emphasizes the need for standardized benchmarks, more testing on real hardware, and architecture-aware algorithm design. With the potential for accelerated diagnostics and personalized healthcare, QML represents a strategic direction for future medical research. Full article
(This article belongs to the Section Quantum Science and Technology)
25 pages, 2295 KB  
Article
Stochastic Neuromorphic Computing Architecture Based on Voltage-Controlled Probabilistic Switching Magnetic Tunnel Junction (MTJ) Devices
by Liang Gao, Chenxi Wang and Yanfeng Jiang
Micromachines 2026, 17(2), 216; https://doi.org/10.3390/mi17020216 - 5 Feb 2026
Abstract
As integrated circuits face increasingly stringent demands regarding power consumption, area, and stability, integrating novel spintronic devices with computing architectures has become a crucial direction for breaking through traditional computing paradigms. In this paper, the switching mechanism of Magnetic Tunnel Junctions (MTJs) under the synergistic effect of Voltage-Controlled Magnetic Anisotropy (VCMA) and the Spin Hall Effect (SHE) is investigated. A VCMA-assisted switching SHE-MTJ device is adopted, and a macrospin approximation model is established based on the Landau-Lifshitz-Gilbert (LLG) equation to systematically analyze its dynamic characteristics. The research demonstrates that applying VCMA voltage pulses with appropriate amplitude and width can significantly reduce the spin Hall current density and pulse width required for switching, thereby effectively minimizing ohmic losses and Joule heating. Furthermore, by incorporating a thermal fluctuation field, a voltage-controlled SHE-MTJ device with stochastic switching behavior can be constructed, yielding an approximately sigmoidal voltage-probability response curve. This provides an ideal physical foundation for stochastic computing and neuromorphic computing. Based on these findings, an in-memory computing architecture supporting binarized Convolutional Neural Networks (CNNs) is proposed and designed. Combined with the lightweight network SqueezeNet, this architecture achieves a Top-1 recognition accuracy of 72.49% on the CIFAR-10 dataset, with a parameter count of only 1.25 × 10^6. This work offers a feasible spintronic implementation scheme for low-power, high-energy-efficiency edge-side intelligent chips. Full article
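The sigmoidal voltage-probability response at the heart of this design maps directly onto a probabilistic bit. In the sketch below, the curve parameters (v0, sharpness) are illustrative fit parameters, not device data from the paper.

```python
import math
import random

def switch_probability(v, v0=1.0, sharpness=8.0):
    """Sigmoidal voltage-to-probability curve of a stochastic SHE-MTJ.
    v0 and sharpness are illustrative, not measured device parameters."""
    return 1.0 / (1.0 + math.exp(-sharpness * (v - v0)))

def stochastic_bit(v, rng):
    """One probabilistic switching event: 1 if the junction flips."""
    return 1 if rng.random() < switch_probability(v) else 0

rng = random.Random(42)
trials = 10_000
rate = sum(stochastic_bit(1.1, rng) for _ in range(trials)) / trials
print(rate)  # empirical flip rate, close to switch_probability(1.1) of about 0.69
```

Repeating such trials and thresholding the mean is the basic primitive that stochastic and neuromorphic computing schemes build on.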
21 pages, 3659 KB  
Article
A Battery State-of-Charge Prediction Method Based on a Hammerstein Model Integrated with a Hippopotamus Optimization Algorithm and Neural Network
by Liang Zhang, Bilong Yang, Ling Lyu, Sihan Che, Haoqiang Li and Weifei Wang
Electronics 2026, 15(3), 698; https://doi.org/10.3390/electronics15030698 - 5 Feb 2026
Abstract
Accurate estimation of the state of charge (SOC) of lithium-ion batteries is critical for assessing the safety and remaining range of electric vehicles. However, due to the complex and variable operating environment of batteries and their highly nonlinear internal mechanisms, achieving high-precision SOC prediction remains a central challenge in current research. To this end, this paper proposes a nonlinear Hammerstein model in which the Hippopotamus Optimization Algorithm (HO) optimizes a backpropagation (BP) neural network, thereby enhancing the accuracy of SOC prediction. The HO-BP-Hammerstein model optimizes the BP neural network architecture using the Hippopotamus Algorithm, and its SOC prediction accuracy is tested on real-world data. Experimental results demonstrate the superiority of the proposed method through comparative accuracy analysis of various SOC prediction approaches under different operating conditions, confirming its significant engineering application value. Full article
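A Hammerstein model chains a static nonlinearity into a linear dynamic block. In the paper the nonlinearity is realized by the HO-optimized BP network; the sketch below substitutes a fixed polynomial stand-in and a first-order linear block to show the structure only.

```python
def simulate_hammerstein(u_seq, f, a=0.9, b=0.1, y0=0.0):
    """Hammerstein structure: static nonlinearity f feeding a first-order
    linear dynamic block y[t] = a*y[t-1] + b*f(u[t])."""
    y, ys = y0, []
    for u in u_seq:
        y = a * y + b * f(u)
        ys.append(y)
    return ys

# stand-in nonlinearity; in the paper this role is played by the BP network
f = lambda u: u + 0.2 * u ** 2
ys = simulate_hammerstein([1.0] * 50, f)
print(round(ys[-1], 3))  # settles toward b*f(1)/(1-a) = 1.2
```

For a constant input the output converges to b*f(u)/(1-a), which is a quick way to check that the static and dynamic parts are wired in the right order.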

31 pages, 2038 KB  
Article
Enhanced Cropland SOM Prediction via LEW-DWT Fusion of Multi-Temporal Landsat 8 Images and Time-Series NDVI Features
by Lixin Ning, Daocheng Li, Yingxin Xia, Erlong Xiao, Dongfeng Han, Jun Yan and Xiaoliang Dong
Sensors 2026, 26(3), 1048; https://doi.org/10.3390/s26031048 - 5 Feb 2026
Abstract
Soil organic matter (SOM) is a key indicator of arable land quality and the global carbon cycle; accurate regional-scale SOM estimation is vitally significant for sustainable agricultural development and climate change research. This study evaluates a multisource data-fusion approach for improving cropland SOM prediction in Yucheng City, Shandong Province, China. We applied a Local Energy Weighted Discrete Wavelet Transform (LEW-DWT) to fuse multi-temporal Landsat 8 imagery (2014–2023). Quantitative analysis (e.g., Information Entropy and Average Gradient) demonstrated that LEW-DWT preserved the high-frequency spatial details and texture features of fragmented croplands better than traditional DWT and simple splicing methods. These were combined with 41 environmental predictors to construct composite Ev–Tn–Mm features (environmental variables, temporal NDVI features, and multi-temporal multispectral information). Random Forest (RF) and Convolutional Neural Network (CNN) models were trained and compared to assess the contribution of the fused data to SOM mapping. The key findings are as follows: (1) Comparative analysis showed that the LEW-DWT fusion strategy achieved the lowest spectral distortion and highest spatial fidelity. Using the fused multitemporal dataset, the CNN attained the highest predictive performance for SOM (R2 = 0.49). (2) Using the Ev–Tn–Mm features, the CNN achieved R2 = 0.62, outperforming the RF model (R2 = 0.53). Despite the limited sample size, the optimized shallow CNN architecture effectively extracted local spatial features while mitigating overfitting. (3) Variable importance analysis based on the RF model revealed that mean soil moisture is the primary single variable influencing SOM (relative importance 15.22%), followed by the NDVI phase among the time-series features (1.80%) and the SWIR1 band among the fused multispectral bands (1.38%). (4) By category, soil moisture-related variables contributed 45.84% of the total importance, followed by climatic factors. The proposed multisource fusion framework offers a practical solution for regional SOM digital monitoring and can support precision agriculture and soil carbon management. Full article
(This article belongs to the Special Issue Soil Sensing and Mapping in Precision Agriculture: 2nd Edition)
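The wavelet-fusion idea behind LEW-DWT, keeping the detail coefficients with the higher local energy, can be illustrated in one dimension with a single-level Haar transform. This is a simplified stand-in, not the paper's two-dimensional multi-band implementation.

```python
import numpy as np

def haar_dwt(x):
    """Single-level Haar DWT of an even-length signal."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def energy_weighted_fuse(x, y):
    """Fuse two aligned signals: average the approximations, and for each
    detail coefficient keep whichever input has the higher local energy."""
    (ax, dx), (ay, dy) = haar_dwt(x), haar_dwt(y)
    a = (ax + ay) / 2
    d = np.where(dx ** 2 >= dy ** 2, dx, dy)
    return haar_idwt(a, d)

x = np.array([1.0, 1.0, 4.0, 0.0])   # has a strong local detail
y = np.array([1.0, 1.0, 2.0, 2.0])   # smooth
print(energy_weighted_fuse(x, y))    # the detail of x survives fusion
```

Because the energy comparison is done coefficient by coefficient, high-frequency texture from either input can survive the fusion, which is the property the paper exploits for fragmented croplands.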
28 pages, 3453 KB  
Article
Denoising Adaptive Multi-Branch Architecture for Detecting Cyber Attacks in Industrial Internet of Services
by Ghazia Qaiser and Siva Chandrasekaran
J. Cybersecur. Priv. 2026, 6(1), 26; https://doi.org/10.3390/jcp6010026 - 5 Feb 2026
Abstract
The emerging scope of the Industrial Internet of Services (IIoS) requires a robust intrusion detection system to detect malicious attacks. The increasing frequency of sophisticated and high-impact cyber attacks has resulted in financial losses and catastrophes in IIoS-based manufacturing industries. However, existing solutions often struggle to adapt and generalize to new cyber attacks. This study proposes a unique approach designed for known and zero-day network attack detection in IIoS environments, called Denoising Adaptive Multi-Branch Architecture (DA-MBA). The proposed approach is a smart, conformal, and self-adjusting cyber attack detection framework featuring denoising representation learning, hybrid neural inference, and open-set uncertainty calibration. The model merges a denoising autoencoder (DAE) to generate noise-tolerant latent representations, which are processed using a hybrid multi-branch classifier combining dense and bidirectional recurrent layers to capture both static and temporal attack signatures. Moreover, it addresses challenges such as adaptability and generalizability by hybridizing a Multilayer Perceptron (MLP) and bidirectional LSTM (BiLSTM). The proposed hybrid model was designed to fuse feed-forward transformations with sequence-aware modeling, which can capture direct feature interactions and any underlying temporal and order-dependent patterns. Multiple approaches have been applied to strengthen the dual-branch architecture, such as class weighting and comprehensive hyperparameter optimization via Optuna, which collectively address imbalanced data, overfitting, and dynamically shifting threat vectors. The proposed DA-MBA is evaluated on two widely recognized IIoT-based datasets, Edge-IIoT set and WUSTL-IIoT-2021 and achieves over 99% accuracy and a near 0.02 loss, underscoring its effectiveness in detecting the most sophisticated attacks and outperforming recent deep learning IDS baselines. 
By coupling feature denoising, multi-branch classification, and automated hyperparameter tuning, the proposed architecture offers a scalable, interpretable, and risk-sensitive defense mechanism for evolving IIoS environments, advancing secure, adaptive, and trustworthy industrial cyber-resilience. Full article
(This article belongs to the Special Issue Cyber Security and Digital Forensics—2nd Edition)
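The dual-branch idea described in the abstract can be illustrated with a minimal PyTorch sketch, assuming an encoder-only view of the DAE feeding an MLP branch and a BiLSTM branch whose outputs are concatenated for classification. Layer sizes, feature count, and class count here are hypothetical, not the authors' tuned configuration.

```python
import torch
import torch.nn as nn

class DualBranchIDS(nn.Module):
    # Illustrative sketch of the DA-MBA concept: a denoising
    # autoencoder's encoder produces a noise-tolerant latent code,
    # which feeds an MLP branch (static feature interactions) and a
    # BiLSTM branch (order-dependent patterns). Sizes are assumptions.
    def __init__(self, n_features=48, latent=32, n_classes=15):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, latent), nn.ReLU(),
        )
        self.mlp = nn.Sequential(nn.Linear(latent, 64), nn.ReLU())
        self.bilstm = nn.LSTM(input_size=1, hidden_size=32,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(64 + 64, n_classes)

    def forward(self, x):                          # x: (batch, n_features)
        z = self.encoder(x)                        # latent code (batch, latent)
        static = self.mlp(z)                       # (batch, 64)
        seq = z.unsqueeze(-1)                      # latent dims as a sequence
        _, (h, _) = self.bilstm(seq)               # h: (2, batch, 32)
        temporal = torch.cat([h[0], h[1]], dim=1)  # forward + backward states
        return self.head(torch.cat([static, temporal], dim=1))

model = DualBranchIDS()
logits = model(torch.randn(4, 48))
print(logits.shape)  # torch.Size([4, 15])
```

In a full pipeline the encoder would first be trained with a reconstruction loss on noise-corrupted inputs; here only the inference path of the two branches is shown.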
26 pages, 4800 KB  
Article
Porosity and Permeability Estimations from X-Ray Tomography Images and Data Using a Deep Learning Approach
by Edwar Herrera, Oriol Oms and Eduard Remacha
Appl. Sci. 2026, 16(3), 1613; https://doi.org/10.3390/app16031613 - 5 Feb 2026
Abstract
This work presents a novel deep learning workflow for estimating porosity and permeability from combined data, where numerical variables such as high-resolution bulk density (RHOB) and photoelectric factor (PEF) data are integrated with X-ray computed tomography (X-CT) image data, using a dual-energy X-CT approach (DECT). Convolutional neural networks (CNNs) were calibrated with routine core analysis (RCAL) laboratory measurements from one well in the Sinú-San Jacinto Basin (Colombia). The CNN architecture combines two main branches: an image branch, in which a CNN extracts spatial features from normalized X-CT sections using 3 × 3 convolution layers, ReLU activation, batch normalization, and max pooling; and a numerical branch, which processes the input vectors corresponding to RHOB and PEF using fully connected dense layers and dropout regularization. Both branches are concatenated in a fusion layer, from which the model’s final predictions are made. Results indicate a strong correlation between porosity, permeability, RHOB and PEF logs, and CT images. The porosity model achieved excellent predictive performance, with R² = 0.996, MAE = 3.96 × 10⁻³, MSE = 3.82 × 10⁻⁵, and a maximum error of 0.064. The permeability model also performed well, with a linear R² = 0.983, though the metrics reflect the wide dynamic range of permeability. Consequently, artificial neural networks (ANNs) can accurately predict porosity and permeability at depths where no corresponding laboratory data exist, demonstrating excellent predictive capabilities over several rock intervals at the high vertical resolution of the X-CT data (0.625 mm). Full article
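The image-plus-numerical fusion design described above can be sketched in PyTorch as follows. Input resolution, filter counts, and layer widths are hypothetical stand-ins, not the paper's calibrated configuration; the structure simply mirrors the described branches (CNN over X-CT sections, dense layers over RHOB/PEF) joined in a fusion layer.

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    # Illustrative two-branch regression model: a small CNN over X-CT
    # image sections fused with a dense branch over the two log values
    # (RHOB, PEF). All sizes are assumptions for the sketch.
    def __init__(self):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16),
            nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32),
            nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),   # assumes 64x64 input
        )
        self.numeric_branch = nn.Sequential(
            nn.Linear(2, 32), nn.ReLU(), nn.Dropout(0.2),
        )
        self.fusion = nn.Sequential(
            nn.Linear(64 + 32, 32), nn.ReLU(),
            nn.Linear(32, 1),   # one target, e.g. porosity
        )

    def forward(self, ct, logs):
        # ct: (batch, 1, 64, 64) normalized X-CT section
        # logs: (batch, 2) RHOB and PEF values at the same depth
        fused = torch.cat(
            [self.image_branch(ct), self.numeric_branch(logs)], dim=1)
        return self.fusion(fused)

net = FusionNet()
pred = net(torch.randn(4, 1, 64, 64), torch.randn(4, 2))
print(pred.shape)  # torch.Size([4, 1])
```

Given permeability's wide dynamic range noted in the abstract, a common practice (not confirmed by the source) is to regress log-permeability with a model of this shape rather than raw values.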
39 pages, 4781 KB  
Article
Cardiovascular Disease Risk Prediction Utilizing Two-Tier Classification Framework Optimized with Adapted Variable Neighborhood Search Algorithm
by Saramma John Villoth, Petar Dabic, Tamara Zivkovic, Miodrag Zivkovic, Svetlana Andjelic, Milos Mravik, Vladimir Simic, Mahmoud Abdel-Salam and Nebojsa Bacanin
Algorithms 2026, 19(2), 130; https://doi.org/10.3390/a19020130 - 5 Feb 2026
Abstract
Accurately assessing a patient’s likelihood of developing cardiovascular conditions is essential for proper case classification and for ensuring timely, targeted medical intervention. To address this need, the present study employs a carefully optimized machine learning framework to predict such risks within cardiology settings. A hybrid architecture is proposed that combines convolutional neural networks (CNNs) with cutting-edge gradient boosting classifiers, namely CatBoost and LightGBM, whose performance is further enhanced by metaheuristic optimization. The system adopts a two-layer design capable of capturing complex data structures while supporting accurate classification of cardiac patients and their risk of developing cardiovascular disease. Extensive evaluation on real-world data confirms the framework’s effectiveness for binary classification, with the best models reaching an accuracy of slightly over 92%. To complement predictive performance, explainable AI methods were applied to clarify model decisions, yielding practical insights that can guide future data collection strategies and improve diagnostic precision. Full article
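The two-tier idea, a neural feature extractor whose outputs feed a gradient boosting classifier, can be sketched as below. This is a minimal assumption-laden illustration: scikit-learn's `GradientBoostingClassifier` stands in for CatBoost/LightGBM, the 1-D CNN and the synthetic data are placeholders, and no metaheuristic or Optuna-style tuning is included.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import GradientBoostingClassifier

# Tier 1: a small 1-D CNN turns tabular patient records into learned
# features. Tier 2: a gradient boosting classifier (stand-in for
# CatBoost/LightGBM) predicts binary cardiovascular risk from them.
extractor = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool1d(4), nn.Flatten(),   # -> 8 * 4 = 32 features
)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12)).astype(np.float32)  # 12 clinical variables
y = rng.integers(0, 2, size=200)                   # binary risk label

with torch.no_grad():
    feats = extractor(torch.from_numpy(X).unsqueeze(1)).numpy()

clf = GradientBoostingClassifier().fit(feats, y)
preds = clf.predict(feats[:3])
print(preds.shape)  # (3,)
```

In the study's setting the extractor would be trained on the labeled cardiology data first and the booster's hyperparameters tuned by the adapted variable neighborhood search; here both tiers are shown untuned purely to make the data flow concrete.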
