Search Results (2,405)

Search Parameters:
Keywords = architectural AI

24 pages, 9294 KB  
Article
AI-Enabled Frequency Diverse Array Spaceborne Surveillance Radar for Space Debris and Threat Detection Under Resource Constraints
by Dayan Guo, Tianyao Huang, Zijian Lin, Jie He and Yue Qi
Remote Sens. 2026, 18(6), 908; https://doi.org/10.3390/rs18060908 - 16 Mar 2026
Abstract
Ensuring space environment security through the detection of space debris and non-cooperative threat objects has become a critical mission for next-generation spaceborne surveillance systems. Frequency diverse array (FDA) radar, with its unique range-angle-dependent beampattern, offers a transformative capability to distinguish closely spaced space threats from intense background clutter. However, the operational deployment of spaceborne FDA is inherently hindered by stringent platform resource constraints, including limited power supply, high hardware complexity, and restricted data transmission bandwidth. These physical limitations inevitably lead to incomplete signal observations, resulting in elevated sidelobes that can obscure small, high-speed space debris. To bridge the gap between hardware constraints and high-fidelity surveillance, this paper proposes an AI-enabled data recovery framework based on deep matrix factorization. Specifically designed to process the complex-valued nature of radar echoes, the proposed framework introduces two specialized architectures: a real-valued representation-based method (DMF-Rr) and a native complex-valued deep matrix factorization (CDMF) network that preserves vital phase coherence. By leveraging deep learning to “enable” sparse-sampled systems, the proposed method effectively reconstructs missing observations without requiring prior knowledge of the signal rank. Numerical results demonstrate that the AI-powered CDMF significantly suppresses the high sidelobes induced by resource-limited sampling, enabling the reliable identification and localization of weak threat objects. This study demonstrates the power of AI in overcoming the physical bottlenecks of spaceborne hardware, providing a robust solution for enhancing space situational awareness in an increasingly crowded orbital environment.
(This article belongs to the Special Issue Advanced Techniques of Spaceborne Surveillance Radar)
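The CDMF network itself is not specified in the abstract. As a rough illustration of the underlying idea, the sketch below completes a sparsely sampled complex-valued low-rank matrix by gradient descent on two complex factors. This is a shallow, non-deep stand-in, and the matrix size, rank, sampling rate, and learning rate are all illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-2 complex matrix standing in for fully sampled radar echo data.
n, r = 20, 2
A = rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r))
B = rng.standard_normal((r, n)) + 1j * rng.standard_normal((r, n))
M = A @ B

# Resource-limited sampling: only ~60% of entries are observed.
mask = rng.random((n, n)) < 0.6

# Factorized completion: gradient descent on ||mask * (L @ R - M)||_F^2,
# using Wirtinger-style gradients for the complex factors.
L = 0.1 * (rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r)))
R = 0.1 * (rng.standard_normal((r, n)) + 1j * rng.standard_normal((r, n)))
loss0 = np.linalg.norm(mask * (L @ R - M))
lr = 0.005
for _ in range(5000):
    E = mask * (L @ R - M)  # residual on observed entries only
    L, R = L - lr * (E @ R.conj().T), R - lr * (L.conj().T @ E)

loss1 = np.linalg.norm(mask * (L @ R - M))
rel_err = np.linalg.norm(L @ R - M) / np.linalg.norm(M)
```

Because the factors are kept complex throughout, the phase of each entry is reconstructed along with its magnitude, which is the property the abstract highlights as vital for coherent radar processing.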
20 pages, 315 KB  
Systematic Review
Green Scheduling and Task Offloading in Edge Computing: A Systematic Review
by Adriana Rangel Ribeiro, Ana Clara Santos Andrade, Gabriel Leal dos Santos, Guilherme Dinarte Marcondes Lopes, Edvard Martins de Oliveira, Adler Diniz de Souza and Jeremias Barbosa Machado
Network 2026, 6(1), 17; https://doi.org/10.3390/network6010017 - 16 Mar 2026
Abstract
This paper presents a Systematic Literature Review (SLR) on green scheduling and task offloading strategies for energy optimization in edge computing environments. The evolution of low-latency, high-performance applications has driven the widespread adoption of distributed computing paradigms such as Edge Computing, Fog-Cloud architectures, and the Internet of Things (IoT). In this context, Mobile Edge Computing (MEC) is often combined with Unmanned Aerial Vehicles (UAVs) to extend computational capabilities to areas with limited infrastructure, bringing processing closer to the data source to reduce latency and improve scalability. Nevertheless, these systems encounter substantial energy-related challenges, particularly in battery-powered or resource-constrained environments. To address these concerns, green computing strategies—especially energy-efficient scheduling and task offloading—have emerged as promising approaches to optimize energy usage in edge environments. Green scheduling optimizes task allocation to minimize energy consumption, whereas offloading redistributes workloads from resource-constrained devices to edge or cloud servers. Increasingly, these techniques are enhanced through artificial intelligence (AI) and machine learning (ML), enabling adaptive and context-aware decision-making in dynamic environments. The review synthesizes the most widely adopted strategies for energy-efficient scheduling and task offloading in edge computing, highlighting their impact on sustainability and performance. The analysis provides a comprehensive view of the state of the art, examines how architectural contexts influence energy-aware decisions, and highlights the role of AI/ML in enabling intelligent and sustainable edge systems. The findings reveal current research gaps and outline future directions to advance the development of robust, scalable, and environmentally responsible computing infrastructures.

19 pages, 1546 KB  
Article
Deep Learning-Enhanced Proactive Strategy: LSTM and VRP/ACO for Autonomous Replenishment and Demand Forecasting in Shared Logistics
by Martin Straka and Kristína Kleinová
Appl. Sci. 2026, 16(6), 2838; https://doi.org/10.3390/app16062838 - 16 Mar 2026
Abstract
At present, the global logistics sector faces critical challenges, including rising energy costs and pressure to reduce CO2 emissions. Traditional linear supply chains are becoming inefficient, necessitating a transition toward shared logistics based on the principles of the sharing economy. This paper presents a progressive three-layer architecture that transforms conventional reactive data collection into an autonomous, proactive management system for the distribution of consumable materials. While previous research established foundations in IoT connectivity for smart vending machines, this study advances the process by integrating an intelligent layer of artificial intelligence (AI) algorithms. The framework utilizes Long Short-Term Memory (LSTM) neural networks for demand forecasting, dynamic route optimization (VRP/ACO) for replenishment, and Isolation Forest/DBSCAN algorithms for real-time anomaly detection. To evaluate the framework, a numerical simulation was conducted using representative pilot scenarios. The results indicate that within the simulated environment, the system achieves over 95% accuracy in inventory depletion prediction (MAPE = 4.02%). In these analyzed instances, this leads to a 25–30% reduction in stock-out risks and a 25% reduction in replenishment distance. These findings demonstrate the significant potential for reducing operational costs and carbon footprints in green logistics. The study confirms that the synergy between IoT infrastructure and AI-driven analysis provides a robust foundation for transitioning from static methodologies to resilient, collaborative logistics ecosystems.
(This article belongs to the Special Issue Application of Artificial Intelligence in the Internet of Things)
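The abstract names VRP/ACO for route optimization without giving details. The pure-Python sketch below runs a minimal ant colony optimization on a toy single-vehicle routing instance; the coordinates, colony size, and ACO parameters are invented for illustration and are not taken from the paper.

```python
import math
import random

random.seed(42)

# Toy depot-and-stops instance (stand-in for the replenishment VRP).
pts = [(0, 0), (2, 1), (5, 2), (6, 6), (1, 5), (3, 3)]
n = len(pts)
dist = [[math.dist(pts[i], pts[j]) for j in range(n)] for i in range(n)]

tau = [[1.0] * n for _ in range(n)]  # pheromone trails
alpha, beta, rho = 1.0, 2.0, 0.5     # pheromone weight, heuristic weight, evaporation

def tour_length(tour):
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

best_tour, best_len = None, float("inf")
for _ in range(50):                  # iterations
    tours = []
    for _ in range(10):              # ants
        tour, unvisited = [0], set(range(1, n))
        while unvisited:
            i = tour[-1]
            cand = list(unvisited)
            weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in cand]
            nxt = random.choices(cand, weights)[0]
            tour.append(nxt)
            unvisited.remove(nxt)
        tours.append(tour)
    for i in range(n):               # evaporate all trails
        for j in range(n):
            tau[i][j] *= (1 - rho)
    for tour in tours:               # deposit pheromone, more for shorter tours
        length = tour_length(tour)
        if length < best_len:
            best_tour, best_len = tour, length
        for k in range(n):
            a, b = tour[k], tour[(k + 1) % n]
            tau[a][b] += 1.0 / length
            tau[b][a] += 1.0 / length
```

A real replenishment VRP would add vehicle capacities and multiple routes per iteration, but the pheromone-reinforcement loop is the same.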

23 pages, 2885 KB  
Article
AI-Controlled Modular Decoy Generation for Reconstruction-Resistant Hybrid and Multi-Cloud Storage Systems
by Munir Ahmed and Jiann-Shiun Yuan
Electronics 2026, 15(6), 1231; https://doi.org/10.3390/electronics15061231 - 16 Mar 2026
Abstract
Although cloud storage is widely trusted by users and enterprises, externally stored encrypted and fragmented data remain vulnerable to reconstruction and inference attacks following partial exposure. Existing decoy-based defenses often rely on static configurations or randomly generated artifacts that can be filtered during adversarial analysis. This paper presents an Artificial Intelligence (AI)-controlled modular decoy generation method to enhance reconstruction resistance in distributed storage systems. The method operates as a system-agnostic post-fragmentation layer and does not require modification of encryption or storage architecture. Given encrypted fragments as input, decoys are generated using a supervised Extreme Gradient Boosting (XGBoost) regression model that adapts decoy quantity based on system telemetry and resource conditions. Decoys maintain statistical alignment with real encrypted fragments in size and Shannon entropy characteristics. To address scalability, the method is evaluated across small, medium, and large deployments comprising up to 413 externally exposed fragments and compared against fixed-ratio (10%, 20%) and randomized baselines. Experimental evaluation demonstrates increased adversarial uncertainty without altering legitimate reconstruction procedures or encryption mechanisms. Kolmogorov–Smirnov analysis indicates no statistically significant difference between AI-generated decoys and real fragments, whereas baseline decoys produce significant deviations in size and entropy distributions, supporting reconstruction resistance at scale in multi-cloud environments.
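The Kolmogorov–Smirnov comparison mentioned above is easy to illustrate. This sketch computes the two-sample KS statistic (the maximum gap between empirical CDFs) for hypothetical fragment-size samples; all byte sizes are invented. A statistically aligned decoy set yields a small statistic, while a naive baseline whose sizes do not overlap the real fragments is fully separable.

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    a, b = sorted(a), sorted(b)
    points = sorted(set(a) | set(b))
    d = 0.0
    for x in points:
        fa = sum(v <= x for v in a) / len(a)
        fb = sum(v <= x for v in b) / len(b)
        d = max(d, abs(fa - fb))
    return d

real_sizes   = [512, 520, 498, 505, 515, 501, 509, 517]  # hypothetical fragment sizes (bytes)
ai_decoys    = [508, 514, 500, 506, 518, 503, 511, 516]  # decoys aligned with the real sizes
naive_decoys = [100, 130, 95, 120, 110, 105, 90, 125]    # random baseline, easily separable

d_ai = ks_statistic(real_sizes, ai_decoys)
d_naive = ks_statistic(real_sizes, naive_decoys)
```

Here `d_naive` reaches its maximum of 1.0 (the distributions never overlap), whereas `d_ai` stays small, which is the property the paper's KS analysis tests for.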

24 pages, 770 KB  
Article
Responsible AI for Sepsis Prediction: Bridging the Gap Between Machine Learning Performance and Clinical Trust
by Thiago Q. Oliveira, Leandro A. Carvalho, Flávio R. C. Sousa, João B. F. Filho, Khalil F. Oliveira and Daniel A. B. Tavares
J. Clin. Med. 2026, 15(6), 2251; https://doi.org/10.3390/jcm15062251 - 16 Mar 2026
Abstract
Background: Sepsis remains a leading cause of mortality in intensive care units (ICUs) worldwide. Machine learning models for clinical prediction must be accurate, fair, transparent, and reliable to ensure that physicians feel confident in their decision-making processes. Methods: We used the MIMIC-IV (version 3.1) database to evaluate several machine learning architectures, including Logistic Regression, XGBoost, LightGBM, LSTM (Long Short-Term Memory) networks and Transformer models. We predicted three main clinical targets—hospital mortality, length of stay, and septic shock onset—using artificial intelligence algorithms in line with responsible AI principles. Model interpretability was assessed using Shapley Additive Explanations (SHAP). Results: The XGBoost model demonstrated superior performance in prediction tasks, particularly for hospital mortality (AUROC 0.874), outperforming traditional LSTM networks, Transformers, and linear baselines. Variable importance analysis confirmed the clinical relevance of the model. Conclusions: While XGBoost and ensemble algorithms demonstrate superior predictive power for sepsis prognosis, their clinical adoption necessitates robust explainability mechanisms to gain trust among doctors.
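AUROC, the headline metric above, reduces to the Mann-Whitney probability that a randomly chosen positive case outranks a randomly chosen negative one. A minimal self-contained sketch with hypothetical mortality risk scores (the labels and scores are invented, not from MIMIC-IV):

```python
def auroc(labels, scores):
    """AUROC via the rank/Mann-Whitney formulation, counting ties as half wins."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores: higher score should mean higher mortality risk.
y_true  = [0, 0, 1, 0, 1, 1, 0, 1]
y_score = [0.1, 0.3, 0.35, 0.4, 0.6, 0.7, 0.2, 0.8]
```

An AUROC of 0.874, as reported for XGBoost, means a randomly chosen patient who died was assigned a higher risk score than a randomly chosen survivor about 87% of the time.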

22 pages, 2762 KB  
Article
Automated Classification of Medical Image Modality and Anatomy
by Jean de Smidt, Kian Anderson and Andries Engelbrecht
Algorithms 2026, 19(3), 222; https://doi.org/10.3390/a19030222 - 16 Mar 2026
Abstract
Radiological departments face challenges in efficiency and diagnostic consistency. The interpretation of radiographs remains highly variable between practitioners, which creates potential disparities in patient care. This study explores how artificial intelligence (AI), specifically transfer learning techniques, can automate parts of the radiological workflow to improve service quality and efficiency. Transfer learning methods were applied to various convolutional neural network (CNN) architectures and compared to classify medical images across different modalities, i.e., X-rays, ultrasound, magnetic resonance imaging (MRI), and angiography, through a two-component model: medical image modality prediction and anatomical region prediction. Several publicly available datasets were combined to create a representative dataset to evaluate residual networks (ResNet), dense networks (DenseNet), efficient networks (EfficientNet), and the Swin Transformer (Swin-T). The models were evaluated through accuracy, precision, recall, and F1-score metrics with macro-averaging to account for class imbalance. The results demonstrate that lightweight transfer learning methods effectively classify medical imagery, with an accuracy of 97.21% on test data for the combined transfer learning pipeline. EfficientNet-B4 demonstrated the best performance on both components of the proposed pipeline and achieved a 99.6% accuracy for modality prediction and 99.21% accuracy for anatomical region prediction on unseen test data. This approach offers the potential for streamlined radiological workflows while maintaining diagnostic quality. The strong model performance across diverse modalities and anatomical regions indicates robust generalisability for practical implementation in clinical settings.
(This article belongs to the Special Issue Advances in Deep Learning-Based Data Analysis)
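The macro-averaging mentioned above weights every class equally regardless of frequency, which matters when one modality dominates the dataset. A small sketch with an invented, imbalanced label set (the class names and predictions are illustrative only):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1 scores averaged with equal class weight."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Imbalanced toy labels: "xray" dominates, like a skewed modality dataset.
y_true = ["xray"] * 6 + ["mri"] * 2 + ["us"] * 2
y_pred = ["xray"] * 6 + ["mri", "xray"] + ["us", "mri"]
```

On this toy data, plain accuracy is 0.8 while the macro F1 is about 0.70: the errors on the rare classes pull the macro score down, which is exactly why the study reports macro-averaged metrics.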

16 pages, 1673 KB  
Article
DeepSarcAE: A Deep Autoencoder Framework for Learning Gait Dynamics in the Detection of Sarcopenia
by Muthamil Balakrishnan, Janardanan Kumar, Jaison Jacob Mathunny, Varshini Karthik and Ashok Kumar Devaraj
Biophysica 2026, 6(2), 20; https://doi.org/10.3390/biophysica6020020 - 16 Mar 2026
Abstract
Sarcopenia is a degenerative musculoskeletal condition recognised as the age-related decline in skeletal muscle mass, strength, and function. Traditional diagnostic methods are limited by cost, accessibility, and subjectivity. This study aimed to develop a non-invasive, AI-driven, video-based framework for early Sarcopenia detection using functional movement analysis. Participants with and without Sarcopenia were recorded performing functional movements such as level walking, stair climbing, and ramp walking. Ten representative frames were extracted from each participant, resulting in 300 images (150 Sarcopenic, 150 non-Sarcopenic) utilised for the study. The DeepSarcAE model is an integrated framework of an autoencoder and a CNN-based classifier. Its performance was benchmarked against pretrained architectures such as EfficientNet, ResNet, MobileNet, Inception, VGG16 and four custom CNN models. Evaluation metrics such as sensitivity, specificity, precision, negative predictive value (NPV), accuracy, and AUC were used to analyse the models. DeepSarcAE outperformed all other models, attaining 100% sensitivity, 83.33% specificity, 85.71% precision, 100% NPV, 91.67% accuracy, and an AUC of 0.96. VGG16 and MobileNet followed the performance of DeepSarcAE closely, while the Inception network exhibited the weakest results due to poor generalisation. The DeepSarcAE framework offers a scalable, cost-effective, and non-invasive approach for Sarcopenia screening from the input gait image frames. Its promising preliminary performance highlights the potential of deep learning in early diagnosis and clinical decision support in preventive healthcare.
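The reported metrics can be cross-checked for mutual consistency. Assuming (hypothetically, since the abstract does not state the test-set size) the smallest integer confusion matrix that reproduces all five values:

```python
# Smallest integer confusion matrix consistent with the reported DeepSarcAE
# metrics: TP=6, FP=1, TN=5, FN=0 over a 12-sample test set (an assumption,
# since the actual evaluation split size is not given above).
tp, fp, tn, fn = 6, 1, 5, 0

sensitivity = tp / (tp + fn)               # recall on the Sarcopenic class
specificity = tn / (tn + fp)
precision   = tp / (tp + fp)
npv         = tn / (tn + fn)               # negative predictive value
accuracy    = (tp + tn) / (tp + fp + tn + fn)
```

These counts reproduce exactly 100% sensitivity, 83.33% specificity, 85.71% precision, 100% NPV, and 91.67% accuracy, so the reported figures are internally consistent.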

29 pages, 2707 KB  
Review
Digital Twin Technology in Wind Turbine Condition Monitoring, Predictive Maintenance, and RUL Estimation: A Systematic Literature Review
by Jorge Maldonado-Correa, José Cuenca-Granda, Joel Torres-Cabrera, Galo Cerda Mejía, Wilson Daniel Bastidas Barragan, Rocío Guapulema, Edwin Paccha-Herrera, Juan Carlos Solano, Darwin Tapia-Peralta, José Benavides and Cristian Laverde-Albarracín
Energies 2026, 19(6), 1477; https://doi.org/10.3390/en19061477 - 15 Mar 2026
Abstract
The rapid growth of wind energy has increased the need for advanced condition monitoring (CM), predictive maintenance, and remaining useful life (RUL) estimation strategies for wind turbines. In this context, digital twins (DTs) have emerged as a key tool for improving reliability, availability, and operational efficiency by integrating physical models, operational data, and artificial intelligence (AI). This paper presents a systematic literature review (SLR) aimed at analyzing the state of the art, classifying the main applications, and identifying research gaps. A rigorous search protocol was applied across scientific databases, considering inclusion and exclusion criteria and analysis categories aligned with four research questions. The results show a high concentration of studies on critical wind turbine components, a predominance of hybrid physics-based and data-driven approaches, and an increasing use of deep learning (DL) models. However, several research gaps remain, including the predominance of component-level digital twin implementations rather than system-level architectures, the lack of standardized datasets and benchmarking frameworks, and challenges related to SCADA data heterogeneity and real-time scalability. It is concluded that DTs are evolving toward more autonomous and prescriptive systems; however, they still require further maturation for widespread industrial adoption.
(This article belongs to the Special Issue Latest Challenges in Wind Turbine Maintenance, Operation, and Safety)

21 pages, 1683 KB  
Review
From Gene Knockouts to Genome Remodeling: Large DNA Fragment Deletion Technologies in Plants
by Jiayi Hou, Hui Li, Fengfeng Zhang, Dan Yang, Yan Xiong, Xiaoyue Zhu and Mingzhang Wen
Plants 2026, 15(6), 909; https://doi.org/10.3390/plants15060909 - 15 Mar 2026
Abstract
Large DNA fragment deletion (LDFD) provides a powerful means to reconfigure plant genomes at the kilobase to megabase scale, enabling the dissection of genome function, elucidation of non-coding regulatory elements, modulation of gene dosage, reorganization of chromosomal architecture, and implementation of synthetic biology designs. In this review, we systematically compare the mechanisms, efficiencies, advantages, and limitations of the major LDFD technologies that have been applied in plants, including ZFNs, TALENs, CRISPR/Cas systems (Cas9, Cas12a, Cas3), site-specific recombinases, transposon-based systems, and prime editing-derived strategies. We highlight how plant-specific features of chromatin organization and DNA repair constrain large deletions, and discuss the current bottlenecks in achieving efficient, precise, and predictable LDFD across diverse crop genomes. Finally, we outline future directions for plant LDFD, emphasizing AI-assisted design of nucleases and recombinases, protein-directed evolution, and improved DNA- and RNP-based delivery systems. Together, these advances are expected to transform LDFD from a specialized tool into a broadly accessible platform for functional genomics, trait engineering and rational genome design in plants.
(This article belongs to the Special Issue Technologies, Applications and Innovations in Plant Genetics Research)

43 pages, 2831 KB  
Review
Infostructure: A Scoping Review and Reference Architectural Framework for Situation Awareness in Future Power System Control Rooms
by Bo Nørregaard Jørgensen and Zheng Grace Ma
Energies 2026, 19(6), 1472; https://doi.org/10.3390/en19061472 - 15 Mar 2026
Abstract
Power system control rooms are undergoing a profound transformation as renewable integration, distributed energy resources, sector coupling, and increasing operational uncertainty reshape the technical, organisational, and cognitive demands of grid operation. At the same time, Digital Twins and Agentic Artificial Intelligence offer new possibilities for monitoring, forecasting, reasoning, and decision support. However, existing control room architectures remain fragmented and insufficiently structured to support the coherent integration of digital models, intelligent reasoning systems, human operators, and regulatory accountability mechanisms in safety-critical power system environments. This article addresses that gap through a PRISMA-ScR-informed scoping review combined with a structured architectural synthesis process. The study develops Infostructure as a reference architectural framework for situation awareness in future power system control rooms. The framework is derived from a synthesis of operational challenges, regulatory constraints, and human-AI collaboration requirements identified across the scientific and regulatory literature. Infostructure formalises four interrelated architectural layers (Physical, Semantic, Orchestration, and Cognitive), constrained by cross-cutting governance and compliance principles. The architectural coverage and internal coherence of the framework are illustrated through representative transmission and distribution system use cases, including wide-area disturbance anticipation, distribution-level congestion management, and cross-organisational coordination during extreme events. A structured research and validation agenda is further outlined to support empirical evaluation and phased implementation. By transforming review-based synthesis into a coherent architectural formalisation, Infostructure contributes a rigorous foundation for the evolution of transparent, accountable, and resilient power system control rooms.

28 pages, 2882 KB  
Article
Semantic Divergence in AI-Generated and Human Influencer Product Recommendations: A Computational Analysis of Dual-Agent Communication in Social Commerce
by Woo-Chul Lee, Jang-Suk Lee and Jungho Suh
Appl. Sci. 2026, 16(6), 2816; https://doi.org/10.3390/app16062816 - 15 Mar 2026
Abstract
The proliferation of generative artificial intelligence (AI) as an autonomous recommendation agent fundamentally challenges traditional paradigms of marketing communication. As AI systems increasingly mediate consumer–brand relationships, understanding how artificial agents construct persuasive discourse—distinct from human communicators—becomes critical for developing effective dual-channel marketing strategies. Grounded in Source Credibility Theory and the Computers Are Social Actors (CASA) paradigm, this study investigates the semantic and structural divergence between AI-generated product recommendations and human influencer marketing messages in social commerce contexts. Employing a mixed-methods computational approach integrating term frequency analysis, TF-IDF weighting, Latent Dirichlet Allocation (LDA) topic modeling, and BERT-based contextualized semantic embedding analysis (KR-SBERT), we examined 330 Instagram influencer posts and 541 AI-generated responses concerning inner beauty enzyme products—a hybrid category combining functional health claims with hedonic beauty appeals—in the Korean social commerce market. AI-generated responses were collected through a systematically designed query protocol with empirically grounded prompts derived from actual consumer search behaviors, and analytical robustness was verified through sensitivity analyses across multiple parameter thresholds. Our findings reveal a fundamental divergence in persuasive architecture: human influencers construct experiential narratives exhibiting message characteristics typically associated with peripheral-route cues (sensory descriptions, emotional testimonials, social context), while AI recommendations employ systematic, evidence-based discourse exhibiting message characteristics typically associated with central-route argumentation (functional mechanisms, ingredient specifications, objective criteria). 
Topic modeling identified four distinct thematic clusters for each source type: human discourse centers on embodied experience and relational consumption, whereas AI discourse organizes around informational utility and rational decision support. Jensen–Shannon Divergence analysis (JSD = 0.213 bits) confirmed moderate distributional divergence, while chi-square testing (χ2 = 847.23, p < 0.001) and Cramér’s V (0.312, indicating a medium-to-large effect) demonstrated statistically significant and substantively meaningful differences. These findings extend CASA theory by demonstrating that AI recommendation agents develop a characteristic “AI communication signature” distinguishable from human persuasion patterns. We propose an integrated Dual-Agent Persuasion Proposition—synthesizing CASA, ELM, and Source Credibility perspectives—suggesting that AI and human recommenders serve complementary functions across different stages of the consumer decision journey—a proposition whose predictions regarding sequential persuasive effectiveness and consumer processing routes await experimental validation. These findings carry implications for AI content strategy optimization, platform design, and emerging regulatory frameworks for AI-generated content labeling.
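The Jensen–Shannon divergence reported above (0.213 bits) is bounded between 0 and 1 bit when computed with base-2 logarithms, which is what makes "moderate" a meaningful reading. A minimal sketch with hypothetical four-topic distributions (the values are invented, not the paper's LDA output):

```python
import math

def jsd_bits(p, q):
    """Jensen-Shannon divergence between two distributions, in bits (log base 2)."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical topic proportions for human vs. AI text over four topics.
human = [0.40, 0.30, 0.20, 0.10]
ai    = [0.15, 0.25, 0.35, 0.25]

divergence = jsd_bits(human, ai)
```

Identical distributions score 0 bits and fully disjoint ones score 1 bit, so the study's 0.213 bits sits in the lower-middle of the scale, consistent with its description as moderate divergence.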

33 pages, 5767 KB  
Article
Hyper-Thyro Vision: An Integrated Framework for Hyperthyroidism Diagnostic Facial Image Analysis Based on Deep Learning
by Poonyisa Thepmangkorn and Suchada Sitjongsataporn
Biomimetics 2026, 11(3), 210; https://doi.org/10.3390/biomimetics11030210 - 15 Mar 2026
Abstract
This paper presents an integrated multi-modal framework for detecting hyperthyroidism-associated abnormalities, namely exophthalmos and thyroid-related neck swelling, through the joint analysis of frontal facial and neck images using a deep learning-based approach. The objective of this research is to develop an integrated AI framework that improves hyperthyroid-related abnormality detection by simultaneously analyzing facial images of both the eye and neck based on pattern clinical knowledge. The multi-modal framework mimics a biological visual mechanism by using a dual-pathway architecture that concurrently processes foveal-like details of the eyes and neck. It integrates these high-resolution visual embeddings with quantitative morphological measurements to simulate a clinician’s ability to fuse observation with physical assessment. The proposed system employs a multi-faceted decision-making process derived from three distinct data components: two from frontal face analysis and one from neck region analysis. Specifically, eye regions extracted from facial images are preprocessed using the YOLOv11s model. The proposed system leverages a dual-pathway processing architecture to extract comprehensive diagnostic features. For the eye dataset, the framework utilizes a face mesh-based eye landmark (FMEL) to extract both eye regions and perform eyes unfold processing. These regions are subsequently analyzed by the proposed sclera map unwrapping engine (SMUE) to derive quantitative sclera metrics from both the left and right eyes. To optimize classification, a dual-branch architecture is employed by integrating CNN visual embeddings with SMUE-derived statistical features through a feature fusion layer. Simultaneously, the neck processing path executes the neck region of interest (ROI) prediction {upper, lower} to segment critical regions for goiter assessment via the proposed neck μσ ensemble thresholding (NSET) algorithm. 
The experimental results demonstrate that the proposed algorithm for eye analysis achieved a mean average precision (mAP50) of 96.4%, with a specific mAP50 of 98.6% for the hyperthyroid class. Regarding quantitative scleral measurement, the SMUE process revealed distinct morphological differences, with the experimental data group exhibiting consistently higher pixel distances across the reference points compared with the normal group. Furthermore, the proposed NSET algorithm yielded the highest performance for swollen neck classification with an mAP50 of 92.0%, significantly outperforming the baseline deep learning models while maintaining lower computational complexity.
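The mAP50 figures above count a detection as correct when its intersection-over-union (IoU) with the ground-truth box is at least 0.5. A minimal IoU sketch (the box coordinates are made up; this is the standard metric, not code from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes, as used in mAP50."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# A hypothetical predicted eye-region box vs. ground truth.
pred, gt = (10, 10, 50, 50), (20, 20, 60, 60)
hit = iou(pred, gt) >= 0.5   # would this prediction count at the mAP50 threshold?
```

For this pair the IoU is about 0.39, so the prediction would not count as a hit at the 0.5 threshold even though the boxes overlap substantially.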
28 pages, 1638 KB  
Article
A Self-Deciding Adaptive Digital Twin Framework Using Agentic AI for Fuzzy Multi-Objective Optimization of Food Logistics
by Hamed Nozari and Zornitsa Yordanova
Algorithms 2026, 19(3), 218; https://doi.org/10.3390/a19030218 - 14 Mar 2026
Abstract
Due to the perishable nature of products, high uncertainty, and conflicting objectives, food supply chain logistics management requires dynamic and adaptive decision-making frameworks. In this study, a decision-making architecture is presented that embeds a multi-objective fuzzy optimization model in an adaptive digital twin, together with an agentic AI-based dynamic goal-reset mechanism. The main methodological innovation of this study lies not in the separate development of each of these components but in their structured integration into a self-regulating decision-making loop, in which the priority of goals is dynamically adjusted based on the current state of the system. Computational results based on real and simulated data show that the proposed framework reduces total logistics cost by about 4–5% and product waste by about 13% while simultaneously improving the service level by about 4%. Resilience analysis shows faster performance recovery in the face of operational disruptions, and scalability results confirm controlled growth of computational time with increasing problem size. These findings demonstrate the effectiveness of integrating adaptive digital twins and agentic AI in a multi-objective fuzzy optimization environment for intelligent and resilient food logistics management. Full article
(This article belongs to the Special Issue Optimizing Logistics Activities: Models and Applications)
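The abstract leaves the goal-reset mechanism unspecified. A minimal sketch of the general idea — linear fuzzy satisfaction degrees per objective, plus a reweighting step that shifts priority toward the least-satisfied objective — may look as follows; the function names, the update rule, and the `rate` parameter are illustrative assumptions, not the paper's model:

```python
def fuzzy_satisfaction(value, best, worst):
    """Linear fuzzy membership for a minimized objective: 1 at 'best', 0 at 'worst'."""
    if value <= best:
        return 1.0
    if value >= worst:
        return 0.0
    return (worst - value) / (worst - best)

def reweight(weights, satisfactions, rate=0.5):
    """Hypothetical agentic goal reset: boost the weight of each objective in
    proportion to its dissatisfaction (1 - s), then renormalize to sum to 1."""
    shifted = [w * (1 + rate * (1 - s)) for w, s in zip(weights, satisfactions)]
    total = sum(shifted)
    return [w / total for w in shifted]
```

For two equally weighted objectives where the first is fully satisfied and the second not at all, `reweight([0.5, 0.5], [1.0, 0.0])` shifts weight toward the neglected objective (to roughly 0.4 / 0.6), which is the qualitative behavior a self-regulating loop needs.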
35 pages, 501 KB  
Review
An Overview of Existing Applications of Artificial Intelligence in Histopathological Diagnostics of Lymphoma: A Scoping Review
by Mieszko Czaplinski, Grzegorz Redlarski, Mateusz Wieczorek, Paweł Kowalski, Piotr Mateusz Tojza, Adam Sikorski and Arkadiusz Żak
Appl. Sci. 2026, 16(6), 2803; https://doi.org/10.3390/app16062803 - 14 Mar 2026
Abstract
Background: Artificial intelligence (AI) shows promising results in lymphoma detection, prediction, and classification. However, translating these findings into practice requires a rigorous assessment of potential biases, clinical utility, and further validation of research models. Objective: The goal of this study was to summarize existing studies on artificial intelligence models for the histopathological detection of lymphoma. Design: This study adhered to the PRISMA Extension for Scoping Reviews (PRISMA-ScR) guidelines. A systematic search was conducted across three major databases (Scopus, PubMed, Web of Science) for English-language articles and reviews published between 2016 and 2025. Seven precise search queries were applied to identify relevant publications, accounting for variations in study modality, algorithmic architectures, and disease-specific terminology. Results: The search identified 612 records, of which 36 articles met the inclusion criteria. These studies presented 36 AI models, comprising 30 diagnostic and six prognostic applications, with Convolutional Neural Networks (CNNs) being the predominant architecture. Regarding data sources, 83% (30/36) of datasets utilized Hematoxylin and Eosin (H&E)-stained images, while the remainder relied on diverse modalities, including IHC-stained slides, bone marrow smears, and other tissue preparations. Studies predominantly utilized retrospective, private cohorts with sample sizes typically ranging from 50 to 400 patients; only a minority leveraged open-access repositories (e.g., Kaggle, TCGA). The primary application was slide-level multi-class classification, distinguishing between specific lymphoma subtypes and non-neoplastic controls. Beyond diagnosis, a subset of studies explored advanced prognostic tasks, such as predicting chemotherapy response and disease progression (e.g., in CLL), as well as automated biomarker quantification (c-MYC, BCL2, PD-L1). 
Reported diagnostic performance was generally high, with accuracy ranging from 60% to 100% (clustering around 90%) and AUC values spanning 0.70 to 0.99 (predominantly >0.90). Conclusions: While AI models demonstrate high diagnostic accuracy, their translation into practice is limited by unstandardized protocols, morphological complexity, and the “black box” nature of algorithms. Critical issues regarding data provenance, image noise, and lack of representativeness raise risks of systematic bias, hence the need for rigorous validation in diverse clinical environments. Full article
(This article belongs to the Special Issue Advances and Applications of Machine Learning for Bioinformatics)
33 pages, 1979 KB  
Article
eXCube2: Explainable Brain-Inspired Spiking Neural Network Framework for Emotion Recognition from Audio, Visual and Multimodal Audio–Visual Data
by N. K. Kasabov, A. Yang, Z. Wang, I. Abouhassan, A. Kassabova and T. Lappas
Biomimetics 2026, 11(3), 208; https://doi.org/10.3390/biomimetics11030208 - 14 Mar 2026
Abstract
This paper introduces a biomimetic framework and novel brain-inspired AI (BIAI) models based on spiking neural networks (SNNs) for emotional state recognition from audio (speech), visual (face), and integrated multimodal audio–visual data. The developed framework, named eXCube2, uses NeuCube, a three-dimensional SNN architecture spatially structured according to a human brain template. The BIAI models developed in eXCube2 are trainable on spatio- and spectro-temporal data using brain-inspired learning rules. Such models are explainable in terms of revealing patterns in data and are adaptable to new data. The eXCube2 models are implemented as software systems and tested on speech and video data of subjects expressing emotional states. The use of a brain template for the SNN structure enables brain-inspired tonotopic and stereo mapping of audio inputs, topographic mapping of visual data, and the combined use of both modalities. This novel approach brings AI-based emotional state recognition closer to human perception and provides better explainability and adaptability than existing AI systems. It also achieves higher or competitive accuracy, even though this was not the main goal here, as demonstrated through experiments on benchmark datasets: classification accuracy exceeds 80% on single-modality data and reaches 88.9% when multimodal audio–visual data are used and a “don’t know” output is introduced. The paper further discusses possible applications of the proposed eXCube2 framework to other audio, visual, and audio–visual data for solving challenging problems, such as recognizing emotional states of people from different origins; brain state diagnosis (e.g., Parkinson’s disease, Alzheimer’s disease, ADHD, dementia); measuring response to treatment over time; evaluating satisfaction responses from online clients; cognitive robotics; human–robot interaction; chatbots; and interactive computer games.
The SNN-based implementation of BIAI also enables the use of neuromorphic chips and platforms, leading to reduced power consumption, smaller device size, higher accuracy, and improved adaptability and explainability. This research represents a step toward building brain-inspired AI systems. Full article
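For readers unfamiliar with the spiking neurons underlying such SNN frameworks, a textbook leaky integrate-and-fire update can be sketched as below. This is a generic illustration of how membrane potential integrates input and emits discrete spikes, not the NeuCube/eXCube2 neuron model or its parameters:

```python
def lif_step(v, input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    The membrane potential v leaks toward zero with time constant tau while
    integrating the input current; crossing v_thresh emits a spike and resets.
    Returns (new_potential, spiked).
    """
    v = v + dt * (-v / tau + input_current)
    if v >= v_thresh:
        return v_reset, True
    return v, False

def run(currents, **kwargs):
    """Drive one neuron with a sequence of input currents; return the spike train."""
    v, spikes = 0.0, []
    for i in currents:
        v, spiked = lif_step(v, i, **kwargs)
        spikes.append(spiked)
    return spikes
```

With a constant sub-threshold input of 0.6 per step, the neuron needs two steps to accumulate enough charge to fire, producing the alternating spike train `[False, True, False, True]` over four steps — the kind of temporal code that spatio-temporal learning rules in SNNs operate on.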