Search Results (1,610)

Search Parameters:
Keywords = multimodal data integration

19 pages, 1969 KB  
Article
Domain-Aware Interpretable Machine Learning Model for Predicting Postoperative Hospital Length of Stay from Perioperative Data: A Retrospective Observational Cohort Study
by Iqram Hussain, Joseph R. Scarpa and Richard Boyer
Bioengineering 2026, 13(2), 147; https://doi.org/10.3390/bioengineering13020147 - 27 Jan 2026
Abstract
Background and Objective: Postoperative hospital length of stay (LOS) reflects surgical recovery and resource demand but remains difficult to predict due to heterogeneous perioperative trajectories. We aimed to develop and validate an interpretable machine learning framework that integrates multimodal perioperative data to accurately predict LOS and uncover clinically meaningful drivers of prolonged hospitalization. Methods: We studied 97,937 adult surgical cases from a large perioperative registry. Routinely collected perioperative data included patient demographics, comorbid conditions, preoperative laboratory values, intraoperative physiologic summaries, and procedural characteristics. Length of stay was modeled using a supervised regression approach with internal cross-validation and independent holdout evaluation. Model performance was assessed at both the cohort and individual levels, and explanatory analyses were performed to quantify the contribution of clinically defined perioperative domains. Results: The model achieved R2 = 0.61 and MAE ≈ 1.34 days on the holdout set, with nearly identical cross-validation performance (R2 = 0.60, MAE ≈ 1.34 days). Operative duration, diagnostic complexity, intraoperative hemodynamic variability, and preoperative laboratory indices—particularly albumin and hematocrit—emerged as the strongest determinants of postoperative stay. Patients with shorter recoveries typically had brief operations, stable physiology, and normal laboratory profiles, whereas prolonged hospitalization was linked to complex procedures, malignant or respiratory diagnoses, and lower albumin levels. Conclusions: Interpretable machine learning enables accurate and generalizable estimation of postoperative LOS while revealing clinically actionable perioperative domains. Such frameworks may facilitate more efficient perioperative planning, improved allocation of hospital resources, and personalized recovery strategies. Full article
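As a toy illustration of the two evaluation metrics this abstract reports for length-of-stay regression (R² and MAE), the sketch below computes both from hypothetical predictions. All data values are invented for demonstration and are not from the study.

```python
# Minimal sketch: cohort-level R^2 and mean absolute error (MAE) for a
# length-of-stay (LOS) regression model. Toy values only.
def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r_squared(y_true, y_pred):
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

los_true = [2.0, 5.0, 3.0, 9.0]   # observed LOS in days (invented)
los_pred = [2.5, 4.0, 3.5, 8.0]   # model predictions (invented)
print(mae(los_true, los_pred))               # 0.75
print(round(r_squared(los_true, los_pred), 3))  # 0.913
```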
27 pages, 8004 KB  
Article
A Grid-Enabled Vision and Machine Learning Framework for Safer and Smarter Intersections: Enhancing Real-Time Roadway Intelligence and Vehicle Coordination
by Manoj K. Jha, Pranav K. Jha and Rupesh K. Yadav
Infrastructures 2026, 11(2), 41; https://doi.org/10.3390/infrastructures11020041 - 27 Jan 2026
Abstract
Urban intersections are critical nodes for roadway safety, congestion management, and autonomous vehicle coordination. Traditional traffic control systems based on fixed-time signals and static sensors lack adaptability to real-time risks such as red-light violations, near-miss incidents, and multimodal conflicts. This study presents a grid-enabled framework integrating computer vision and machine learning to enhance real-time intersection intelligence and road safety. The system overlays a computational grid on the roadway, processes live video feeds, and extracts dynamic parameters including vehicle trajectories, deceleration patterns, and queue evolution. A novel active learning module improves detection accuracy under low visibility and occlusion, reducing false alarms in collision and violation detection. Designed for edge-computing environments, the framework interfaces with signal controllers to enable adaptive signal timing, proactive collision avoidance, and emergency vehicle prioritization. Case studies from multiple intersections typical of US cities show improved phase utilization, reduced intersection conflicts, and enhanced throughput. A grid-based heatmap visualization highlights spatial risk zones, supporting data-driven decision-making. The proposed framework bridges static infrastructure and intelligent mobility systems, advancing safer, smarter, and more connected roadway operations. Full article
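The grid-based risk heatmap this abstract describes can be sketched minimally as cell-wise accumulation of detected-event positions over a computational grid. Grid dimensions, cell size, and coordinates below are invented, not taken from the paper.

```python
# Hypothetical sketch: accumulate detected events (e.g. near-misses) into a
# grid overlaid on the roadway; higher counts mark spatial risk zones.
def risk_heatmap(events, cell=10.0, width=3, height=3):
    grid = [[0] * width for _ in range(height)]
    for x, y in events:
        grid[int(y // cell)][int(x // cell)] += 1  # bucket by grid cell
    return grid

events = [(5.0, 5.0), (7.0, 3.0), (25.0, 15.0)]  # invented coordinates
print(risk_heatmap(events))  # [[2, 0, 0], [0, 0, 1], [0, 0, 0]]
```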

13 pages, 542 KB  
Review
Pharmacogenomics of Antineoplastic Therapy in Children: Genetic Determinants of Toxicity and Efficacy
by Zaure Dushimova, Timur Saliev, Aigul Bazarbayeva, Gaukhar Nurzhanova, Ainura Baibadilova, Gulnara Abdilova and Ildar Fakhradiyev
Pharmaceutics 2026, 18(2), 165; https://doi.org/10.3390/pharmaceutics18020165 - 27 Jan 2026
Abstract
Over the past decades, remarkable progress in multimodal therapy has significantly improved survival outcomes for children with cancer. Yet, considerable variability in treatment response and toxicity persists, often driven by underlying genetic differences that affect the pharmacokinetics and pharmacodynamics of anticancer drugs. Pharmacogenomics, the study of genetic determinants of drug response, offers a powerful approach to personalize pediatric cancer therapy by optimizing efficacy while minimizing adverse effects. This review synthesizes current evidence on key pharmacogenetic variants influencing the response to major classes of antineoplastic agents used in children, including thiopurines, methotrexate, anthracyclines, alkylating agents, vinca alkaloids, and platinum compounds. Established gene–drug associations such as TPMT, NUDT15, DPYD, SLC28A3, and RARG are discussed alongside emerging biomarkers identified through genome-wide and multi-omics studies. The review also examines the major challenges that impede clinical implementation, including infrastructural limitations, cost constraints, population-specific variability, and ethical considerations. Furthermore, it highlights how integrative multi-omics, systems pharmacology, and artificial intelligence may accelerate the translation of pharmacogenomic data into clinical decision-making. The integration of pharmacogenomic testing into pediatric oncology protocols has the potential to transform cancer care by improving drug safety, enhancing treatment precision, and paving the way toward ethically grounded, personalized therapy for children. Full article

19 pages, 1724 KB  
Article
Speech Impairment in Early Parkinson’s Disease Is Associated with Nigrostriatal Dopaminergic Dysfunction
by Sotirios Polychronis, Grigorios Nasios, Efthimios Dardiotis, Rayo Akande and Gennaro Pagano
J. Clin. Med. 2026, 15(3), 1006; https://doi.org/10.3390/jcm15031006 - 27 Jan 2026
Abstract
Background/Objectives: Speech difficulties are an early and disabling manifestation of Parkinson’s disease (PD), affecting communication and quality of life. This study aimed to examine demographic, clinical, dopaminergic imaging and cerebrospinal fluid (CSF) correlates of speech difficulties in early PD, comparing treatment-naïve and levodopa-treated patients. Methods: A cross-sectional analysis was conducted using data from the Parkinson’s Progression Markers Initiative (PPMI). The sample included 376 treatment-naïve and 133 levodopa-treated early PD participants. Speech difficulties were defined by Movement Disorder Society—Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) Part III, with Item 3.1 ≥ 1. Group comparisons and binary logistic regression identified predictors among demographic, clinical, dopaminergic and CSF biomarker variables, including [123I]FP-CIT specific binding ratios (SBRs). All analyses were cross-sectional, and findings reflect associative relationships rather than treatment effects or causal mechanisms. Results: Speech difficulties were present in 44% of treatment-naïve and 57% of levodopa-treated participants. In both cohorts, higher MDS-UPDRS Part III ON scores—reflecting greater motor severity—and lower mean putamen SBR values were significant independent predictors of speech impairment. Age was an additional predictor in the treatment-naïve group. No significant differences were found in CSF biomarkers (α-synuclein, amyloid-β, tau, phosphorylated tau). These findings indicate that striatal dopaminergic loss, particularly in the putamen, and motor dysfunction relate to early PD-related speech difficulties, whereas CSF neurodegeneration markers do not differentiate affected patients. Conclusions: Speech difficulties in early PD are primarily linked to dopaminergic and motor dysfunction rather than global neurodegenerative biomarker changes. Longitudinal and multimodal studies integrating acoustic, neuroimaging, and cognitive measures are warranted to elucidate the neural basis of speech decline and inform targeted interventions. Full article
(This article belongs to the Special Issue Innovations in Parkinson’s Disease)
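The binary logistic regression in this abstract relates predictor values to the probability of speech impairment. The sketch below shows the general form of such a model with two of the reported predictors; the intercept and coefficients are invented for illustration and are not the study's fitted values.

```python
import math

# Hypothetical sketch of a binary logistic model: probability of speech
# impairment from motor severity (MDS-UPDRS Part III ON) and putamen SBR.
# Coefficients b0, b_updrs, b_sbr are invented; only their signs follow the
# direction of association reported in the abstract.
def predict_speech_impairment(updrs_iii_on, putamen_sbr,
                              b0=-1.0, b_updrs=0.08, b_sbr=-1.2):
    logit = b0 + b_updrs * updrs_iii_on + b_sbr * putamen_sbr
    return 1.0 / (1.0 + math.exp(-logit))  # inverse-logit link

p_mild = predict_speech_impairment(updrs_iii_on=15, putamen_sbr=2.0)
p_severe = predict_speech_impairment(updrs_iii_on=40, putamen_sbr=1.0)
print(p_mild < p_severe)  # True: worse motor scores, lower SBR -> higher risk
```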

32 pages, 3217 KB  
Review
Architecting the Orthopedical Clinical AI Pipeline: A Review of Integrating Foundation Models and FHIR for Agentic Clinical Assistants and Digital Twins
by Assiya Boltaboyeva, Zhanel Baigarayeva, Baglan Imanbek, Bibars Amangeldy, Nurdaulet Tasmurzayev, Kassymbek Ozhikenov, Zhadyra Alimbayeva, Chingiz Alimbayev and Nurgul Karymsakova
Algorithms 2026, 19(2), 99; https://doi.org/10.3390/a19020099 - 27 Jan 2026
Abstract
The exponential growth of multimodal orthopedic data, ranging from longitudinal Electronic Health Records to high-resolution musculoskeletal imaging, has rendered manual analysis insufficient. This has established Large Language Models (LLMs) as algorithmically necessary for managing healthcare complexity. However, their deployment in high-stakes surgical environments presents a fundamental algorithmic paradox: while generic foundation models possess vast reasoning capabilities, they often lack the precise, protocol-driven domain knowledge required for safe orthopedic decision support. This review provides a structured synthesis of the emerging algorithmic frameworks required to build modern clinical AI assistants. We deconstruct current methodologies into their core components: large-language-model adaptation, multimodal data fusion, and standardized data interoperability pipelines. Rather than proposing a single proprietary architecture, we analyze how recent literature connects specific algorithmic choices such as the trade-offs between full fine-tuning and Low-Rank Adaptation to their computational costs and factual reliability. Furthermore, we examine the theoretical architectures required for ‘agentic’ capabilities, where AI systems integrate outputs from deep convolutional neural networks and biosensors. The review concludes by outlining the unresolved challenges in algorithmic bias, security, and interoperability that must be addressed to transition these technologies from research prototypes to scalable clinical solutions. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms for Healthcare: 2nd Edition)
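The trade-off this abstract mentions between full fine-tuning and Low-Rank Adaptation (LoRA) comes down to trainable-parameter counts: LoRA freezes a weight matrix W and trains two low-rank factors instead. A back-of-envelope sketch (matrix size and rank chosen for illustration):

```python
# Trainable parameters for one d_out x d_in weight matrix.
def full_ft_params(d_out, d_in):
    return d_out * d_in  # full fine-tuning updates every weight

def lora_params(d_out, d_in, rank):
    # LoRA trains B (d_out x r) and A (r x d_in); W itself stays frozen.
    return rank * (d_out + d_in)

d = 4096  # illustrative hidden size
print(full_ft_params(d, d))       # 16777216
print(lora_params(d, d, rank=8))  # 65536 -- 256x fewer trainable weights
```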

21 pages, 1214 KB  
Review
Large Language Models in Cardiovascular Prevention: A Narrative Review and Governance Framework
by José Ferreira Santos and Hélder Dores
Diagnostics 2026, 16(3), 390; https://doi.org/10.3390/diagnostics16030390 - 26 Jan 2026
Abstract
Background: Large language models (LLMs) are becoming progressively integrated into clinical practice; however, their role in cardiovascular (CV) prevention remains unclear. This review synthesizes current evidence on LLM applications in preventive cardiology and proposes a governance framework for their safe translation into practice. Methods: We conducted a comprehensive narrative review of literature published between January 2015 and November 2025. Evidence was synthesized across three functional domains: (1) patient applications for health literacy and behavior change; (2) clinician applications for decision support and workflow efficiency; and (3) system applications for automated data extraction, registry construction, and quality surveillance. Results: Evidence suggests that while LLMs generate empathetic, guideline-concordant patient education, they lack the nuance required for unsupervised, personalized advice. For clinicians, LLMs effectively summarize clinical notes and draft documentation but remain unreliable for deterministic risk calculations and autonomous decision-making. System-facing applications demonstrate potential for automated phenotyping and multimodal risk prediction. However, safe deployment is constrained by hallucinations, temporal obsolescence, automation bias, and data privacy concerns. Conclusions: LLMs could help mitigate structural barriers in CV prevention but should presently be deployed only as supervised “reasoning engines” that augment, rather than replace, clinician judgment. To guide the transition from in silico performance to bedside practice, we propose the C.A.R.D.I.O. framework (Clinical validation, Auditability, Risk stratification, Data privacy, Integration, and Ongoing vigilance) as a roadmap for responsible integration. Full article
(This article belongs to the Special Issue Artificial Intelligence and Computational Methods in Cardiology 2026)

26 pages, 1596 KB  
Article
Technological Pathways to Low-Carbon Supply Chains: Evaluating the Decarbonization Impact of AI and Robotics
by Mariem Mrad, Mohamed Amine Frikha, Younes Boujelbene and Mohieddine Rahmouni
Logistics 2026, 10(2), 31; https://doi.org/10.3390/logistics10020031 - 26 Jan 2026
Abstract
Background: Achieving deep decarbonization in global supply chains is essential for advancing net-zero objectives; however, the integrative role of artificial intelligence (AI) and robotics in this transition remains insufficiently explored. This study examines how these technologies support carbon-emission reduction across supply chain operations. Methods: A curated corpus of 83 Scopus-indexed peer-reviewed articles published between 2013 and 2025 is analyzed and organized into six domains covering supply chain and logistics, warehousing operations, AI methodologies, robotic systems, emission-mitigation strategies, and implementation barriers. Results: AI-driven optimization consistently reduces transport emissions by enhancing routing efficiency, load consolidation, and multimodal coordination. Robotic systems simultaneously improve energy efficiency and precision in warehousing, yielding substantial indirect emission reductions. Major barriers include the high energy consumption of certain AI models, limited data interoperability, and poor scalability of current applications. Conclusions: AI and robotics hold substantial transformative potential for advancing supply chain decarbonization; nevertheless, their net environmental impact depends on improving the energy efficiency of digital infrastructures and strengthening cross-organizational data governance mechanisms. The proposed framework delineates technological and organizational pathways that can guide future research and industrial implementation, providing novel insights and actionable guidance for researchers and practitioners aiming to accelerate the low-carbon transition. Full article

21 pages, 4181 KB  
Review
Twenty Years of Advances in Material Identification of Polychrome Sculptures
by Weilin Zeng, Xinyou Liu and Liang Xu
Coatings 2026, 16(2), 156; https://doi.org/10.3390/coatings16020156 - 25 Jan 2026
Abstract
Polychrome sculptures are complex, multilayered artifacts that embody the intersection of artistic craftsmanship, material science, and cultural heritage. Over the past two decades, the study of material identification in polychrome sculptures has shown marked interdisciplinary development, driven by advances in analytical technologies that have transformed how these objects are studied, enabling high-resolution identification of pigments, binders, and structural substrates. This review synthesizes key developments in the identification of polychrome sculpture materials, focusing on the integration of non-destructive and molecular-level techniques such as XRF, FTIR, Raman, LIBS, GC-MS, and proteomics. It highlights regional and historical variations in materials and craft processes, with case studies from Brazil, China, and Central Africa demonstrating how multi-modal methods reveal both technical and ritual knowledge embedded in these artworks. The review also examines evolving research paradigms—from pigment identification to stratigraphic and cross-cultural interpretation—and discusses current challenges such as organic material degradation and the need for standardized protocols. Finally, it outlines future directions including AI-assisted diagnostics, multimodal data fusion, and collaborative conservation frameworks. By bridging scientific analysis with cultural context, this study offers a comprehensive methodological reference for the conservation and interpretation of polychrome sculptures worldwide. Full article
(This article belongs to the Section Surface Characterization, Deposition and Modification)

20 pages, 49658 KB  
Article
Dead Chicken Identification Method Based on a Spatial-Temporal Graph Convolution Network
by Jikang Yang, Chuang Ma, Haikun Zheng, Zhenlong Wu, Xiaohuan Chao, Cheng Fang and Boyi Xiao
Animals 2026, 16(3), 368; https://doi.org/10.3390/ani16030368 - 23 Jan 2026
Abstract
In intensive cage rearing systems, accurate dead hen detection remains difficult due to complex environments, severe occlusion, and the high visual similarity between dead hens and live hens in a prone posture. To address these issues, this study proposes a dead hen identification method based on a Spatial-Temporal Graph Convolutional Network (STGCN). Unlike conventional static image-based approaches, the proposed method introduces temporal information to enable dynamic spatial-temporal modeling of hen health states. First, a multimodal fusion algorithm is applied to visible light and thermal infrared images to strengthen multimodal feature representation. Then, an improved YOLOv7-Pose algorithm is used to extract the skeletal keypoints of individual hens, and the ByteTrack algorithm is employed for multi-object tracking. Based on these results, spatial-temporal graph-structured data of hens are constructed by integrating spatial and temporal dimensions. Finally, a spatial-temporal graph convolution model is used to identify dead hens by learning spatial-temporal dependency features from skeleton sequences. Experimental results show that the improved YOLOv7-Pose model achieves an average precision (AP) of 92.8% in keypoint detection. Based on the constructed spatial-temporal graph data, the dead hen identification model reaches an overall classification accuracy of 99.0%, with an accuracy of 98.9% for the dead hen category. These results demonstrate that the proposed method effectively reduces interference caused by feeder occlusion and ambiguous visual features. By using dynamic spatial-temporal information, the method substantially improves robustness and accuracy of dead hen detection in complex cage rearing environments, providing a new technical route for intelligent monitoring of poultry health status. Full article
(This article belongs to the Special Issue Welfare and Behavior of Laying Hens)
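The core spatial step of the spatial-temporal graph convolution described in this abstract aggregates each keypoint's features from its skeleton neighbours, frame by frame. The sketch below shows that one step under assumed tensor shapes; the paper's STGCN additionally uses learned weights and temporal convolution, which are omitted here, and the 3-keypoint skeleton is invented.

```python
import numpy as np

# Minimal sketch of one spatial graph-convolution step over a skeleton
# sequence: x has shape (T frames, V keypoints, C channels), adj is the
# (V, V) skeleton adjacency matrix.
def spatial_graph_conv(x, adj):
    a_hat = adj + np.eye(adj.shape[0])        # add self-loops
    d_inv = 1.0 / a_hat.sum(axis=1)           # row-wise degree normalisation
    a_norm = a_hat * d_inv[:, None]
    # out[t, u, c] = sum_v a_norm[u, v] * x[t, v, c]
    return np.einsum("uv,tvc->tuc", a_norm, x)

# Invented 3-keypoint chain (e.g. head-body-tail), 2 frames, 2 channels.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x = np.ones((2, 3, 2))
out = spatial_graph_conv(x, adj)
print(out.shape)  # (2, 3, 2)
```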

26 pages, 4329 KB  
Review
Advanced Sensor Technologies in Cutting Applications: A Review
by Motaz Hassan, Roan Kirwin, Chandra Sekhar Rakurty and Ajay Mahajan
Sensors 2026, 26(3), 762; https://doi.org/10.3390/s26030762 - 23 Jan 2026
Abstract
Advances in sensing technologies are increasingly transforming cutting operations by enabling data-driven condition monitoring, predictive maintenance, and process optimization. This review surveys recent developments in sensing modalities for cutting systems, including vibration sensors, acoustic emission sensors, optical and vision-based systems, eddy-current sensors, force sensors, and emerging hybrid/multi-modal sensing frameworks. Each sensing approach offers unique advantages in capturing mechanical, acoustic, geometric, or electromagnetic signatures related to tool wear, process instability, and fault development, while also showing modality-specific limitations such as noise sensitivity, environmental robustness, and integration complexity. Recent trends show a growing shift toward hybrid and multi-modal sensor fusion, where data from multiple sensors are combined using advanced data analytics and machine learning to improve diagnostic accuracy and reliability under changing cutting conditions. The review also discusses how artificial intelligence, Internet of Things connectivity, and edge computing enable scalable, real-time monitoring solutions, along with the challenges related to data needs, computational costs, and system integration. Future directions highlight the importance of robust fusion architectures, physics-informed and explainable models, digital twin integration, and cost-effective sensor deployment to accelerate adoption across various manufacturing environments. Overall, these advancements position advanced sensing and hybrid monitoring strategies as key drivers of intelligent, Industry 4.0-oriented cutting processes. Full article

19 pages, 1193 KB  
Review
Tactical-Grade Wearables and Authentication Biometrics
by Fotios Agiomavritis and Irene Karanasiou
Sensors 2026, 26(3), 759; https://doi.org/10.3390/s26030759 - 23 Jan 2026
Abstract
Modern battlefield operations require wearable technologies to operate reliably under harsh physical, environmental, and security conditions. This review examines the current state and future potential of field-ready, tactical-grade wearables embedded with biometric authentication systems. It details physiological, kinematic, and multimodal sensor platforms built to withstand rugged, high-stress environments, and reviews biometric modalities such as ECG, PPG, EEG, gait, and voice for continuous or on-demand identity confirmation. Accuracy, latency, energy efficiency, and tolerance to motion artifacts, environmental extremes, and physiological variability are critical performance drivers. Security threats, such as spoofing and data interception, and techniques for template protection, liveness assurance, and protected on-device processing also come under review. Emerging trends in low-power edge AI, multimodal integration, adaptive learning from field experience, and privacy-preserving analytics are assessed with respect to defense readiness, alongside ongoing challenges such as gear interoperability, long-term stability of templates, and common stress-testing protocols. In conclusion, an R&D plan to guide the development of rugged, trustworthy, and operationally validated wearable authentication systems for current and future militaries is proposed. Full article
(This article belongs to the Special Issue Biomedical Electronics and Wearable Systems—2nd Edition)

34 pages, 1418 KB  
Article
Hybrid Dual-Context Prompted Cross-Attention Framework with Language Model Guidance for Multi-Label Prediction of Human Off-Target Ligand–Protein Interactions
by Abdullah, Zulaikha Fatima, Muhammad Ateeb Ather, Liliana Chanona-Hernandez and José Luis Oropeza Rodríguez
Int. J. Mol. Sci. 2026, 27(2), 1126; https://doi.org/10.3390/ijms27021126 - 22 Jan 2026
Abstract
Accurately identifying drug off-targets is essential for reducing toxicity and improving the success rate of pharmaceutical discovery pipelines. However, current deep learning approaches often struggle to fuse chemical structure, protein biology, and multi-target context. Here, we introduce HDPC-LGT (Hybrid Dual-Prompt Cross-Attention Ligand–Protein Graph Transformer), a framework designed to predict ligand binding across sixteen human translation-related proteins clinically associated with antibiotic toxicity. HDPC-LGT combines graph-based chemical reasoning with protein language model embeddings and structural priors to capture biologically meaningful ligand–protein interactions. The model was trained on 216,482 experimentally validated ligand–protein pairs from the Chemical Database of Bioactive Molecules (ChEMBL) and the Protein–Ligand Binding Database (BindingDB) and evaluated using scaffold-level, protein-level, and combined holdout strategies. HDPC-LGT achieves a macro receiver operating characteristic–area under the curve (macro ROC–AUC) of 0.996 and a micro F1-score (micro F1) of 0.989, outperforming Deep Drug–Target Affinity Model (DeepDTA), Graph-based Drug–Target Affinity Model (GraphDTA), Molecule–Protein Interaction Transformer (MolTrans), Cross-Attention Transformer for Drug–Target Interaction (CAT–DTI), and Heterogeneous Graph Transformer for Drug–Target Affinity (HGT–DTA) by 3–7%. External validation using the Papyrus universal bioactivity resource (Papyrus), the Protein Data Bank binding subset (PDBbind), and the benchmark Yamanishi dataset confirms strong generalisation to unseen chemotypes and proteins. HDPC-LGT also provides biologically interpretable outputs: cross-attention maps, Integrated Gradients (IG), and Gradient-weighted Class Activation Mapping (Grad-CAM) highlight catalytic residues in aminoacyl-tRNA synthetases (aaRSs), ribosomal tunnel regions, and pharmacophoric interaction patterns, aligning with known biochemical mechanisms. By integrating multimodal biochemical information with deep learning, HDPC-LGT offers a practical tool for off-target toxicity prediction, structure-based lead optimisation, and polypharmacology research, with potential applications in antibiotic development, safety profiling, and rational compound redesign. Full article
(This article belongs to the Section Molecular Informatics)
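The micro F1-score this abstract reports for multi-label prediction pools true/false positives and negatives across all labels before computing precision and recall. A minimal sketch on invented toy labels (not the paper's data):

```python
# Micro-averaged F1 for multi-label classification: counts are pooled over
# every (sample, label) pair, then a single precision/recall is computed.
def micro_f1(y_true, y_pred):
    tp = fp = fn = 0
    for true_row, pred_row in zip(y_true, y_pred):
        for t, p in zip(true_row, pred_row):
            if t == 1 and p == 1:
                tp += 1
            elif t == 0 and p == 1:
                fp += 1
            elif t == 1 and p == 0:
                fn += 1
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = [[1, 0, 1], [0, 1, 1]]  # toy ground-truth label matrix
y_pred = [[1, 0, 0], [0, 1, 1]]  # toy predictions (one false negative)
print(round(micro_f1(y_true, y_pred), 3))  # 0.857
```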

36 pages, 13674 KB  
Article
A Reference-Point Guided Multi-Objective Crested Porcupine Optimizer for Global Optimization and UAV Path Planning
by Zelei Shi and Chengpeng Li
Mathematics 2026, 14(2), 380; https://doi.org/10.3390/math14020380 - 22 Jan 2026
Abstract
Balancing convergence accuracy and population diversity remains a fundamental challenge in multi-objective optimization, particularly for complex and constrained engineering problems. To address this issue, this paper proposes a novel Multi-Objective Crested Porcupine Optimizer (MOCPO), inspired by the hierarchical defensive behaviors of crested porcupines. The proposed algorithm integrates four biologically motivated defense strategies—vision, hearing, scent diffusion, and physical attack—into a unified optimization framework, where global exploration and local exploitation are dynamically coordinated. To effectively extend the original optimizer to multi-objective scenarios, MOCPO incorporates a reference-point guided external archiving mechanism to preserve a well-distributed set of non-dominated solutions, along with an environmental selection strategy that adaptively partitions the objective space and enhances solution quality. Furthermore, a multi-level leadership mechanism based on Euclidean distance is introduced to provide region-specific guidance, enabling precise and uniform coverage of the Pareto front. The performance of MOCPO is comprehensively evaluated on 18 benchmark problems from the WFG and CF test suites. Experimental results demonstrate that MOCPO consistently outperforms several state-of-the-art multi-objective algorithms, including MOPSO and NSGA-III, in terms of IGD, GD, HV, and Spread metrics, achieving the best overall ranking in Friedman statistical tests. Notably, the proposed algorithm exhibits strong robustness on discontinuous, multimodal, and constrained Pareto fronts. In addition, MOCPO is applied to UAV path planning in four complex terrain scenarios constructed from real digital elevation data. The results show that MOCPO generates shorter, smoother, and more stable flight paths while effectively balancing route length, threat avoidance, flight altitude, and trajectory smoothness. 
These findings confirm the effectiveness, robustness, and practical applicability of MOCPO for solving complex real-world multi-objective optimization problems. Full article
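The external archive described in this abstract can be illustrated with a minimal sketch. This is not the authors' MOCPO code; the dominance test, the bounded archive, and the nearest-neighbour crowding proxy below are generic illustrations of how a multi-objective optimizer maintains a well-distributed set of non-dominated solutions, with all names assumed.

```python
# Minimal sketch (not the authors' implementation) of a bounded external
# archive of non-dominated solutions, as used by reference-point guided
# multi-objective optimizers such as MOCPO. Minimization is assumed.

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate, max_size=100):
    """Insert candidate if non-dominated; drop members it dominates.
    On overflow, remove the most crowded member, approximated here by the
    smallest nearest-neighbour distance in objective space."""
    if any(dominates(member, candidate) for member in archive):
        return archive  # candidate is dominated; archive unchanged
    archive = [m for m in archive if not dominates(candidate, m)]
    archive.append(candidate)
    if len(archive) > max_size:
        def crowding(i):
            return min(
                sum((a - b) ** 2 for a, b in zip(archive[i], archive[j])) ** 0.5
                for j in range(len(archive)) if j != i
            )
        archive.pop(min(range(len(archive)), key=crowding))
    return archive

arc = []
for point in [(1.0, 4.0), (2.0, 3.0), (3.0, 1.0), (2.5, 3.5), (0.5, 5.0)]:
    arc = update_archive(arc, point)
# (2.5, 3.5) is dominated by (2.0, 3.0) and is rejected; the rest survive.
```

MOCPO additionally partitions the objective space with reference points for environmental selection; the crowding proxy above stands in for that mechanism only to keep the sketch short.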
(This article belongs to the Special Issue Advances in Metaheuristic Optimization Algorithms)

35 pages, 5497 KB  
Article
Robust Localization of Flange Interface for LNG Tanker Loading and Unloading Under Variable Illumination: A Fusion Approach of Monocular Vision and LiDAR
by Mingqin Liu, Han Zhang, Jingquan Zhu, Yuming Zhang and Kun Zhu
Appl. Sci. 2026, 16(2), 1128; https://doi.org/10.3390/app16021128 - 22 Jan 2026
Abstract
The automated localization of the flange interface in LNG tanker loading and unloading imposes stringent requirements for accuracy and illumination robustness. Traditional monocular vision methods are prone to localization failure under extreme illumination conditions, such as intense glare or low light, while LiDAR, despite being unaffected by illumination, suffers from limitations like a lack of texture information. This paper proposes an illumination-robust localization method for LNG tanker flange interfaces by fusing monocular vision and LiDAR, with three scenario-specific innovations beyond generic multi-sensor fusion frameworks. First, an illumination-adaptive fusion framework is designed to dynamically adjust detection parameters via grayscale mean evaluation, addressing extreme illumination (e.g., glare, low light with water film). Second, a multi-constraint flange detection strategy is developed by integrating physical dimension constraints, K-means clustering, and weighted fitting to eliminate background interference and distinguish dual flanges. Third, a customized fusion pipeline (ROI extraction, plane fitting, and 3D circle-center solving) is established to compensate for monocular depth errors and sparse LiDAR point cloud limitations using the flange radius prior. High-precision localization is achieved via four key steps: multi-modal data preprocessing, LiDAR-camera spatial projection, fusion-based flange circle detection, and 3D circle center fitting. While basic techniques such as LiDAR-camera spatiotemporal synchronization and K-means clustering are adapted from prior works, their integration with flange-specific constraints and illumination-adaptive design forms the core novelty of this study.
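The grayscale-mean evaluation step can be sketched as a simple mapping from mean frame intensity to detection settings. The regimes, thresholds, and parameter names below are assumptions for illustration, not the paper's actual values.

```python
# Hedged sketch (illustrative thresholds, not the paper's): choosing edge
# detection parameters from the grayscale mean of the camera frame, in the
# spirit of the illumination-adaptive framework described above.
def detection_params(gray_mean):
    """Map mean image intensity (0-255) to assumed detection settings."""
    if gray_mean < 60:           # low light, possibly with water film
        return {"equalize": True, "canny_low": 20, "canny_high": 60}
    if gray_mean > 190:          # glare / local strong illumination
        return {"equalize": False, "canny_low": 80, "canny_high": 200}
    return {"equalize": False, "canny_low": 50, "canny_high": 150}  # uniform
```

In practice the grayscale mean would be computed per frame (e.g., over the ROI) so the pipeline adapts as lighting changes during loading and unloading.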
Comparative experiments between the proposed fusion method and the monocular vision-only localization method are conducted under four typical illumination scenarios: uniform illumination, local strong illumination, uniform low illumination, and low illumination with water film. The experimental results based on 20 samples per illumination scenario (80 valid data sets in total) show that, compared with the monocular vision method, the proposed fusion method reduces the Mean Absolute Error (MAE) of localization accuracy by 33.08%, 30.57%, and 75.91% in the X, Y, and Z dimensions, respectively, with the overall 3D MAE reduced by 61.69%. Meanwhile, the Root Mean Square Error (RMSE) in the X, Y, and Z dimensions is decreased by 33.65%, 32.71%, and 79.88%, respectively, and the overall 3D RMSE is reduced by 64.79%. The expanded sample size verifies the statistical reliability of the proposed method, which exhibits significantly superior robustness to extreme illumination conditions. Full article
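The plane-fitting and 3D circle-center steps of the pipeline can be sketched as follows. This is not the paper's implementation: it fits a least-squares plane to rim points by SVD, projects the points onto that plane, and takes their mean, which recovers the center exactly only for evenly sampled rims; a radius-prior circle fit, as the paper describes, would be used on real sparse data.

```python
# Hedged sketch (not the paper's code): least-squares plane through a sparse
# LiDAR ROI via SVD, then a 3D circle center on that plane.
import numpy as np

def fit_plane(points):
    """Best-fit plane through 3D points: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]  # right singular vector of smallest singular value

def circle_center_3d(rim_points):
    """Project rim points onto their best-fit plane and average them;
    for evenly sampled rim points this recovers the circle center."""
    centroid, normal = fit_plane(rim_points)
    offsets = rim_points - centroid
    projected = rim_points - np.outer(offsets @ normal, normal)
    return projected.mean(axis=0), normal

# Synthetic rim: circle of radius 0.15 m centred at (1.0, -0.5, 2.0) in z = 2
theta = np.linspace(0, 2 * np.pi, 36, endpoint=False)
rim = np.stack([0.15 * np.cos(theta) + 1.0,
                0.15 * np.sin(theta) - 0.5,
                np.full_like(theta, 2.0)], axis=1)
center, normal = circle_center_3d(rim)
```

On the synthetic rim the recovered center matches (1.0, -0.5, 2.0) and the plane normal aligns with the z-axis, which is the quantity the fusion pipeline needs for docking alignment.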

24 pages, 7898 KB  
Article
Unifying Aesthetic Evaluation via Multimodal Annotation and Fine-Grained Sentiment Analysis
by Kai Liu, Hangyu Xiong, Jinyi Zhang and Min Peng
Big Data Cogn. Comput. 2026, 10(1), 37; https://doi.org/10.3390/bdcc10010037 - 22 Jan 2026
Abstract
With the rapid growth of visual content, automated aesthetic evaluation has become increasingly important. However, existing research faces three key challenges: (1) the absence of datasets combining Image Aesthetic Assessment (IAA) scores and Image Aesthetic Captioning (IAC) descriptions; (2) limited integration of quantitative scores and qualitative text, hindering comprehensive modeling; (3) the subjective nature of aesthetics, which complicates consistent fine-grained evaluation. To tackle these issues, we propose a unified multimodal framework. To address the lack of data, we develop the Textual Aesthetic Sentiment Labeling Pipeline (TASLP) for automatic annotation and construct the Reddit Multimodal Sentiment Dataset (RMSD) with paired IAA and IAC labels. To improve annotation integration, we introduce the Aesthetic Category Sentiment Analysis (ACSA) task, which models fine-grained aesthetic attributes across modalities. To handle subjectivity, we design two models—LAGA for IAA and ACSFM for IAC—that leverage ACSA features to enhance consistency and interpretability. Experiments on RMSD and public benchmarks show that our approach alleviates data limitations and delivers competitive performance, highlighting the effectiveness of fine-grained sentiment modeling and multimodal learning in aesthetic evaluation. Full article
(This article belongs to the Special Issue Machine Learning and Image Processing: Applications and Challenges)
