Search Results (4,087)

Search Parameters:
Keywords = synthetic learning

19 pages, 2040 KB  
Communication
A Minimal Synthetic IAA Pathway in Escherichia coli Using Avocado Seed Hydrolysate: A Sustainable and Didactic Platform for Synthetic Biology
by Ana Lilia Hernández-Orihuela, Lucía Carolina Alzati-Ramírez and Agustino Martínez-Antonio
SynBio 2026, 4(2), 8; https://doi.org/10.3390/synbio4020008 - 3 May 2026
Abstract
Indole-3-acetic acid (IAA) is the main natural auxin and a key regulator of plant growth. However, most commercial auxins are synthetically produced from non-renewable resources. Here, we present a minimal synthetic biology platform for microbial IAA production that also serves as a teaching model for genetic circuit design and bioprocess development. We developed codon-optimized versions of the iaaM and iaaH genes, which encode tryptophan 2-monooxygenase and indole-3-acetamide hydrolase, and assembled them into a compact expression cassette in Escherichia coli TOP10. Correct expression of both enzymes was confirmed by SDS-PAGE. The engineered strain was cultivated in a low-cost medium made from avocado seed hydrolysate, an agro-industrial waste, supplemented with tryptophan as a precursor. IAA was quantified using the Salkowski colorimetric assay and further validated by HPLC, reaching approximately 303 µg/mL at 48 h, with the medium costing five times less locally than traditional LB. The supernatants containing biosynthetic IAA induced root formation in 100% of tobacco leaf explants, outperforming the commercial standard at the same concentration and confirming biological activity. Since this workflow follows the Design–Build–Test–Learn (DBTL) cycle, Design (pathway selection and codon optimization), Build (plasmid assembly), Test (protein expression, metabolite quantification, plant bioassays), and Learn (medium and process optimization), it provides a sustainable production method and an accessible educational platform for synthetic biology. Full article
(This article belongs to the Special Issue Advances in the Metabolic Engineering of Microorganisms)
12 pages, 2015 KB  
Communication
Synthetic Data-Driven Exoskeleton Control via Contralateral Gait Fusion for Variable-Speed Walking
by Jingshu Shi, Hongwu Zhu, Yifei Yang, Bowen Liu and Xingjun Wang
Biomimetics 2026, 11(5), 319; https://doi.org/10.3390/biomimetics11050319 - 3 May 2026
Abstract
Data-driven exoskeletons offer the potential for adaptive augmentation of human mobility. Yet their widespread adoption is hindered by labor-intensive biomechanical data collection and manual tuning. Herein, this study presents a highly efficient synthetic data approach to facilitate data-driven pipelines. We leveraged an Adversarial Motion Priors (AMP) agent to learn stylized walking within a massively parallel, physics-based simulation. The resulting high-fidelity data were collected and validated against OpenSim inverse dynamics pipelines. Further, we trained an end-to-end torque prediction algorithm using the collected data. A novel CNN-Transformer architecture was developed to map contralateral swing-phase data to variable-length push-off torque profiles. This enabled real-time, adaptive torque assistance of exoskeletons for variable-speed walking. A custom ankle exoskeleton was used to demonstrate robust sim-to-real transferability. Our system achieved an average root mean square error of approximately 0.081 ± 0.015 newton-meters per kilogram and an average R2 of 0.836 ± 0.050 across speeds ranging from 0.6 to 1.75 m·s−1. The controller significantly reduced user-positive ankle mechanical work by up to 14 ± 6.30%. Finally, our multi-sensor configuration exhibited inherent fault tolerance, ensuring safe operation even under partial sensor failure. By taking a scalable, data-driven approach, this work offers a practical pathway toward deploying autonomous exoskeletons in versatile, real-world environments. Full article
(This article belongs to the Special Issue Advanced Human–Robot Interaction Challenges and Opportunities)
32 pages, 14688 KB  
Article
A Synthetic Lethality-Informed Multi-Omic Framework for Identifying a Five-Gene Diagnostic Signature in Chronic Obstructive Pulmonary Disease
by Yue Yang, Zengrui Wang, Xiaorong Su, Jiefu Tang, Zhi Zhang, Xinli Fan, Haitao Xu, Lihan Wang and Zhuang Luo
Curr. Issues Mol. Biol. 2026, 48(5), 475; https://doi.org/10.3390/cimb48050475 - 2 May 2026
Abstract
Chronic obstructive pulmonary disease (COPD) lacks reliable molecular biomarkers for early diagnosis and risk stratification beyond conventional spirometry-based assessment. Synthetic lethality (SL)-related gene prioritization provides a biologically informed framework for identifying disease-associated candidate biomarkers in COPD. In this study, we integrated public transcriptomic datasets, SL-related gene sets, and machine learning approaches to identify a diagnostic signature for COPD. Using GSE47460 as the training cohort (220 COPD and 108 controls) and GSE57148 as the external validation cohort (98 COPD and 91 controls), we identified 74 SL-related differentially expressed genes enriched in inflammatory signaling and extracellular matrix organization. LASSO regression and random forest analysis yielded a five-gene diagnostic signature consisting of CYP1B1, VEGFA, RET, FGG, and S100A9. The integrated nomogram showed good diagnostic performance in the validation cohort, with an AUC of 0.8311 (95% CI: 0.7839–0.8783), outperforming individual genes and supporting its potential use as an adjunctive molecular tool for COPD diagnosis and risk assessment. Single-cell RNA sequencing, immune infiltration analysis, and preliminary in vitro experiments further supported the biological relevance of the identified genes. Overall, this study supports SL-related gene prioritization combined with multi-omic integration as a useful strategy for COPD biomarker discovery while generating testable hypotheses regarding disease-associated vulnerability pathways. Full article
(This article belongs to the Special Issue Omics Analysis for Personalized Medicine)
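The two-stage selection this abstract outlines (LASSO filtering, then importance-based ranking down to five genes) can be sketched on synthetic data. Everything below is illustrative and not the study's pipeline: the LASSO is a hand-rolled proximal-gradient (ISTA) solver, and ranking by coefficient magnitude stands in for the paper's random-forest importance step.

```python
import numpy as np

def lasso_ista(X, y, lam, steps=3000):
    """LASSO by proximal gradient descent (ISTA): a gradient step on the
    squared loss followed by soft-thresholding, which zeroes weak features."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n              # Lipschitz constant of the gradient
    w = np.zeros(p)
    for _ in range(steps):
        w -= X.T @ (X @ w - y) / (n * L)           # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft-threshold
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))                      # 120 samples x 20 candidate genes
w_true = np.zeros(20)
w_true[:5] = [2.0, -1.5, 1.2, 1.0, -0.8]            # only five genes carry signal
y = X @ w_true + 0.1 * rng.normal(size=120)
w_hat = lasso_ista(X, y, lam=0.05)
signature = np.argsort(np.abs(w_hat))[::-1][:5]     # five strongest surviving genes
```

The soft-threshold step is what produces sparsity: coefficients whose gradient pull is weaker than the penalty are driven exactly to zero, leaving a short candidate list for the second-stage ranker.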
26 pages, 7609 KB  
Article
MMDFRNet: Dynamic Cross-Modal Decoupling and Alignment for Robust Rice Mapping
by Tingyan Fu, Jia Ge and Shufang Tian
Remote Sens. 2026, 18(9), 1413; https://doi.org/10.3390/rs18091413 - 2 May 2026
Abstract
Accurate rice mapping is critical for grain yield estimation and food security, yet traditional methods often struggle with asynchronous data quality and the inherent statistical gap between SAR and optical signals. To bridge this gap, we propose MMDFRNet, a novel multi-modal deep learning framework that synergistically integrates Sentinel-1 SAR and Sentinel-2 optical imagery. Unlike conventional static fusion approaches, MMDFRNet features a dual-stream modality-specific encoder architecture designed to decouple structural backscattering signals from spectral reflectance. Central to this framework is the multi-modal feature fusion (MMF) module, which employs an adaptive attention mechanism to dynamically align and recalibrate features based on their reliability, effectively mitigating noise from compromised modalities. Additionally, a multi-scale feature fusion (MSF) module is incorporated to coordinate hierarchical semantic information, enhancing boundary delineation in fragmented landscapes. Extensive experiments conducted across multiple study areas in China demonstrate the superiority of MMDFRNet. The model achieves a Precision of 0.9234, an IoU of 0.8612, and an F1-score of 0.9252. Notably, it consistently outperforms state-of-the-art benchmarks (e.g., UNetFormer, STMA, and CCRNet) by margins of up to 11.72% (Precision) and 7.39% (IoU) compared to classic baselines. Furthermore, rigorous ablation studies and degradation analyses confirm the model’s robustness, verifying its ability to transform the degradation paradox into a performance booster through pixel-wise adaptive alignment. Consequently, MMDFRNet offers a promising solution for precise rice area statistics and long-term monitoring in complex agricultural landscapes. Full article
23 pages, 3278 KB  
Article
Biologically Inspired Medical Multi-Modal Dataset Distillation via Contrast-Aware Alignment and Memory Compression
by Taoli Du, Ziming Wang, Yue Wang, Ming Ma and Wenhui Li
Biomimetics 2026, 11(5), 314; https://doi.org/10.3390/biomimetics11050314 - 2 May 2026
Abstract
Multi-modal Magnetic Resonance Imaging (MRI) provides complementary information for clinical diagnosis, yet its large-scale storage, privacy sensitivity, and annotation cost pose significant challenges. Inspired by biological vision systems, which integrate multi-sensory inputs and compress experiences into compact memory representations, we propose a bio-inspired framework termed Contrast-Guided Multi-modal Dataset Distillation (CGMDD). In biological perception, different sensory channels observe the same environment from complementary perspectives, while hierarchical neural processing ensures perceptual consistency across modalities. Meanwhile, memory systems such as the associated medial temporal lobe structures consolidate redundant experiences into efficient representations for long-term storage. Motivated by these principles, CGMDD treats multi-modal MRI as multi-view perceptual signals and introduces a hierarchical cross-modal contrastive learning mechanism that enforces perceptual alignment across modalities, analogous to multi-level processing in the visual cortex. Furthermore, we design a dynamic dataset distillation strategy that mimics memory consolidation by compressing large-scale data into compact, informative synthetic representations through gradient-based optimization. The proposed framework jointly optimizes perceptual alignment and memory compression in an end-to-end manner, achieving a biologically plausible integration of perception and learning. Experimental results on two MRI datasets demonstrate that CGMDD can compress the original dataset to 5% of its size while maintaining competitive performance, even with only 30% of the labels. These findings highlight the effectiveness of bio-inspired mechanisms in building efficient, robust, and privacy-preserving computer vision systems. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Bio-Inspired Computer Vision System)
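The cross-modal alignment idea described here, pulling embeddings of the same scan in two modalities together while pushing mismatched pairs apart, is commonly realised with an InfoNCE-style contrastive loss. The sketch below is a generic single-level version in plain NumPy; the paper's hierarchical mechanism and encoders are not reproduced, and all inputs are random placeholders.

```python
import numpy as np

def info_nce(z_a, z_b, tau=0.1):
    """InfoNCE: row i of z_a should match row i of z_b (the same scan in
    another MRI modality); the other rows of z_b act as negatives."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / tau                            # cosine similarity / temperature
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_p)))                # cross-entropy on the diagonal

rng = np.random.default_rng(1)
anchor = rng.normal(size=(8, 16))                    # embeddings from modality A
aligned = anchor + 0.05 * rng.normal(size=(8, 16))   # matching modality-B embeddings
misaligned = np.roll(aligned, 1, axis=0)             # deliberately mismatched pairs
```

Minimising this loss across encoder levels is what enforces the "perceptual consistency" the abstract draws from biological vision: aligned pairs score a much lower loss than mismatched ones.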
17 pages, 1746 KB  
Article
A Hybrid Recommendation Approach for Adaptive Worksheet Generation Using Pedagogically Structured Learning Objects
by Iraklis Katsaris, Sakellaris Sfakiotakis, Ilias Logothetis and Nikolas Vidakis
Information 2026, 17(5), 437; https://doi.org/10.3390/info17050437 - 1 May 2026
Abstract
Adaptive recommendation mechanisms are widely used to personalise digital learning environments; however, many existing approaches prioritise algorithmic optimisation while providing limited insight into how recommendation behaviour aligns with pedagogically structured instructional artefacts, such as worksheets. To address this gap, this paper proposes a hybrid recommendation approach for adaptive worksheet generation that integrates content-based and collaborative filtering with explicit pedagogical constraints derived from Bloom’s Revised Taxonomy. The system ranks and selects learning and evaluation objects across cognitive levels by combining learner profiles, behavioural signals, and similarity-based information within a unified scoring framework. A simulation-based evaluation was conducted to examine the internal behaviour, stability, and instructional alignment of the recommendation engine under controlled conditions, using Bloom-aligned worksheets and synthetic learner profiles. The analysis focuses on expected–actual alignment and adaptive variation across cognitive levels rather than learning outcomes. Results indicate strong alignment with the intended instructional structure at lower cognitive levels, while bounded and interpretable adaptive variation emerges at higher levels. Evaluation object recommendations showed high agreement with the instructional design, exceeding 95% across simulated conditions. Overall, the study demonstrates how hybrid recommendation mechanisms can support adaptive content selection in pedagogically structured learning scenarios, offering a transparent and robust foundation for information-driven educational systems. Full article
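A unified scoring rule of the kind this abstract describes, content-based similarity plus collaborative evidence, gated by a pedagogical (Bloom-level) constraint, can be sketched as below. The function name, the weighting `alpha`, and the data are illustrative assumptions, not the paper's engine.

```python
import numpy as np

def hybrid_scores(item_feats, profile, ratings, learner_idx, item_levels,
                  target_level, alpha=0.6):
    """Unified score = alpha * content-based similarity to the learner profile
    + (1 - alpha) * collaborative score (similarity-weighted peer ratings),
    restricted to learning objects at the required Bloom level."""
    # Content-based part: cosine similarity of item features to the profile.
    sim = item_feats @ profile / (
        np.linalg.norm(item_feats, axis=1) * np.linalg.norm(profile) + 1e-9)
    # Collaborative part: weight other learners by rating-row similarity.
    r = ratings[learner_idx]
    user_sim = ratings @ r / (
        np.linalg.norm(ratings, axis=1) * np.linalg.norm(r) + 1e-9)
    collab = user_sim @ ratings / (np.abs(user_sim).sum() + 1e-9)
    score = alpha * sim + (1 - alpha) * collab
    score[item_levels != target_level] = -np.inf   # hard pedagogical constraint
    return score

rng = np.random.default_rng(2)
feats = rng.random((10, 4))                # 10 learning objects x 4 features
prof = rng.random(4)                       # learner profile vector
ratings = rng.random((5, 10))              # 5 learners x 10 objects
levels = rng.integers(1, 4, size=10)       # Bloom level tag per object
best = int(np.argmax(hybrid_scores(feats, prof, ratings, 0, levels, 2)))
```

Masking with `-inf` rather than down-weighting makes the Bloom constraint explicit: an object at the wrong cognitive level can never be recommended, no matter how similar it is.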
16 pages, 3093 KB  
Article
Integrating Risk Factors and Symptoms for Urinary Tract Infection Diagnosis Using an Explainable AI Approach in Low-Resource Regions
by Kingsley Attai, Daniel Asuquo, Kingsley Akputu, Okure Obot, Cornelia Thomas, Faith-Valentine Uzoka, Ekerette Attai, Christie Akwaowo and Faith-Michael Uzoka
Information 2026, 17(5), 435; https://doi.org/10.3390/info17050435 - 1 May 2026
Abstract
Urinary Tract Infections (UTIs) represent one of the most prevalent bacterial infections globally, posing significant health burdens, especially in low- and middle-income countries (LMICs), due to delayed diagnoses, limited access to laboratory services, and rising antimicrobial resistance. This study presents a machine learning (ML)-based diagnostic support framework for early UTI detection, leveraging structured clinical data and explainable artificial intelligence (XAI) techniques to enhance interpretability and trust among healthcare providers. A patient dataset containing 4865 records was used to train and test Extreme Gradient Boosting (XGBoost), Decision Tree (DT), and Random Forest (RF) classifiers, while class imbalance was addressed using the Synthetic Minority Over-sampling Technique (SMOTE). The performance of the models was evaluated through accuracy, precision, recall, F1-score, Log Loss, and AUC-ROC, and Random Forest showed the best results (accuracy: 86.43%, F1-score: 86.71%, AUC-ROC: 0.8695). To ensure that such models can be adopted by stakeholders in the health sector, Local Interpretable Model-agnostic Explanations (LIME) were integrated, which identified painful urination, urinary frequency, and suprapubic pain as primary predictors in the model. This study shows that interpretable ML models can help predict UTIs in resource-limited regions, offering a practical way to improve the management of infections in these settings. Full article
(This article belongs to the Section Artificial Intelligence)
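SMOTE, named in this abstract, balances classes by interpolating new minority samples between nearby minority points. A deliberately simplified NumPy version is sketched below; it is not the imbalanced-learn implementation, and the two Gaussian clouds are placeholder data, not patient records.

```python
import numpy as np

def smote_like(X_min, n_new, k=3, rng=None):
    """Simplified SMOTE: each synthetic sample lies on the segment between a
    random minority point and one of its k nearest minority neighbours."""
    rng = rng or np.random.default_rng(0)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        j = rng.choice(np.argsort(d)[1:k + 1])       # a near neighbour, not the point itself
        out.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return np.array(out)

rng = np.random.default_rng(3)
X_majority = rng.normal(0.0, 1.0, (90, 5))           # e.g. "no UTI" records (toy)
X_minority = rng.normal(2.0, 1.0, (10, 5))           # e.g. "UTI" records (toy)
X_synth = smote_like(X_minority, 80, rng=rng)        # rebalance 10 -> 90
```

Because every synthetic point is a convex combination of two minority samples, the oversampled class stays inside the minority region instead of duplicating records verbatim; the balanced set would then feed the XGBoost/DT/RF classifiers the abstract compares.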
22 pages, 14961 KB  
Article
From Single-Look to Multi-Temporal SAR Despeckling: A Latent-Space Guided Transfer Learning Approach
by Baojing Pan, Ze Yu, Xianxun Yao, Zhiqiang Tian and Wei Ren
Remote Sens. 2026, 18(9), 1402; https://doi.org/10.3390/rs18091402 - 1 May 2026
Abstract
Synthetic Aperture Radar (SAR) images are affected by speckle noise, which limits their application in fine object interpretation and quantitative analysis. Recent deep learning-based single-image SAR despeckling methods have made significant progress in spatial structure modeling but struggle to exploit temporal redundancy in multi-temporal data. Existing multi-temporal despeckling methods usually rely on complex spatiotemporal network structures, which are prone to overfitting or excessive smoothing of details when training samples are limited. To address these challenges, this paper proposes a latent-space-guided multi-temporal SAR despeckling method from the perspective of transfer learning and representation alignment, achieving effective knowledge transfer from single-image SAR despeckling to multi-temporal despeckling tasks. The method treats the single-image SAR despeckling task as a knowledge source domain, using stable latent space representations learned from the pre-trained single-image despeckling model as prior constraints. A latent space regularization mechanism is introduced during the training of the multi-temporal despeckling model, thereby establishing an explicit representation bridge between the 2D spatial model and the 3D spatiotemporal model. With this strategy, the multi-temporal model inherits the structural perception capability of the single-image model under limited training samples, improving speckle suppression while effectively maintaining image detail and structural consistency. Additionally, a pure convolutional network architecture is employed to support variable-length multi-temporal sequence input, enhancing the method’s adaptability under different temporal sampling conditions. Full article
26 pages, 1500 KB  
Article
Cost-Aware Multi-modal Multi-Fidelity Gaussian Process Fusion for Lithium-Ion Battery Pack Crash Damage Prediction
by Sheng Jiang, Jun Lu, Fanghua Bai, Xin Yang, Liang Zhou and Wei Hu
Mathematics 2026, 14(9), 1539; https://doi.org/10.3390/math14091539 - 1 May 2026
Abstract
With the rapid development of new energy vehicles, fast and reliable prediction of power battery collision damage has become increasingly important. Traditional finite-element analysis is computationally expensive and difficult to deploy for rapid prediction under varying conditions. Although learning-based methods are faster, they usually rely on single-fidelity data: high-fidelity data is accurate but scarce and costly, while low-fidelity data is abundant but less reliable. Existing multi-fidelity methods alleviate this issue, yet often suffer from imbalanced sample allocation and weak cross-fidelity modeling. Moreover, current adaptive sampling strategies cannot dynamically determine the appropriate fidelity for different regions of the design space. To address these challenges, we propose HNGP-LCA, a multi-fidelity active learning framework for battery pack collision damage prediction. Our method consists of two components: (1) an Ensemble Nested Gaussian Process module that integrates single-layer and double-layer nested Gaussian process regression to better capture high–low fidelity correlations; and (2) a Location Information Cost-aware Active Learning strategy that leverages positional information to reconstruct expected improvement under different fidelities, enabling dynamic fidelity selection during sampling. Experiments on multiple synthetic benchmarks and a real battery pack engineering case demonstrate that HNGP-LCA achieves a better trade-off among accuracy, efficiency, and cost than strong baselines such as NARCO and MFBO. In the engineering case, it improves prediction accuracy by 0.6% over NARCO and 1.29% over MFBO, while reducing dependence on expensive high-fidelity data. These results show that HNGP-LCA provides an effective and practical solution for battery collision damage prediction. Full article
(This article belongs to the Special Issue Networks in Complex Systems: Modeling, Analysis, and Control)
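The high/low-fidelity fusion this abstract describes can be illustrated with the classic discrepancy formulation: fit a Gaussian process to abundant cheap data, then fit a second GP to the residual observed at the few expensive points. The sketch below is a minimal 1-D NumPy version with invented stand-in simulators; it is not the paper's HNGP-LCA model and omits its active-learning loop.

```python
import numpy as np

def rbf(A, B, ls):
    """RBF kernel matrix between two sets of 1-D design points."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_mean(X, y, Xs, ls, noise=1e-6):
    """Posterior mean of a zero-mean GP regression with an RBF kernel."""
    K = rbf(X, X, ls) + noise * np.eye(len(X))
    return rbf(Xs, X, ls) @ np.linalg.solve(K, y)

# Toy stand-ins for the two simulators (invented, not the paper's crash models):
hi = lambda x: x * np.sin(8 * x)              # expensive, accurate
lo = lambda x: 0.8 * hi(x) + 0.3 * (x - 0.5)  # cheap, systematically biased

X_lo, X_hi = np.linspace(0, 1, 40), np.linspace(0, 1, 6)
Xs = np.linspace(0, 1, 200)

# Fuse fidelities: model the abundant cheap runs, then let a second GP on the
# discrepancy at the few expensive points correct the systematic bias.
m_lo = gp_mean(X_lo, lo(X_lo), Xs, ls=0.1)
delta = hi(X_hi) - gp_mean(X_lo, lo(X_lo), X_hi, ls=0.1)
pred = m_lo + gp_mean(X_hi, delta, Xs, ls=0.25)

err_mf = np.max(np.abs(pred - hi(Xs)))                                   # fused
err_single = np.max(np.abs(gp_mean(X_hi, hi(X_hi), Xs, ls=0.25) - hi(Xs)))
```

The payoff is the one the abstract claims for multi-fidelity methods generally: six expensive evaluations plus cheap runs beat six expensive evaluations alone, because the discrepancy is much smoother than the target function itself.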
38 pages, 888 KB  
Article
Data-Centric AI Manifesto: How Data Quality Drives Modern AI
by Donato Malerba, Antonella Poggi, Mario Alviano, Tommaso Boccali, Maria Teresa Camerlingo, Roberto Maria Delfino, Domenico Diacono, Domenico Elia, Vincenzo Pasquadibisceglie, Mara Sangiovanni, Vincenzo Spinoso and Gioacchino Vino
Electronics 2026, 15(9), 1913; https://doi.org/10.3390/electronics15091913 - 1 May 2026
Abstract
Artificial Intelligence (AI) has traditionally been developed according to a model-centric paradigm, in which progress is driven by increasingly sophisticated learning architectures applied to largely fixed datasets. However, this paradigm exhibits well-known limitations, including sensitivity to label noise, distribution shifts, adversarial perturbations, and limited transparency and reproducibility. These issues indicate that many of the current bottlenecks of AI systems arise from deficiencies in data rather than from model design. In this paper, we adopt and formalize the Data-Centric Artificial Intelligence (DCAI) paradigm, which places data quality, semantic consistency, and representativeness at the core of the AI lifecycle. From this perspective, performance, robustness, interpretability, and regulatory compliance are primarily achieved through systematic data engineering, including data curation, enrichment, validation, and continuous monitoring, rather than through repeated model re-engineering. The contributions of this work are threefold. First, a conceptual framework is provided to clarify the epistemic and methodological foundations of DCAI and distinguish it from traditional model-centric approaches. Second, a data-centric lifecycle is presented, covering training data development, inference data design, and data maintenance and integrating techniques such as semantic data representation, active learning, synthetic data generation, and drift-aware quality control. Third, the role of DCAI in the context of Generative AI is analyzed, showing how data-centric practices are essential to ensure robustness, accountability, and responsible deployment of large-scale generative models. Overall, this work positions DCAI as a coherent methodological and technological framework for the development of trustworthy, resilient, and sustainable AI systems, making a research contribution and providing a reference model for industrial and regulatory contexts. 
Full article
15 pages, 739 KB  
Technical Note
Large Language Models for Clinical Narrative Processing: Methods, Applications, and Challenges
by Achilleas Livieratos, Junjing Lin, Paraskevi Chasani, Mina Gaga, Fotios S. Fousekis, Charalambos Gogos, Karolina Akinosoglou, Konstantinos H. Katsanos and Margaret Gamalo
Methods Protoc. 2026, 9(3), 69; https://doi.org/10.3390/mps9030069 - 1 May 2026
Abstract
Large language models (LLMs) have rapidly advanced natural language processing and are increasingly used to analyze clinical narratives. Their ability to extract information, summarize records, and support clinical workflows makes them potential tools for enhancing documentation efficiency and the secondary application in the analysis of electronic health record (EHR) data. The aim of this work is to synthesize recent evidence on methodological approaches and applications of LLMs for clinical narrative processing, and to assess their performance, benefits, limitations, and implications for clinical practice. Across 2022–2026 studies, LLMs demonstrated strong performance in information extraction, summarization, triage prediction, section classification, and synthetic text generation, often surpassing traditional machine-learning models. Overall, LLMs improved the conversion of unstructured notes into actionable clinical insights, reduced documentation burden, and supported decision-making tasks. Key challenges included hallucinations, variable reproducibility, sensitivity to prompting, domain adaptation gaps, and limited transparency. Our findings indicate that LLMs show substantial promise for transforming clinical narrative processing, but safe adoption requires rigorous evaluation and continuous model auditing. This work provides a structured, non-systematic synthesis of representative studies and is intended as a high-level overview of emerging applications rather than a comprehensive systematic review. Full article
(This article belongs to the Section Public Health Research)
32 pages, 1172 KB  
Article
A Simulation-Based Integrated Decision-Support Framework for Auditable Green Logistics
by Gábor Nagy, Akylbek Umetaliev and Szabolcs Szentesi
Logistics 2026, 10(5), 98; https://doi.org/10.3390/logistics10050098 - 1 May 2026
Abstract
Background: Green logistics requires decision-support approaches that jointly address cost efficiency, emissions reduction, service reliability, and reporting transparency under dynamic operating conditions. Existing studies often treat optimization, predictive updating, stakeholder coordination, and emissions traceability separately, limiting integration. Methods: This study develops a simulation-based integrated decision-support framework that combines multi-objective mixed-integer linear programming (MILP), machine learning-based travel-time prediction in a rolling-horizon setting, cooperative allocation using a Shapley value mechanism, and ISO 14083:2023-aligned emissions accounting. A permissioned blockchain layer is included as a post-decision governance mechanism to support traceability. The framework is evaluated using industry-calibrated synthetic scenarios over a 30-day planning horizon with 50 independent simulation runs. Results: Under the tested scenarios, the integrated configuration reduced average CO2 emissions per route by 27.6% (±2.4%), improved the cost index by 17.3% relative to the baseline, and increased on-time delivery to 96.8%. Robustness analyses showed average key performance indicator (KPI) deviations below 5%. Component-level analysis suggests that the main operational gains arise from the interaction between predictive updating and prescriptive optimization, while the blockchain layer mainly improves auditability. Conclusions: The framework improves environmental and operational performance under the tested simulation scenarios, although real-world validation remains necessary before deployment-level conclusions can be drawn. Full article
(This article belongs to the Section Sustainable Supply Chains and Logistics)
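The Shapley-value mechanism named in the Methods can be computed exactly when the coalition set is small. The sketch below uses a hypothetical three-carrier savings game; the coalition values are invented for illustration and are not from the paper's scenarios.

```python
from itertools import combinations
from math import factorial

def shapley(players, value):
    """Exact Shapley values: each player's marginal contribution averaged
    over all coalition orders, via the standard combinatorial weights."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for r in range(n):
            for coal in combinations(others, r):
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += w * (value(frozenset(coal) | {p}) - value(frozenset(coal)))
        phi[p] = total
    return phi

# Hypothetical cooperative savings game for three carriers pooling routes:
v = {frozenset(): 0, frozenset("A"): 10, frozenset("B"): 20, frozenset("C"): 30,
     frozenset("AB"): 40, frozenset("AC"): 50, frozenset("BC"): 60,
     frozenset("ABC"): 90}
phi = shapley("ABC", lambda s: v[s])   # e.g. phi sums to v(ABC) = 90
```

The allocation is "efficient" by construction (the shares sum to the grand-coalition savings), which is precisely why it suits the cooperative-allocation role the framework gives it; exact enumeration is exponential in the number of players, so larger fleets would need sampling approximations.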
9 pages, 1210 KB  
Data Descriptor
Preferred Colleague Dataset: A Human-Annotated Dataset of Perceived Colleague Preference
by Deepu Krishnareddy, Bakir Hadžić, Hamid Gazerpour, Michael Danner, Zhuoqi Zeng and Matthias Rätsch
Data 2026, 11(5), 100; https://doi.org/10.3390/data11050100 - 1 May 2026
Abstract
Recruitment is a time-consuming process, and AI systems are increasingly being used to support the decision-making process. However, machine learning models used in such systems can inherit bias if the underlying training data reflects biased human preferences. It is essential to analyze and quantify these biases in order to develop fairer AI systems. To address this issue, we collected human judgments of colleague preference for 2200 face images. The face image set includes images of different ethnicities and genders, as well as both real and synthetically generated faces. The images were annotated by humans from diverse backgrounds in terms of age, gender, and ethnicity. Annotators were shown series of pairs of face images and asked to select which individual they would prefer as a colleague. We gathered responses from 451 annotators and aggregated the annotations to compute a preference score for each image. This dataset provides a basis for understanding human bias in colleague preference and can support the development of fair and unbiased AI models for use in recruitment settings. Full article
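Aggregating pairwise choices into a per-image preference score, as this data descriptor outlines, can be as simple as a win rate over appearances. The abstract does not specify the dataset's exact aggregation rule (Bradley-Terry fitting is a common alternative), so the sketch below is a generic stand-in with placeholder image ids.

```python
from collections import defaultdict

def preference_scores(pairwise_choices):
    """Aggregate pairwise 'preferred colleague' picks into a per-image score:
    times chosen divided by times shown (a simple stand-in for Bradley-Terry)."""
    wins, shown = defaultdict(int), defaultdict(int)
    for a, b, chosen in pairwise_choices:   # annotator saw (a, b) and picked `chosen`
        shown[a] += 1
        shown[b] += 1
        wins[chosen] += 1
    return {img: wins[img] / shown[img] for img in shown}

# Tiny hypothetical annotation log; image ids are placeholders.
log = [("img1", "img2", "img1"), ("img1", "img3", "img1"),
       ("img2", "img3", "img3"), ("img1", "img2", "img2")]
scores = preference_scores(log)
```

Normalising by appearances rather than raw wins matters when the pairing design is unbalanced: an image shown often should not outrank a rarely shown one on win count alone.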
27 pages, 1898 KB  
Article
Parallel Bilingual Datasets: A Multimodal Deep Learning Framework for Proficiency and Style Classification
by Padmavathi Kesavan, Miranda Lakshmi Travis, Martin Aruldoss and Martin Wynn
Multimodal Technol. Interact. 2026, 10(5), 47; https://doi.org/10.3390/mti10050047 - 30 Apr 2026
Abstract
This study presents a multimodal deep learning framework for automatic proficiency and style classification of parallel bilingual Tamil–Hindi learner data. The proposed system employs a dual-headed neural architecture to simultaneously predict proficiency levels (Basic, Advanced) and stylistic categories (Formal, Literary) using shared feature representations. A curated dataset of bilingual text samples is utilized, along with synthetic speech generated through text-to-speech (TTS) to enable controlled multimodal experimentation. Five deep learning architectures are evaluated under text-only, audio-only, and learnable fusion settings. Experimental findings indicate that text-based models consistently achieve strong performance in both proficiency and style classification tasks. In contrast, the audio-only model demonstrates limited effectiveness, highlighting the constraints of synthetic acoustic features in capturing meaningful linguistic information. The fusion models provide only marginal improvements over text-based approaches, suggesting that textual representations play a dominant role in proficiency and stylistic classification within controlled datasets. These results emphasize the importance of linguistic features over acoustic signals for automated language assessment in low-resource settings. The proposed framework provides a scalable and reproducible approach and offers a foundation for future work incorporating real speech data and more diverse linguistic inputs. Full article
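The dual-headed architecture described above shares one feature representation between two independent classification heads. A toy forward-pass sketch with random (untrained) weights, assuming hypothetical layer sizes; a real implementation would use a deep learning framework and train both heads jointly on a combined loss:

```python
import math
import random

random.seed(0)

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

class DualHeadClassifier:
    """Toy dual-headed model: one shared linear+ReLU feature layer
    feeding two independent softmax heads (proficiency and style).
    Dimensions and weights here are illustrative assumptions."""
    def __init__(self, in_dim, hidden, n_prof=2, n_style=2):
        rnd = lambda r, c: [[random.gauss(0, 0.1) for _ in range(c)]
                            for _ in range(r)]
        self.w_shared = rnd(hidden, in_dim)
        self.w_prof = rnd(n_prof, hidden)
        self.w_style = rnd(n_style, hidden)

    @staticmethod
    def _matvec(w, x):
        return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

    def forward(self, x):
        # Shared representation, then one softmax per task head.
        h = [max(0.0, v) for v in self._matvec(self.w_shared, x)]
        return (softmax(self._matvec(self.w_prof, h)),
                softmax(self._matvec(self.w_style, h)))

model = DualHeadClassifier(in_dim=8, hidden=4)
prof_probs, style_probs = model.forward([0.5] * 8)
```

The shared layer is what lets the two tasks regularize each other; the heads diverge only in their final projections.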
28 pages, 2998 KB  
Article
SHAP-Value-Weighted Case-Based Reasoning Model with Improved Mixup Data Augmentation for Software Effort Estimation
by Jing Li, Han Zhang, Shengxiang Sun, Mingchi Lin, Sishi Liu, Chen Zhu and Kai Li
Information 2026, 17(5), 431; https://doi.org/10.3390/info17050431 - 30 Apr 2026
Abstract
Software effort estimation (SEE) serves as a cornerstone of effective software project management, and case-based reasoning (CBR) stands out as one of the most extensively adopted approaches within this domain. Nevertheless, CBR-based SEE models are still plagued by two critical challenges: conventional case retrieval mechanisms lack the ability to differentiate the relative importance of various features, and data scarcity remains a persistent bottleneck. Both issues significantly compromise the estimation accuracy and interpretability of the models. To address these limitations, we propose a SHAP–Mixup synergistic framework that enhances both feature-aware similarity learning and data distribution modeling. Specifically, we introduce (1) a stability-aware SHAP-weighted similarity metric that integrates both the magnitude and variance of feature contributions to improve retrieval robustness, and (2) a density-aware Mixup augmentation strategy that generates synthetic samples guided by local data manifold structure rather than random interpolation. Experimental results on seven benchmark datasets demonstrate that the proposed method reduces MAE and MSE by up to 20.2% on average compared to baseline CBR models, while consistently improving Pred(0.25). Furthermore, by enhancing model interpretability, the proposed method equips project managers with actionable insights into the key drivers of software effort, thereby facilitating more informed and efficient resource allocation. Building on these findings, this study provides a novel and effective pathway for developing SEE models that are more accurate, robust, and transparent. Full article
(This article belongs to the Special Issue Artificial Intelligence and Decision Support Systems)
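The stability-aware SHAP-weighted similarity metric above combines the magnitude and variance of feature contributions for case retrieval. A minimal sketch under the assumption that each feature's weight is its mean absolute SHAP value discounted by the variance of those magnitudes (the paper's exact formula may differ, and the SHAP values below are made up):

```python
import statistics

def stability_weights(shap_matrix):
    """Per-feature weight = mean(|SHAP|) / (1 + var(|SHAP|)),
    normalized to sum to 1. Large but *stable* contributions
    dominate; this formula is an illustrative assumption."""
    weights = []
    for col in zip(*shap_matrix):
        mags = [abs(v) for v in col]
        weights.append(statistics.mean(mags)
                       / (1.0 + statistics.pvariance(mags)))
    total = sum(weights)
    return [w / total for w in weights]

def weighted_similarity(a, b, w):
    """Inverse weighted Euclidean distance for CBR case retrieval."""
    d = sum(wi * (ai - bi) ** 2 for wi, ai, bi in zip(w, a, b)) ** 0.5
    return 1.0 / (1.0 + d)

# Hypothetical SHAP values: 3 historical projects x 2 features.
# Feature 0 contributes strongly and consistently; feature 1 is noisy.
shap = [[0.9, 0.1], [0.8, -0.5], [1.0, 0.2]]
w = stability_weights(shap)
sim = weighted_similarity([1.0, 2.0], [1.5, 2.0], w)
```

The density-aware Mixup step would then interpolate only between retrieved neighbors that lie on the same local manifold, rather than between arbitrary case pairs.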