Search Results (751)

Search Parameters:
Keywords = distillation learning

29 pages, 3177 KB  
Article
Dual-Distillation Vision-Language Model for Multimodal Emotion Recognition in Conversation with Quantized Edge Deployment
by DeogHwa Kim, Yu il Lee, Da Hyun Yoon, Byeong Jun Kim and Deok-Hwan Kim
Appl. Sci. 2026, 16(6), 3103; https://doi.org/10.3390/app16063103 - 23 Mar 2026
Abstract
Multimodal Emotion Recognition in Conversation (ERC) has attracted attention as a key technology in human–computer interaction, mental healthcare, and intelligent services. However, deploying ERC in real-world settings remains challenging due to reliability gaps across modalities, instability in visual representations, and the high computational cost of large pretrained models. In particular, on resource-constrained edge devices, it is difficult to reduce model size and inference latency while preserving accuracy. To address these challenges, we jointly propose a knowledge-distillation-based multimodal ERC model, called DDVLM, with an edge-optimized Weight-Only Quantization (WOQ) pipeline for efficient edge deployment. DDVLM assigns the textual modality as the teacher and the visual modality as the student, transferring emotion-distribution knowledge to improve non-verbal representations and stabilize multimodal learning. In addition, Exponential Moving Average (EMA)-based self-distillation enhances the consistency and generalization capability of text features. Meanwhile, the proposed WOQ pipeline quantizes linear-layer weights to INT8 while preserving precision-sensitive operations in mixed precision, thereby minimizing accuracy loss and reducing model size, memory usage, and inference latency. Experiments on the MELD dataset demonstrated that the proposed approach achieves state-of-the-art performance while also enabling real-time inference on edge devices such as NVIDIA Jetson. Overall, this work presents a practical ERC framework that jointly considers accuracy and deployability. Full article
(This article belongs to the Special Issue Multimodal Emotion Recognition and Affective Computing)
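The teacher-to-student soft-target transfer and EMA self-distillation the abstract describes can be sketched in a few lines. This is a minimal pure-Python illustration under our own simplifying assumptions, not the authors' DDVLM implementation; all function names here are ours.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened emotion distributions.

    In DDVLM's setup the textual modality plays the teacher and the visual
    modality the student; here both are just lists of class logits.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def ema_update(teacher_params, student_params, decay=0.999):
    """Exponential moving average: teacher <- decay*teacher + (1-decay)*student."""
    return [decay * t + (1 - decay) * s
            for t, s in zip(teacher_params, student_params)]

# Identical logits give zero distillation loss; diverging logits give a positive one.
same = distillation_loss([1.0, 2.0, 0.5], [1.0, 2.0, 0.5])
diff = distillation_loss([1.0, 2.0, 0.5], [2.0, 0.5, 1.0])
```

The EMA step is the generic consistency trick the abstract refers to: a slowly moving copy of the text encoder provides stable targets for self-distillation.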

30 pages, 2355 KB  
Article
SGCAD: A SAR-Guided Confidence-Gated Distillation Framework of Optical and SAR Images for Water-Enhanced Land-Cover Semantic Segmentation
by Junjie Ma, Zhiyi Wang, Yanyi Yuan and Fengming Hu
Remote Sens. 2026, 18(6), 962; https://doi.org/10.3390/rs18060962 - 23 Mar 2026
Abstract
Multimodal fusion of synthetic aperture radar (SAR) and optical imagery is widely used in Earth observation for applications such as land-cover mapping and surface-water mapping (including post-event flood mapping under near-synchronous acquisitions) and land-use inventory. Optical images provide rich spectral and texture cues, whereas SAR offers all-weather structural information that is complementary but heterogeneous. In practice, this heterogeneity often introduces fusion conflicts in multi-class segmentation, causing critical categories such as water bodies to be under-optimized. To address this issue, this paper presents a SAR-guided class-aware knowledge distillation (SGCAD) method for multimodal semantic segmentation. First, a SAR-only HRNet is trained as a water-expert teacher to learn discriminative backscattering and boundary priors for water extraction. Second, a lightweight multimodal student model (LightMCANet) is optimized using a class-aware distillation strategy that transfers teacher knowledge only within high-confidence water regions, thereby suppressing noisy supervision and reducing interference to other classes. Third, a SAR edge guidance module (SEGM) is introduced in the decoder to enhance boundary continuity for slender structures such as water bodies and roads. Overall, SGCAD improves targeted category learning while maintaining stable performance across the remaining classes. Experiments on a self-built dataset from GF-1 optical and LuTan-1 SAR imagery demonstrate higher overall accuracy and more coherent water/road predictions than representative baselines. Future work will extend the proposed distillation scheme to additional categories and broader geographic scenes. Full article
(This article belongs to the Section Remote Sensing Image Processing)
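The class-aware, confidence-gated transfer step described above amounts to restricting the distillation loss to regions where the SAR teacher is confident about the target class. The sketch below is our own simplification over flat per-pixel probabilities, not SGCAD's actual code; both function names are hypothetical.

```python
def gated_distillation_mask(teacher_probs, threshold=0.9):
    """1 where the teacher is confident about the target class (water), else 0.

    SGCAD-style gating transfers teacher knowledge only inside such
    high-confidence regions, so noisy teacher predictions elsewhere do not
    supervise the student or disturb the other land-cover classes.
    """
    return [1 if p >= threshold else 0 for p in teacher_probs]

def masked_distillation_loss(teacher_probs, student_probs, threshold=0.9):
    """Mean squared teacher-student gap, restricted to the confidence mask."""
    mask = gated_distillation_mask(teacher_probs, threshold)
    active = [(t - s) ** 2
              for t, s, m in zip(teacher_probs, student_probs, mask) if m]
    return sum(active) / len(active) if active else 0.0
```

With no confident pixels the loss degenerates to zero, which is exactly the intended behavior: the student falls back on its ordinary segmentation loss there.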
27 pages, 590 KB  
Perspective
Machine Unlearning: A Perspective, Taxonomy, and Benchmark Evaluation
by Cristian Cosentino, Simone Gatto, Pietro Liò and Fabrizio Marozzo
Future Internet 2026, 18(3), 174; https://doi.org/10.3390/fi18030174 - 23 Mar 2026
Abstract
Machine Learning (ML) models trained on large-scale datasets learn useful predictive patterns, but they may also memorize undesired information, leading to risks such as information leakage, bias, copyright violations, and privacy attacks. As these models are increasingly deployed in real-world and regulated settings, the consequences of such memorization become practical and high-stakes, reinforced by data-protection frameworks that grant individuals a Right to be Forgotten (e.g., the GDPR). Simply removing a record from the training dataset does not guarantee the elimination of its influence from the model, while retrain-from-scratch procedures are often prohibitive for modern architectures, including Transformers and Large Language Models (LLMs). In this work, we provide a perspective on Machine Unlearning (MU) in supervised learning settings, with a particular focus on Natural Language Processing (NLP) scenarios, grounded in a PRISMA-driven systematic review. We propose a multi-level taxonomy that organizes MU techniques along practical and conceptual dimensions, including exactness (exact versus approximate), unlearning granularity, guarantees, and application constraints. To complement this perspective, we run an illustrative benchmark evaluation using a standardized unlearning protocol on DistilBERT trained on a public corpus of news headlines for topic classification, contrasting the retraining gold standard with representative design-for-unlearning and approximate post hoc techniques. For completeness, we also report two oracle-assisted upper-bound baselines (distillation and scrubbing) that rely on a clean retrained reference model, and we account for their incremental cost separately. Our analysis jointly considers model utility, probabilistic quality, forgetting and privacy indicators, as well as computational efficiency. 
The results highlight systematic trade-offs between accuracy, computational cost, and removal effectiveness, providing practical guidance for selecting machine unlearning techniques in realistic deployment scenarios. Full article

22 pages, 2186 KB  
Article
ConvDeiT-Tiny: Adding Local Inductive Bias to DeiT-Ti for Enhanced Maize Leaf Disease Classification
by Damaris Waema, Waweru Mwangi and Petronilla Muriithi
Plants 2026, 15(6), 982; https://doi.org/10.3390/plants15060982 - 23 Mar 2026
Abstract
Reliable identification of maize leaf diseases is critical for mitigating crop losses, particularly in regions where farmers have limited access to experts. Although vision transformers (ViTs) have recently demonstrated strong performance in image recognition, their weak inductive bias and limited modeling of local texture patterns make them non-ideal for fine-grained maize leaf disease classification. To address these limitations, we propose ConvDeiT-Tiny, a lightweight hybrid ViT that improves DeiT-Ti by placing depthwise convolutions in parallel with multi-head self-attention modules in the first three transformer blocks. The local and global features captured by the convolution and attention modules are concatenated along the embedding dimension and fused using a multilayer perceptron. This results in richer token representations without significantly increasing model size. Across three datasets, ConvDeiT-Tiny (6.9 M parameters) consistently outperformed DeiT-Ti, DeiT-Ti-Distilled, and DeiT-S (21.7 M parameters) when trained from scratch. With transfer learning, ConvDeiT-Tiny achieved an accuracy of 99.15%, 99.35%, and 98.60% on the CD&S, primary, and Kaggle datasets, respectively, surpassing many previous studies with far fewer parameters. For explainability, we present gradient-weighted transformer attribution visualizations showing the disease lesions driving model predictions. These results indicate that injecting local inductive bias in early transformer blocks is beneficial for accurate maize leaf disease classification. Full article
(This article belongs to the Special Issue AI-Driven Machine Vision Technologies in Plant Science)

26 pages, 621 KB  
Article
Co-Evolutionary Proximal Distilled Evolutionary Reinforcement Learning with Gated Knowledge Transfer
by Ying Zhao, Yi Ding and Yinglong Dai
Mathematics 2026, 14(6), 1078; https://doi.org/10.3390/math14061078 - 23 Mar 2026
Abstract
Evolutionary reinforcement learning (ERL) offers a compelling alternative for continuous control by combining the population-level exploration of evolutionary algorithms with the gradient-based exploitation of reinforcement learning. However, applying conventional genetic operators to deep networks can be highly destructive, often inducing abrupt behavioral shifts that erase previously learned skills. Proximal distilled evolutionary reinforcement learning (PDERL) addresses this issue with phenotype-aware operators, leveraging proximal mutation and distillation crossover to produce safer and more constructive variations. Despite these advances, PDERL and many ERL frameworks still exhibit a fundamental evaluation asymmetry: an evolving actor population is guided by a single, centralized critic for fitness evaluation and action filtering. This single-critic dependence creates a bottleneck and a potential single point of failure, where bias or instability in value estimation can misdirect the evolutionary search. To overcome this limitation, we propose co-evolutionary proximal distilled evolutionary reinforcement learning (Co-PDERL), a heterogeneous dual-population framework that co-evolves both actor and critic populations. Co-PDERL extends phenotype-aware evolution to the value-function landscape via a loss-filtered distillation crossover and a Jacobian-based proximal mutation tailored for critics, and employs a condition-gated synchronization mechanism to enable robust bidirectional knowledge transfer between the evolutionary populations and the reinforcement learning agent. Experiments on MuJoCo continuous control benchmarks show that Co-PDERL outperforms competitive baselines on most tasks, including standard ERL and PDERL, improving both sample efficiency and asymptotic performance by effectively alleviating the single-critic bottleneck. Full article

25 pages, 4865 KB  
Article
Hybrid Attention-Augmented Deep Reinforcement Learning for Intelligent Machining Process Route Planning
by Ruizhe Wang, Minrui Wang, Ziyan Du, Xiaochuan Dong and Yibing Peng
Machines 2026, 14(3), 343; https://doi.org/10.3390/machines14030343 - 18 Mar 2026
Abstract
Machining process route planning (MPRP) is vital for autonomous manufacturing yet remains challenging under complex, multi-dimensional engineering constraints. This paper proposes an attention-augmented deep reinforcement learning (DRL) framework to achieve intelligent process orchestration. First, an Optional Process Attribute Adjacency Graph (OPAAG) is established to formally model the “feature–process–resource–constraint” coupling, enhancing the agent’s perception of manufacturing semantics. The architecture synergistically integrates Graph Attention Networks (GAT) to perceive spatial benchmark dependencies and a Transformer-based encoder to capture sequential resource correlations within variable-length machining chains. Furthermore, a dynamic action masking mechanism is integrated to guarantee a 100% constraint satisfaction rate during both training and inference stages. Experimental evaluations across diverse part geometries demonstrate that the proposed method offers significant advantages in cost optimization, inference efficiency, and topological stability compared to traditional heuristic algorithms and standard DRL models. By effectively distilling the search space and maintaining action feasibility, the framework provides an efficient and robust solution for autonomous process planning in complex industrial scenarios. Full article
(This article belongs to the Section Advanced Manufacturing)
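The dynamic action masking mechanism mentioned above has a standard form: set the logits of constraint-violating actions to negative infinity before the softmax, so their sampling probability is exactly zero. A minimal sketch (our own, not the paper's implementation):

```python
import math

def masked_softmax(logits, feasible):
    """Softmax over action logits with infeasible actions masked out.

    `feasible[i]` is False for actions that would violate a process
    constraint; their probability is forced to exactly zero, which is how a
    dynamic action mask guarantees constraint satisfaction during both
    training rollouts and inference.
    """
    masked = [z if ok else float("-inf") for z, ok in zip(logits, feasible)]
    m = max(z for z, ok in zip(masked, feasible) if ok)
    exps = [math.exp(z - m) if ok else 0.0 for z, ok in zip(masked, feasible)]
    total = sum(exps)
    return [e / total for e in exps]
```

Because masked actions receive zero probability rather than a penalty, the policy never needs to learn the constraints from reward signals alone.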

26 pages, 977 KB  
Article
KE-MLLM: A Knowledge-Enhanced Multi-Sensor Learning Framework for Explainable Fake Review Detection
by Jiaying Chen, Jingyi Liu, Yiwen Liang and Mengjie Zhou
Appl. Sci. 2026, 16(6), 2909; https://doi.org/10.3390/app16062909 - 18 Mar 2026
Abstract
The proliferation of fake reviews on e-commerce and social platforms has severely undermined consumer trust and market integrity, necessitating robust and interpretable real-time detection mechanisms with multi-sensor data fusion capabilities. While traditional machine learning approaches have shown promise in identifying fraudulent reviews, they often lack transparency and fail to leverage the rich contextual knowledge embedded in large-scale datasets. In this paper, we propose KE-MLLM (Knowledge-Enhanced Multimodal Large Language Model), a unified framework that integrates knowledge-enhanced prompting with parameter-efficient fine-tuning for explainable fake review detection. Our approach employs LoRA (Low-Rank Adaptation) to fine-tune lightweight large language models (LLaMA-3-8B) on review text, while incorporating multimodal behavioral sensor signals including temporal patterns, user metadata, and social network characteristics for comprehensive anomaly sensing. To address the critical need for interpretability in fraud detection systems, we implement a Chain-of-Thought (CoT) reasoning module that generates human-understandable explanations for classification decisions, highlighting linguistic anomalies, sentiment inconsistencies, and behavioral red flags. We enhance the model’s discriminative capability through a knowledge distillation strategy that transfers domain-specific expertise from larger teacher models while maintaining computational efficiency suitable for edge sensing devices. Extensive experiments on two benchmark datasets—YelpChi and Amazon Reviews from the DGL Fraud Dataset—show that KE-MLLM achieves strong performance, reaching an F1-score of 94.3% and an AUC-ROC of 96.7% on YelpChi and outperforming the strongest baseline in our comparison by 5.8 and 4.2 percentage points, respectively. 
Furthermore, human evaluation indicates that the generated explanations achieve 89.5% consistency with expert annotations, suggesting that the framework can improve the interpretability and practical usefulness of automated fraud detection systems. The proposed framework provides a useful step toward more accurate and interpretable fake review detection and offers a practical reference for building more transparent and accountable AI systems in high-stakes applications. Full article
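The LoRA fine-tuning step the abstract relies on replaces a full weight update with a low-rank one: the frozen weight W is augmented by (alpha/r)·B·A, where only the small matrices A and B are trained. A plain-list sketch of the forward pass (illustrative only; real LoRA implementations such as PEFT operate on tensors):

```python
def lora_forward(x, W, A, B, alpha=16, r=2):
    """y = x . (W + (alpha/r) . B . A): frozen weight plus a low-rank update.

    W is d_out x d_in, A is r x d_in, B is d_out x r, so only
    r * (d_in + d_out) numbers are trained -- the parameter-efficient
    fine-tuning the abstract refers to. Pure-Python linear algebra.
    """
    d_out, d_in = len(W), len(W[0])
    scale = alpha / r
    y = []
    for i in range(d_out):
        # Frozen path: x . W[i]
        base = sum(W[i][j] * x[j] for j in range(d_in))
        # Low-rank path: x . (B[i] . A)
        low = sum(B[i][k] * sum(A[k][j] * x[j] for j in range(d_in))
                  for k in range(r))
        y.append(base + scale * low)
    return y
```

Initializing B to zeros makes the adapted layer start out identical to the frozen one, which is the usual LoRA initialization.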

22 pages, 7355 KB  
Article
IAE-Net: Incremental Learning-Based Attention-Enhanced DenseNet for Robust Facial Emotion Recognition
by Haseeb Ali Khan and Jong-Ha Lee
Mathematics 2026, 14(6), 1023; https://doi.org/10.3390/math14061023 - 18 Mar 2026
Abstract
Facial emotion recognition (FER) is an important component of human–computer interaction and healthcare-oriented affective computing. However, reliable deployment remains difficult in unconstrained settings due to appearance and geometric variability (e.g., pose, illumination, and occlusion), demographic imbalance, and dataset bias. In practice, two additional constraints frequently limit real-world FER systems: the computational overhead of heavy architectures and limited adaptability when data evolve over time, where sequential updates can cause catastrophic forgetting. To address these challenges, we propose the Incremental Attention-Enhanced Network (IAE-Net), a compact single-branch framework built on a DenseNet121 backbone and a cascaded refinement pipeline. The model incorporates Channel Attention (CA) to emphasize expression-relevant feature channels and suppress less informative responses, followed by a deformable attention module (DA) that reduces feature misalignment caused by non-rigid facial motion and pose shifts, thereby improving robustness under geometric variability. For continual deployment, IAE-Net supports class-incremental updates via weight transfer, exemplar replay, and knowledge distillation to improve retention during sequential learning. We evaluate IAE-Net on four widely used benchmarks, FER2013, FERPlus, KDEF, and AffectNet, covering both controlled and in-the-wild conditions under a unified training protocol. The proposed approach achieves accuracies of 79.15%, 92.03%, 99.48%, and 74.20% on FER2013, FERPlus, KDEF, and AffectNet, respectively, with balanced precision, recall, and F1-score trends. These results indicate that IAE-Net provides an efficient and extensible FER framework with potential utility in dynamic real-world and longitudinal healthcare-oriented applications. Full article
(This article belongs to the Special Issue Recent Advances and Applications of Artificial Neural Networks)
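The Channel Attention block described above follows the familiar squeeze-and-excitation pattern: pool each channel to a scalar, pass the pooled vector through a small two-layer bottleneck, and rescale the channels by the resulting sigmoid gates. The sketch below is our own generic version of that pattern, not IAE-Net's exact module.

```python
import math

def channel_attention(feature_maps, w1, w2):
    """Squeeze-and-excitation-style channel attention (illustrative sketch).

    `feature_maps` is a list of channels, each a flat list of activations;
    w1 (hidden x C) and w2 (C x hidden) are the excitation weights.
    Expression-relevant channels get gates near 1, uninformative ones near 0.
    """
    pooled = [sum(ch) / len(ch) for ch in feature_maps]           # squeeze
    hidden = [max(0.0, sum(w * p for w, p in zip(row, pooled)))   # FC + ReLU
              for row in w1]
    gates = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
             for row in w2]                                       # FC + sigmoid
    return [[g * a for a in ch] for g, ch in zip(gates, feature_maps)]
```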

20 pages, 1948 KB  
Article
Contra-KD: A Lightweight Transformer Model for Malicious URL Detection with Contrastive Representation and Model Distillation
by Zheng You Lim, Ying Han Pang, Edwin Chan Kah Jun, Shih Yin Ooi and Goh Fan Ling
Future Internet 2026, 18(3), 157; https://doi.org/10.3390/fi18030157 - 17 Mar 2026
Abstract
Malicious URLs remain a serious cybersecurity threat, serving as gateways to phishing, malware distribution, and other attacks. Although transformer-based models have demonstrated strong performance in malicious URL detection, their high computational cost and latency make them impractical for deployment in real-time or resource-constrained systems. Lightweight models built on knowledge distillation (KD) are efficient but often not discriminative enough to separate malicious from benign URLs with substantial lexical overlap, particularly on imbalanced datasets. To address these issues, we propose Contra-KD, a lightweight transformer model that combines contrastive learning (CL) with KD. The framework imposes structured embedding matching, allowing the student model to learn more meaningful and generalizable representations. Contra-KD uses a compact six-layer student transformer based on ELECTRA and achieves more than 90% computational fidelity to the teacher at high accuracy. CL improves feature discrimination by clustering semantically similar URLs and separating dissimilar ones, which reduces confusion when malicious and benign URLs share lexical traits or are adversarially obfuscated. On a large publicly available Kaggle dataset of 651,191 URLs in an imbalanced scenario, Contra-KD achieves 99.05% accuracy, 99.96% ROC-AUC, and 98.18% MCC, outperforming both lightweight and full transformer-based counterparts. In summary, Contra-KD offers a compact, computationally efficient transformer architecture with stable detection performance. Full article
(This article belongs to the Section Cybersecurity)
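The contrastive objective the abstract invokes, in its classic pairwise form, pulls same-label embeddings together and pushes different-label embeddings at least a margin apart. The sketch below shows that standard loss only; the paper combines a contrastive objective with distillation, and its exact formulation may differ.

```python
import math

def euclidean(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def contrastive_loss(u, v, same_label, margin=1.0):
    """Pairwise contrastive loss over URL embeddings.

    Similar pairs are penalized by their squared distance; dissimilar pairs
    are penalized only when they sit closer than `margin`, which is what
    keeps lexically similar malicious/benign URLs apart in embedding space.
    """
    d = euclidean(u, v)
    if same_label:
        return d ** 2
    return max(0.0, margin - d) ** 2
```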

24 pages, 2064 KB  
Article
Meta-Label-Corrected Knowledge Distillation for Partial Multi-Label Learning
by Jiwei Shuai, Can Xu, Haiyan Jiang and Bin Hu
Electronics 2026, 15(6), 1233; https://doi.org/10.3390/electronics15061233 - 16 Mar 2026
Abstract
Partial multi-label learning (PML) assigns each instance a candidate label set that contains all relevant labels but may also include irrelevant noisy ones, making reliable disambiguation essential. Although a small number of verified clean labels is often available in practice, existing PML methods rarely exploit such information to explicitly guide candidate-label correction. Meanwhile, directly applying knowledge distillation (KD) to PML is highly vulnerable to noisy supervision during representation learning, which can aggravate error accumulation under overlapping candidate labels. To address these issues, we propose a meta-guided distillation framework for PML that integrates teacher–student learning with nested meta-optimization. Specifically, the teacher is optimized with large-scale noisy data under the guidance of limited clean labels, so that it can learn calibrated probabilistic label semantics and generate corrected soft targets for student training. To make this meta-correction process scalable, a truncated meta-gradient approximation is further adopted to reduce computational overhead. The resulting corrected teacher outputs are then used to drive robust multi-label distillation for the student. Experiments on multiple benchmark multi-label image datasets demonstrate consistent improvements over seven representative PML methods across standard evaluation metrics. These results show that meta-guided calibration effectively reduces semantic ambiguity and mitigates noise-induced error propagation in partial multi-label learning. Full article
(This article belongs to the Special Issue Computer Vision and Machine Learning: Real-World Applications)

22 pages, 4100 KB  
Article
Explainable Machine Learning-Based Urban Waterlogging Prediction Framework
by Yinghua Deng and Xin Lu
Urban Sci. 2026, 10(3), 156; https://doi.org/10.3390/urbansci10030156 - 13 Mar 2026
Abstract
Urban waterlogging has become a critical challenge to urban sustainability under the combined pressures of rapid urbanization and increasingly frequent extreme weather events. However, traditional predictive models struggle to achieve real-time, point-specific early warning effectively, primarily due to the interference of redundant high-dimensional data and the inability to handle severe data imbalance. This study proposes a lightweight and interpretable machine learning framework for real-time waterlogging hotspot prediction, based on a multi-dimensional feature space. Specifically, we implement a Lasso-based mechanism to distill 37 multi-source variables into five core determinants. This process effectively isolates dominant environmental drivers while filtering noise. To further overcome the recall bottleneck, we propose a Synthetic Minority Over-sampling Technique based on Weighted Distance and Cleaning (SMOTE-WDC) algorithm that incorporates weighted feature distances and density-based noise cleaning. Validating the framework on datasets from Shenzhen (2023–2024), we demonstrate that the Gradient Boosting Decision Tree (GBDT) model integrated with this strategy achieves optimal performance using only five features, yielding an F1-score of 0.808 and an Area Under the Precision-Recall Curve (AUC-PR) of 0.895. Notably, a Recall of 0.882 is attained, representing a 4.6% improvement over the baseline. This study contributes a cost-effective, high-sensitivity approach to disaster risk reduction, advancing predictive urban waterlogging management. Full article
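The over-sampling family the SMOTE-WDC algorithm belongs to shares one core step: synthesize a new minority sample by interpolating between a minority point and one of its minority-class neighbors. The sketch below shows only that base interpolation; the paper's weighted-distance and density-based cleaning refinements are omitted, and the function name is ours.

```python
import random

def smote_synthesize(x, neighbor, rng=None):
    """Generate one synthetic minority sample on the segment between a
    minority point `x` and one of its minority-class neighbors.

    The new point is x + gap * (neighbor - x) for a uniform gap in [0, 1),
    so it always lies between the two real samples in feature space.
    """
    rng = rng or random.Random()
    gap = rng.random()
    return [xi + gap * (ni - xi) for xi, ni in zip(x, neighbor)]
```

Repeating this for many minority points rebalances the waterlogged/non-waterlogged classes before the GBDT is trained.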

16 pages, 22406 KB  
Article
Isotropic Reconstruction of Anisotropic vEM Volumes with ViT-Guided Diffusion
by Junchao Qiu, Guojia Wan, Zhengyun Zhou, Minghui Liao, Xiangdong Liu, Xinyuan Li and Bo Du
Electronics 2026, 15(6), 1181; https://doi.org/10.3390/electronics15061181 - 12 Mar 2026
Abstract
Volume electron microscopy (vEM) provides nanometer-scale 3D imaging, yet its axial (z) resolution is often much lower than the in-plane (xy) resolution, yielding anisotropic volumes that hinder segmentation and connectomic reconstruction. We present a two-stage cross-axial super-resolution framework for isotropic reconstruction that combines a conditional diffusion model and domain-specific self-supervised pretraining of a vision transformer (ViT). First, the student–teacher self-distillation paradigm of DINOv3 is adopted to learn representations from large sets of high-resolution xy sections, capturing vEM-specific texture statistics and ultrastructural patterns. Second, a conditional diffusion denoiser is trained with supervised anisotropic degradation simulated by z-downsampling, while a perceptual loss based on frozen ViT feature distances constrains generated slices to match real-section distributions. These constraints recover axial high-frequency details and reduce hallucinated textures and inter-slice drift, improving cross-slice consistency. Experiments on two public vEM datasets show improved fidelity, perceptual quality, and membrane-boundary continuity over interpolation and learning-based baselines. Full article

40 pages, 3992 KB  
Article
Toward Energy-Efficient and Low-Carbon Intrusion Detection in Edge and Cloud Computing Based on GreenShield Cybersecurity Framework
by Abdullah Alshammari
Sensors 2026, 26(6), 1780; https://doi.org/10.3390/s26061780 - 11 Mar 2026
Abstract
The fast growth of edge–cloud computing infrastructures has increased the cybersecurity burden while substantially amplifying the energy use and carbon footprint of intrusion detection systems (IDSs). To overcome this challenge, this paper proposes GreenShield, a low-carbon cybersecurity framework combining lightweight cryptography, energy-efficient deep learning, and carbon-conscious system optimization across distributed edge and cloud setups. GreenShield employs a hierarchical federated learning architecture with integrated knowledge distillation and a carbon-aware scheduling controller that dynamically adjusts security response execution based on threat intensity and renewable energy availability. Extensive experiments on the UNSW-NB15 and CIC-IDS2017 datasets show that GreenShield attains 98.73% detection accuracy and is 67.4% more energy efficient than traditional deep-learning-based IDSs. Furthermore, the proposed system reduces operational carbon emissions by up to 97.6%, equivalent to a reduction of around 2.8 kg CO2-equivalent per hour in a typical edge deployment, without undermining detection performance. These findings suggest that GreenShield is a meaningful step toward viable, scalable, and sustainable cybersecurity that supports carbon-conscious security workflows in future edge–cloud computing architectures. Full article
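The carbon-aware scheduling idea, reduced to its decision rule, can be illustrated as a gate on when a security response runs. This is our own toy simplification of the concept, not GreenShield's actual controller; the thresholds and the function name are assumptions.

```python
def schedule_response(threat_level, renewable_fraction, defer_threshold=0.5):
    """Decide whether a security response runs now or is deferred.

    High-severity threats are handled immediately regardless of the energy
    mix; low-severity responses wait until the share of renewable energy in
    the supply crosses `defer_threshold`, trading latency on benign events
    for a lower carbon footprint.
    """
    if threat_level >= 0.8:          # critical: never defer
        return "run_now"
    if renewable_fraction >= defer_threshold:
        return "run_now"
    return "defer"
```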

39 pages, 2921 KB  
Article
Reasoning-Enhanced Query–Service Matching: A Large Language Model Approach with Adaptive Scoring and Diversity Optimization
by Yue Xiang, Jing Lu, Jinqian Wei and Yaowen Hu
Mathematics 2026, 14(6), 950; https://doi.org/10.3390/math14060950 - 11 Mar 2026
Abstract
Query–service matching in customer service systems faces the critical challenge of accurately aligning user queries expressed in colloquial language with formally defined services while balancing business objectives. Traditional keyword-based and embedding approaches fail to capture complex semantic nuances and cannot provide interpretable explanations. We address this problem with a novel reasoning-enhanced framework that leverages large language models (LLMs) for structured multi-criteria evaluation. Our key innovation is a reasoning-first scoring architecture in which the model generates detailed explanations before numerical scores, reducing score variance by 18%, an effect we characterize via conditional mutual information. We introduce a controlled stochastic perturbation mechanism with theoretically derived optimal parameters that balances diversity and relevance, alongside a knowledge distillation pipeline enabling 960× model compression (480B→0.5B parameters) while retaining 94% of performance. Rigorous theoretical analysis establishes Pareto optimality guarantees for multi-criteria evaluation, information-theoretic entropy reduction bounds, and PAC learning guarantees for distillation. Experimental validation on real-world telecommunications data demonstrates 89% Precision@1 (a 15.3% improvement over baselines), a 23% diversity enhancement, and a 96× latency reduction, with deployment cost decreasing 1200× compared to direct LLM inference. This work bridges the gap between LLM capabilities and production deployment requirements through principled mathematical foundations and practical system design. Full article
(This article belongs to the Special Issue Industrial Improvement with AI in Applied Mathematics)
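The reasoning-first scoring architecture described above can be sketched as a prompt that asks for an explanation before a numeric score, plus a parser that splits the two. The prompt wording and the `<score>` tag format are assumptions for illustration, not the paper's actual protocol.

```python
import re

# Hypothetical sketch of "reasoning-first" scoring: the LLM is prompted to
# emit its rationale before a numeric score, so the score is conditioned on
# the explanation. The template and <score> tag below are assumptions.

PROMPT_TEMPLATE = (
    "Query: {query}\n"
    "Candidate service: {service}\n"
    "First explain, step by step, how well the service matches the query.\n"
    "Then output the match score as <score>0-100</score>."
)

def parse_reasoned_score(llm_output: str) -> tuple[str, float]:
    """Split a reasoning-first response into (rationale, score in [0, 1])."""
    m = re.search(r"<score>\s*(\d+(?:\.\d+)?)\s*</score>", llm_output)
    if m is None:
        raise ValueError("no <score> tag found in model output")
    rationale = llm_output[: m.start()].strip()
    return rationale, float(m.group(1)) / 100.0
```

Keeping the rationale as a first-class output is also what makes the distilled student model's decisions auditable in production.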

24 pages, 1495 KB  
Article
Predicting Bioactive Compounds in Arbutus unedo L. Leaves Using Machine Learning: Influence of Extraction Technique, Solvent Type, and Geographical Location
by Jasmina Lapić, Anica Bebek Markovinović, Nikolina Račić, Lana Vujanić, Marko Kostić, Dušan Rakić, Senka Djaković and Danijela Bursać Kovačević
Foods 2026, 15(6), 993; https://doi.org/10.3390/foods15060993 - 11 Mar 2026
Abstract
This study investigates the effects of extraction technique, solvent type, and geographical origin on the recovery of bioactive compounds from Arbutus unedo L. leaves collected from two Croatian islands (Vis and Mali Lošinj) and extracted using conventional, Soxhlet, and ultrasound-assisted extraction (UAE) with green solvents (distilled water, 70% ethanol, and ethyl acetate). Extracts were purified and characterized by thin-layer chromatography, column chromatography, and FTIR spectroscopy. Total phenols, hydroxycinnamic acids, flavonols, condensed tannins, and antioxidant capacity were quantified spectrophotometrically. Solvent type had the greatest influence, with 70% ethanol yielding the highest levels of bioactives and antioxidant capacity. Geographical origin significantly affected total phenolics and condensed tannins, with leaves from Vis outperforming those from Mali Lošinj. UAE was slightly more efficient than conventional and Soxhlet methods, particularly for thermolabile phenolics. Machine learning algorithms were applied as exploratory tools, using total phenols as a proxy variable to estimate selected bioactive compounds and antioxidant capacity based on extraction parameters. Decision Tree and Gradient Boosting models showed high goodness of fit within the experimental dataset (R2 > 0.91). These results support the potential of green extraction strategies combined with data-driven screening for the valorization of A. unedo leaf extracts, while highlighting the need for further validation prior to industrial application. Full article
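A minimal sketch of how the study's inputs could feed the regression models: one-hot encode the three experimental factors (technique, solvent, island) and append total phenols, the proxy variable. The category lists mirror the study design, but the encoding itself is an illustrative assumption, not the authors' pipeline.

```python
# Hypothetical feature encoding for the Decision Tree / Gradient Boosting
# models described above. Categories follow the study design; the encoding
# scheme is an assumption for illustration.

TECHNIQUES = ["conventional", "soxhlet", "uae"]
SOLVENTS = ["water", "ethanol_70", "ethyl_acetate"]
LOCATIONS = ["vis", "mali_losinj"]

def encode_sample(technique: str, solvent: str, location: str,
                  total_phenols: float) -> list[float]:
    """One-hot encode the extraction factors and append total phenols,
    which the study uses as a proxy to estimate other bioactive
    compounds and antioxidant capacity."""
    row: list[float] = []
    for cats, value in ((TECHNIQUES, technique),
                        (SOLVENTS, solvent),
                        (LOCATIONS, location)):
        if value not in cats:
            raise ValueError(f"unknown category: {value!r}")
        row.extend(1.0 if c == value else 0.0 for c in cats)
    row.append(total_phenols)
    return row
```

Rows built this way could be passed to any regressor; tree ensembles handle such sparse categorical encodings without further scaling.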