Search Results (652)

Search Parameters:
Keywords = integrated distillation

20 pages, 4006 KB  
Article
Deformable Pyramid Sparse Transformer for Semi-Supervised Driver Distraction Detection
by Qiang Zhao, Zhichao Yu, Jiahui Yu, Simon James Fong, Yuchu Lin, Rui Wang and Weiwei Lin
Sensors 2026, 26(3), 803; https://doi.org/10.3390/s26030803 - 25 Jan 2026
Viewed by 41
Abstract
Ensuring sustained driver attention is critical for intelligent transportation safety systems; however, the performance of data-driven driver distraction detection models is often limited by the high cost of large-scale manual annotation. To address this challenge, this paper proposes an adaptive semi-supervised driver distraction detection framework based on teacher–student learning and deformable pyramid feature fusion. The framework leverages a limited amount of labeled data together with abundant unlabeled samples to achieve robust and scalable distraction detection. An adaptive pseudo-label optimization strategy is introduced, incorporating category-aware pseudo-label thresholding, delayed pseudo-label scheduling, and a confidence-weighted pseudo-label loss to dynamically balance pseudo-label quality and training stability. To enhance fine-grained perception of subtle driver behaviors, a Deformable Pyramid Sparse Transformer (DPST) module is integrated into a lightweight YOLOv11 detector, enabling precise multi-scale feature alignment and efficient cross-scale semantic fusion. Furthermore, a teacher-guided feature consistency distillation mechanism is employed to promote semantic alignment between teacher and student models at the feature level, mitigating the adverse effects of noisy pseudo-labels. Extensive experiments conducted on the Roboflow Distracted Driving Dataset demonstrate that the proposed method outperforms representative fully supervised baselines in terms of mAP@0.5 and mAP@0.5:0.95 while maintaining a balanced trade-off between precision and recall. These results indicate that the proposed framework provides an effective and practical solution for real-world driver monitoring systems under limited annotation conditions. Full article
(This article belongs to the Section Vehicular Sensing)
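As a rough illustration of the pseudo-label machinery this abstract describes (category-aware thresholding plus a confidence-weighted loss), the sketch below shows one common way such a term is implemented; the function name, threshold values, and toy tensors are placeholders, not the authors' code.

```python
# Hypothetical sketch of category-aware pseudo-label filtering with a
# confidence-weighted loss, in the spirit of the semi-supervised framework
# described above (not the authors' implementation).
import torch
import torch.nn.functional as F

def pseudo_label_loss(student_logits, teacher_logits, class_thresholds):
    """Keep only pseudo-labels whose teacher confidence exceeds a per-class
    threshold, and weight the remaining loss terms by that confidence."""
    with torch.no_grad():
        probs = F.softmax(teacher_logits, dim=1)        # (N, C)
        conf, pseudo = probs.max(dim=1)                 # (N,), (N,)
        keep = conf >= class_thresholds[pseudo]         # category-aware mask
    if keep.sum() == 0:
        return student_logits.sum() * 0.0               # keep the graph alive
    ce = F.cross_entropy(student_logits[keep], pseudo[keep], reduction="none")
    return (conf[keep] * ce).mean()                     # confidence weighting

# Toy usage: 8 unlabeled samples, 5 behaviour classes.
torch.manual_seed(0)
student = torch.randn(8, 5, requires_grad=True)
teacher = torch.randn(8, 5)
thresholds = torch.full((5,), 0.6)                      # assumed per-class thresholds
loss = pseudo_label_loss(student, teacher, thresholds)
loss.backward()
print(float(loss))
```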
54 pages, 3083 KB  
Review
A Survey on Green Wireless Sensing: Energy-Efficient Sensing via WiFi CSI and Lightweight Learning
by Rod Koo, Xihao Liang, Deepak Mishra and Aruna Seneviratne
Energies 2026, 19(2), 573; https://doi.org/10.3390/en19020573 - 22 Jan 2026
Viewed by 75
Abstract
Conventional sensing expends energy at three stages: powering dedicated sensors, transmitting measurements, and executing computationally intensive inference. Wireless sensing re-purposes WiFi channel state information (CSI) inherent in every packet, eliminating extra sensors and uplink traffic, though reliance on deep neural networks (DNNs), often trained and run on graphics processing units (GPUs), can negate these gains. This review highlights two core energy efficiency levers in CSI-based wireless sensing. First, ambient CSI harvesting cuts power use by an order of magnitude compared to radar and active Internet of Things (IoT) sensors. Second, integrated sensing and communication (ISAC) embeds sensing functionality into existing WiFi links, thereby reducing device count, battery waste, and carbon impact. We review conventional handcrafted and accuracy-first methods to set the stage for surveying green learning strategies and lightweight learning techniques, including compact hybrid neural architectures, pruning, knowledge distillation, quantisation, and semi-supervised training that preserve accuracy while reducing model size and memory footprint. We also discuss hardware co-design from low-power microcontrollers to edge application-specific integrated circuits (ASICs) and WiFi firmware extensions that align computation with platform constraints. Finally, we identify open challenges in domain-robust compression, multi-antenna calibration, energy-proportionate model scaling, and standardised joules per inference metrics. Our aim is a practical battery-friendly wireless sensing stack ready for smart home and 6G era deployments. Full article
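Among the lightweight-learning levers the survey lists, knowledge distillation is the easiest to illustrate. Below is a minimal sketch of the standard temperature-scaled distillation loss, generic rather than CSI-specific; the temperature and weighting values are arbitrary.

```python
# Minimal sketch of the classic temperature-scaled distillation loss that the
# survey lists among lightweight-learning techniques (generic, not CSI-specific).
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend soft-target KL divergence (scaled by T^2) with hard-label CE."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage: 4 CSI windows, 3 activity classes.
s = torch.randn(4, 3, requires_grad=True)
t = torch.randn(4, 3)
y = torch.tensor([0, 2, 1, 0])
print(float(kd_loss(s, t, y)))
```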

15 pages, 2027 KB  
Article
Weight Standardization Fractional Binary Neural Network for Image Recognition in Edge Computing
by Chih-Lung Lin, Zi-Qing Liang, Jui-Han Lin, Chun-Chieh Lee and Kuo-Chin Fan
Electronics 2026, 15(2), 481; https://doi.org/10.3390/electronics15020481 - 22 Jan 2026
Viewed by 36
Abstract
In order to achieve better accuracy, modern models have become increasingly large, leading to an exponential increase in computational load, making it challenging to apply them to edge computing. Binary neural networks (BNNs) are models that quantize the filter weights and activations to 1-bit. These models are highly suitable for small chips like advanced RISC machines (ARMs), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), system-on-chips (SoCs) and other edge computing devices. To design a model that is more friendly to edge computing devices, it is crucial to reduce the floating-point operations (FLOPs). Batch normalization (BN) is an essential tool for binary neural networks; however, when convolution layers are quantized to 1-bit, the floating-point computation cost of BN layers becomes significantly high. This paper aims to reduce the floating-point operations by removing the BN layers from the model and introducing the scaled weight standardization convolution (WS-Conv) method to avoid the significant accuracy drop caused by the absence of BN layers, and to enhance the model performance through a series of optimizations, adaptive gradient clipping (AGC) and knowledge distillation (KD). Specifically, our model maintains a competitive computational cost and accuracy, even without BN layers. Furthermore, by incorporating a series of training methods, the model’s accuracy on CIFAR-100 is 0.6% higher than the baseline model, fractional activation BNN (FracBNN), while the total computational load is only 46% of the baseline model. With unchanged binary operations (BOPs), the FLOPs are reduced to nearly zero, making it more suitable for embedded platforms like FPGAs or other edge computers. Full article
(This article belongs to the Special Issue Advances in Algorithm Optimization and Computational Intelligence)
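The BN-free design hinges on scaled weight standardization. The following is a hedged sketch of a WS-Conv-style layer (per-output-channel weight standardization with a learnable gain and fan-in scaling); the constants and initialization are assumptions, not the paper's implementation.

```python
# Hedged sketch of a scaled weight-standardized convolution (WS-Conv-style):
# filter weights are standardized per output channel before the convolution,
# the usual way BN-free networks keep activations well-scaled. Gain, eps, and
# fan-in scaling details are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledStdConv2d(nn.Conv2d):
    def __init__(self, *args, gamma=1.0, eps=1e-6, **kwargs):
        super().__init__(*args, **kwargs)
        self.gain = nn.Parameter(torch.ones(self.out_channels, 1, 1, 1))
        self.gamma = gamma
        self.eps = eps

    def forward(self, x):
        w = self.weight
        fan_in = w[0].numel()
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        var = w.var(dim=(1, 2, 3), keepdim=True, unbiased=False)
        w = (w - mean) / torch.sqrt(var * fan_in + self.eps)  # standardize + fan-in scale
        w = self.gain * self.gamma * w
        return F.conv2d(x, w, self.bias, self.stride, self.padding,
                        self.dilation, self.groups)

# Toy usage: a BN-free 3x3 conv on a CIFAR-sized input.
conv = ScaledStdConv2d(3, 16, kernel_size=3, padding=1)
print(conv(torch.randn(2, 3, 32, 32)).shape)   # torch.Size([2, 16, 32, 32])
```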

41 pages, 7490 KB  
Review
Research Progress and Application Status of Evaporative Cooling Technology
by Lin Xia, Haogen Li, Suoying He, Zhe Geng, Shuzhen Zhang, Feiyang Long, Zongjun Long, Jisheng Li, Wujin Yuan and Ming Gao
Energies 2026, 19(2), 570; https://doi.org/10.3390/en19020570 - 22 Jan 2026
Viewed by 40
Abstract
This review systematically examines the latest research progress and diverse applications of direct evaporative cooling and indirect evaporative cooling across five core sectors: industrial and energy engineering, the built environment, agriculture and food preservation, transportation and aerospace, and emerging interdisciplinary fields. While existing research often focuses on single application silos, this paper distills two common foundational challenges: climate adaptability and water resource management. Quantitative analysis demonstrates significant performance gains. Hybrid systems in data centers increase annual energy-saving potential by 14% to 41%, while precision root-zone cooling in greenhouses boosts crop yields by 13.22%. Additionally, passive cooling blankets reduce post-harvest losses by up to 45%, and integrated desalination cycles achieve 18.64% lower energy consumption compared to conventional systems. Innovative strategies to overcome humidity bottlenecks include vacuum-assisted membranes, advanced porous materials, and hybrid radiative-evaporative systems. The paper also analyzes sustainable water management through rainwater harvesting, seawater utilization, and atmospheric water capture. Collectively, these advancements provide a comprehensive framework to guide the future development and commercialization of sustainable cooling technologies. Full article
(This article belongs to the Section J: Thermal Management)

18 pages, 1702 KB  
Article
Dynamic Modeling and Calibration of an Industrial Delayed Coking Drum Model for Digital Twin Applications
by Vladimir V. Bukhtoyarov, Ivan S. Nekrasov, Alexey A. Gorodov, Yadviga A. Tynchenko, Oleg A. Kolenchukov and Fedor A. Buryukin
Processes 2026, 14(2), 375; https://doi.org/10.3390/pr14020375 - 21 Jan 2026
Viewed by 91
Abstract
The increasing share of heavy and high-sulfur crude oils in refinery feed slates worldwide highlights the need for models of delayed coking units (DCUs) that are both physically meaningful and computationally efficient. In this study, we develop and calibrate a simplified yet dynamic one-dimensional model of an industrial coke drum intended for integration into digital twin frameworks. The model includes a three-phase representation of the drum contents, a temperature-dependent global kinetic scheme for vacuum residue cracking, and lumped descriptions of heat transfer and phase holdups. Only three physically interpretable parameters—the kinetic scaling factors for distillate and coke formation and an effective wall temperature—were calibrated using routinely measured plant data, namely the overhead vapor and drum head temperatures and the final coke bed height. The calibrated model reproduces the temporal evolution of the top head and overhead temperatures and the final bed height with mean relative errors of a few percent, while capturing the more complex bottom-head temperature dynamics qualitatively. Scenario simulations illustrate how the coking severity (represented here by the effective wall temperature) affects the coke yield, bed growth, and cycle duration. Overall, the results indicate that low-order dynamic models can provide a practical balance between physical fidelity and computational speed, making them suitable as mechanistic cores for digital twins and optimization tools in delayed coking operations. Full article
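To make the modeling idea concrete, here is a purely illustrative lumped-kinetics sketch: residue cracks to distillate and coke with Arrhenius rates, and two kinetic scaling factors are fitted to a synthetic "measured" coke profile. All rate constants, activation energies, temperatures, and data are invented placeholders, not values from the study.

```python
# Illustrative sketch (not the authors' model): a lumped first-order scheme
# residue -> distillate and residue -> coke with Arrhenius rates, where two
# kinetic scaling factors are fitted to a synthetic yield profile.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

R = 8.314          # J/(mol K)
T_WALL = 770.0     # assumed effective wall temperature, K

def rates(T, k_d, k_c):
    """Arrhenius rate constants (1/h) for distillate and coke formation."""
    return (k_d * np.exp(-1.1e5 / (R * T)),
            k_c * np.exp(-1.3e5 / (R * T)))

def simulate(t, s_d, s_c):
    """Residue/distillate/coke mass fractions vs. time for scaling factors s_d, s_c."""
    kd, kc = rates(T_WALL, s_d * 5.0e5, s_c * 5.0e6)
    def ode(_, y):
        res, dist, coke = y
        return [-(kd + kc) * res, kd * res, kc * res]
    sol = solve_ivp(ode, (t[0], t[-1]), [1.0, 0.0, 0.0], t_eval=t)
    return sol.y

t = np.linspace(0.0, 12.0, 25)                 # hours in a coking cycle
coke_meas = simulate(t, 1.0, 1.2)[2] + 0.005 * np.random.default_rng(0).standard_normal(t.size)

# Calibrate the two scaling factors against the synthetic coke profile.
popt, _ = curve_fit(lambda tt, s_d, s_c: simulate(tt, s_d, s_c)[2],
                    t, coke_meas, p0=[0.8, 0.8])
print("fitted scaling factors:", popt)
```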

32 pages, 16166 KB  
Article
A Multimodal Ensemble-Based Framework for Detecting Fake News Using Visual and Textual Features
by Muhammad Abdullah, Hongying Zan, Arifa Javed, Muhammad Sohail, Orken Mamyrbayev, Zhanibek Turysbek, Hassan Eshkiki and Fabio Caraffini
Mathematics 2026, 14(2), 360; https://doi.org/10.3390/math14020360 - 21 Jan 2026
Viewed by 96
Abstract
Detecting fake news is essential in natural language processing to verify news authenticity and prevent misinformation-driven social, political, and economic disruptions targeting specific groups. A major challenge in multimodal fake news detection is effectively integrating textual and visual modalities, as semantic gaps and contextual variations between images and text complicate alignment, interpretation, and the detection of subtle or blatant inconsistencies. To enhance accuracy in fake news detection, this article introduces an ensemble-based framework that integrates textual and visual data using ViLBERT’s two-stream architecture, incorporates VADER sentiment analysis to detect emotional language, and uses Image–Text Contextual Similarity to identify mismatches between visual and textual elements. These features are processed through the Bi-GRU classifier, Transformer-XL, DistilBERT, and XLNet, combined via a stacked ensemble method with soft voting, culminating in a T5 metaclassifier that predicts the outcome for robustness. Results on the Fakeddit and Weibo benchmarking datasets show that our method outperforms state-of-the-art models, achieving up to 96% and 94% accuracy in fake news detection, respectively. This study highlights the necessity for advanced multimodal fake news detection systems to address the increasing complexity of misinformation and offers a promising solution. Full article
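The ensemble construction (soft voting over base classifiers, then a meta-classifier) can be sketched with small scikit-learn stand-ins for the Bi-GRU, Transformer-XL, DistilBERT, XLNet, and T5 components; the snippet below is illustrative only and uses synthetic data.

```python
# Hedged sketch of the soft-voting / stacking idea behind the ensemble above,
# using small scikit-learn classifiers as stand-ins for the transformer models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
    ("lr", LogisticRegression(max_iter=1000)),
]

# Soft voting: average the base models' predicted probabilities.
soft_vote = VotingClassifier(estimators=base, voting="soft").fit(X_tr, y_tr)

# Stacking: a meta-classifier learns from the base models' probabilities
# (the role the T5 metaclassifier plays in the paper).
stack = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression(max_iter=1000),
                           stack_method="predict_proba").fit(X_tr, y_tr)

print("soft voting accuracy:", soft_vote.score(X_te, y_te))
print("stacking accuracy:  ", stack.score(X_te, y_te))
```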

35 pages, 3598 KB  
Article
PlanetScope Imagery and Hybrid AI Framework for Freshwater Lake Phosphorus Monitoring and Water Quality Management
by Ying Deng, Daiwei Pan, Simon X. Yang and Bahram Gharabaghi
Water 2026, 18(2), 261; https://doi.org/10.3390/w18020261 - 19 Jan 2026
Viewed by 190
Abstract
Accurate estimation of Total Phosphorus, referred to as “Phosphorus, Total” (PPUT; µg/L) in the sourced monitoring data, is essential for understanding eutrophication dynamics and guiding water-quality management in inland lakes. However, lake-wide PPUT mapping at high resolution is challenging to achieve using conventional in-situ sampling, and nearshore gradients are often poorly resolved by medium- or low-resolution satellite sensors. This study exploits multi-generation PlanetScope imagery (Dove Classic, Dove-R, and SuperDove; 3–5 m, near-daily revisit) to develop a hybrid AI framework for PPUT retrieval in Lake Simcoe, Ontario, Canada. PlanetScope surface reflectance, short-term meteorological descriptors (3 to 7-day aggregates of air temperature, wind speed, precipitation, and sea-level pressure), and in-situ Secchi depth (SSD) were used to train five ensemble-learning models (HistGradientBoosting, CatBoost, RandomForest, ExtraTrees, and GradientBoosting) across eight feature-group regimes that progressively extend from bands-only, to combinations with spectral indices and day-of-year (DOY), and finally to SSD-inclusive full-feature configurations. The inclusion of SSD led to a strong and systematic performance gain, with mean R2 increasing from about 0.67 (SSD-free) to 0.94 (SSD-aware), confirming that vertically integrated optical clarity is the dominant constraint on PPUT retrieval and cannot be reconstructed from surface reflectance alone. To enable scalable SSD-free monitoring, a knowledge-distillation strategy was implemented in which an SSD-aware teacher transfers its learned representation to a student using only satellite and meteorological inputs. The optimal student model, based on a compact subset of 40 predictors, achieved R2 = 0.83, RMSE = 9.82 µg/L, and MAE = 5.41 µg/L, retaining approximately 88% of the teacher’s explanatory power. Application of the student model to PlanetScope scenes from 2020 to 2025 produces meter-scale PPUT maps; a 26 July 2024 case study shows that >97% of the lake surface remains below 10 µg/L, while rare (<1%) but coherent hotspots above 20 µg/L align with tributary mouths and narrow channels. The results demonstrate that combining commercial high-resolution imagery with physics-informed feature engineering and knowledge transfer enables scalable and operationally relevant monitoring of lake phosphorus dynamics. These high-resolution PPUT maps enable lake managers to identify nearshore nutrient hotspots and tributary plume structures. In doing so, the proposed framework supports targeted field sampling, early warning for eutrophication events, and more robust, lake-wide nutrient budgeting. Full article
(This article belongs to the Section Water Quality and Contamination)
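The teacher-to-student transfer described here (an SSD-aware teacher supervising an SSD-free student) follows a standard regression-distillation pattern, sketched below on synthetic data. Feature names, the blending weight, and the models are assumptions for illustration, not the study's pipeline.

```python
# Toy sketch of the SSD-aware teacher -> SSD-free student transfer: the teacher
# sees Secchi depth, the student only satellite/meteorology features, and the
# student is fitted against a blend of ground truth and teacher predictions.
# Synthetic data; purely illustrative of the distillation setup.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1500
reflectance = rng.normal(size=(n, 6))                     # pseudo PlanetScope bands
meteo = rng.normal(size=(n, 3))                           # pseudo weather aggregates
ssd = 2.0 + reflectance[:, 0] - 0.5 * meteo[:, 1] + rng.normal(scale=0.1, size=n)
pput = 8.0 + 4.0 / np.clip(ssd, 0.5, None) + reflectance[:, 2] + rng.normal(scale=0.5, size=n)

X_student = np.hstack([reflectance, meteo])               # SSD-free inputs
X_teacher = np.hstack([X_student, ssd[:, None]])          # SSD-aware inputs
Xs_tr, Xs_te, Xt_tr, Xt_te, y_tr, y_te = train_test_split(
    X_student, X_teacher, pput, random_state=0)

teacher = GradientBoostingRegressor(random_state=0).fit(Xt_tr, y_tr)
soft_targets = teacher.predict(Xt_tr)                     # teacher "knowledge"

alpha = 0.5                                               # assumed blending weight
student = GradientBoostingRegressor(random_state=0).fit(
    Xs_tr, alpha * soft_targets + (1 - alpha) * y_tr)

print("teacher R^2:", teacher.score(Xt_te, y_te))
print("student R^2:", student.score(Xs_te, y_te))
```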

20 pages, 390 KB  
Systematic Review
Systematic Review of Quantization-Optimized Lightweight Transformer Architectures for Real-Time Fruit Ripeness Detection on Edge Devices
by Donny Maulana and R Kanesaraj Ramasamy
Computers 2026, 15(1), 69; https://doi.org/10.3390/computers15010069 - 19 Jan 2026
Viewed by 321
Abstract
Real-time visual inference on resource-constrained hardware remains a core challenge for edge computing and embedded artificial intelligence systems. Recent deep learning architectures, particularly Vision Transformers (ViTs) and Detection Transformers (DETRs), achieve high detection accuracy but impose substantial computational and memory demands that limit their deployment on low-power edge platforms such as NVIDIA Jetson and Raspberry Pi devices. This paper presents a systematic review of model compression and optimization strategies—specifically quantization, pruning, and knowledge distillation—applied to lightweight object detection architectures for edge deployment. Following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, peer-reviewed studies were analyzed from Scopus, IEEE Xplore, and ScienceDirect to examine the evolution of efficient detectors from convolutional neural networks to transformer-based models. The synthesis highlights a growing focus on real-time transformer variants, including Real-Time DETR (RT-DETR) and low-bit quantized approaches such as Q-DETR, alongside optimized YOLO-based architectures. While quantization enables substantial theoretical acceleration (e.g., up to 16× operation reduction), aggressive low-bit precision introduces accuracy degradation, particularly in transformer attention mechanisms, highlighting a critical efficiency-accuracy tradeoff. The review further shows that Quantization-Aware Training (QAT) consistently outperforms Post-Training Quantization (PTQ) in preserving performance under low-precision constraints. Finally, this review identifies critical open research challenges, emphasizing the efficiency–accuracy tradeoff and the high computational demands imposed by Transformer architectures. Future directions are proposed, including hardware-aware optimization, robustness to imbalanced datasets, and multimodal sensing integration, to ensure reliable real-time inference in practical agricultural edge computing environments. Full article
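The efficiency-accuracy tradeoff the review emphasizes is easy to see with a toy post-training quantization experiment: uniform affine quantize-dequantize of a weight matrix at decreasing bit-widths. QAT differs in that this rounding is simulated during training (with straight-through gradients) so the network can adapt; the snippet below only shows the PTQ side and is not tied to any particular detector or toolchain.

```python
# Minimal illustration of post-training quantization: affine quantize-dequantize
# of a weight tensor at different bit-widths, showing how reconstruction error
# grows as precision drops (generic, not a specific framework's API).
import numpy as np

def fake_quantize(w, bits):
    """Uniform affine quantization followed by dequantization."""
    qmin, qmax = 0, 2 ** bits - 1
    scale = (w.max() - w.min()) / (qmax - qmin)
    zero_point = np.round(-w.min() / scale)
    q = np.clip(np.round(w / scale + zero_point), qmin, qmax)
    return (q - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=(256, 256))     # a dense layer's weights

for bits in (8, 4, 2):
    err = np.mean((w - fake_quantize(w, bits)) ** 2)
    print(f"{bits}-bit quantization, MSE = {err:.2e}")
```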

38 pages, 4734 KB  
Article
Robust Disturbance-Response Feature Modeling and Multi-Perspective Validation of Compensation Capacitor Signals
by Tongdian Wang and Pan Wang
Mathematics 2026, 14(2), 316; https://doi.org/10.3390/math14020316 - 16 Jan 2026
Viewed by 149
Abstract
In high-speed railways, the reliability of jointless track circuits largely hinges on the operational integrity of compensation capacitors. These capacitors are periodically installed along the track to mitigate rail inductive impedance and stabilize signal transmission. The induced voltage response, referred to as the compensation-capacitor signal, serves as a critical diagnostic indicator of circuit health. Yet it is often distorted by electromagnetic interference and structural resonance, posing significant challenges for robust feature extraction. To address this challenge, we propose a Disturbance-Robust Feature Distillation (DRFD) framework that performs multi-perspective modeling and validation of robust features. The framework formulates a unified multi-objective optimization model that jointly considers statistical significance, environmental stability, and structural separability. These objectives are harmonized through an adaptive Bayesian weighting mechanism, enabling automatic identification of disturbance-resistant and discriminative features under complex operating conditions. Experimental evaluations on real-world datasets collected at a 100 kHz sampling rate from roadbed, tunnel, and bridge environments demonstrate that the DRFD framework achieves 96.2% accuracy and 95.4% F1-score, outperforming the best-performing baseline by 4.2–7.8% in accuracy and 6.5% in F1-score. Moreover, the framework achieves the lowest cross-condition relative variance (RV < 0.015), confirming its high robustness against electromagnetic and structural disturbances. The extracted core features—Root Mean Square (RMS), Peak Factor (PF), and Center Frequency (CF)—faithfully capture the intrinsic electromagnetic behaviors of compensation capacitors, thus linking statistical robustness with physical interpretability for enhanced reliability assessment of railway signal systems. Full article
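The three core features named in the abstract have standard signal-processing definitions, sketched below for a synthetic 100 kHz signal; the test waveform is made up and only the formulas are meant to be informative.

```python
# Sketch of the three core features named above: RMS, Peak Factor (crest factor),
# and spectral Center Frequency, computed for a synthetic signal sampled at 100 kHz.
import numpy as np

fs = 100_000                                   # sampling rate, Hz
t = np.arange(0, 0.02, 1 / fs)                 # 20 ms window
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 2300 * t) + 0.2 * rng.standard_normal(t.size)

rms = np.sqrt(np.mean(x ** 2))                 # Root Mean Square
peak_factor = np.max(np.abs(x)) / rms          # Peak (crest) Factor

spectrum = np.abs(np.fft.rfft(x)) ** 2         # power spectrum
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
center_freq = np.sum(freqs * spectrum) / np.sum(spectrum)   # spectral centroid

print(f"RMS = {rms:.3f}, PF = {peak_factor:.2f}, CF = {center_freq:.0f} Hz")
```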

33 pages, 4885 KB  
Article
Two-Stage Fine-Tuning of Large Vision-Language Models with Hierarchical Prompting for Few-Shot Object Detection in Remote Sensing Images
by Yongqi Shi, Ruopeng Yang, Changsheng Yin, Yiwei Lu, Bo Huang, Yu Tao and Yihao Zhong
Remote Sens. 2026, 18(2), 266; https://doi.org/10.3390/rs18020266 - 14 Jan 2026
Viewed by 304
Abstract
Few-shot object detection (FSOD) in high-resolution remote sensing (RS) imagery remains challenging due to scarce annotations, large intra-class variability, and high visual similarity between categories, which together limit the generalization ability of convolutional neural network (CNN)-based detectors. To address this issue, we explore leveraging large vision-language models (LVLMs) for FSOD in RS. We propose a two-stage, parameter-efficient fine-tuning framework with hierarchical prompting that adapts Qwen3-VL for object detection. In the first stage, low-rank adaptation (LoRA) modules are inserted into the vision and text encoders and trained jointly with a Detection Transformer (DETR)-style detection head on fully annotated base classes under three-level hierarchical prompts. In the second stage, the vision LoRA parameters are frozen, the text encoder is updated using K-shot novel-class samples, and the detection head is partially frozen, with selected components refined using the same three-level hierarchical prompting scheme. To preserve base-class performance and reduce class confusion, we further introduce knowledge distillation and semantic consistency losses. Experiments on the DIOR and NWPU VHR-10.v2 datasets show that the proposed method consistently improves novel-class performance while maintaining competitive base-class accuracy and surpasses existing baselines, demonstrating the effectiveness of integrating hierarchical semantic reasoning into LVLM-based FSOD for RS imagery. Full article
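A hedged sketch of the LoRA building block used in both fine-tuning stages: the pretrained weight stays frozen and a low-rank update B·A, scaled by alpha/r, is trained instead. This is the generic module, not the Qwen3-VL or DETR-head integration itself.

```python
# Generic low-rank adaptation (LoRA) wrapper around a frozen linear layer.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():        # freeze the pretrained weight
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

# Toy usage: wrap a projection layer and check that only LoRA params train.
layer = LoRALinear(nn.Linear(512, 512))
out = layer(torch.randn(2, 10, 512))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, "trainable params:", trainable)   # only A and B are trainable
```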

17 pages, 710 KB  
Article
KD-SecBERT: A Knowledge-Distilled Bidirectional Encoder Optimized for Open-Source Software Supply Chain Security in Smart Grid Applications
by Qinman Li, Xixiang Zhang, Weiming Liao, Tao Dai, Hongliang Zheng, Beiya Yang and Pengfei Wang
Electronics 2026, 15(2), 345; https://doi.org/10.3390/electronics15020345 - 13 Jan 2026
Viewed by 187
Abstract
With the acceleration of digital transformation, open-source software has become a fundamental component of modern smart grids and other critical infrastructures. However, the complex dependency structures of open-source ecosystems and the continuous emergence of vulnerabilities pose substantial challenges to software supply chain security. In power information networks and cyber–physical control systems, vulnerabilities in open-source components integrated into Supervisory Control and Data Acquisition (SCADA), Energy Management System (EMS), and Distribution Management System (DMS) platforms and distributed energy controllers may propagate along the supply chain, threatening system security and operational stability. In such application scenarios, large language models (LLMs) often suffer from limited semantic accuracy when handling domain-specific security terminology, as well as deployment inefficiencies that hinder their practical adoption in critical infrastructure environments. To address these issues, this paper proposes KD-SecBERT, a domain-specific semantic bidirectional encoder optimized through multi-level knowledge distillation for open-source software supply chain security in smart grid applications. The proposed framework constructs a hierarchical multi-teacher ensemble that integrates general language understanding, cybersecurity-domain knowledge, and code semantic analysis, together with a lightweight student architecture based on depthwise separable convolutions and multi-head self-attention. In addition, a dynamic, multi-dimensional distillation strategy is introduced to jointly perform layer-wise representation alignment, ensemble knowledge fusion, and task-oriented optimization under a progressive curriculum learning scheme. Extensive experiments conducted on a multi-source dataset comprising National Vulnerability Database (NVD) and Common Vulnerabilities and Exposures (CVE) entries, security-related GitHub code, and Open Web Application Security Project (OWASP) test cases show that KD-SecBERT achieves an accuracy of 91.3%, a recall of 90.6%, and an F1-score of 89.2% on vulnerability classification tasks, indicating strong robustness in recognizing both common and low-frequency security semantics. These results demonstrate that KD-SecBERT provides an effective and practical solution for semantic analysis and software supply chain risk assessment in smart grids and other critical-infrastructure environments. Full article
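One of the student-side ingredients, the depthwise separable convolution, can be shown in a few lines: a per-channel depthwise convolution followed by a 1x1 pointwise convolution, which needs far fewer parameters than a dense convolution of the same shape. Channel counts and kernel size below are placeholders.

```python
# Illustrative depthwise-separable 1D convolution block of the kind lightweight
# student encoders use (placeholder sizes, not the KD-SecBERT architecture).
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):                      # x: (batch, channels, seq_len)
        return self.pointwise(self.depthwise(x))

block = DepthwiseSeparableConv1d(128, 256)
dense = nn.Conv1d(128, 256, 3, padding=1)
print("separable params:", sum(p.numel() for p in block.parameters()))
print("dense params:    ", sum(p.numel() for p in dense.parameters()))
# The separable block needs far fewer parameters than the dense convolution.
```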

15 pages, 1363 KB  
Article
Hierarchical Knowledge Distillation for Efficient Model Compression and Transfer: A Multi-Level Aggregation Approach
by Titinunt Kitrungrotsakul and Preeyanuch Srichola
Information 2026, 17(1), 70; https://doi.org/10.3390/info17010070 - 12 Jan 2026
Viewed by 260
Abstract
The success of large-scale deep learning models in remote sensing tasks has been transformative, enabling significant advances in image classification, object detection, and image–text retrieval. However, their computational and memory demands pose challenges for deployment in resource-constrained environments. Knowledge distillation (KD) alleviates these issues by transferring knowledge from a strong teacher to a student model, which can be compact for efficient deployment or architecturally matched to improve accuracy under the same inference budget. In this paper, we introduce Hierarchical Multi-Segment Knowledge Distillation (HIMS_KD), a multi-stage framework that sequentially distills knowledge from a teacher into multiple assistant models specialized in low-, mid-, and high-level representations, and then aggregates their knowledge into the final student. We integrate feature-level alignment, auxiliary similarity-logit alignment, and supervised loss during distillation. Experiments on benchmark remote sensing datasets (RSITMD and RSICD) show that HIMS_KD improves retrieval performance and enhances zero-shot classification; and when a compact student is used, it reduces deployment cost while retaining strong accuracy. Full article
(This article belongs to the Special Issue AI-Based Image Processing and Computer Vision)
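The feature-level and similarity-logit alignment terms mentioned above are commonly implemented roughly as below: an MSE between projected student features and assistant features, plus an MSE between their pairwise cosine-similarity matrices. Dimensions and the projection head are assumptions for illustration, not the HIMS_KD code.

```python
# Rough sketch of feature alignment plus similarity-logit alignment between a
# student and one assistant model (illustrative shapes and projection head).
import torch
import torch.nn as nn
import torch.nn.functional as F

def alignment_losses(student_feat, assistant_feat, proj):
    """student_feat: (N, d_s), assistant_feat: (N, d_a), proj: d_s -> d_a."""
    s = proj(student_feat)
    feat_loss = F.mse_loss(s, assistant_feat)              # feature-level alignment
    sim_s = F.normalize(s, dim=1) @ F.normalize(s, dim=1).t()
    sim_a = F.normalize(assistant_feat, dim=1) @ F.normalize(assistant_feat, dim=1).t()
    sim_loss = F.mse_loss(sim_s, sim_a)                    # similarity-logit alignment
    return feat_loss, sim_loss

proj = nn.Linear(256, 512)
s_feat = torch.randn(16, 256, requires_grad=True)
a_feat = torch.randn(16, 512)
f_loss, s_loss = alignment_losses(s_feat, a_feat, proj)
(f_loss + 0.5 * s_loss).backward()
print(float(f_loss), float(s_loss))
```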

44 pages, 9272 KB  
Systematic Review
Toward a Unified Smart Point Cloud Framework: A Systematic Review of Definitions, Methods, and a Modular Knowledge-Integrated Pipeline
by Mohamed H. Salaheldin, Ahmed Shaker and Songnian Li
Buildings 2026, 16(2), 293; https://doi.org/10.3390/buildings16020293 - 10 Jan 2026
Viewed by 377
Abstract
Reality-capture has made point clouds a primary spatial data source, yet processing and integration limits hinder their potential. Prior reviews focus on isolated phases; by contrast, Smart Point Clouds (SPCs)—augmenting points with semantics, relations, and query interfaces to enable reasoning—received limited attention. This systematic review synthesizes the state-of-the-art SPC terminology and methods to propose a modular pipeline. Following PRISMA, we searched Scopus, Web of Science, and Google Scholar up to June 2025. We included English-language studies in geomatics and engineering presenting novel SPC methods. Fifty-eight publications met eligibility criteria: Direct (n = 22), Indirect (n = 22), and New Use (n = 14). We formalize an operative SPC definition—queryable, ontology-linked, provenance-aware—and map contributions across traditional point cloud processing stages (from acquisition to modeling). Evidence shows practical value in cultural heritage, urban planning, and AEC/FM via semantic queries, rule checks, and auditable updates. Comparative qualitative analysis reveals cross-study trends: higher and more uniform density stabilizes features but increases computation, and hybrid neuro-symbolic classification improves long-tail consistency; however, methodological heterogeneity precluded quantitative synthesis. We distill a configurable eight-module pipeline and identify open challenges in data at scale, domain transfer, temporal (4D) updates, surface exports, query usability, and sensor fusion. Finally, we recommend lightweight reporting standards to improve discoverability and reuse. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)

25 pages, 1403 KB  
Review
Green Innovation for Solid Post-Distillation Residues Valorization: Narrative Review of Circular Bio-Economy Solutions
by Milica Aćimović, Anita Leovac Maćerak, Branimir Pavlić, Vladimir Sikora, Tijana Zeremski, Tamara Erceg and Djordje Djatkov
Processes 2026, 14(2), 244; https://doi.org/10.3390/pr14020244 - 9 Jan 2026
Viewed by 465
Abstract
The production of essential oils generates substantial quantities of solid post-distillation residues, a largely unutilized waste stream rich in bioactive compounds (e.g., phenolics, flavonoids) as well as polysaccharides. Managing this organic waste presents both environmental and economic challenges. This review critically examines environmentally friendly green innovations and resource-efficient technologies within circular bio-economy strategies for valorizing these residues, focusing on four primary conversion pathways: physico-mechanical, thermochemical, biological, and chemical methods. We highlight their potential for practical applications, including the extraction of active compounds for the food, cosmetic, and pharmaceutical industries, utilization in agriculture, and incorporation into construction materials and wastewater treatment. Despite these opportunities, wider industrial adoption remains limited by high processing costs and the lack of scalable, cost-effective technologies. Key research gaps include the need for methods applicable at the farm level, optimization of the residue-specific conversion process, and life-cycle assessments to evaluate environmental and economic impacts. Addressing these gaps is crucial to fully exploit the economic and ecological potential of post-distillation solid residues and integrate them into sustainable circular bio-economy practices through various processes. Full article
(This article belongs to the Special Issue Analysis and Processes of Bioactive Components in Natural Products)

42 pages, 3251 KB  
Article
Efficient and Accurate Epilepsy Seizure Prediction and Detection Based on Multi-Teacher Knowledge Distillation RGF-Model
by Wei Cao, Qi Li, Anyuan Zhang and Tianze Wang
Brain Sci. 2026, 16(1), 83; https://doi.org/10.3390/brainsci16010083 - 9 Jan 2026
Viewed by 359
Abstract
Background: Epileptic seizures are unpredictable, and while existing deep learning models achieve high accuracy, their deployment on wearable devices is constrained by high computational costs and latency. To address this, this work proposes the RGF-Model, a lightweight network that unifies seizure prediction and detection within a single causal framework. Methods: By integrating Feature-wise Linear Modulation (FiLM) with a Ring-Buffer Gated Recurrent Unit (Ring-GRU), the model achieves adaptive task-specific feature conditioning while strictly enforcing causal consistency for real-time inference. A multi-teacher knowledge distillation strategy is employed to transfer complementary knowledge from complex teacher ensembles to the lightweight student, significantly reducing complexity without sacrificing accuracy. Results: Evaluations on the CHB-MIT and Siena datasets demonstrate that the RGF-Model outperforms state-of-the-art teacher models in terms of efficiency while maintaining comparable accuracy. Specifically, on CHB-MIT, it achieves 99.54% Area Under the Curve (AUC) and 0.01 False Prediction Rate per hour (FPR/h) for prediction, and 98.78% Accuracy (Acc) for detection, with only 0.082 million parameters. Statistical significance was assessed using a random predictor baseline (p < 0.05). Conclusions: The results indicate that the RGF-Model provides a highly efficient solution for real-time wearable epilepsy monitoring. Full article
(This article belongs to the Section Neurotechnology and Neuroimaging)
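Of the two architectural ingredients, FiLM is the simpler to sketch: a small network maps a task embedding to per-channel scale and shift parameters that condition the feature maps, letting one backbone switch between prediction and detection. Shapes and the task encoding below are placeholders, not the RGF-Model's layers.

```python
# Hedged sketch of Feature-wise Linear Modulation (FiLM) for task conditioning.
import torch
import torch.nn as nn

class FiLM(nn.Module):
    def __init__(self, task_dim, n_channels):
        super().__init__()
        self.to_gamma_beta = nn.Linear(task_dim, 2 * n_channels)

    def forward(self, features, task_embedding):
        # features: (batch, channels, time); task_embedding: (batch, task_dim)
        gamma, beta = self.to_gamma_beta(task_embedding).chunk(2, dim=-1)
        return gamma.unsqueeze(-1) * features + beta.unsqueeze(-1)

film = FiLM(task_dim=8, n_channels=64)
eeg_features = torch.randn(4, 64, 256)          # e.g. EEG feature maps
task = torch.randn(4, 8)                        # "predict" vs. "detect" embedding
print(film(eeg_features, task).shape)           # torch.Size([4, 64, 256])
```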
