Search Results (8,368)

Search Parameters:
Keywords = light-weighting model

21 pages, 1535 KB  
Article
Nighttime Image Dehazing for Urban Monitoring via a Mixed-Norm Variational Model
by Xianglei Liu, Yahao Wu, Runjie Wang and Yuhang Liu
Appl. Sci. 2026, 16(8), 3929; https://doi.org/10.3390/app16083929 - 17 Apr 2026
Abstract
As modern urban systems advance, video surveillance has become indispensable for ensuring high-quality urban development. Nighttime images acquired in urban monitoring scenarios are often degraded by haze and non-uniform illumination, resulting in reduced visibility, color distortion, and blurred structural boundaries. To address these issues, this paper proposes a nighttime image dehazing framework that combines mixed-norm variational atmospheric-light estimation with adaptive boundary-constrained transmission refinement. Specifically, an L2−Lp mixed-norm regularization model is introduced to improve atmospheric-light estimation under complex nighttime illumination and suppress halo diffusion and color distortion around strong light sources. In addition, an adaptive boundary-constrained transmission refinement strategy with weighted soft-threshold shrinkage is developed to reduce residual artifacts while preserving structural edges. Experimental results on synthetic and real nighttime haze datasets demonstrate that the proposed method consistently outperforms representative state-of-the-art methods in both visual quality and quantitative metrics, showing superior robustness and restoration performance for nighttime urban monitoring applications. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
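The weighted soft-threshold shrinkage this abstract mentions can be sketched with the standard shrinkage operator; the per-element weighting below (scaling the threshold by a weight) is an illustrative assumption, not the paper's exact formulation.

```python
def soft_threshold(x, tau):
    # Standard shrinkage operator: sign(x) * max(|x| - tau, 0)
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0

def weighted_soft_threshold(values, weights, tau):
    # Per-element weight scales the threshold (illustrative weighting;
    # the paper's exact scheme is not specified in the abstract)
    return [soft_threshold(v, tau * w) for v, w in zip(values, weights)]

print(weighted_soft_threshold([3.0, -0.5, 1.2], [1.0, 1.0, 2.0], 1.0))  # [2.0, 0.0, 0.0]
```

Small coefficients are zeroed out (artifact suppression) while large ones are shrunk but preserved (edge preservation), which is the behavior the abstract attributes to this step.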
19 pages, 1775 KB  
Article
A Reproducible Monte Carlo Framework for Evaluating Cost–Latency Trade-Offs in Cloud Continuum
by Enrico Barbierato, Emanuele Goldoni and Daniele Tessera
Electronics 2026, 15(8), 1708; https://doi.org/10.3390/electronics15081708 - 17 Apr 2026
Abstract
Parallel, data-intensive applications are now commonly executed on infrastructures that combine Cloud, Fog, and Edge resources. In these environments, execution takes place on devices with markedly different computational power and over networks whose latency and bandwidth can fluctuate over time. Under these conditions, overall performance is influenced not only by processing speed but also by communication delays arising from data dependencies between tasks. This leads to a basic issue: whether scheduling strategies developed under computation-focused assumptions continue to perform well once communication costs are made explicit. This work examines the behavior of simple and widely adopted scheduling heuristics when network effects are modeled directly within the system. No new scheduling algorithms are introduced. Instead, the analysis focuses on how execution time and monetary cost change for deterministic parallel workloads deployed on hierarchical Cloud–Edge infrastructures exposed to stochastic latency and bandwidth variations. For this purpose, we introduce CLOWNSim, a lightweight discrete-event simulation framework that supports large-scale Monte Carlo experiments on fixed task graphs, allowing infrastructural and scheduling effects to be examined independently of workload variability. The experimental analysis covers fully centralized Cloud deployments, intermediate Fog configurations, and resource-constrained IoT scenarios. Scheduling policies based on computational speed, execution cost, or random device selection are evaluated across these settings. In Cloud and Fog environments, communication latency and data transfers represent a substantial portion of the overall makespan, weakening the impact of scheduling decisions driven primarily by computation. In IoT scenarios, limited processing capacity becomes the main limiting factor, while communication overhead remains present but less influential in comparison. 
The results indicate that performance trends across the Cloud–Edge continuum cannot be attributed to scheduler choice alone. Execution behavior arises from the combined effects of workload structure, placement decisions, and network properties, with different elements becoming dominant depending on the deployment context. The proposed simulation framework offers a practical way to study these interactions and to assess cost–performance trade-offs under communication conditions that reflect realistic operating environments. Full article
(This article belongs to the Special Issue Advances in Mobile Networked Systems)
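The Monte Carlo approach behind CLOWNSim can be illustrated with a minimal sketch: estimating the expected makespan of a fixed two-task chain when link latency fluctuates stochastically. The numbers and the Gaussian latency model are assumptions for illustration; the actual simulator models full task graphs and device heterogeneity.

```python
import random

def sample_makespan(t1, t2, mean_latency, rng):
    # One Monte Carlo draw: task 1 runs, its output crosses a link
    # whose latency fluctuates, then task 2 runs (serial dependency).
    latency = max(rng.gauss(mean_latency, 5.0), 0.0)
    return t1 + latency + t2

def monte_carlo_makespan(t1, t2, mean_latency, n=10_000, seed=42):
    # Average over repeated draws on the same fixed task graph, so only
    # the infrastructural (network) variability is being sampled.
    rng = random.Random(seed)
    return sum(sample_makespan(t1, t2, mean_latency, rng) for _ in range(n)) / n

mean_ms = monte_carlo_makespan(20.0, 30.0, 15.0)  # roughly 65 time units
```

Note that the communication term enters the makespan additively here, which is why, as the abstract argues, latency can dominate scheduling gains when transfers are large relative to compute.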
32 pages, 8881 KB  
Article
WS-R-IR Adapter: A Multimodal RGB–Infrared Remote Sensing Framework for Water Surface Object Detection
by Bin Xue, Qiang Yu, Kun Ding, Mengxin Jiang, Ying Wang, Shiming Xiang and Chunhong Pan
Remote Sens. 2026, 18(8), 1220; https://doi.org/10.3390/rs18081220 - 17 Apr 2026
Abstract
Water surface object detection in shipborne remote sensing is challenged by unstable wave-induced backgrounds, illumination variations, extreme scale changes with tiny objects, and limited annotations. Multimodal RGB–infrared (RGB–IR) sensing leverages complementary visible and infrared cues to enhance robustness. However, most existing RGB–IR methods rely on backbones pretrained on limited-scale data, which constrain their performance for complex water surface scenes. In this work, we propose the WS-R-IR Adapter, a parameter-efficient vision foundation model (VFM)-based framework for shipborne RGB–IR object detection. Instead of full fine-tuning, it adapts frozen VFM representations via lightweight task-specific designs. The WS-R-IR Adapter includes (1) a water scene domain-aware modal adapter that progressively guides frozen backbone features with evolving semantic cues, (2) a parallel multi-scale structural perception module for fine-grained, scale-sensitive modeling, (3) an adaptive RGB–IR feature modulation fusion strategy, and (4) a resolution-aligned context semantic and structural detail fusion module. Moreover, we introduce an object-guided global-to-local registration framework to address dynamic cross-modal misalignment, and construct modality-aligned PoLaRIS-DET and ASV-RI-DET datasets that cover diverse water surface scenes. On the two datasets, the proposed method achieves mAP@0.5:0.95 scores of 74.2% and 50.2%, respectively, significantly outperforming existing methods with only 11.9M additional parameters. These results demonstrate the effectiveness of parameter-efficient VFM adaptation for multimodal water surface remote sensing. Full article
(This article belongs to the Section Remote Sensing Image Processing)
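The parameter-efficiency claim (11.9M additional parameters on a frozen VFM backbone) reduces to simple arithmetic; the 300M backbone size below is a hypothetical figure, since the abstract does not state the backbone's size.

```python
def trainable_fraction(frozen_m, adapter_m):
    # Share of parameters actually updated when lightweight adapters are
    # bolted onto a frozen backbone (backbone size here is hypothetical).
    return adapter_m / (frozen_m + adapter_m)

frac = trainable_fraction(frozen_m=300.0, adapter_m=11.9)  # ~0.038, i.e. <4% trainable
```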
30 pages, 1288 KB  
Article
Efficient and Dynamically Consistent Joint Torque Estimation for Wearable Neurotechnology via Knowledge Distillation
by Shu Xu, Zheng Chang, Zenghui Ding, Xianjun Yang, Tao Wang and Dezhang Xu
Bioengineering 2026, 13(4), 474; https://doi.org/10.3390/bioengineering13040474 - 17 Apr 2026
Abstract
Wearable neurotechnology depends critically on continuous movement monitoring to characterize motor impairment and recovery in real-world settings. While joint torque serves as a clinically essential kinetic marker, estimating it directly on-device from inertial signals remains challenging due to stringent computational, memory, and energy constraints. Lightweight pipelines typically omit computationally expensive time–frequency processing; however, this omission degrades the observability of dynamics encoded in 1D IMU signals and diminishes the effectiveness of standard knowledge distillation strategies. To enable reliable on-device torque inference, we propose a Physically Guided Dual-Consistency Knowledge Distillation (PDC-KD) framework that explicitly integrates biomechanical priors into the learning process through two collaborative pathways: parameter-manifold alignment and physics-guided compensation. The student network receives guidance through Fisher-information-weighted parameter transfer, ensuring robust knowledge distillation despite significant model capacity mismatch. Furthermore, the framework incorporates a physics-guided regularization term that enforces dynamically consistent torque trajectories via a numerically stable Cholesky-parameterized constraint. Experiments demonstrate that the student model preserves teacher-level predictive accuracy while operating within the stringent resource constraints of edge devices (achieving a 98% parameter reduction, ∼2× faster inference, and ∼1 ms latency). Moreover, the proposed method yields torque estimates with enhanced dynamical consistency, providing an efficient biosignal-processing solution for wearable neurotechnology platforms demanding real-time movement analytics. Full article
(This article belongs to the Special Issue Wearable Devices for Neurotechnology)
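Fisher-information-weighted parameter transfer can be sketched as a weighted squared distance between student and teacher parameters, in the spirit of EWC-style penalties: parameters where the teacher's Fisher information is high are pulled more strongly toward the teacher. The exact alignment used by PDC-KD may differ.

```python
def fisher_weighted_transfer_loss(student, teacher, fisher):
    # Weighted parameter alignment: L = sum_i F_i * (theta_s_i - theta_t_i)^2.
    # High-Fisher (information-rich) parameters dominate the penalty.
    # (Illustrative; the paper's parameter-manifold alignment may differ.)
    return sum(f * (s - t) ** 2 for s, t, f in zip(student, teacher, fisher))

loss = fisher_weighted_transfer_loss([0.5, 1.0], [0.0, 1.0], [2.0, 10.0])
```

Here only the first parameter deviates from the teacher, so the loss is 2.0 · 0.5² = 0.5; the second parameter contributes nothing despite its larger Fisher weight.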
32 pages, 8395 KB  
Article
An Efficient Image Distortion Correction Technique for Synthetic Aperture Radar Phase Gradient Autofocus
by Qingjin Song, Hongjun Song, Jian Liu, Wenbao Li and Zhen Chen
Remote Sens. 2026, 18(8), 1216; https://doi.org/10.3390/rs18081216 - 17 Apr 2026
Abstract
In airborne synthetic aperture radar (SAR) imaging, slant-range errors vary across the swath, making phase errors range-dependent. However, the conventional phase gradient autofocus (PGA) method assumes a range-invariant phase model and becomes unreliable when range-dependent phase errors are pronounced. Although range-partitioned PGA can substantially improve focusing performance, it may still introduce block-dependent azimuth shifts after compensation, causing geometric distortion in the focused image. To address this problem, this paper proposes a lightweight post-autofocus distortion-correction method for SAR images processed by range-partitioned PGA. Instead of re-estimating the full residual phase, the method operates on the block-wise phase-error estimates after global linear-phase removal, extracts the distortion-related linear trend using a sliding-window fitting strategy, converts it into azimuth-shift profiles, and performs sinc-based realignment. The proposed method is validated using both simulation and real unmanned aerial vehicle (UAV) SAR data. Experimental results demonstrate that the method effectively corrects geometric distortion while preserving the focusing gain achieved by range-partitioned PGA. In two representative real-data regions, the azimuth misalignment is reduced from 20 pixels to 3 pixels and from 34 pixels to 2 pixels, respectively. Full article
(This article belongs to the Section Remote Sensing Image Processing)
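The sliding-window linear-trend extraction step can be sketched as a least-squares slope computed over each window of block-wise phase estimates; the mapping from slope to azimuth shift and the sinc-based realignment are omitted here, and the sample phase values are made up.

```python
def linear_fit(xs, ys):
    # Ordinary least-squares line fit; returns (slope, intercept).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def sliding_window_trend(phase, window=3):
    # Per-window linear slope of block-wise phase-error estimates; in the
    # paper's setting this slope maps to an azimuth shift (scaling omitted).
    return [linear_fit(list(range(window)), phase[i:i + window])[0]
            for i in range(len(phase) - window + 1)]

slopes = sliding_window_trend([0.0, 0.1, 0.2, 0.4, 0.6], window=3)
```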
24 pages, 1591 KB  
Article
Feasibility of Full-Range Replacement of Natural Coarse Aggregates with Recycled Foam Concrete Aggregate: Effects on Rheology, Mechanical Degradation, and Shear Resistance
by Huan Liu, Xiaoyuan Fan, Alipujiang Jierula, Tian Tan, Yuhao Zhou and Nuerlanbaike Abudujiapaer
Materials 2026, 19(8), 1622; https://doi.org/10.3390/ma19081622 - 17 Apr 2026
Abstract
The urgent global need for sustainable infrastructure drives the demand for high-value buildings and waste removal. This paper studies the feasibility of using recycled foam concrete aggregate (FCA) as a substitute for natural coarse aggregate (NCA) in concrete, examining its impact on rheology, mechanical degradation, and shear resistance across the full replacement range (0–100%). The experimental results show that the monotonic change in the workability of fresh concrete determines the lubrication threshold at 60% replacement, which is driven by the volume proportion effect. Beyond this value, capillary suction dominates, and the viscosity rises rapidly. From a mechanical perspective, the porous structure of FCA is conducive to “internal curing” so that moisture is released from the drying interface, but it also becomes a source of defects that change the fault topology. Specifically, the critical transition of the shear failure mode shifts from the debonding of the interface to the crushing of the cross-particle aggregate. At this point, the shear capacity decreases substantially, experiencing a reduction of 71.8% when completely replaced. There is a strong correlation between ultrasonic pulse velocity (UPV), rebound number, and compressive strength, and a multivariate nonlinear regression model (R2 > 0.85) for non-destructive strength prediction is ultimately obtained. Based on the balance between mechanical capacity and resource cyclability, an optimal replacement zone of 20% to 40% is proposed. This work not only provides a mechanism for multi-scale coupling between pore structure and structural properties but also provides a data-driven method for the safety assessment of lightweight recycled aggregate concrete (RAC). Full article
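The UPV/rebound strength model is described only as a multivariate nonlinear regression; a common hypothetical form for such combined (SonReb-type) non-destructive models is a power law, shown here with made-up coefficients purely for illustration, not the paper's fitted model.

```python
def predicted_strength(upv_km_s, rebound, a=0.01, b=2.0, c=1.5):
    # Hypothetical power-law form f = a * UPV^b * R^c sometimes used in
    # combined UPV/rebound strength models; the coefficients a, b, c here
    # are illustrative placeholders, not the paper's fitted values.
    return a * upv_km_s ** b * rebound ** c

f_est = predicted_strength(4.0, 30)  # ~26.3 (MPa, with these toy coefficients)
```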
24 pages, 912 KB  
Article
Advanced Insurance Risk Modeling for Pseudo-New Customers Using Balanced Ensembles and Transformer Architectures
by Finn L. Solly, Raquel Soriano-Gonzalez, Angel A. Juan and Antoni Guerrero
Risks 2026, 14(4), 91; https://doi.org/10.3390/risks14040091 - 17 Apr 2026
Abstract
In insurance portfolios, classifying customers without a prior history at a given company is particularly challenging due to the absence of historical behavior, extreme class imbalance, heavy-tailed loss distributions, and strict operational constraints. Traditional machine learning approaches, including the baseline methodology proposed in previous studies, typically optimize global predictive accuracy and therefore fail to capture business-critical outcomes, especially the identification of high-risk clients. This study extends the existing approach by evaluating two complementary business-aware classification strategies: (i) a balanced bagging ensemble specifically designed to handle class imbalance and maximize expected profit under explicit customer-omission constraints, and (ii) a lightweight Transformer-based architecture capable of learning richer feature representations. Both approaches incorporate the asymmetric financial cost structure of insurance and operate under operational selection limits. The empirical analysis is conducted on a proprietary large-scale auto insurance dataset comprising 51,618 customers and is complemented by validation on nine synthetic datasets to assess robustness. Model performance is evaluated using statistical tests (ANOVA, Friedman, and pair-wise comparisons) together with business-oriented metrics. The results show that both proposed approaches consistently outperform the baseline methodology (p < 0.001) in terms of profit. The balanced ensemble provides the most favourable trade-off between predictive performance, robustness, interpretability, and computational efficiency, making it suitable for deployment in regulated insurance environments, while the Transformer achieves competitive results and exhibits stronger generalization under data perturbations.
The proposed approach aligns machine learning with actuarial portfolio optimization by explicitly integrating profit-driven objectives and operational constraints, offering two practical and scalable solutions for risk-based decision-making in real-world insurance settings. Full article
(This article belongs to the Special Issue Artificial Intelligence Risk Management)
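Balanced bagging rests on drawing class-balanced bootstraps for each base estimator; a minimal sketch of one such draw (random undersampling of the majority class) follows. The profit-maximizing objective and customer-omission constraints from the paper's ensemble are not reproduced here.

```python
import random

def balanced_bootstrap(X, y, rng):
    # Sample the majority class down to the minority class size, so each
    # base estimator in the ensemble trains on a balanced subset.
    pos = [i for i, label in enumerate(y) if label == 1]
    neg = [i for i, label in enumerate(y) if label == 0]
    minority, majority = (pos, neg) if len(pos) <= len(neg) else (neg, pos)
    idx = minority + rng.sample(majority, len(minority))
    return [X[i] for i in idx], [y[i] for i in idx]

rng = random.Random(0)
X = list(range(10))
y = [1, 0, 0, 0, 0, 0, 0, 0, 0, 1]  # 2 high-risk, 8 low-risk: imbalanced
Xb, yb = balanced_bootstrap(X, y, rng)  # 2 of each class
```

Training many estimators, each on a different balanced draw, lets the ensemble see every minority example while still covering the majority class in aggregate.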
23 pages, 2953 KB  
Article
VGPO-MCTS: Distilling Step-Level Supervision from Value-Guided Tree Search for Mathematical Reasoning
by Pin Wu, Yufei Zhu and Huiyan Wang
AI 2026, 7(4), 146; https://doi.org/10.3390/ai7040146 - 17 Apr 2026
Abstract
Large language models (LLMs) are increasingly used in applied intelligent systems, but mid-sized models still lag on mathematical reasoning, partly because reliable step-level supervision is scarce. Many existing remedies rely on costly human annotation, stronger teacher models, or heavy training pipelines, which limits practical adoption. We propose VGPO-MCTS (Value-Guided Group-wise Policy Optimization over Monte Carlo Tree Search), a search-and-distillation framework that constructs reusable step-level supervision from datasets that provide only problems and final answers. VGPO-MCTS augments a frozen backbone with (i) a lightweight value model that scores candidate reasoning states formed by a reasoning prefix and its candidate next step, and (ii) a policy updated with parameter-efficient adaptation. During search, the value model guides tree expansion and selection, while verified outcomes are propagated backward to correct node utilities. The corrected search trees are then distilled into two complementary datasets: a value regression dataset for value learning and group-wise sibling candidate sets for GRPO-style policy optimization. Experiments on GSM8K and the MATH dataset with ChatGLM3-6B and SciGLM-6B show stable round-wise improvements in final-answer exact match under a lightweight adaptation setting. After three rounds of self-training, the proposed framework improves performance by about 6.3 percentage points on GSM8K and about 3.9 percentage points on MATH across the two backbones. Full article
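Value-guided tree expansion in MCTS typically scores children with a UCT-style rule; the sketch below abstracts the paper's learned value model as a per-node `value` field (a hypothetical structure) and uses the standard exploration bonus.

```python
import math

def uct_score(value, visits, parent_visits, c=1.4):
    # Standard UCT: exploitation term plus exploration bonus. In VGPO-MCTS
    # the exploitation term comes from a learned value model, which is
    # abstracted here as the precomputed `value` argument.
    if visits == 0:
        return float("inf")  # always try unvisited reasoning steps first
    return value + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children, parent_visits):
    # Pick the candidate next reasoning step with the highest UCT score.
    return max(children, key=lambda ch: uct_score(ch["value"], ch["visits"], parent_visits))

children = [{"name": "step_a", "value": 0.6, "visits": 10},
            {"name": "step_b", "value": 0.4, "visits": 2}]
best = select_child(children, parent_visits=12)  # exploration favors step_b
```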
29 pages, 7709 KB  
Article
Toward Adversarial Robustness Network Intrusion Detection Based on Multi-Model Ensemble Approach
by Thi-Thu-Huong Le, Jaehan Cho, Dawit Shin and Howon Kim
Sensors 2026, 26(8), 2478; https://doi.org/10.3390/s26082478 - 17 Apr 2026
Abstract
Machine learning-based network intrusion detection systems (NIDSs) remain vulnerable to adversarial manipulation, but the robustness literature for tabular NIDS data is still dominated by single-model, single-dataset, and non-adaptive evaluations. In this paper, we reposition the manuscript as a comparative robustness study of a four-component defense pipeline rather than as a claim of a universal defense primitive. We evaluate XGBoost, LightGBM, TabNet, and Residual MLP on RT_IOT2022 and Web_IDS23 under standard attacks, representative constrained/adaptive attacks, component-wise ablations, sample-fraction sensitivity, repeated-run significance tests, per-class F1 analysis, and computational-overhead measurements. The results show strong dataset and architecture dependence. On RT_IOT2022, tree-based models close most of the robustness gap under strong attacks but often only after large clean-accuracy reductions; Residual MLP achieves a more favorable balance, while the full defense stack over-regularizes TabNet. On Web_IDS23, aggregate robustness-gap reduction remains positive, yet simpler baselines such as adversarial-training-only or ensemble-only configurations frequently outperform the full four-stage pipeline in absolute clean/attack accuracy. Across both datasets, median filtering is the most fragile component: larger filter windows substantially degrade both clean and attacked accuracy, whereas contamination rate, anomaly-mixing weight, and ensemble size are comparatively stable. Representative constrained/adaptive evaluations reduce performance only modestly relative to standard FGSM/PGD, but per-class and overhead analyses show that minority-class collapse and training cost remain important deployment limitations. These findings support a more cautious conclusion: adversarial defense for tabular NIDS is validation driven and dataset specific, and the full defense stack should not be treated as a universal default. Full article
(This article belongs to the Special Issue Advances and Challenges in Sensor Security Systems)
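FGSM, one of the standard attacks evaluated in this study, perturbs each feature by a fixed step in the sign of the loss gradient; the sketch below uses a toy linear scorer with squared loss in place of the actual NIDS models, and all numbers are illustrative.

```python
def fgsm_perturb(x, w, b, y, eps):
    # FGSM on a toy linear score s = w.x + b with squared loss (s - y)^2:
    # move each feature by eps in the sign of dLoss/dx.
    # (A stand-in for the tabular NIDS classifiers evaluated in the paper.)
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    grad = [2.0 * (s - y) * wi for wi in w]  # chain rule: dLoss/dx_i
    return [xi + eps * ((gi > 0) - (gi < 0)) for xi, gi in zip(x, grad)]

x_adv = fgsm_perturb([1.0, -2.0], w=[0.5, -0.3], b=0.0, y=0.0, eps=0.1)
```

Constrained/adaptive variants, as the abstract notes, additionally restrict which features may move and by how much (e.g., protocol fields that must stay valid), which is why they degrade performance less than unconstrained FGSM/PGD.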
21 pages, 12845 KB  
Article
VETA-CLIP: Lightweight Video Adaptation with Efficient Spatio-Temporal Attention and Variation Loss
by Jing Huang and Jiaxin Liao
Electronics 2026, 15(8), 1701; https://doi.org/10.3390/electronics15081701 - 17 Apr 2026
Abstract
Full fine-tuning of large-scale vision-language models for video action recognition incurs prohibitive computational cost and often degrades pre-trained spatial representations. To address this, we propose VETA-CLIP, a Video Efficient Temporal Adaptation framework that enhances temporal modeling while preserving cross-modal alignment. By incorporating lightweight adapters into a frozen backbone, VETA-CLIP introduces only 3.55M trainable parameters (a 98% reduction compared to full fine-tuning). Our approach features two key innovations: (1) an Efficient Spatio-Temporal Attention (ESTA) mechanism with a parameter-free boundary replication temporal shift (BRTS) module, which explicitly decouples spatial and temporal attention heads to capture inter-frame dynamics while minimizing disruption to the pre-trained spatial representations; and (2) a novel Variation Loss that maximizes both local inter-frame differences and global temporal variance, encouraging the model to focus on action-related changes rather than static backgrounds. Extensive experiments on HMDB-51, UCF-101, and Something-Something v2 demonstrate that VETA-CLIP achieves competitive performance across zero-shot, base-to-novel, and few-shot protocols, and remains competitive on the Kinetics-400 dataset. Notably, our eight-frame variant requires only 4.7 GB of peak GPU memory and 2.47 ms of inference per video, demonstrating exceptional computational efficiency alongside consistent accuracy gains. Full article
(This article belongs to the Section Artificial Intelligence)
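The Variation Loss can be sketched on 1-D per-frame features: reward (so negate) the mean absolute inter-frame difference plus the global temporal variance, so that static clips score worst. The paper applies this to frame embeddings, and its exact weighting between the two terms is not given here.

```python
def variation_loss(frames):
    # Illustrative variation loss on scalar per-frame features:
    # local term  = mean |f[i+1] - f[i]|  (inter-frame differences)
    # global term = temporal variance of the features
    # Negated so that minimizing the loss maximizes motion content.
    n = len(frames)
    local = sum(abs(frames[i + 1] - frames[i]) for i in range(n - 1)) / (n - 1)
    mean = sum(frames) / n
    var = sum((f - mean) ** 2 for f in frames) / n
    return -(local + var)

static = variation_loss([1.0, 1.0, 1.0, 1.0])  # no motion: loss is 0
moving = variation_loss([0.0, 1.0, 0.0, 1.0])  # motion: lower (better) loss
```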
20 pages, 4688 KB  
Article
Neutral-Axis Ti3C2Tx/GO Sandwich Sensor with Bending Immunity and Deep Learning Tactile Recognition
by Jiahao Qi, Tianshun Gong and Debo Wang
Sensors 2026, 26(8), 2471; https://doi.org/10.3390/s26082471 - 17 Apr 2026
Abstract
Flexible piezoresistive sensors are often vulnerable to modal ambiguity and bending-induced drift, both of which can obscure true pressure and strain signals under practical operation. Here, we address these limitations by suppressing bending sensitivity at the device level and disambiguating tactile modes at the algorithmic level. We propose and fabricate a Ti3C2Tx/graphene oxide (GO) sandwich sensor in which the conductive network is positioned near the neutral axis, thereby ensuring that bending induces negligible axial strain in the active layer. In contrast, out-of-plane pressing enlarges microcontacts, while in-plane stretching disrupts percolation pathways. We develop a composite-beam model to quantify neutral-axis alignment and the resultant bending immunity, realize the device via a straightforward casting process, and systematically characterize its electromechanical response under bending, pressing, nail pressing, and stretching. To further reduce modal ambiguity and improve tactile recognition, a lightweight one-dimensional convolutional neural network (1D-CNN) was introduced to classify temporal resistance signals from the sensor. Experimental results showed that the 1D-CNN achieved a high classification accuracy of 98.52% under flat-state training and testing conditions, and maintained 96.67% accuracy when evaluated on bending-state samples, demonstrating strong robustness against bending-induced interference. Together, the neutral-axis device architecture and the learning-based inference pipeline deliver high sensitivity to pressing and stretching while markedly suppressing the response to bending, thereby enabling wrist-worn pulse monitoring, soft-robotic joint sensing, and plantar pressure insoles. Full article
(This article belongs to the Section Physical Sensors)
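The neutral-axis placement can be checked with the modulus-weighted centroid of a layered (composite) beam, y_na = Σ E_i A_i y_i / Σ E_i A_i: a conductive layer whose centroid sits at y_na sees vanishing axial strain under bending, to first order. The layer values below are illustrative numbers, not the paper's actual stack.

```python
def neutral_axis(layers):
    # Modulus-weighted centroid of a layered beam cross-section:
    # y_na = sum(E_i * A_i * y_i) / sum(E_i * A_i)
    # Each layer is (elastic modulus E, area A, centroid height y).
    num = sum(E * A * y for E, A, y in layers)
    den = sum(E * A for E, A, y in layers)
    return num / den

# Symmetric sandwich with a stiff thin active layer in the middle
# (illustrative values, not the paper's Ti3C2Tx/GO stack):
layers = [(2.0, 1.0, 0.05),   # bottom encapsulation
          (5.0, 0.2, 0.11),   # conductive active layer
          (2.0, 1.0, 0.17)]   # top encapsulation
y_na = neutral_axis(layers)   # coincides with the active-layer centroid
```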
20 pages, 2397 KB  
Article
Towards Sustainable AI: Benchmarking Energy Efficiency of Deep Neural Networks for Resource-Constrained Edge Devices
by Rohail Qamar, Raheela Asif and Syed Muslim Jameel
Information 2026, 17(4), 380; https://doi.org/10.3390/info17040380 - 17 Apr 2026
Abstract
Deep learning models represent one of the most advanced and effective approaches in predictive modeling. Their hierarchical architectures enable the extraction of complex, non-linear feature relationships and the identification of latent patterns within data, making them highly suitable for tasks involving high-dimensional or unstructured inputs. However, these models are computationally demanding, requiring significant processing resources and time. Furthermore, their predictive performance is largely contingent upon the availability of large-scale datasets. In this study, a Deep Green Framework is employed for two computer vision tasks, using CIFAR-10 and CIFAR-100 for image classification. Fifteen convolutional neural network (CNN) variants, categorized as light-weight or heavy-weight, are trained on these two datasets and compared on the basis of energy footprint, time, memory usage, Top-1 accuracy, Top-3 accuracy, model size, and model parameters. The study highlights that MobileNetV3-Small produces the best outcomes among the trained models, combining low task latency with high efficiency, making it highly suitable for edge environments where resources are scarce. Full article
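A simple way to combine such benchmarking criteria into a single edge-suitability ranking is an accuracy-per-joule figure of merit; both the metric choice and the accuracy/energy numbers below are assumptions for illustration, not the paper's measurements.

```python
def efficiency_score(top1_acc, energy_j):
    # Accuracy per joule: one reasonable edge figure of merit
    # (not necessarily the paper's exact metric).
    return top1_acc / energy_j

models = {
    "MobileNetV3-Small": (0.85, 40.0),   # (Top-1 accuracy, energy in J) -- hypothetical
    "HeavyCNN":          (0.88, 400.0),  # slightly more accurate, far more energy
}
ranked = sorted(models, key=lambda m: efficiency_score(*models[m]), reverse=True)
```

Under this metric a small accuracy sacrifice is outweighed by a tenfold energy saving, matching the abstract's conclusion that light-weight variants suit resource-scarce edge deployments.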
23 pages, 9764 KB  
Article
Design and Structural Validation of a Device for Assisted Vehicle Boarding
by Albert Mareš, Peter Malega, Naqib Daneshjo, Zuzana Štofková and Tomáš Mišenčík
Appl. Sci. 2026, 16(8), 3898; https://doi.org/10.3390/app16083898 - 17 Apr 2026
Abstract
Population aging increases the demand for different assistive devices enabling independent mobility and safe vehicle boarding. This paper presents the design and development of a universal lifting platform intended to support the legs of people with reduced mobility during vehicle entry. The device was designed to be independent of any specific vehicle and to be powered by the vehicle's standard 12 V electrical system. A CAD model of the proposed device was created in SolidWorks 2017 and validated through analytical calculations and finite element simulations. Based on the calculation results, a functional prototype was manufactured and tested under real operating conditions, confirming the feasibility and usability of the proposed solution. The presented platform provides a low-cost, lightweight and vehicle-independent assistive device, supporting controlled and safe leg transfer without the need for vehicle modification or homologation. Full article
(This article belongs to the Section Mechanical Engineering)
20 pages, 8567 KB  
Article
Latent Diffusion Model for Chlorophyll Remote Sensing Spectral Synthesis Integrating Bio-Optical Priors and Band Attention Mechanisms
by Jinming Liu, Haoran Zhang, Jianlong Huang, Hanbin Wen, Qinpei Chen, Jiayi Liu, Chaowen Wen, Huiling Tang and Zhaohua Sun
Appl. Sci. 2026, 16(8), 3892; https://doi.org/10.3390/app16083892 - 17 Apr 2026
Abstract
Global freshwater resources face severe water quality degradation, with chlorophyll-a (Chl-a) concentration serving as a critical eutrophication indicator. While deep learning methods enable accurate Chl-a retrieval from remote sensing reflectance (Rrs) spectra, the scarcity of paired Rrs-Chl-a samples limits model generalization and causes [...] Read more.
Global freshwater resources face severe water quality degradation, with chlorophyll-a (Chl-a) concentration serving as a critical eutrophication indicator. While deep learning methods enable accurate Chl-a retrieval from remote sensing reflectance (Rrs) spectra, the scarcity of paired Rrs-Chl-a samples limits model generalization and causes overfitting, particularly in optically complex inland waters. To address this data bottleneck, we propose a physics-constrained latent diffusion model for synthesizing high-fidelity paired Rrs-Chl-a data to augment limited training sets for deep learning-based water quality retrieval. Our framework integrates three key innovations: (1) a lightweight variational autoencoder achieving 8.6:1 latent space compression, reducing computational overhead while preserving spectral features; (2) band-selective attention mechanisms targeting chlorophyll-sensitive wavelengths (440, 550, 680, and 700–750 nm) based on bio-optical principles; and (3) physics-guided conditional encoding that captures concentration-dependent spectral responses across oligotrophic to eutrophic regimes. Evaluated on the GLORIA dataset, our model demonstrates superior performance in spectral similarity (0.535), sample diversity (0.072), and distribution matching (Fréchet distance 0.0008) compared to conventional generative models. When applied to data augmentation, synthetic spectra improved downstream Chl-a retrieval from R² = 0.75 to 0.91, reducing RMSE by 39%. This physics-informed generative approach addresses data scarcity in aquatic remote sensing research, supporting global needs for enhanced understanding of inland and coastal water quality dynamics in data-limited regions. Full article
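The band-selective attention idea, weighting the chlorophyll-sensitive wavelengths (440, 550, 680, and 700–750 nm) more heavily than the rest of the spectrum, can be sketched as a Gaussian weighting over the spectral axis. The band centers follow the abstract; the bandwidth (sigma) and the use of a max over Gaussians are assumptions for illustration, not the paper's attention mechanism.

```python
import math

# Attention weights peaking at chlorophyll-sensitive bands (nm).
# Band centers follow the abstract; SIGMA and the max-of-Gaussians
# form are illustrative assumptions.
CENTERS = [440.0, 550.0, 680.0, 725.0]  # 725 nm ~ middle of the 700-750 nm range
SIGMA = 15.0

def band_weight(wavelength_nm):
    """Attention weight in (0, 1]; maximal at a chlorophyll-sensitive band."""
    return max(math.exp(-((wavelength_nm - c) ** 2) / (2 * SIGMA**2))
               for c in CENTERS)

# Sensitive bands get near-maximal weight; off-band wavelengths are suppressed.
weights = {w: band_weight(w) for w in range(400, 751, 10)}
print(round(weights[440], 3), round(weights[600], 3))
```

In a real model these weights would typically be learned rather than fixed, but the effect is the same: reconstruction error at diagnostic wavelengths dominates the loss.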

31 pages, 753 KB  
Article
Heterogeneity-Aware Dynamic Federated Split Learning with Adaptive Compression (HADFL-AC) Edge–Cloud Inference in IoT Environments
by Norah Alrusayni and Asma A. Al-Shargabi
Future Internet 2026, 18(4), 213; https://doi.org/10.3390/fi18040213 - 17 Apr 2026
Abstract
In resource-constrained environments, distributed split learning allows for collaborative training; however, the system suffers from high communication overhead and is sensitive to system heterogeneity. Despite advances in IoT data reduction and distributed learning, existing approaches treat heterogeneity, adaptability, and communication efficiency as separate [...] Read more.
In resource-constrained environments, distributed split learning allows for collaborative training; however, the system suffers from high communication overhead and is sensitive to system heterogeneity. Despite advances in IoT data reduction and distributed learning, existing approaches treat heterogeneity, adaptability, and communication efficiency as separate problems. As a result, the Heterogeneity-Aware Dynamic Federated Split Learning with Adaptive Compression (HADFL-AC) framework is proposed, enabling adaptive adjustment of communication payloads to instantaneous bandwidth conditions during training. This approach distinguishes itself by focusing on feature-representation-level adaptation, offering seamless transitions between linear PCA, nonlinear Tiny Autoencoder (TinyAE), and hybrid PCA–AE compression methods without requiring changes to architecture or retraining. Experiments were conducted using the CIFAR-10 and CINIC-10 datasets with a lightweight ResNet-18 backbone under Dirichlet-based non-IID data partitioning and fluctuating network scenarios. HADFL-AC achieves significant communication reductions of 80.86% on CIFAR-10 and 77.2% on CINIC-10, as well as significant reductions in training time and energy consumption. In addition, the framework achieved these gains while maintaining competitive performance, reaching 79.58% on CIFAR-10 and exhibiting stable convergence on CINIC-10. Consequently, the results demonstrate that leveraging network heterogeneity as an adaptive signal facilitates efficient and scalable distributed learning while effectively balancing communication efficiency and model accuracy. Full article
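The seamless switching between PCA, TinyAE, and hybrid PCA–AE compression driven by instantaneous bandwidth can be sketched as a simple selection policy. The thresholds and method identifiers below are illustrative assumptions, not the HADFL-AC specification.

```python
# Illustrative bandwidth-driven compressor selection for split-learning payloads.
# Thresholds (in Mbps) are assumptions, not values from the HADFL-AC paper.

def select_compressor(bandwidth_mbps):
    """Pick a feature-representation compressor for the current link quality."""
    if bandwidth_mbps < 5.0:
        return "tiny_ae"        # strongest (nonlinear) compression for poor links
    if bandwidth_mbps < 20.0:
        return "hybrid_pca_ae"  # balance reconstruction quality and payload size
    return "pca"                # cheap linear compression when bandwidth is ample

# Example: bandwidth fluctuates across training rounds.
rounds = [2.0, 12.0, 50.0]
choices = [select_compressor(b) for b in rounds]
print(choices)  # -> ['tiny_ae', 'hybrid_pca_ae', 'pca']
```

Because all three compressors act on the same intermediate feature representation, the policy can switch per round without touching the model architecture, which is the property the abstract emphasizes.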
