Search Results (267)

Search Parameters:
Keywords = Gate Than Change Unit

16 pages, 5700 KB  
Article
A Deep Learning-Based EIT System for Robust Gesture Recognition Under Confounding Factors
by Hancong Wu, Guanghong Huang, Wentao Wang and Yuan Wen
Biosensors 2026, 16(4), 200; https://doi.org/10.3390/bios16040200 - 1 Apr 2026
Abstract
Gesture recognition with electrical impedance tomography (EIT) holds enormous potential for human–machine interaction because of its low cost, low complexity, and high temporal resolution. Although high-precision EIT-based gesture recognition has been achieved in ideal scenarios, ensuring its consistent performance under interference remains challenging. This article presents a novel method to alleviate the effect of confounding factors on EIT gesture recognition. An EIT armband was designed to mitigate the effect of contact impedance variation based on equivalent circuit analysis, and a spatial–temporal fusion network, named the Fold Atrous Spatial Pyramid Pooling-Gated Recurrent Unit (FASPP-GRU), was developed for robust gesture classification. The results showed that the proposed two-layer electrode maintained a stable contact impedance when its contact force with the skin was changed. Although confounding factors caused significant changes in baseline forearm impedance, FASPP-GRU achieved 80% accuracy under the effect of limb position changes and dynamic changes in muscle state over time, outperforming conventional classifiers. With an 87 μs inference time, the proposed system shows strong potential for real-time applications.

19 pages, 2342 KB  
Article
An Improved GRU Financial Time Series Prediction Model
by Yong Li
Fractal Fract. 2026, 10(4), 227; https://doi.org/10.3390/fractalfract10040227 - 28 Mar 2026
Abstract
Forecasting financial time series (FTS) is essential for analyzing and understanding the dynamics of financial markets. Traditional recurrent neural network (RNN) models often suffer from low prediction accuracy on non-stationary and abruptly changing data, as their gating mechanisms struggle to capture evolving trends in FTS. This paper introduces variational mode decomposition (VMD) and multifractal analysis to enhance the gating mechanism of the gated recurrent unit (GRU). By quantifying the changing characteristics of FTS, the proposed model dynamically adjusts the gating weights. In addition, a state fusion strategy is employed to improve the utilization efficiency of historical information. Experiments are conducted using daily data of the SSE 50, CSI 300, and CSI 1000 indices, spanning from 4 January 2002 to 26 December 2025. The results demonstrate that, compared to traditional models, the proposed model better captures the evolving characteristics of FTS and achieves higher prediction accuracy.
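The gating mechanism this paper modifies is the standard GRU update. As a point of reference, here is a minimal NumPy sketch of the vanilla cell; the paper's VMD- and multifractal-driven adjustment of the gate weights is not shown, and all names and dimensions here are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, params):
    """One step of a standard GRU cell.

    z: update gate, r: reset gate, h_tilde: candidate state.
    In the paper the gate weights are adjusted dynamically; here
    they are fixed, which is the vanilla baseline.
    """
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h_prev)            # how much new information to admit
    r = sigmoid(Wr @ x + Ur @ h_prev)            # how much history to reset
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))
    return (1 - z) * h_prev + z * h_tilde        # convex blend of old and new state

# Toy usage: 3-dim input, 4-dim hidden state, random weights.
rng = np.random.default_rng(0)
params = [rng.standard_normal((4, 3)) if i % 2 == 0 else rng.standard_normal((4, 4))
          for i in range(6)]
h = np.zeros(4)
for t in range(5):
    h = gru_step(rng.standard_normal(3), h, params)
```

Stacking such steps over a return sequence and learning the six weight matrices by backpropagation gives the GRU layer the paper builds on.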
(This article belongs to the Special Issue Multifractal Analysis and Complex Systems)

18 pages, 2559 KB  
Article
A Multi-Attention Gated Fusion and Physics-Informed Model for Steam Turbine Regulating-Stage Fault Detection
by Yuanli Ma, Gang Ding, Qiang Zhang, Jiangming Zhou and Yue Cao
Energies 2026, 19(7), 1665; https://doi.org/10.3390/en19071665 - 27 Mar 2026
Abstract
The increasing proportion of renewable energy leads to frequent changes in turbine load, making the regulating stage more prone to degradation. Traditional anomaly detection methods lack sufficient sensitivity and generalization. To address this issue, this study proposes a method combining multi-attention gated fusion and physics-informed learning. A gated fusion mechanism is proposed to adaptively extract and fuse key temporal and feature information. Furthermore, the generalization ability of the model is improved by introducing physical constraints derived from the relationship between pressure, temperature, and valve position. Finally, a dynamic temperature prediction model is established using a multi-output long short-term memory neural network. Experiments using actual power plant data demonstrate that the proposed method effectively improves the accuracy of post-regulating-stage temperature prediction and the sensitivity of anomaly detection. The proposed gating fusion method improves prediction accuracy by 4.6% compared to direct addition, while the fusion of physical information reduces the generalization error by more than 6%. In addition, compared to traditional deep learning and machine learning models, the proposed method improves anomaly detection accuracy by at least 3.9%. This research is of great significance for the safe operation of thermal power units and the power grid.

21 pages, 771 KB  
Article
Optimizing Vineyard Sustainability for Climate-Smart Food Systems: An Integrated Carbon Footprint and DEA Approach
by Eleni Adam, Athanasia Mavrommati, Alexandra Pliakoura, Angelos Patakas and Fotios Chatzitheodoridis
Sustainability 2026, 18(7), 3277; https://doi.org/10.3390/su18073277 - 27 Mar 2026
Abstract
The sustainability of the wine sector depends on primary production practices and on the adaptability of plant material to climate change. This study evaluates the carbon footprint and technical efficiency of four grape varieties in Paionia using an integrated Life Cycle Assessment and Data Envelopment Analysis framework. A cradle-to-gate approach was adopted, with system boundaries extending from input production to harvest, and functional units of kg CO2e/ha to capture input intensity and kg CO2e/kg grape to assess product-level environmental efficiency. The analysis included 82 vineyards, with DEA scores ranging from 0.744 to 1.000; most vineyards operated below the efficiency frontier, and the input-oriented VRS model identified potential input reductions without affecting output. Merlot showed the highest footprint (3794.02 kg CO2e/ha), followed by Assyrtiko (2798.40) and Xinomavro (2784.48), while Roditis had the lowest (1958.07); on a per-kg basis, emissions were 0.340, 0.304, 0.281, and 0.143 kg CO2e/kg, respectively. The DEA identified targeted input-saving opportunities, including reduced irrigation needs in white varieties and lower nutrient and plant-protection requirements in red varieties, while the strong performance of Roditis highlights the advantages of locally adapted, low-input plant material for improving efficiency.
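The per-hectare and per-kilogram figures quoted above jointly imply each variety's grape yield, since per-kg footprint = per-ha footprint / yield. A quick back-of-the-envelope check; the yields below are back-calculated for illustration and are not reported in the abstract:

```python
# Back-calculating implied grape yields from the reported footprints,
# assuming footprint_per_kg = footprint_per_ha / yield. The yields are
# derived here for illustration; they are not stated in the abstract.
per_ha = {"Merlot": 3794.02, "Assyrtiko": 2798.40,
          "Xinomavro": 2784.48, "Roditis": 1958.07}   # kg CO2e/ha
per_kg = {"Merlot": 0.340, "Assyrtiko": 0.304,
          "Xinomavro": 0.281, "Roditis": 0.143}       # kg CO2e/kg grape

implied_yield = {v: per_ha[v] / per_kg[v] for v in per_ha}  # kg grapes/ha
for v, y in sorted(implied_yield.items(), key=lambda kv: -kv[1]):
    print(f"{v:10s} ~{y:,.0f} kg/ha")
```

The implied spread (roughly 9,200 to 13,700 kg/ha) is consistent with the abstract's point that low-input Roditis achieves the best per-kg efficiency partly through yield.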

24 pages, 8415 KB  
Article
UAV-Based River Velocity Estimation Using Optical Flow and FEM-Supported Multiframe RAFT Extension
by Andrius Kriščiūnas, Vytautas Akstinas, Dalia Čalnerytė, Diana Meilutytė-Lukauskienė, Karolina Gurjazkaitė, Tautvydas Fyleris and Rimantas Barauskas
Drones 2026, 10(3), 221; https://doi.org/10.3390/drones10030221 - 21 Mar 2026
Abstract
Quantifying river surface flow velocity is essential for hydrodynamic modelling, flood forecasting, and water resource management. Traditional in situ methods provide accurate point measurements but are costly and limited in spatial coverage. Unmanned aerial vehicles (UAVs) offer a flexible, non-contact alternative for high-resolution monitoring. Optical flow is a tracer-independent technique for deriving velocity fields from RGB video, making it well suited to UAV-based surveys. However, its operational use is hindered by the limited availability of annotated datasets and by instability under low-texture or noisy conditions. This study combines a finite element method (FEM)-based physical flow model with UAV video to generate reference datasets and introduces a modified Recurrent All-Pairs Field Transforms (RAFT) architecture based on multiframe sequences. A Gated Recurrent Unit fusion module (Fuse-GRU) is incorporated prior to correlation computation, improving robustness to illumination changes and surface homogeneity while maintaining computational efficiency. The proposed model delivers stable, physically consistent velocity estimates across multiple rivers and flow conditions. Accuracy improves with higher spatial resolution and moderate temporal spacing. Compared to field measurements, the average angular difference ranged from 8 to 15°. The high error values were mainly caused by inaccuracies in the physical model and by complex river features. These findings confirm that multiframe optical flow can reproduce realistic river flow patterns with accuracy comparable to physically based simulations, thereby supporting UAV-based hydrometric monitoring and model validation.
(This article belongs to the Special Issue Drones in Hydrological Research and Management)

29 pages, 6240 KB  
Article
Explainable Prediction of Power Generation for Cascaded Hydropower Systems Under Complex Spatiotemporal Dependencies
by Zexin Li, Xiaodong Shen, Yuhang Huang and Yuchen Ren
Energies 2026, 19(6), 1540; https://doi.org/10.3390/en19061540 - 20 Mar 2026
Abstract
Hydropower plays a key regulating role in new-type power systems, and both forecasting accuracy and interpretability are critical for power dispatch. However, cascade hydropower forecasting is constrained by strong spatiotemporal coupling among multi-dimensional features, flow propagation delays, and the limited transparency of deep learning models. To tackle these issues, this paper develops a hybrid framework integrating the Maximal Information Coefficient (MIC), the Long- and Short-term Time-series Network (LSTNet), and the SHapley Additive exPlanations (SHAP) interpretability method. First, an MIC-based nonlinear screening mechanism is employed to remove redundant noise and construct a high-quality input space. Second, an LSTNet model is developed to deeply extract spatiotemporal coupling features among cascade stations and flow evolution patterns, achieving high-accuracy forecasting of both system-level and station-level outputs. Finally, SHAP is used for global and local interpretability analysis to perform physics-consistency verification with respect to the model's decision-making rationale. Experimental results indicate that the proposed approach achieves low errors in total output forecasting, reducing error levels by approximately 57–88% compared with Recurrent Neural Network (RNN), Gated Recurrent Unit (GRU), and Informer. Moreover, SHAP feature-dependence analysis reveals a nonlinear response change of station D around 7.8 MW, providing evidence for the physical consistency of the model outputs and improving model interpretability.
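MIC-based screening ranks candidate inputs by their nonlinear dependence on the target. As a rough stand-in (true MIC requires a maximal-information search over grid partitions, which is omitted here), a histogram estimate of mutual information plays the same screening role. This sketch is illustrative only, not the paper's implementation:

```python
import numpy as np

def hist_mi(x, y, bins=16):
    """Histogram estimate of mutual information I(X; Y) in nats.

    A crude proxy for MIC screening: both rank features by
    nonlinear dependence on the target, unlike plain correlation.
    """
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                       # joint distribution
    px = pxy.sum(axis=1, keepdims=True)         # marginal of x
    py = pxy.sum(axis=0, keepdims=True)         # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# A nonlinearly informative feature should outscore pure noise.
rng = np.random.default_rng(1)
target = rng.standard_normal(5000)
informative = np.sin(3 * target) + 0.1 * rng.standard_normal(5000)
noise = rng.standard_normal(5000)

scores = {"informative": hist_mi(informative, target),
          "noise": hist_mi(noise, target)}
```

Features scoring near the noise floor would be dropped before training the forecaster, which is the role MIC plays in the paper's input-space construction.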
(This article belongs to the Section F5: Artificial Intelligence and Smart Energy)

42 pages, 1779 KB  
Article
Uncertainty-First Forecasting of the South African Equity Market Using Deep Learning and Temporal Conformal Prediction
by Phumudzo Lloyd Seabe, Claude Rodrigue Bambe Moutsinga and Maggie Aphane
Big Data Cogn. Comput. 2026, 10(3), 93; https://doi.org/10.3390/bdcc10030093 - 20 Mar 2026
Abstract
Accurate forecasting of equity returns remains fundamentally constrained by weak short-horizon predictability, pronounced noise, and structural non-stationarity. While deep learning models have been widely applied to financial time series, most studies prioritize point prediction and provide limited guidance on reliable uncertainty quantification, particularly in emerging markets. This study developed an uncertainty-aware forecasting framework for the South African equity market by integrating variational mode decomposition (VMD), gated recurrent units (GRUs), and temporal conformal prediction (TCP) to construct distribution-free prediction intervals with finite-sample coverage guarantees. Using daily returns from the FTSE/JSE All Share Index, we first confirmed that baseline recurrent models applied directly to raw returns exhibited negligible out-of-sample explanatory power, consistent with weak-form market efficiency. Incorporating VMD enhanced representation learning and improved point forecast accuracy by isolating latent frequency components. However, model-based predictive variance alone proved insufficient for reliable calibration. Embedding the models within a rolling conformal prediction framework restored near-nominal coverage across multiple confidence levels while allowing interval widths to adapt dynamically to changing volatility regimes. Robustness analyses, including walk-forward validation, stress-regime evaluation, and block permutation negative control experiments, indicated that the observed performance was not driven by temporal leakage or alignment artifacts. The results further highlight a trade-off between interval sharpness and tail-risk protection, particularly during extreme market events. Overall, the findings support a shift from return-level prediction toward calibrated uncertainty estimation as a more stable and economically meaningful objective in non-stationary financial environments.

18 pages, 1620 KB  
Article
Adaptive Knowledge Tracing with Dynamic Memory and Reinforcement Learning
by Li Li, Zheng Duan, Zhi Zhou and Lian Liu
Sensors 2026, 26(6), 1878; https://doi.org/10.3390/s26061878 - 17 Mar 2026
Abstract
Accurately assessing students’ knowledge states and dynamically adapting instructional interactions to their cognitive levels are fundamental to optimizing personalized learning. However, conventional knowledge tracing (KT) approaches are constrained by three critical limitations: data sparsity undermines prediction robustness, the neglect of forgetting behavior misrepresents real learning processes, and static knowledge-state modeling fails to capture learners’ dynamic cognitive changes. To overcome these shortcomings, this study proposes DRAKT (Dynamic Reinforcement learning-based Adaptive Knowledge Tracing), a novel model that introduces two key innovations: (1) a Q-learning-based knowledge-state adjustment mechanism, which dynamically updates mastery levels via a reward structure integrated with the Ebbinghaus forgetting curve; and (2) a dynamic memory update module that combines a gated recurrent unit (GRU) with attention-based filtering to capture long-term learning dependencies and suppress irrelevant memory traces. Experiments conducted on three public ASSISTments datasets (2009, 2012, and 2017) demonstrate that DRAKT consistently outperforms state-of-the-art baselines. On ASSISTments2017 and ASSISTments2009, DRAKT achieves AUC scores of 82.08% and 81.47%, respectively, surpassing the second-best model (GKT) by 2.75–6.57 percentage points in AUC and 4.77–5.75 percentage points in accuracy. In practice, DRAKT offers a reliable technical foundation for enabling personalized learning-path recommendation and real-time cognitive adaptation in intelligent educational systems.
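One way to picture the interplay of an Ebbinghaus-style forgetting curve with a reward-driven mastery update is the following toy sketch. The function names, constants, and update rule are illustrative assumptions, not DRAKT's actual equations:

```python
import math

def decay_mastery(mastery, elapsed, stability=5.0):
    """Ebbinghaus-style forgetting: retention decays as exp(-t / S),
    where S is a (hypothetical) memory-stability constant."""
    return mastery * math.exp(-elapsed / stability)

def q_update(mastery, correct, lr=0.2):
    """Reward-driven adjustment: move mastery toward 1 after a correct
    answer and toward 0 after an incorrect one (a TD-style step)."""
    target = 1.0 if correct else 0.0
    return mastery + lr * (target - mastery)

# Simulated practice history: (days since last attempt, answered correctly)
m = 0.5
for gap, correct in [(1, True), (3, True), (10, False), (2, True)]:
    m = decay_mastery(m, gap)   # forget during the gap
    m = q_update(m, correct)    # then learn from the new attempt
```

The long 10-day gap erodes most of the accumulated mastery before the incorrect attempt, illustrating why modeling forgetting matters for realistic knowledge tracing.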

17 pages, 4808 KB  
Article
Predicting Groundwater Depth Using Historical Data Trend Decomposition: Based on the VMD-LSTM Hybrid Deep Learning Model
by Jie Yue, Hong Guo, Deng Pan, Huanxiang Wang, Yawen Xin, Furong Yu, Yingying Shao and Rui Dun
Water 2026, 18(6), 689; https://doi.org/10.3390/w18060689 - 15 Mar 2026
Abstract
Groundwater is a critical natural and strategic economic resource, and the accurate prediction of groundwater depth dynamics is essential for the rational development and utilization of water resources. However, under the combined influence of climate variability, human activities, and complex hydrogeological conditions, groundwater level time series exhibit strong nonlinear and non-stationary characteristics, posing great challenges to the accurate prediction of groundwater level dynamics. Most existing prediction models rely on sufficient hydro-meteorological and exploitation data that are difficult to obtain in water-scarce regions, or fail to effectively decouple the multi-scale features of non-stationary groundwater level signals, resulting in limited prediction accuracy and insufficient generalization ability. To address these research gaps, this study takes Zhengzhou, a typical water-deficient city in the Yellow River Basin, as the study area, and proposes a hybrid deep learning framework combining Variational Mode Decomposition (VMD) and a Long Short-Term Memory (LSTM) neural network for predicting shallow and intermediate-deep groundwater level changes. Kolmogorov–Arnold Networks (KANs) and Gated Recurrent Units (GRUs) are selected as benchmark models to verify the superior performance of the proposed framework. In this framework, the non-stationary groundwater level signal is adaptively decomposed into Intrinsic Mode Functions (IMFs) with distinct frequency characteristics via VMD. An independent LSTM model is constructed for each IMF to capture its unique temporal variation pattern, and the final groundwater level prediction is obtained by linearly reconstructing the predicted results of all IMFs. The results show that the coefficient of determination (R²) of the VMD-LSTM model exceeds 0.90 for all monitoring datasets, with low Mean Absolute Error (MAE) and Mean Squared Error (MSE). It significantly outperforms the benchmark models in handling nonlinear and non-stationary time series features. Using only historical groundwater level data as input, the proposed framework effectively overcomes the limitation of insufficient driving variables in data-scarce regions and fully explores the multi-scale evolution of groundwater dynamics through the synergistic effect of multi-scale decomposition and deep learning. The method presented in this study provides a novel and reliable technical approach for groundwater level prediction in water-deficient and data-limited areas, and also offers scientific support for the rational management and sustainable utilization of regional groundwater resources. Future research will incorporate driving factors such as meteorology and exploitation to further improve the model’s ability to capture abrupt changes in groundwater level dynamics.
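The decompose-predict-reconstruct shape of such frameworks can be sketched as follows, with a simple FFT band split standing in for VMD and a persistence forecast standing in for each per-IMF LSTM. Both stand-ins are simplifications for illustration; only the linear reconstruction step mirrors the description above:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(512)
signal = (np.sin(2 * np.pi * t / 64)          # slow seasonal component
          + 0.5 * np.sin(2 * np.pi * t / 8)   # fast component
          + 0.05 * rng.standard_normal(512))  # noise

# 1) Decompose into frequency-band components (crude stand-in for VMD,
#    which instead solves a variational problem for the mode centers).
spec = np.fft.rfft(signal)
bands = [(0, 20), (20, 80), (80, len(spec))]
imfs = []
for lo, hi in bands:
    masked = np.zeros_like(spec)
    masked[lo:hi] = spec[lo:hi]
    imfs.append(np.fft.irfft(masked, n=len(signal)))

# 2) Predict each component separately (persistence as the LSTM stand-in).
component_preds = [imf[-1] for imf in imfs]

# 3) Reconstruct the forecast as the linear sum of component predictions.
forecast = sum(component_preds)
```

Because the bands partition the spectrum, the components sum back to the original signal exactly, which is what makes the final linear reconstruction step well defined.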

21 pages, 1986 KB  
Article
Environmental Performance of Chlorella sp.-Based Phytoremediation Across Multiple Wastewater Scenarios: A Comparative Life Cycle Assessment
by Janet B. García-Martínez, Laura T. Ríos Niño, Lizeth N. Saavedra Gómez, Crisóstomo Barajas-Ferreira, Antonio Zuorro and Andrés F. Barajas-Solano
Environments 2026, 13(3), 155; https://doi.org/10.3390/environments13030155 - 13 Mar 2026
Abstract
This study assesses the environmental performance of three wastewater treatment setups through an attributional, gate-to-gate life cycle assessment (functional unit: 1 m³ of treated wastewater): (Sc1) a traditional municipal wastewater treatment plant, (Sc2) an aquaculture recirculation system using microalgae, and (Sc3) a domestic system combining UASB pretreatment with microalgae polishing. Inventory data were analyzed in SimaPro with ReCiPe 2016 Midpoint (Hierarchist) across seven impact categories. Robustness was tested through sensitivity analyses (±20%) of power consumption and influent characteristics, as well as an additional scenario exploring the offset of methane-recovery electricity. The global warming impact remained consistent across scenarios, ranging from 60.5 to 65.1 kg CO2-eq·m⁻³, indicating no significant difference within the operational parameters. In most categories, power consumption and influent-related burdens were the main contributors, while the impacts from flocculants and microalgae inoculum were minimal. Sc3 showed a lower freshwater eutrophication potential compared to Sc1 and Sc2 (0.028 vs. approximately 0.049 kg P-eq·m⁻³). Normalization highlighted human carcinogenic toxicity and aquatic ecotoxicity as key impact categories. The methane-offset scenario caused only slight changes at low CH4 outputs, suggesting that energy recovery depends on context.

24 pages, 504 KB  
Article
Feasibility Study of CUDA-Accelerated Homomorphic Encryption and Benchmarking on Consumer-Grade and Embedded GPUs
by Volodymyr Dubetskyy and Maria-Dolores Cano
Big Data Cogn. Comput. 2026, 10(3), 79; https://doi.org/10.3390/bdcc10030079 - 6 Mar 2026
Abstract
Fully Homomorphic Encryption (FHE) provides strong data confidentiality during computation but often suffers from high latency on Central Processing Units (CPUs). This study evaluates Graphics Processing Unit (GPU) acceleration for modern FHE libraries across a laptop (NVIDIA GTX 1650 Ti), a server (NVIDIA RTX 4060), and a Jetson Nano 2 GB embedded GPU. We benchmark key generation, arithmetic operations, Boolean-gate evaluation, and scheme-specific tasks such as relinearization and key switching, using library-provided benchmarks with an explicit baseline (operation scope, timing boundaries, and parameter tuples). Moreover, we compare GPU-native libraries (NuFHE, Phantom-FHE, and Troy-Nova) with CPU-oriented ones (Microsoft SEAL, HElib, OpenFHE, Cupcake, and TFHE-rs). Results show GPUs deliver significant speedups for targeted operations. For example, NuFHE’s NVIDIA CUDA (Compute Unified Device Architecture) backend achieves about 1.4× faster Boolean-gate evaluation on the laptop and 3.4× faster on the server compared to its OpenCL backend. Likewise, RLWE (Ring Learning With Errors)-based schemes (BFV, CKKS, and BGV) see marked gains for polynomial arithmetic such as the Number Theoretic Transform (NTT) when executed via Phantom-FHE. However, attempts to add CUDA support to Microsoft SEAL reveal four main challenges: high-precision modular arithmetic on GPUs, sequential dependencies in SEAL’s design, limited GPU memory, and complex build-system changes. In light of these findings, we propose revised guidelines for GPU-first FHE libraries and practical recommendations for deploying high-throughput, privacy-preserving solutions on modern GPUs.
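The polynomial arithmetic these GPU backends accelerate centers on the number theoretic transform: a modular DFT that turns polynomial multiplication into pointwise products. A textbook O(n²) version over a toy modulus shows the principle; production libraries use O(n log n) butterfly networks and vastly larger parameters:

```python
def ntt(a, omega, q):
    """Direct O(n^2) number theoretic transform mod q: the modular
    analogue of the DFT, where omega is a primitive n-th root of
    unity mod q. Used to multiply polynomials via pointwise products."""
    n = len(a)
    return [sum(a[j] * pow(omega, i * j, q) for j in range(n)) % q
            for i in range(n)]

def intt(A, omega, q):
    """Inverse transform: same sum with omega^-1, scaled by n^-1 mod q."""
    n = len(A)
    inv_n = pow(n, -1, q)
    inv_w = pow(omega, -1, q)
    return [(inv_n * sum(A[j] * pow(inv_w, i * j, q) for j in range(n))) % q
            for i in range(n)]

# Cyclic convolution of two length-8 polynomials mod q = 17, omega = 2
# (2 has multiplicative order 8 mod 17, so it is a primitive 8th root).
q, omega = 17, 2
a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 0, 0, 0, 0, 0]
A, B = ntt(a, omega, q), ntt(b, omega, q)
c = intt([x * y % q for x, y in zip(A, B)], omega, q)
```

The pointwise-multiply step is embarrassingly parallel, which is why the NTT is such a natural fit for the GPU backends benchmarked in this paper.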
(This article belongs to the Section Big Data)

28 pages, 1396 KB  
Article
Environmental–Visual Fusion for Proactive Tomato Late Blight Management in Protected Horticulture
by Puxing Gao, Peigen Yang, Tangji Ke, Saiwei Wang, Yulong Wang, Fengman Xu and Yihong Song
Horticulturae 2026, 12(3), 299; https://doi.org/10.3390/horticulturae12030299 - 3 Mar 2026
Abstract
In protected horticultural production, tomato late blight shows strong environmental inducibility, with a short latent period, rapid risk accumulation, and a limited control window, which challenges conventional post-event disease monitoring. To address this, a tomato late blight risk perception and predictive control approach for protected production is proposed, integrating deep temporal modeling of environmental factors, visual symptom perception, and risk-driven greenhouse control to enable prospective assessment and proactive intervention. Based on disease mechanisms and real greenhouse conditions, an artificial intelligence (AI) framework covering perception, prediction, and regulation is constructed, moving beyond reliance on visible symptoms alone. Long-term evolution of key variables, including temperature, air humidity, leaf wetness, and light intensity, is modeled using deep temporal networks, while early weak lesions and subtle texture changes are captured by visual models. Cross-modal fusion in a unified risk space generates continuous risk scores to drive greenhouse regulation. Experiments on a multimodal dataset from a real greenhouse in Bayannur, Inner Mongolia, show that the proposed method outperforms vision-based and environment-based baselines in recognition and risk prediction. It achieves about 0.95 accuracy, 0.94 F1-score, and over 0.97 area under the receiver operating characteristic curve (AUC), while providing more than 20 h of early warning before disease onset. In environmental modeling, the deep temporal model consistently surpasses threshold-based methods, logistic regression, and long short-term memory/gated recurrent unit (LSTM/GRU) baselines in risk lead time, false alert rate, and prediction stability.
(This article belongs to the Special Issue Artificial Intelligence in Horticulture Production)

30 pages, 716 KB  
Article
Spectral Robustness Mixer: Cross-Scale Neck for Robust No-Reference Image Quality Assessment
by Bader Rasheed, Anastasia Antsiferova and Dmitriy Vatolin
Technologies 2026, 14(3), 145; https://doi.org/10.3390/technologies14030145 - 28 Feb 2026
Abstract
No-reference image quality assessment (NR-IQA) models achieve high correlation with human mean opinion scores (MOS) on clean benchmarks, yet recent work shows they can be highly vulnerable to small adversarial perturbations that severely degrade ranking consistency, including in black-box settings. We introduce the Spectral Robustness Mixer (SRM), a lightweight neck inserted between an NR-IQA backbone and regression head, designed to reduce adversarial sensitivity without changing the dataset, label format, or target metric. SRM couples (i) deep-to-shallow cross-scale fusion via a Nyström low-rank attention surrogate, (ii) ridge-conditioned landmark kernels with ridge regularization, solved via numerically stable small-matrix factorization (SVD/LU) to improve conditioning, and (iii) variance-aware entropy-regularized fusion gates with a bounded gain cap to limit gradient amplification. We evaluate SRM on TID2013 and KonIQ-10k under a white-box ℓ∞/ℓ2 attack ensemble that includes per-image regression objectives and a correlation-aware pairwise inversion objective (a ranking-inspired surrogate for correlation inversion), with expectation-over-transformation (EOT) and anti-gradient masking checks. At ϵ=4/255 (ℓ∞), SRM improves worst-case robust Spearman’s rank-order correlation coefficient (SROCC; defined as the minimum over our fixed attack ensemble) by an absolute 0.06–0.08 SROCC points (i.e., correlation-coefficient units, not percentage gain) across datasets/backbones, while keeping clean SROCC within 0.00–0.01 of the baseline. We observe similar trends for the Pearson linear correlation coefficient (PLCC).
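The Nyström low-rank surrogate at the heart of SRM approximates a full kernel (or attention) matrix from a handful of landmark columns, with ridge conditioning as the abstract mentions. A NumPy sketch under illustrative parameters (the kernel choice, sizes, and ridge value are assumptions, not the paper's settings):

```python
import numpy as np

def rbf(X, Y, gamma=0.1):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 3))                       # 200 points, 3-dim features
landmarks = X[rng.choice(200, size=20, replace=False)]  # 20 landmark rows

C = rbf(X, landmarks)                  # n x m cross-kernel
W = rbf(landmarks, landmarks)          # m x m landmark kernel
ridge = 1e-6 * np.eye(len(landmarks))  # ridge conditioning, as in the SRM neck

# Nystrom identity: K ~= C (W + ridge)^-1 C^T, rank <= m instead of n x n.
K_approx = C @ np.linalg.solve(W + ridge, C.T)
err = np.linalg.norm(rbf(X, X) - K_approx) / np.linalg.norm(rbf(X, X))
```

The approximation costs O(nm² + m³) instead of O(n²), which is what makes the attention surrogate cheap enough to sit in a lightweight neck.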
(This article belongs to the Section Information and Communication Technologies)

21 pages, 4844 KB  
Article
Human Activity Recognition in Domestic Settings Based on Optical Techniques and Ensemble Models
by Muhammad Amjad Raza, Nasir Mehmood, Hafeez Ur Rehman Siddiqui, Adil Ali Saleem, Roberto Marcelo Alvarez, Yini Airet Miró Vera and Isabel de la Torre Díez
Sensors 2026, 26(5), 1516; https://doi.org/10.3390/s26051516 - 27 Feb 2026
Abstract
Human activity recognition (HAR) is essential in many applications, such as smart homes, assisted living, healthcare monitoring, rehabilitation, physiotherapy, and geriatric care. Conventional HAR methods use wearable sensors, e.g., accelerometers and gyroscopes; however, they are limited by sensitivity to sensor placement, user inconvenience, and potential health risks with long-term use. Vision-based optical camera systems provide a non-intrusive alternative, but they are susceptible to lighting variations, occlusions, and privacy concerns. This paper presents an optical method for recognizing human domestic activities based on pose estimation and deep learning ensemble models. In the proposed methodology, skeletal keypoint features are extracted from video data using PoseNet, yielding a privacy-preserving representation that captures key motion dynamics while remaining insensitive to appearance changes. Data were collected from 30 subjects (15 male and 15 female), yielding 2734 activity samples spanning nine daily domestic activities. Six deep learning architectures were evaluated: the Transformer, Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Multilayer Perceptron (MLP), One-Dimensional Convolutional Neural Network (1D CNN), and a hybrid Convolutional Neural Network–Long Short-Term Memory (CNN–LSTM) architecture. The results on the hold-out test set show that the CNN–LSTM architecture achieves an accuracy of 98.78% within our experimental setting. Leave-One-Subject-Out cross-validation further confirms robust generalization across unseen individuals, with CNN–LSTM achieving a mean accuracy of 97.21% ± 1.84% across 30 subjects. The results demonstrate that vision-based pose estimation with deep learning is a useful, precise, and non-intrusive approach to HAR in smart healthcare and home automation systems. Full article
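The pipeline above (PoseNet keypoints → fixed-length sequences → classifier ensemble) can be sketched in a few lines of NumPy. This is an illustrative sketch only: the window length, stride, function names (`make_windows`, `soft_vote`), and the soft-voting fusion rule are assumptions, not the paper's reported settings; PoseNet's 17-joint (x, y, confidence) output format is the only detail taken from the underlying model family.

```python
import numpy as np

def make_windows(keypoints, win=60, stride=30):
    """Slice a (T, 17, 3) PoseNet keypoint stream -- 17 joints, each with
    (x, y, confidence) -- into overlapping fixed-length windows, flattening
    each frame to a 51-dim feature vector for a sequence classifier.
    Window length and stride here are illustrative choices."""
    T = keypoints.shape[0]
    starts = range(0, T - win + 1, stride)
    return np.stack([keypoints[s:s + win].reshape(win, -1) for s in starts])

def soft_vote(prob_list):
    """Average per-class probabilities from several models and take the
    argmax -- a minimal stand-in for the ensemble step described above."""
    avg = np.mean(np.stack(prob_list), axis=0)
    return avg.argmax(axis=-1)
```

Each of the six architectures would consume the same `(num_windows, 60, 51)` tensor, which is what makes a like-for-like ensemble comparison straightforward: the models differ only in how they aggregate the temporal dimension.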
(This article belongs to the Special Issue Optical Sensors: Instrumentation, Measurement and Metrology)

15 pages, 1919 KB  
Article
Use of Energy Derived from Photovoltaic Panels in the Production of Polymer Flocculant
by Wioletta M. Bajdur, Maria Włodarczyk-Makuła and Tomasz Kamizela
Energies 2026, 19(5), 1197; https://doi.org/10.3390/en19051197 - 27 Feb 2026
Viewed by 308
Abstract
This study evaluates the environmental footprint of producing a polymer flocculant synthesised from phenol–formaldehyde resin waste (novolak T) at a quarter-technical scale, with electricity supply assumed from photovoltaic (PV) generation. A cradle-to-gate life cycle assessment was performed in SimaPro Developer v9.4 using the Environmental Footprint (EF) 3.0 method and ecoinvent datasets. The functional unit was 100 kg of the sodium salt of the sulfonic derivative of novolak T. The characterization results indicate a climate change impact of 170.1 kg CO2 eq and an acidification impact of 5.99 mol H+ eq per functional unit. Hotspot analysis shows that process chemicals dominate most impact categories: sulphuric acid production drives acidification and several air-emission-related categories, while sodium carbonate is a major contributor to toxicity- and eutrophication-related indicators. In contrast, electricity has a marginal contribution across categories. Recycling of novolak waste provides a strong compensatory credit, leading to net negative results in selected categories, including resource use, fossils (−5.02 × 10³ MJ). Overall, the results indicate that optimizing upstream supply chains and reducing process-reagent consumption are the primary levers for shrinking the environmental footprint of this waste-derived flocculant. Full article
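Because all impacts are reported per functional unit (100 kg of product), scaling them to another production mass is a simple proportional calculation. The sketch below uses only the three figures quoted in the abstract; the category list is illustrative, not the full EF 3.0 set, and the function name is hypothetical.

```python
# Impacts per functional unit (FU = 100 kg of flocculant), from the abstract.
FU_KG = 100.0
IMPACTS_PER_FU = {
    "climate change (kg CO2 eq)": 170.1,
    "acidification (mol H+ eq)": 5.99,
    "resource use, fossils (MJ)": -5.02e3,  # net negative: recycling credit
}

def impacts_for_mass(mass_kg):
    """Linearly scale per-FU characterization results to a given mass."""
    scale = mass_kg / FU_KG
    return {k: v * scale for k, v in IMPACTS_PER_FU.items()}
```

For example, at 1 kg of product the climate change impact scales to 1.701 kg CO2 eq, which is the per-kilogram intensity implied by the reported per-FU value.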
(This article belongs to the Section B: Energy and Environment)
