Search Results (9,564)

Search Parameters:
Keywords = image sensor

24 pages, 2150 KB  
Article
Non-Destructive Freshness Assessment of Atlantic Salmon (Salmo salar) via Hyperspectral Imaging and an SPA-Enhanced Transformer Framework
by Zhongquan Jiang, Yu Li, Mincheng Xie, Hanye Zhang, Haiyan Zhang, Guangxin Yang, Peng Wang, Tao Yuan and Xiaosheng Shen
Foods 2026, 15(4), 725; https://doi.org/10.3390/foods15040725 (registering DOI) - 15 Feb 2026
Abstract
Monitoring the freshness of Salmo salar within cold chain logistics is paramount for ensuring food safety. However, conventional physicochemical and microbiological assays are impeded by inherent limitations, including destructiveness and significant time latency, rendering them inadequate for the real-time, non-invasive inspection demands of modern industry. Here, we present a novel detection framework synergizing hyperspectral imaging (400–1000 nm) with the Transformer deep learning architecture. Through a rigorous comparative analysis of twelve preprocessing protocols and four feature wavelength selection algorithms (Lasso, Genetic Algorithm, Successive Projections Algorithm, and Random Frog), prediction models for Total Volatile Basic Nitrogen (TVB-N) and Total Viable Count (TVC) were established. Furthermore, the capacity of the Transformer to capture long-range spectral dependencies was systematically investigated. Experimental results demonstrate that the model integrating Savitzky-Golay (SG) smoothing with the Transformer yielded optimal performance across the full spectrum, achieving determination coefficients (R2) of 0.9716 and 0.9721 for the Prediction Sets of TVB-N and TVC, respectively. Following the extraction of 30 characteristic wavelengths via the Successive Projections Algorithm (SPA), the streamlined model retained exceptional predictive precision (R2 ≥ 0.95) while enhancing computational efficiency by a factor of approximately six. This study validates the superiority of attention-mechanism-based deep learning algorithms in hyperspectral data analysis. These findings provide a theoretical foundation and technical underpinning for the development of cost-effective, high-efficiency portable multispectral sensors, thereby facilitating the intelligent transformation of the aquatic product supply chain. Full article
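As a concrete illustration of the Savitzky-Golay (SG) smoothing step mentioned in the abstract above, here is a minimal pure-numpy sketch; `savgol_smooth`, the window size, and the polynomial order are illustrative choices, not the settings used in the paper.

```python
import numpy as np

def savgol_coeffs(window, polyorder):
    """Least-squares smoothing weights for the window's centre point."""
    half = window // 2
    # Vandermonde basis 1, x, x^2, ... on symmetric integer offsets
    A = np.vander(np.arange(-half, half + 1), polyorder + 1, increasing=True)
    # row 0 of the pseudo-inverse evaluates the fitted polynomial at x = 0
    return np.linalg.pinv(A)[0]

def savgol_smooth(spectrum, window=7, polyorder=2):
    """Savitzky-Golay smoothing of a 1-D spectrum, with edge padding."""
    c = savgol_coeffs(window, polyorder)
    half = window // 2
    padded = np.pad(np.asarray(spectrum, float), half, mode="edge")
    return np.convolve(padded, c[::-1], mode="valid")
```

For real work, `scipy.signal.savgol_filter` implements the same idea (including derivative filters); the sketch above covers plain smoothing only.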

30 pages, 12009 KB  
Article
Comparison of CNN-Based Image Classification Approaches for Implementation of Low-Cost Multispectral Arcing Detection
by Elizabeth Piersall and Peter Fuhr
Sensors 2026, 26(4), 1268; https://doi.org/10.3390/s26041268 (registering DOI) - 15 Feb 2026
Abstract
Camera-based sensing has benefited in recent years from developments in machine learning data processing methods, as well as improved data collection options such as Unmanned Aerial Vehicle (UAV)-mounted sensors. However, cost considerations, both for the initial purchase of sensors as well as updates, maintenance, or potential replacement if damaged, can limit adoption of more expensive sensing options for some applications. To evaluate more affordable options with less expensive, more available, and more easily replaceable hardware, we examine the use of machine learning-based image classification with custom datasets, utilizing deep learning-based image classification and ensemble models for sensor fusion. Using the same models for each camera to reduce technical overhead, we showed that, for a highly representative training dataset, camera-based detection can succeed at detecting electrical arcing. We also use multiple validation datasets, based on conditions expected to be of varying difficulty, to evaluate custom data. These results show that ensemble models of different data sources can mitigate risks from gaps in training data, though the system will be less redundant in those cases unless other precautions are taken. We found that with good-quality custom datasets, data fusion models can be used without design specialization for the specific cameras employed, allowing less specialized, more accessible equipment to serve as multispectral camera components. This approach can provide an alternative to expensive sensing equipment for applications in which lower-cost or more easily replaceable sensing equipment is desirable. Full article
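The per-camera ensemble ("soft-voting") fusion described above can be sketched as follows; the function names and the uniform weighting are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - np.max(logits, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(per_camera_logits, weights=None):
    """Fuse per-camera class scores by (weighted) probability averaging."""
    probs = np.stack([softmax(np.asarray(l, float)) for l in per_camera_logits])
    w = np.ones(len(probs)) if weights is None else np.asarray(weights, float)
    fused = np.average(probs, axis=0, weights=w)
    return int(np.argmax(fused)), fused
```

A call such as `ensemble_predict([rgb_logits, thermal_logits])` returns the fused label even when one camera is uncertain, which is the gap-mitigation effect the abstract describes.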
(This article belongs to the Section Sensing and Imaging)
33 pages, 4781 KB  
Article
Modeling Multi-Sensor Daily Fire Events in Brazil: The DescrEVE Relational Framework for Wildfire Monitoring
by Henrique Bernini, Fabiano Morelli, Fabrício Galende Marques de Carvalho, Guilherme dos Santos Benedito, William Max dos Santos Silva Silva and Samuel Lucas Vieira de Melo
Remote Sens. 2026, 18(4), 606; https://doi.org/10.3390/rs18040606 (registering DOI) - 14 Feb 2026
Abstract
Wildfire monitoring in tropical regions requires robust frameworks capable of transforming heterogeneous satellite detections into consistent, event-level information suitable for decision support. This study presents the DescrEVE Fogo (Descrição de Eventos de Fogo) framework, a relational and scalable system that models daily fire events in Brazil by integrating Advanced Very High Resolution Radiometer (AVHRR), Moderate-Resolution Imaging Spectroradiometer (MODIS), and Visible Infrared Imaging Radiometer Suite (VIIRS) active-fire detections within a unified Structured Query Language (SQL)/PostGIS environment. The framework formalizes a mathematical and computational model that defines and tracks fire fronts and multi-day fire events based on explicit spatio-temporal rules and geometry-based operations. Using database-native functions, DescrEVE Fogo aggregates daily fronts into events and computes intrinsic and environmental descriptors, including duration, incremental area, Fire Radiative Power (FRP), number of fronts, rainless days, and fire risk. Applied to the 2003–2025 archive of the Brazilian National Institute for Space Research (INPE) Queimadas Program, the framework reveals that the integration of VIIRS increases the fraction of multi-front events and enhances detectability of larger and longer-lived events, while the overall regime remains dominated by small, short-lived occurrences. A simple, prototype fire-type rule distinguishes new isolated fire events, possible incipient wildfires, and wildfires, indicating that fewer than 10% of events account for more than 40% of the area proxy and nearly 60% of maximum FRP. For the 2025 operational year, daily ignition counts show strong temporal coherence with the Global Fire Emissions Database version 5 (GFEDv5), albeit with a systematic positive bias reflecting differences in sensors and event definitions. 
A case study of the 2020 Pantanal wildfire illustrates how front-level metrics and environmental indicators can be combined to characterize persistence, spread, and climatic coupling. Overall, the database-native design provides a transparent and reproducible basis for large-scale, near-real-time wildfire analysis in Brazil, while current limitations in sensor homogeneity, typology, and validation point to clear avenues for future refinement and operational integration. Full article

18 pages, 45181 KB  
Article
Illumination Sensor for Reflection-Based Characterisation of Technical Surfaces
by Tim Sliti, Nils F. Melchert, Philipp Middendorf, Kolja Hedrich, Eduard Reithmeier and Markus Kästner
Sensors 2026, 26(4), 1256; https://doi.org/10.3390/s26041256 (registering DOI) - 14 Feb 2026
Abstract
The condition of technical surfaces strongly influences the functionality and lifetime of many components. In particular, the performance of aero-engines can be impaired by increased roughness of the turbine blade surfaces. In this work, an LED- and camera-based illumination sensor is presented for reflection-based characterisation of turbine blade surfaces, with a focus on rapid, wide-area assessment rather than direct roughness measurement. Traditional roughness measurements (e.g., profilometry, confocal microscopy) provide micrometre-scale height information but are limited in working distance and measurement volume, making complete surface coverage time-consuming. The proposed sensor acquires multi-illumination image data, from which an anisotropic BRDF (bidirectional reflectance distribution function) model is fitted on a per-pixel basis to obtain reflectance parameters. Independently, surface roughness parameters (Sa, Sq, Sz, Ssk, Sku) are measured using a confocal laser scanning microscope in accordance with ISO 25178 and used as reference data. Using two turbine blades with contrasting surface conditions (comparatively smooth vs. visibly rough), the study qualitatively investigates whether there are indications of relationships between BRDF model parameters and roughness characteristics. The results show weak relationships with height-based parameters (Sa, Sq, Sz), but clearer trends for distribution parameters (Ssk, Sku) and a good qualitative agreement between directional BRDF parameters and texture orientation. These findings indicate that the illumination sensor provides a complementary, reflectance-based approach for surface condition triage in MRO and QA contexts, highlighting regions that warrant more detailed roughness measurements. Extension of the approach to other component geometries and a comprehensive quantitative analysis of BRDF–roughness relationships are planned for follow-up studies. Full article
(This article belongs to the Special Issue Optical Sensors for Industry Applications)

15 pages, 3893 KB  
Article
Inverse Design of Optical Color Routers with Improved Fabrication Compatibility
by Sushmit Hossain, Zerui Liu, Nishat Tasnim Hiramony, Tinghao Hsu, Himaddri Roy, Hongming Zhang and Wei Wu
Nanomaterials 2026, 16(4), 251; https://doi.org/10.3390/nano16040251 (registering DOI) - 14 Feb 2026
Abstract
We present a Genetic Algorithm (GA)-based inverse design framework for creating a single-layer, fabrication-compatible dielectric nano-patterned surface that enables efficient color routing in both transmissive and reflective optical systems. Unlike traditional multilayer or absorption-based color filters, the proposed structure employs a fabrication-compatible architecture that spatially routes red, green, and blue light into designated output channels, significantly enhancing light utilization and color fidelity. The design process integrates a GA with full-wave finite-difference time-domain (FDTD) simulations to optimize the structural pillar height distribution, using a figure of merit that simultaneously maximizes optical efficiency and minimizes spectral crosstalk. For CMOS image sensor-scale designs, the nano-patterned surface achieved peak optical efficiencies of 76%, 72%, and 78% for blue, green, and red channels, respectively, with an average efficiency of 75.5%. Parametric studies further revealed the dependence of performance on pillar geometry, refractive index, and unit cell scaling, providing practical design insights for scalable fabrication using nanoimprint or grayscale lithography. Extending the approach to reflective displays, we demonstrate tunable-mirror-based architectures that emulate electrophoretic microcapsules, achieving efficient color reflection and an expanded color gamut beyond the sRGB standard. This single-layer, inverse-designed nano-patterned surface offers a high-performance and fabrication-ready solution for compact, energy-efficient imaging and display technologies. Full article

18 pages, 2109 KB  
Article
An FPGA-Based YOLOv5n Accelerator for Online Multi-Track Particle Localization
by Zixuan Song, Wangwang Tang, Wendi Deng, Hongxia Wang, Guangming Huang, Haoran Wu, Yueting Guo, Jun Liu, Kai Jin and Zhiyuan Ma
Electronics 2026, 15(4), 810; https://doi.org/10.3390/electronics15040810 - 13 Feb 2026
Abstract
Reliability testing for Single Event Effects (SEEs) requires accurate localization of heavy-ion tracks from projection images. Conventional localization often relies on handcrafted features and geometric fitting, which is sensitive to noise and difficult to accelerate in hardware. This paper presents a lightweight detector based on YOLOv5n that treats charge tracks in Topmetal pixel sensor projections as distinct objects and directly regresses the track angle and intercept, along with bounding boxes, in a single forward pass. On a synthetic dataset, the model achieves a precision of 0.9626 and a recall of 0.9493, with line-parameter errors of 0.3930° in angle and 0.4842 pixels in intercept. On experimental krypton beam data, the detector reaches a precision of 0.92 and a recall of 0.96, with a position resolution of 52.05 μm. We further deploy the model on an Xilinx Alveo U200, achieving an average per-frame accelerator latency of 3.1 ms while preserving measurement quality. This approach enables accurate, online track localization for SEE monitoring on Field-Programmable Gate Array (FPGA) platforms. Full article
(This article belongs to the Section Industrial Electronics)

23 pages, 5390 KB  
Article
A Metrologically Validated Cost-Effective Solution for Laboratory Measurement of Long-Term Deformations in Construction Materials
by Ahmad Fathi, Luís Lages Martins, João M. Pereira, Graça Vasconcelos and Miguel Azenha
Appl. Sci. 2026, 16(4), 1866; https://doi.org/10.3390/app16041866 - 13 Feb 2026
Abstract
Investigating the long-term performance of building materials, such as drying shrinkage, moisture expansion, creep, and others, usually requires long-lasting tests with a high number of specimens. Given the initial costs, required data acquisition systems, and the time allocated, conventional sensors like LVDTs become costly for such long-term experimental studies. This article proposes an innovative cost-effective solution combining optical microscopy imaging, 3D printed sliding rulers, and Python-based artificial vision to overcome these limitations. The 3D printed rulers establish a local physical reference frame, while the artificial vision system uses contour detection and point tracking of optical targets to quantify displacements. Unlike continuous monitoring systems, the proposed solution utilises a discontinuous point-tracking approach, allowing a single USB microscope to monitor an unlimited number of specimens while maintaining the possibility for moisture exchange between the material surface and the environment. The system was metrologically validated against a laser interferometer, achieving an expanded instrumental uncertainty of 0.0042 mm (4.2 µm), determined through strict calibration. These results demonstrate that the proposed solution delivers accuracy comparable to conventional sensors but with significantly higher scalability and lower cost, making it highly suitable for extensive long-term experimental programmes. Full article
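The expanded instrumental uncertainty quoted above follows the standard U = k·u_c construction (coverage factor k = 2 for roughly 95 % coverage). The component values below are invented purely to show the arithmetic; they are not the paper's uncertainty budget.

```python
import math

# hypothetical uncertainty components (mm), combined in quadrature
u_components = [0.0015, 0.0012, 0.0008]
u_c = math.sqrt(sum(u ** 2 for u in u_components))  # combined standard uncertainty
U = 2.0 * u_c  # expanded uncertainty, coverage factor k = 2
```

The paper's 0.0042 mm figure is an expanded uncertainty of this kind, determined through calibration against a laser interferometer rather than from a component budget like the toy one above.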
(This article belongs to the Special Issue Digital Advancements in Civil Engineering and Construction)

67 pages, 13903 KB  
Article
A Multi-Sensor Framework for Methane Detection and Flux Estimation with Scale-Aware Plume Segmentation and Uncertainty Propagation from High-Resolution Spaceborne Imaging Spectrometers
by Alvise Ferrari, Valerio Pampanoni, Giovanni Laneve, Raul Alejandro Carvajal Tellez and Simone Saquella
Methane 2026, 5(1), 10; https://doi.org/10.3390/methane5010010 - 13 Feb 2026
Abstract
Methane is the second most important contributor to global warming, and monitoring super-emitters from space is critical for climate mitigation. Despite the advancements in hyperspectral remote sensing, comparing methane observations across diverse imaging spectrometers remains a challenging task. Different retrieval algorithms, plume segmentation techniques and uncertainty treatments make it very hard to perform fair comparisons between different products. To overcome these difficulties, this study presents HyGAS (Hyperspectral Gas Analysis Suite), a unified, open-source framework for sensor-agnostic methane retrieval and flux estimation. Starting from the established clutter-matched-filter (CMF) formalism and a physical calibration in concentration–path-length units (ppm·m), we propagate both instrument noise and surface-driven background variability consistently from methane enhancement to Integrated Mass Enhancement (IME) and flux. The framework further includes a spectrally matched background-selection strategy, scale-aware segmentation with fixed physical criteria across resolutions, and emission-rate estimation via an IME–Ueff approach informed by Large Eddy Simulation (LES). We demonstrate the framework on near-simultaneous observations of landfills and gas infrastructure in Argentina, Turkmenistan, and Pakistan, spanning Level-1 radiance workflows (PRISMA, EnMAP, Tanager-1) and Level-2 methane products (EMIT, GHGSat). The standardised chain enables systematic inter-comparison of methane enhancement products and reduces methodological bias, supporting robust multi-mission assessment and future global monitoring. Full article
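The matched-filter enhancement at the core of such retrievals has a compact closed form, α(x) = (x − μ)ᵀ Σ⁻¹ t / (tᵀ Σ⁻¹ t). Below is a minimal numpy sketch of a generic matched filter, without HyGAS's ppm·m calibration or background-selection strategy; the function name and the ridge regularisation are illustrative assumptions.

```python
import numpy as np

def matched_filter(pixels, target):
    """Per-pixel matched-filter score alpha = (x - mu)^T S^-1 t / (t^T S^-1 t).

    pixels: (n_pix, n_bands) spectra; target: (n_bands,) gas signature.
    A pixel containing a*target scores ~a, i.e. the score is expressed
    in units of the target signature's amplitude."""
    mu = pixels.mean(axis=0)
    X = pixels - mu
    cov = X.T @ X / (len(pixels) - 1)
    # small ridge keeps the covariance invertible for near-flat backgrounds
    cov += 1e-6 * np.trace(cov) / cov.shape[0] * np.eye(cov.shape[0])
    ci_t = np.linalg.solve(cov, target)
    return X @ ci_t / (target @ ci_t)
```

Because of the denominator, a pixel whose spectrum contains a·t scores approximately a, which is what makes the output interpretable in the target's physical units once t is calibrated.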

23 pages, 2606 KB  
Article
A Proof-of-Concept Framework Integrating ML-Based MRI Segmentation with FEM for Transfemoral Residual Limb Modelling
by Ryota Sayama, Yukio Agarie, Hironori Suda, Hiroshi Otsuka, Kengo Ohnishi, Shinichiro Kon, Akihiko Hanahusa, Motoki Takagi and Shinichiro Yamamoto
Prosthesis 2026, 8(2), 16; https://doi.org/10.3390/prosthesis8020016 - 13 Feb 2026
Abstract
Background: Accurate evaluation of pressure distribution at the socket–limb interface is essential for improving prosthetic fit and comfort in transfemoral amputees. This study aimed to develop a proof-of-concept framework that integrates machine learning–based segmentation with the finite element method (FEM) to explore the feasibility of an initial workflow for residual-limb analysis during socket application. Methods: MRI data from a transfemoral amputee were processed using a custom image segmentation algorithm to extract adipose tissue, femur, and ischium, achieving high F-measure scores. The segmented tissues were reconstructed into 3D models, refined through outlier removal and surface smoothing, and used for FEM simulations in LS-DYNA. Pressure values were extracted at nine sensor locations and compared with experimental measurements to provide a preliminary qualitative assessment of model behaviour. Results: The results showed consistent polarity between measured and simulated values across all points. Moderate correspondence was observed at eight low-pressure locations, whereas a substantial discrepancy occurred at the ischial tuberosity (IS), the primary load-bearing site. This discrepancy likely reflects the combined influence of geometric deviation in the reconstructed ischium and the non-physiological medial boundary condition required to prevent unrealistic tissue displacement. This limitation indicates that the current formulation does not support reliable quantitative interpretation at clinically critical locations. Conclusions: Overall, the proposed framework provides an initial demonstration of the methodological feasibility of combining automated anatomical modeling with FEM for exploratory pressure evaluation, indicating that such an integrated pipeline may serve as a useful foundation for future development. 
While extensive refinement and validation are required before any quantitative or clinically meaningful application is possible, this work represents an early step toward more advanced computational investigations of transfemoral socket–limb interaction. Full article
(This article belongs to the Special Issue Finite Element Analysis in Prosthesis and Orthosis Research)

21 pages, 7192 KB  
Article
Expectation–Maximization Method for RGB-D Camera Calibration with Motion Capture System
by Jianchu Lin, Guangxiao Du, Yugui Zhang, Yiyan Zhao, Qian Xie, Jian Yao and Ashim Khadka
Photonics 2026, 13(2), 183; https://doi.org/10.3390/photonics13020183 - 12 Feb 2026
Abstract
Camera calibration is an essential research direction in photonics and computer vision. It standardizes camera data through intrinsic and extrinsic parameters. Recently, RGB-D cameras have become important devices by supplying depth information, and they are commonly divided into three kinds of mechanisms: binocular, structured light, and Time of Flight (ToF). However, the different mechanisms make calibration methods complex and hard to unify. Lens distortion, parameter loss, and sensor degradation can even cause calibration to fail. To address these issues, we propose a camera calibration method based on the Expectation–Maximization (EM) algorithm. A unified model of latent variables is established for the different kinds of cameras. In the EM algorithm, the E-step estimates the hidden intrinsic parameters of the cameras, while the M-step learns the distortion parameters of the lens. In addition, the depth values are calculated by a spatial geometric method and calibrated using the least squares method under an optical motion capture system. Experimental results demonstrate that our method can be directly employed in the calibration of monocular and binocular RGB-D cameras, reducing image calibration errors by 0.6–1.2% compared with least squares, Levenberg–Marquardt, Direct Linear Transform, and Trust Region Reflective methods. The depth error is reduced by 16 to 19.3 mm. Therefore, our method can effectively improve the performance of different RGB-D cameras. Full article
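A least-squares depth calibration against a motion-capture reference, as mentioned above, can be sketched as a simple affine fit; `fit_depth_correction` and the affine model are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fit_depth_correction(d_meas, d_ref):
    """Fit d_ref ~ a * d_meas + b by ordinary least squares."""
    d_meas = np.asarray(d_meas, float)
    A = np.column_stack([d_meas, np.ones_like(d_meas)])
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(d_ref, float), rcond=None)
    return a, b

def correct_depth(d_meas, a, b):
    """Apply the fitted correction to raw sensor depths."""
    return a * np.asarray(d_meas, float) + b
```

With the scale `a` and offset `b` fitted once against the reference system, `correct_depth` can then be applied to every subsequent depth frame.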

34 pages, 1614 KB  
Article
Multi-Layered Open Data, Differential Privacy, and Secure Engineering: The Operational Framework for Environmental Digital Twins
by Oleksandr Korchenko, Anna Korchenko, Dmytro Prokopovych-Tkachenko, Mikolaj Karpinski and Svitlana Kazmirchuk
Sustainability 2026, 18(4), 1912; https://doi.org/10.3390/su18041912 - 12 Feb 2026
Abstract
Sustainable urban development increasingly relies on hyperlocal environmental analytics created by smart city platforms that combine stationary and mobile sensors, Earth observations, meteorology, and land-use data. However, accurate spatio-temporal resolution can enable indirect identification and amplify cybersecurity threats. This article proposes a regulatory and technical mapping that implements the General Data Protection Regulation (GDPR) and the Network and Information Security Directive (NIS2) throughout the lifecycle of environmental data—reception, transport, storage, analytics, sharing, and publication. The methods combine doctrinal legal analysis, a scoping review of recent research, formalized compliance modeling, modeling with synthetic city-scale datasets, expert identification, and demonstration of integrated analytics. The demonstration links deep neural anomaly detection (convolutional plus recurrent layers), short-time Fourier transforms of sensor signals, byte-to-image telemetry fingerprints, and protocol event counters, thereby tracing detections to explanatory evidence and to control actions. Deliverables include a matrix aligning lifecycle stages with GDPR principles and rights, as well as with the responsibilities of NIS2; a checklist for assessing the impact on data protection, which takes into account the risks of fairness and stigmatization; a basic set of controls for identification and access, secure design, monitoring, continuity, supplier assurance, and incident reporting; as well as a multi-layered publishing strategy that combines transparency with privacy through aggregation, delayed release, differentiated privacy budgets, and research enclaves. 
The visualization confirms that technical signals can be included in audit-ready reporting and automated response, while the guidelines legally clarify the relevant bases for common use cases such as air quality assurance networks, noise mapping, citizen sensor applications, and mobility and exposure modeling. The effects of the policy emphasize shared services for small municipalities, supply chain security, and ongoing review to counteract the mosaic effect. Overall, the study shows how cities can maximize environmental and social value based on environmental data, while maintaining privacy, sustainability, and equity by design. Full article

23 pages, 2557 KB  
Article
MECFN: A Multi-Modal Temporal Fusion Network for Valve Opening Prediction in Fluororubber Material Level Control
by Weicheng Yan, Kaiping Yuan, Han Hu, Minghui Liu, Haigang Gong, Xiaomin Wang and Guantao Zhang
Electronics 2026, 15(4), 783; https://doi.org/10.3390/electronics15040783 - 12 Feb 2026
Abstract
During fluororubber production, strong material agitation and agglomeration induce severe dynamic fluctuations, irregular surface morphology, and pronounced variations in apparent material level. Under such operating conditions, conventional single-modality monitoring approaches—such as point-based height sensors or manual visual inspection—often fail to reliably capture the true process state. This information deficiency leads to inaccurate valve opening adjustment and degrades material level control performance. To address this issue, valve opening prediction is formulated as a data-driven, control-oriented regression task for material level regulation, and an end-to-end multimodal temporal regression framework, termed MECFN (Multi-Modal Enhanced Cross-Fusion Network), is proposed. The model performs deep fusion of visual image sequences and height sensor signals. A customized Multi-Feature Extraction (MFE) module is designed to enhance visual feature representation under complex surface conditions, while two independent Transformer encoders are employed to capture long-range temporal dependencies within each modality. Furthermore, a context-aware cross-attention mechanism is introduced to enable effective interaction and adaptive fusion between heterogeneous modalities. Experimental validation on a real-world industrial fluororubber production dataset demonstrates that MECFN consistently outperforms traditional machine learning approaches and single-modality deep learning models in valve opening prediction. Quantitative results show that MECFN achieves a mean absolute error of 2.36, a root mean squared error of 3.73, and an R2 of 0.92. These results indicate that the proposed framework provides a robust and practical data-driven solution for supporting valve control and achieving stable material level regulation in industrial production environments. Full article
(This article belongs to the Special Issue AI for Industry)

22 pages, 4393 KB  
Article
Visual–Inertial Fusion-Based Restoration of Image Degradation in High-Dynamic Scenes with Rolling Shutter Cameras
by Jianbin Ye, Cengfeng Luo, Qiuxuan Wu, Yuejun Ye, Shenao Li, Yiyang Chen and Aocheng Li
Sensors 2026, 26(4), 1189; https://doi.org/10.3390/s26041189 - 12 Feb 2026
Abstract
Rolling shutter CMOS cameras are widely used in mobile and embedded vision, but rapid motion and vibration often cause coupled degradations, including motion blur and rolling shutter (RS) geometric distortion. This paper presents a visual–inertial fusion framework that estimates unified motion-related degradation parameters from IMU and image measurements and uses them to restore both photometric and geometric image quality in high-dynamic scenes. We further introduce an exposure-aware deblurring pipeline that accounts for the nonlinear photoelectric conversion characteristics of CMOS sensors, as well as a perspective-consistent RS compensation method to improve geometric consistency under depth–motion coupling. Experiments on real mobile data and public RS-visual–inertial sequences demonstrate improved image quality and downstream SLAM pose accuracy compared with representative baselines. Full article
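The rolling-shutter geometry underlying such compensation is simple to sketch: each image row is exposed at its own timestamp, so under a known image-plane velocity (e.g. estimated from the IMU) a pixel's distortion offset grows linearly with its row index. Below is a minimal constant-velocity sketch of this idea, not the paper's perspective-consistent method; the line-delay and velocity numbers are illustrative assumptions.

```python
def row_timestamp(t_start, row, line_delay):
    """Exposure timestamp of a given image row under rolling shutter readout."""
    return t_start + row * line_delay

def rs_compensate(u, v, velocity_px_s, line_delay, ref_row=0):
    """Shift pixel (u, v) to where it would appear if the whole frame were
    exposed at ref_row's timestamp, assuming constant image-plane velocity."""
    dt = (v - ref_row) * line_delay   # time offset of this row vs. the reference row
    vx, vy = velocity_px_s
    return u - vx * dt, v - vy * dt

# Illustrative numbers: 30 us line delay, camera panning 2000 px/s horizontally.
u_c, v_c = rs_compensate(320.0, 400.0, (2000.0, 0.0), 30e-6)
```

Real pipelines (including the one described above) must additionally handle depth-motion coupling and nonlinear sensor response, which this constant-velocity sketch ignores.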
(This article belongs to the Section Sensors and Robotics)

27 pages, 3227 KB  
Review
A Review of Research on the Applications of Large Models in Each Functional Module of the Entire Rehabilitation Process
by Tingting Bai, Kaiwen Jiang, Yixuan Yu, Shuyan Qie, Congxiao Wang, Boyuan Wang and Wenli Zhang
Future Internet 2026, 18(2), 95; https://doi.org/10.3390/fi18020095 - 12 Feb 2026
Abstract
Population ageing and chronic disease are increasing demand for rehabilitation, while resources remain limited. This review does not report an implemented end-to-end system; instead, it proposes a modular workflow framework for applying large AI foundation models across rehabilitation. Organising the workflow into four stages—assessment, prescription, execution, and monitoring—we summarise recent evidence and highlight the techniques most suitable at each stage. In assessment, multimodal models can enable more continuous and objective functional measurement from heterogeneous sensor and imaging data. In prescription, large language models can support evidence-informed, personalised plan formulation by synthesising guidelines and patient context. In execution, vision–language–sensor models can provide real-time feedback for telerehabilitation and adherence support. In monitoring, longitudinal and cross-setting data integration can facilitate risk prediction and early warning for safety and long-term management. We also discuss practical adaptation options (e.g., parameter-efficient fine-tuning) and propose a clinimetric-oriented evaluation framework to assess validity, reliability, and generalisability. By mapping AI capabilities to concrete workflow tasks, the framework provides a theoretical foundation and roadmap for reproducible research and future translation toward a universal rehabilitation model. Full article
(This article belongs to the Special Issue Artificial Intelligence-Enabled Smart Healthcare)

11 pages, 3464 KB  
Article
Pre-Programming Thermal Sensors Improves Detection During Drone-Based Nocturnal Wildlife Surveys in Warm Weather
by Lori Massey, Aaron M. Foley, Jeremy Baumgardt, Randy W. DeYoung and Humberto L. Perotto-Baldivieso
Drones 2026, 10(2), 127; https://doi.org/10.3390/drones10020127 - 11 Feb 2026
Abstract
Improvements in thermal infrared imaging provide new opportunities for drone-based wildlife surveys. However, thermal sensors can be constrained by ambient temperature and vegetation cover, restricting opportunities to survey during optimal biological seasons. Pre-programming isotherm settings in thermal cameras has the potential to allow surveys during warmer environmental conditions. We evaluated night-time surveys of white-tailed deer (Odocoileus virginianus) using isotherm settings in a 102 ha enclosed property in South Texas during February (winter) and July (summer) 2022. Detection probabilities were 0.84 and 0.65 during winter and summer, respectively. Percent woody cover was 48.1% and 60.7% during these seasons, respectively. The seasonal pattern in detection probabilities met expectations in terms of visibility bias caused by canopy cover. Despite different detection probabilities among seasons, population estimates were similar because distance sampling accounted for visibility bias. The use of isotherm settings allowed us to survey during temperatures previously thought to be too warm for ideal contrast (~21 °C vs. 30 °C), which provides more opportunities to survey during biologically important seasons typically associated with warm temperatures (i.e., fawning and antlerogenesis). We recommend the use of distance sampling methods to evaluate and correct for visibility bias during thermal-based drone surveys because detections of focal species may vary with vegetation. Full article
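The abstract's observation that different detection probabilities can still yield similar population estimates follows from the standard distance-sampling correction: the abundance estimate scales raw detections by the estimated detection probability, so fewer summer detections are scaled up by the lower summer probability. Below is a minimal sketch of that correction; only the detection probabilities 0.84 and 0.65 come from the abstract, and the raw counts are hypothetical.

```python
def abundance_estimate(n_detected, detection_prob):
    """Horvitz-Thompson-style correction: scale raw count by detection probability."""
    return n_detected / detection_prob

# Detection probabilities from the abstract; raw counts are invented for illustration.
winter = abundance_estimate(84, 0.84)   # better contrast, less canopy: more raw detections
summer = abundance_estimate(65, 0.65)   # denser canopy: fewer raw detections
```

With these hypothetical counts, both seasons yield the same corrected estimate, mirroring the paper's finding that distance sampling absorbed the seasonal visibility bias.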
