
Sensors

Sensors is an international, peer-reviewed, open access journal on the science and technology of sensors.
Indexed in PubMed | Quartile Ranking JCR - Q2 (Instruments and Instrumentation | Chemistry, Analytical | Engineering, Electrical and Electronic)

All Articles (73,809)

This study aims to develop a non-contact automated impact-acoustic measurement system (AIAMS) for real-time detection of manufacturing defects in automotive brake calipers, a key component of the Electric Parking Brake (EPB) system. Calipers hold brake pads in contact with discs, and defects caused by repeated loads and friction can lead to reduced braking performance and abnormal vibration and noise. To address this issue, an automated impact hammer and a microphone-based measurement system were designed and implemented. Feature extraction was performed using the Fast Fourier Transform (FFT) and Principal Component Analysis (PCA), followed by defect classification with machine learning algorithms including Support Vector Machine (SVM), k-Nearest Neighbor (KNN), and Decision Tree (DT). Experiments were conducted on five normal and six defective caliper specimens, each subjected to 200 repeated measurements, yielding a total of 2,200 datasets. Twelve statistical and spectral features were extracted, and PCA revealed that Shannon Entropy (SE) was the most discriminative feature. Based on SE-centric feature combinations, the SVM, KNN, and DT models achieved classification accuracies of at least 99.2%/97.5%, 98.8%/98.0%, and 99.2%/96.5% for normal and defective specimens, respectively. Furthermore, GUI-based software (version 1.0.0) was implemented to enable real-time defect identification and visualization. Field tests also showed an average defect classification accuracy of over 95%, demonstrating the system's applicability for real-time quality control.

4 November 2025

Figure: Flowchart of the overall analysis process of impact-acoustic data using AIAMS.
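To make the processing chain in this abstract concrete, here is a minimal Python sketch of FFT-based feature extraction followed by PCA and an SVM classifier, run on synthetic signals. The signal model, sampling rate, feature set, and sample counts are illustrative assumptions, not the authors' actual configuration.

# Sketch of the FFT -> features -> PCA -> SVM flow described above.
# All numeric choices below are illustrative assumptions.
import numpy as np
from scipy.stats import entropy
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def extract_features(signal, fs=48_000):
    """A few statistical/spectral features from one impact recording."""
    spectrum = np.abs(np.fft.rfft(signal))
    p = spectrum / spectrum.sum()            # normalized spectral distribution
    se = entropy(p)                          # Shannon Entropy, the key feature
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    centroid = np.sum(freqs * p)             # spectral centroid
    rms = np.sqrt(np.mean(signal ** 2))
    return np.array([se, centroid, rms, signal.std(), np.abs(signal).max()])

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 400)             # 0 = normal, 1 = defective
X = np.array([extract_features(rng.normal(scale=1.0 + 0.5 * y, size=4096))
              for y in labels])              # toy stand-ins for impact recordings

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
pca = PCA(n_components=3).fit(X_tr)          # reduce/rank the feature set
clf = SVC(kernel="rbf").fit(pca.transform(X_tr), y_tr)
print("test accuracy:", clf.score(pca.transform(X_te), y_te))

In practice the twelve statistical and spectral features and the SE-centric combinations reported in the abstract would replace the toy features used here.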

The agricultural sector remains one of the most hazardous working environments, with viticulture posing particularly high risks due to repetitive manual tasks, pesticide exposure, and machinery operation. This study explores the potential of vision-based Artificial Intelligence (AI) systems to enhance occupational health and safety by evaluating their coherence with human expert assessments. A dataset of 203 annotated images, collected from 50 vineyards in Northern Italy, was analyzed across three domains: manual work activities, workplace environments, and agricultural machinery. Each image was independently assessed by safety professionals and an AI pipeline integrating convolutional neural networks, regulatory contextualization, and risk matrix evaluation. Agreement between AI and experts was quantified using weighted Cohen’s Kappa, achieving values of 0.94–0.96, with overall classification error rates below 14%. Errors were primarily false negatives in machinery images, reflecting visual complexity and operational variability. Statistical analyses, including McNemar and Wilcoxon signed-rank tests, revealed no significant differences between AI and expert classifications. These findings suggest that AI can provide reliable, standardized risk detection while highlighting limitations such as reduced sensitivity in complex scenarios and the need for explainable models. Overall, integrating AI with complementary sensors and regulatory frameworks offers a credible path toward proactive, transparent, and preventive safety management in viticulture and potentially other high-risk agricultural sectors. Furthermore, vision-based AI systems inherently act as optical sensors capable of capturing and interpreting occupational risk conditions. Their integration with complementary sensor technologies—such as inertial, environmental, and proximity sensors—can enhance the precision and contextual awareness of automated safety assessments in viticulture.

4 November 2025
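As a sketch of the agreement analysis described in this abstract, the following Python snippet computes a weighted Cohen's kappa and a McNemar test on simulated ratings. The 0-3 risk scale, the quadratic weighting, and the simulated agreement rate are assumptions for illustration; the study's actual labels and weighting scheme may differ.

# Quantifying AI-vs-expert agreement on simulated per-image risk ratings.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(1)
expert = rng.integers(0, 4, 203)                 # expert risk rating per image (assumed 0-3 scale)
ai = np.where(rng.random(203) < 0.9, expert,     # AI agrees ~90% of the time
              rng.integers(0, 4, 203))

# Quadratic weighting penalizes distant disagreements more than adjacent ones.
kappa = cohen_kappa_score(expert, ai, weights="quadratic")

# McNemar's test on binarized decisions (e.g., "high risk" vs. "not high risk").
hi_e, hi_a = expert >= 2, ai >= 2
table = [[np.sum(hi_e & hi_a), np.sum(hi_e & ~hi_a)],
         [np.sum(~hi_e & hi_a), np.sum(~hi_e & ~hi_a)]]
print(f"weighted kappa = {kappa:.2f}, McNemar p = {mcnemar(table).pvalue:.3f}")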

Transformer–CNN Hybrid Framework for Pavement Pothole Segmentation

  • Tianjie Zhang,
  • Zhen Liu and
  • Bingyan Cui
  • + 2 authors

Pavement surface defects such as potholes pose significant safety risks and accelerate infrastructure deterioration. Accurate and automated detection of such defects requires both advanced sensing technologies and robust deep learning models. In this study, we propose PoFormer, a Transformer–CNN hybrid framework designed for precise segmentation of pavement potholes from heterogeneous image datasets. The architecture leverages the global feature extraction ability of Transformers and the fine-grained localization capability of CNNs, achieving superior segmentation accuracy compared to state-of-the-art models. To construct a representative dataset, we combined open-source images with high-resolution field data acquired using a multi-sensor pavement inspection vehicle equipped with a line-scan camera and infrared/laser-assisted lighting. This sensing system provides millimeter-level resolution and continuous 3D surface imaging under diverse environmental conditions, ensuring robust training inputs for deep learning. Experimental results demonstrate that PoFormer achieves a mean IoU of 77.23% and a mean pixel accuracy of 84.48%, outperforming existing CNN-based models. By integrating multi-sensor data acquisition with advanced hybrid neural networks, this work highlights the potential of 3D imaging and sensing technologies for intelligent pavement condition monitoring and automated infrastructure maintenance.

4 November 2025
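The two metrics this abstract reports, mean IoU and mean pixel accuracy, can be computed as sketched below in Python; the binary pothole/background class layout and the simulated masks are assumptions for illustration, not the paper's evaluation code.

# Per-class IoU and per-class pixel accuracy, averaged over classes.
import numpy as np

def mean_iou_and_pixel_acc(pred, target, num_classes=2):
    """Average per-class IoU and per-class pixel accuracy over the mask pair."""
    ious, accs = [], []
    for c in range(num_classes):
        p, t = pred == c, target == c
        inter, union = np.sum(p & t), np.sum(p | t)
        if union:                        # skip classes absent from both masks
            ious.append(inter / union)
        if t.sum():
            accs.append(np.sum(p & t) / t.sum())
    return np.mean(ious), np.mean(accs)

rng = np.random.default_rng(2)
target = (rng.random((512, 512)) < 0.1).astype(int)    # sparse pothole pixels
pred = target.copy()
flip = rng.random(target.shape) < 0.05                 # simulate 5% pixel errors
pred[flip] = 1 - pred[flip]
miou, mpa = mean_iou_and_pixel_acc(pred, target)
print(f"mIoU = {miou:.4f}, mPA = {mpa:.4f}")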

Eye movement is an important tool for investigating cognition. It also serves as input in human–computer interfaces for assistive technology. It can be measured with camera-based eye tracking and electro-oculography (EOG). EOG does not rely on eye visibility and can be measured even when the eyes are closed. We investigated the feasibility of detecting gaze direction using EOG while the eyes are closed. A total of 15 participants performed a proprioceptive calibration task with open and closed eyes, while their eye movement was recorded with a camera-based eye tracker and with EOG. The calibration was guided by the participants' hand motions following a pattern of felt dots on cardboard. Our cross-correlation analysis revealed reliable temporal synchronization between gaze-related signals and the instructed trajectory across all conditions. Statistical comparison tests and equivalence tests demonstrated that EOG tracking was statistically equivalent to the camera-based eye tracker's gaze direction during the eyes-open condition. Because the camera-based eye-tracking glasses do not support tracking with closed eyes, we evaluated the EOG-based gaze estimates during the eyes-closed trials by comparing them to the instructed trajectory. The results showed that EOG signals, guided by proprioceptive cues, followed the instructed path and achieved significantly greater accuracy than shuffled control data, which represented chance-level performance. This demonstrates the advantage of EOG when camera-based eye tracking is infeasible, and it paves the way for eye-movement input interfaces for blind people, research on eye movement direction when the eyes are closed, and the early detection of diseases.

4 November 2025
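The cross-correlation check this abstract describes can be sketched as follows: estimating the lag between a gaze-related signal and the instructed trajectory. The sampling rate, trajectory shape, noise level, and delay in the Python snippet below are all illustrative assumptions, not the study's recording parameters.

# Estimating the lag between a simulated EOG trace and the instructed path.
import numpy as np
from scipy.signal import correlate, correlation_lags

fs = 250                                          # assumed EOG sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)
instructed = np.sin(2 * np.pi * 0.2 * t)          # slow left-right target path
lag_true = int(0.3 * fs)                          # simulated 300 ms tracking delay
rng = np.random.default_rng(3)
eog = np.roll(instructed, lag_true) + 0.2 * rng.normal(size=t.size)

# Peak of the mean-removed cross-correlation gives the temporal offset.
xcorr = correlate(eog - eog.mean(), instructed - instructed.mean(), mode="full")
lags = correlation_lags(eog.size, instructed.size, mode="full")
best = lags[np.argmax(xcorr)]
print(f"estimated lag: {best / fs * 1000:.0f} ms")  # should be near 300 ms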

News & Conferences

Issues

Open for Submission

Editor's Choice

Reprints of Collections

Advanced Energy Harvesting Technology, Volume II (Reprint)
Editors: Mengying Xie, Kean C. Aw, Junlei Wang, Hailing Fu, Wee Chee Gan

Advanced Energy Harvesting Technology, Volume I (Reprint)
Editors: Mengying Xie, Kean C. Aw, Junlei Wang, Hailing Fu, Wee Chee Gan

Sensors - ISSN 1424-8220 | Creative Commons CC BY license