Search Results (1,030)

Search Parameters:
Keywords = new camera system

22 pages, 4393 KB  
Article
An Open-Source, Low-Cost Solution for 3D Scanning
by Andrei Mateescu, Ioana Livia Stefan, Silviu Raileanu and Ioan Stefan Sacala
Sensors 2026, 26(1), 322; https://doi.org/10.3390/s26010322 - 4 Jan 2026
Viewed by 366
Abstract
With new applications continuously emerging in the fields of manufacturing, quality control and inspection, the need for three-dimensional (3D) scanning solutions suited to industrial environments is increasing. 3D scanning is the process of analyzing one or more objects in order to convert and store their features in a digital format. Given the high cost of industrial 3D scanning solutions, this paper proposes an open-source, low-cost architecture for obtaining a 3D model usable in manufacturing: a linear laser beam is swept across the object via a rotating mirror while a camera captures images, which are then used to extract the object’s dimensions through a technique inspired by laser triangulation. The 3D models of several objects are obtained, analyzed and compared to the dimensions of their real-world counterparts. For the tested objects, the proposed system yields a maximum mean height error of 2.56 mm, a maximum mean length error of 1.48 mm and a maximum mean width error of 1.30 mm on the raw point cloud, with a scanning time of ∼4 s per laser line. Finally, several observations and possible improvements to the proposed solution are discussed.
(This article belongs to the Special Issue Artificial Intelligence and Sensing Technology in Smart Manufacturing)
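The laser-triangulation geometry the abstract describes reduces to a ray–plane intersection: the pixel where the laser line appears back-projects to a camera ray, and the calibrated laser plane fixes the depth along that ray. Below is a minimal sketch of that idea; the intrinsic matrix K and the plane parameters are hypothetical calibration values for illustration, not the authors' actual setup.

```python
import numpy as np

# Hypothetical calibration values for illustration only.
K = np.array([[800.0, 0.0, 320.0],    # pinhole intrinsics (fx, fy, cx, cy)
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
plane_n = np.array([0.0, -0.5, 1.0])  # laser plane normal, camera frame
plane_d = 0.3                         # plane offset: n . X = d

def triangulate_laser_pixel(u, v):
    """Intersect the back-projected ray of pixel (u, v) with the laser plane."""
    ray = np.linalg.solve(K, np.array([u, v, 1.0]))  # ray direction, z = 1
    t = plane_d / (plane_n @ ray)                    # scale where ray meets plane
    return t * ray                                   # 3D point in camera frame

print(triangulate_laser_pixel(350.0, 260.0))  # [x, y, z] in metres
```

Sweeping the mirror changes the plane parameters per laser line; repeating the intersection over all lit pixels accumulates the point cloud.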

18 pages, 853 KB  
Article
Safety in Smart Cities—Automatic Recognition of Dangerous Driving Styles
by Vincenzo Dentamaro, Lorenzo Di Maggio, Stefano Galantucci, Donato Impedovo and Giuseppe Pirlo
Information 2026, 17(1), 44; https://doi.org/10.3390/info17010044 - 4 Jan 2026
Viewed by 174
Abstract
Road safety ranks among the most pressing concerns of present-day urban life, with risky driving the most prevalent cause of road crashes. In this paper, we present an external-camera, video-based model for automatically detecting hazardous driving behavior in smart cities. We addressed the problem with a holistic approach spanning data collection through hazardous driving behavior classification, including zig-zag driving, risky overtaking, and speeding over a pedestrian crossing. Our strategy employs a specially generated dataset covering diverse driving situations under varied traffic conditions and luminosities. We advocate a Multi-Speed Transformer model that operates on vehicle trajectory data at two timescales, capturing near-future actions in the context of extended driving trends. A further contribution is our symbiotic system, which, beyond detecting unsafe driving, also triggers countermeasures through a real-time continuous loop with vehicle systems. Empirical results on the TRAF-derived corpus, which includes 18 videos and 414 labelled trajectory segments, show that the Multi-Speed Transformer reaches 97.5% accuracy and a 93% F1-score under the balanced-training protocol, consistently surpassing comparison baselines, including Temporal Convolutional Networks and Random Forest classifiers, on the same data splits and evaluation metrics. Performance rises to 98.7% accuracy and a 95.5% F1-score with the symbiotic framework. These results confirm the promise of leading-edge neural architectures paired with symbiotic systems for enhancing road safety in smart cities. The system’s ability to detect and mitigate risky driving behavior in real time offers a practical solution for accident prevention without restricting driver autonomy, striking a balance between automatic intervention and passive monitoring.
(This article belongs to the Special Issue AI and Data Analysis in Smart Cities)
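To see why two trajectory timescales help, consider a toy cue: weaving produces large short-term heading variance while the long-term heading stays steady. The sketch below is not the authors' Multi-Speed Transformer — just an illustration, on a synthetic trajectory, of the dual-timescale signal such a model can exploit; the strides and threshold logic are assumptions.

```python
import numpy as np

def headings(xy, stride):
    """Heading angle of the displacement between samples `stride` apart."""
    d = xy[stride:] - xy[:-stride]
    return np.arctan2(d[:, 1], d[:, 0])

def zigzag_score(xy, fast=1, slow=10):
    """Illustrative dual-timescale cue: high short-term heading variance
    with a steady long-term heading suggests zig-zag driving."""
    fast_var = np.var(np.unwrap(headings(xy, fast)))
    slow_var = np.var(np.unwrap(headings(xy, slow)))
    return fast_var - slow_var  # above some threshold -> flag the segment

# Synthetic trajectory: forward motion with lateral oscillation (weaving).
t = np.linspace(0, 20, 200)
xy = np.stack([t, 0.8 * np.sin(2.5 * t)], axis=1)
print(zigzag_score(xy))
```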

18 pages, 3801 KB  
Technical Note
Sedimaging-Based Analysis of Granular Soil Compressibility for Building Foundation Design and Earth–Rock Dam Infrastructure
by Tengteng Cao, Shuangping Li, Zhaogen Hu, Bin Zhang, Junxing Zheng, Zuqiang Liu, Xin Xu and Han Tang
Buildings 2026, 16(1), 223; https://doi.org/10.3390/buildings16010223 - 4 Jan 2026
Viewed by 239
Abstract
This technical note presents a quantitative image-based framework for evaluating the packing and compressibility of granular soils, applied specifically to building foundation design in civil infrastructure projects. The Sedimaging system replicates hydraulic sedimentation in a controlled column equipped with a high-resolution camera to visualize particle orientation after deposition. Grayscale images of the settled bed are analyzed using Haar Wavelet Transform (HWT) decomposition to quantify directional intensity gradients. A new descriptor, termed the sediment index (B), is defined as the ratio of vertical to horizontal wavelet energy at the dominant scale, representing the preferential alignment and anisotropy of particles during sedimentation. Experimental investigations were conducted on fifteen granular materials, including natural sands, tailings, glass beads and rice grains of different shapes. The results demonstrate strong correlations between B and both microscopic shape ratios (d1/d2 and d1/d3) and macroscopic properties. Linear relationships predict the limiting void ratios (emax, emin) with mean absolute differences of 0.04 and 0.03, respectively. A power-law function relates B to the compression index (Cc) with an average deviation of 0.02. These findings confirm that the sediment index effectively captures the morphological influence of particle shape on soil packing and compressibility. Compared with conventional physical testing, the Sedimaging-based approach offers a rapid, non-destructive, and high-throughput means of estimating the packing and compressibility of cohesionless, sand-sized granular soils directly from post-settlement imagery, making it particularly valuable for preliminary site assessments, geotechnical screening, and intelligent monitoring of granular materials in building foundation design and other infrastructure applications, such as earth–rock dams.
(This article belongs to the Topic Resilient Civil Infrastructure, 2nd Edition)
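A one-level 2D Haar decomposition and the directional-energy ratio behind an index like B can be sketched in a few lines. This is a generic Haar implementation, not the paper's code; the exact orientation convention for "vertical" versus "horizontal" energy is assumed here.

```python
import numpy as np

def sediment_index(gray):
    """One-level 2D Haar decomposition of a grayscale image; returns the
    ratio of the two directional detail energies (a B-like descriptor)."""
    g = gray[: gray.shape[0] // 2 * 2, : gray.shape[1] // 2 * 2].astype(float)
    a, b = g[0::2, 0::2], g[0::2, 1::2]
    c, d = g[1::2, 0::2], g[1::2, 1::2]
    detail_h = (a + b - c - d) / 4.0  # top-minus-bottom: intensity change along y
    detail_v = (a - b + c - d) / 4.0  # left-minus-right: intensity change along x
    return np.sum(detail_v ** 2) / np.sum(detail_h ** 2)

# Vertical stripes (intensity varies mainly along x) give a ratio >> 1.
rng = np.random.default_rng(0)
img = 5.0 * (np.indices((256, 256))[1] % 8) + rng.normal(0, 0.1, (256, 256))
print(sediment_index(img))
```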

20 pages, 5222 KB  
Article
A Real-Time Tractor Recognition and Positioning Method in Fields Based on Machine Vision
by Liang Wang, Dashuang Zhou and Zhongxiang Zhu
Agriculture 2025, 15(24), 2548; https://doi.org/10.3390/agriculture15242548 - 9 Dec 2025
Viewed by 452
Abstract
Multi-machine collaborative navigation of agricultural machinery can significantly improve field operation efficiency. Most existing multi-machine collaborative navigation systems rely on satellite navigation, which is costly and cannot meet the obstacle avoidance needs of field operations. In this paper, a real-time machine-vision-based method for recognizing and positioning tractors in fields is proposed. First, we collected and annotated tractor images to construct a tractor dataset. Second, we implemented lightweight improvements to the YOLOv4 algorithm, incorporating sparse training, channel pruning, layer pruning, and knowledge distillation fine-tuning on top of the baseline model. Test results for the lightweight model show that the model size was reduced by 98.73%, the recognition speed increased by 43.74%, and the recognition accuracy remained largely comparable to that of the baseline high-precision model. Then, we proposed a tractor positioning method based on an RGB-D camera. Finally, we established a field vehicle recognition and positioning experimental platform and designed a test plan. The results indicate that when IYO-RGBD recognized and positioned the leader tractor within a 10 m range, the root mean square (RMS) values of the longitudinal and lateral errors during straight-line travel were 0.0687 m and 0.025 m, respectively; during S-curve travel, they were 0.1101 m and 0.0481 m. IYO-RGBD meets the accuracy requirements for a follower tractor recognizing and positioning the leader tractor in practical autonomous following field operations. Our research outcomes provide a new solution and technical reference for visual navigation in multi-machine collaborative field operations of agricultural machinery.
(This article belongs to the Section Agricultural Technology)
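The RGB-D positioning step described here typically amounts to back-projecting a detection's pixel with its depth reading through the pinhole model. A minimal sketch, assuming hypothetical intrinsics rather than the paper's calibration:

```python
import numpy as np

# Hypothetical RGB-D intrinsics; the paper's camera calibration would be used.
fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0

def pixel_to_camera_xyz(u, v, depth_m):
    """Back-project a detected tractor's pixel (u, v) and depth reading into
    camera-frame coordinates (pinhole model)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Centre pixel of a detection bounding box with an 8.2 m depth reading:
print(pixel_to_camera_xyz(402, 255, 8.2))
```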

24 pages, 815 KB  
Review
Fish Farming 5.0: Advanced Tools for a Smart Aquaculture Management
by Edo D’Agaro
Appl. Sci. 2025, 15(23), 12638; https://doi.org/10.3390/app152312638 - 28 Nov 2025
Viewed by 1301
Abstract
The principal goal of precision fish farming (PFF) is to use data and new technologies such as sensors, cameras, and internet connectivity to optimise fish-aquaculture operations. PFF makes fish farming operations data-driven, accurate, and repeatable, reducing the influence of farmers’ subjective choices. Daily management based on manual practices and operator experience thus shifts to knowledge-based automated processes. Modern sensors and animal biomarkers can monitor environmental conditions, fish behaviour, growth performance, and key health indicators in real time, generating large datasets at low cost. Artificial intelligence extracts useful insights from these big data. Machine learning and modelling algorithms predict future outcomes such as fish growth, feed requirements, or disease risk. The Internet of Things sets up networks of connected devices on the farm for communication. Smart management systems can automatically adjust instruments such as aerators or feeders in response to sensor inputs. This integration of sensors, internet connectivity, and automated controls enables real-time precision management.
(This article belongs to the Section Agricultural Science and Technology)
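The sensor-to-actuator loop the review describes can be made concrete with a toy control rule. This is not from the review itself — a sketch of an aerator controller with assumed dissolved-oxygen setpoints, using hysteresis so the actuator does not toggle rapidly around a single threshold:

```python
# Illustrative PFF control rule; DO setpoints (mg/L) are assumed values.
DO_LOW, DO_HIGH = 5.0, 6.5

def aerator_command(do_mg_per_l, currently_on):
    if do_mg_per_l < DO_LOW:
        return True          # oxygen low: switch the aerator on
    if do_mg_per_l > DO_HIGH:
        return False         # recovered: switch it off
    return currently_on      # inside the band: keep the current state

state = False
for reading in [6.8, 6.1, 4.9, 5.4, 6.7]:
    state = aerator_command(reading, state)
    print(reading, "->", "ON" if state else "OFF")
```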

33 pages, 2750 KB  
Article
Real-Time Detection of Rear Car Signals for Advanced Driver Assistance Systems Using Meta-Learning and Geometric Post-Processing
by Vasu Tammisetti, Georg Stettinger, Manuel Pegalajar Cuellar and Miguel Molina-Solana
Appl. Sci. 2025, 15(22), 11964; https://doi.org/10.3390/app152211964 - 11 Nov 2025
Viewed by 663
Abstract
Accurate identification of rear light signals in preceding vehicles is pivotal for Advanced Driver Assistance Systems (ADAS), enabling early detection of driver intentions and thereby improving road safety. In this work, we present a novel approach that leverages a meta-learning-enhanced YOLOv8 model to detect left and right turn indicators, as well as brake signals. Traditional radar and LiDAR provide robust geometry, range, and motion cues that can indirectly suggest driver intent (e.g., deceleration or lane drift). However, they do not directly interpret color-coded rear signals, which limits early intent recognition from the taillights. We therefore focus on a camera-based approach that complements ranging sensors by decoding color and spatial patterns in rear lights. This approach to detecting vehicle signals poses additional challenges due to factors such as high reflectivity and the subtle visual differences between directional indicators. We address these by training a YOLOv8 model with a meta-learning strategy, thus enhancing its capability to learn from minimal data and rapidly adapt to new scenarios. Furthermore, we developed a post-processing layer that classifies signals by the geometric properties of detected objects, employing mathematical principles such as distance, area calculation, and Intersection over Union (IoU) metrics. Our approach increases adaptability and performance compared to traditional deep learning techniques, supporting the conclusion that integrating meta-learning into real-time object detection frameworks provides a scalable and robust solution for intelligent vehicle perception, significantly enhancing situational awareness and road safety through reliable prediction of vehicular behavior.
(This article belongs to the Special Issue Convolutional Neural Networks and Computer Vision)
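A sketch of the kind of geometric post-processing the abstract mentions — IoU plus a relative-position rule over detected boxes. The authors' actual rule set is richer; this shows the mechanics only, and mapping image-side to the vehicle's own left/right is left open since it depends on viewing geometry:

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def side_in_image(light_box, vehicle_box):
    """Label a detected light by which half of the car's box holds its centre."""
    cx_light = (light_box[0] + light_box[2]) / 2
    cx_vehicle = (vehicle_box[0] + vehicle_box[2]) / 2
    return "image_left" if cx_light < cx_vehicle else "image_right"

car = (100, 80, 340, 240)
light = (110, 150, 140, 175)
print(iou(light, car), side_in_image(light, car))
```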

1949 KB  
Proceeding Paper
Gesture-Controlled Bionic Hand for Safe Handling of Biomedical Industrial Chemicals
by Sudarsun Gopinath, Glen Nitish, Daniel Ford, Thiyam Deepa Beeta and Shelishiyah Raymond
Eng. Proc. 2025, 118(1), 42; https://doi.org/10.3390/ECSA-12-26577 - 7 Nov 2025
Viewed by 148
Abstract
In the pharmaceutical and biomedical industries, manual handling of dangerous chemicals is a leading cause of hazardous exposure, toxic burns, and chemical contamination. To counteract these risks, we propose a gesture-controlled bionic hand system that mimics human finger movements for safe, contactless chemical handling. This innovative system uses an ESP32 microcontroller to decode hand gestures detected by computer vision via an integrated camera. A PWM servo driver converts these movements into motor commands so that accurate finger movements can be achieved. Teflon and other corrosion-proof materials are used in the 3D printing of the bionic hand so it can withstand corrosive conditions. This low-cost, non-surgical approach replaces EMG sensors, provides real-time control, and enhances industrial and laboratory process safety. The project is a significant step in applying robotics and AI to automation and risk reduction in dangerous environments.
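The gesture-to-servo mapping at the core of such a system is a simple linear scaling from tracked joint angle to PWM pulse width. A sketch under assumed values — the pulse endpoints are typical hobby-servo figures, not taken from the paper, and the real firmware would run on the ESP32 rather than in Python:

```python
def finger_angle_to_pulse_us(angle_deg, min_us=500, max_us=2500):
    """Map a tracked finger flexion angle (0-180 deg) to the servo pulse
    width in microseconds that a PWM driver would emit."""
    angle = max(0.0, min(180.0, angle_deg))
    return min_us + (max_us - min_us) * angle / 180.0

for a in (0, 45, 90, 180):
    print(a, "->", finger_angle_to_pulse_us(a), "us")
```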

21 pages, 6243 KB  
Protocol
The Psychophysiological Interrelationship Between Working Conditions and Stress of Harvester and Forwarder Drivers—A Study Protocol
by Vera Foisner, Christoph Haas, Katharina Göttlicher, Arnulf Hartl and Christoph Huber
Forests 2025, 16(11), 1693; https://doi.org/10.3390/f16111693 - 6 Nov 2025
Viewed by 439
Abstract
(1) Background: Austria’s use of fully mechanized harvesting systems has been continuously increasing. Technical developments, such as traction aid winches, have made it possible to drive on increasingly steep terrain. However, this has led to challenges and potential hazards for the operators, resulting in higher stand damage rates and risks of workplace accidents. Since these systems and working environments involve a highly complex interplay of various parameters, the purpose of this protocol is to propose a new set of methodologies that can be used to obtain a holistic interpretation of the psychophysiological interrelationship between the working conditions and stress of harvester and forwarder drivers. (2) Methods: We developed a research protocol to analyse the (a) environmental and (b) machine-related parameters; (c) psychological and psychophysiological responses of the operators; and (d) technical outcome parameters. Within this longitudinal exploratory field study, experienced drivers were monitored for over an hour at the beginning and the end of their workday while operating in varying steep terrains with and without a traction aid winch. The analysis is based on macroscopic (collected using cameras), microscopic (eye-tracking glasses and AI-driven emotion recognition), quantitative (standardized questionnaires), and qualitative (interviews) data. This multimodal research protocol aims to improve the health and safety of forest workers, increase their productivity, and reduce damage to remaining trees.
(This article belongs to the Section Forest Operations and Engineering)

21 pages, 1020 KB  
Article
Robust 3D Skeletal Joint Fall Detection in Occluded and Rotated Views Using Data Augmentation and Inference–Time Aggregation
by Maryem Zobi, Lorenzo Bolzani, Youness Tabii and Rachid Oulad Haj Thami
Sensors 2025, 25(21), 6783; https://doi.org/10.3390/s25216783 - 6 Nov 2025
Viewed by 1027
Abstract
Fall detection systems, a critical application of human pose estimation, frequently struggle to achieve real-world robustness due to their reliance on domain-specific datasets and a limited capacity to generalize to novel conditions. Models trained on controlled, canonical camera views often fail when subjects are viewed from new perspectives or are partially occluded, resulting in missed detections or false positives. This study tackles these limitations by proposing the Viewpoint Invariant Robust Aggregation Graph Convolutional Network (VIRA-GCN), an adaptation of the Richly Activated GCN for fall detection. The VIRA-GCN introduces a novel dual-strategy solution: a synthetic viewpoint generation process to augment training data and an efficient inference-time aggregation method to form consensus-based predictions. We demonstrate that augmenting the Le2i dataset with simulated rotations and occlusions allows a standard pose estimation model to achieve a significant increase in its fall detection capabilities. The VIRA-GCN achieved 99.81% accuracy on the Le2i dataset, confirming its enhanced robustness. Furthermore, the model is suitable for low-resource deployment, using only 4.06 M parameters and achieving a real-time inference latency of 7.50 ms. This work presents a practical and efficient solution for developing a single-camera fall detection system robust to viewpoint variations, and introduces a reusable mapping function to convert Kinect data to the MMPose format, ensuring consistent comparison with state-of-the-art models.
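The two strategies — synthetic viewpoint generation and inference-time aggregation — can be sketched independently of the GCN itself. Assumptions here: the vertical axis is y, and aggregation is a simple mean over per-view probabilities (the paper's exact rotation ranges and consensus rule may differ):

```python
import numpy as np

def rotate_skeleton(joints_xyz, yaw_deg):
    """Rotate 3D joints about the vertical (y) axis to synthesize a new
    camera viewpoint for augmentation."""
    th = np.radians(yaw_deg)
    r = np.array([[np.cos(th), 0, np.sin(th)],
                  [0, 1, 0],
                  [-np.sin(th), 0, np.cos(th)]])
    return joints_xyz @ r.T

def aggregate_predictions(probs):
    """Inference-time aggregation: average fall probabilities over views."""
    return bool(np.mean(probs) > 0.5)

views = [rotate_skeleton(np.random.rand(17, 3), a) for a in (-30, 0, 30)]
# A classifier would score each rotated copy; stand-in probabilities:
print(len(views), aggregate_predictions([0.7, 0.9, 0.4]))
```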

18 pages, 4717 KB  
Article
Localized Surface Plasmon Resonance-Based Gas Sensor with a Metal–Organic-Framework-Modified Gold Nano-Urchin Substrate for Volatile Organic Compounds Visualization
by Cong Wang, Hao Guo, Bin Chen, Jia Yan, Fumihiro Sassa and Kenshi Hayashi
Sensors 2025, 25(21), 6522; https://doi.org/10.3390/s25216522 - 23 Oct 2025
Cited by 1 | Viewed by 879
Abstract
Volatile organic compound (VOC) monitoring is crucial for environmental safety and health, but conventional gas sensors often suffer from poor selectivity or lack spatial information. Here, we report a localized surface plasmon resonance (LSPR) gas sensor based on Au nano-urchins coated with a zeolitic imidazolate framework (ZIF-8) for both the quantitative detection and visualization of VOCs. Substrates were fabricated by immobilizing Au nano-urchins (~90 nm) on 3-aminopropyltriethoxysilane-modified glass and subsequently growing ZIF-8 crystals (~250 nm) for different durations. Scanning electron microscopy and optical analysis revealed that 90 min of ZIF-8 growth provided the optimal coverage and strongest plasmonic response. Using a spectrometer-based LSPR system, the optimized substrate exhibited clear, concentration-dependent responses to three representative VOCs, 2-pentanone, acetic acid, and ethyl acetate, over nine concentrations, with detection limits of 12.7, 14.5, and 36.3 ppm, respectively. Furthermore, a camera-based LSPR visualization platform enabled real-time imaging of gas plumes and evaporation-driven diffusion, with differential pseudo-color mapping providing intuitive spatial distributions and concentration dependence. These results demonstrate that ZIF-8-modified Au nano-urchin substrates enable sensitive and reproducible VOC detection and, importantly, transform plasmonic sensing into a visual modality, offering new opportunities for integrated LSPR–surface-enhanced Raman scattering dual-mode gas sensing in the future.
(This article belongs to the Special Issue Nano/Micro-Structured Materials for Gas Sensor)
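Differential pseudo-color mapping of the kind described is, at its core, baseline subtraction followed by normalization so a color lookup table can render the plume. A minimal sketch with synthetic frames; the scaling convention (signed differences centred at mid-gray) is an assumption, not the paper's exact processing:

```python
import numpy as np

def differential_map(frame, baseline):
    """Subtract the gas-free baseline frame and rescale to 0-255 so a
    pseudo-color LUT (e.g. a jet colormap) can visualize the plume."""
    diff = frame.astype(float) - baseline.astype(float)
    span = np.abs(diff).max() or 1.0
    return ((diff / span) * 127.5 + 127.5).astype(np.uint8)

baseline = np.full((120, 160), 80, dtype=np.uint8)
frame = baseline.copy()
frame[40:80, 60:110] += 25  # simulated LSPR intensity change under gas
print(differential_map(frame, baseline).max())
```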

28 pages, 10678 KB  
Article
Deep-DSO: Improving Mapping of Direct Sparse Odometry Using CNN-Based Single-Image Depth Estimation
by Erick P. Herrera-Granda, Juan C. Torres-Cantero, Israel D. Herrera-Granda, José F. Lucio-Naranjo, Andrés Rosales, Javier Revelo-Fuelagán and Diego H. Peluffo-Ordóñez
Mathematics 2025, 13(20), 3330; https://doi.org/10.3390/math13203330 - 19 Oct 2025
Viewed by 1854
Abstract
In recent years, SLAM, visual odometry, and structure-from-motion approaches have widely addressed the problems of 3D reconstruction and ego-motion estimation. Of the many input modalities that can be used to solve these ill-posed problems, the pure visual alternative using a single monocular RGB camera has attracted the attention of multiple researchers due to its low cost and widespread availability in handheld devices. One of the best proposals currently available is the Direct Sparse Odometry (DSO) system, which has demonstrated the ability to accurately recover trajectories and depth maps using monocular sequences as the only source of information. Given the impressive advances in single-image depth estimation using neural networks, this work proposes an extension of the DSO system, named DeepDSO. DeepDSO effectively integrates the state-of-the-art NeW CRF neural network as a depth estimation module, providing depth prior information for each candidate point. This reduces the point search interval over the epipolar line. This integration improves the DSO algorithm’s depth point initialization and allows each proposed point to converge faster to its true depth. Experimentation carried out on the TUM-Mono dataset demonstrated that adding the neural network depth estimation module to the DSO pipeline significantly reduced rotation, translation, scale, start-segment alignment, end-segment alignment, and RMSE errors.
(This article belongs to the Section E1: Mathematics and Computer Science)
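DSO parametrizes points by inverse depth, so a depth prior narrows the epipolar search by bounding the inverse-depth interval instead of sweeping the whole line. A sketch of that idea; the uncertainty handling (a k-sigma band around the CNN prior) is an assumption, not necessarily DeepDSO's exact rule:

```python
def inverse_depth_search_interval(depth_prior_m, sigma_m, k=2.0):
    """Turn a CNN depth prior d with uncertainty sigma into a bounded
    inverse-depth interval for the epipolar search."""
    d_min = max(depth_prior_m - k * sigma_m, 1e-3)  # avoid zero/negative depth
    d_max = depth_prior_m + k * sigma_m
    return 1.0 / d_max, 1.0 / d_min  # (idepth_min, idepth_max)

print(inverse_depth_search_interval(4.0, 0.4))  # ~(0.208, 0.312) 1/m
```

Shrinking this interval is what lets candidate points converge to their true depth in fewer refinement steps.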

17 pages, 3073 KB  
Article
An Open-Source Computer-Vision-Based Method for Spherical Microplastic Settling Velocity Calculation
by Catherine L. Stacy, Md Abdul Baset Sarker, Abul B. M. Baki and Masudul H. Imtiaz
Microplastics 2025, 4(4), 75; https://doi.org/10.3390/microplastics4040075 - 14 Oct 2025
Viewed by 887
Abstract
Microplastics (particles ≤ 5 mm) are ubiquitous and persistent, posing threats to ecosystems and human health. Thus, the development of technologies for evaluating their dynamics is crucial. Settling velocity is a critical parameter for predicting the fate of microplastics in aquatic environments. Current methods for computing this metric are highly subjective and lack a standard. The goal of this research is to develop an objective, automated technique employing the technological advances in computer vision. In the laboratory, a camera recorded the trajectories of microplastics as they sank through a water column. The settling velocity of each microplastic was calculated using a YOLOv12n-based object detection model. The system was tested with three classes of spherical microplastics and three types of water. Ground truth settling times, recorded manually with a stopwatch, allowed for quantification of the system’s accuracy. When comparing the velocities calculated using the computer vision system to the stopwatch ground truth, the average error across all water types was 5.97% for the 3 mm microplastics, 7.14% for the 4 mm microplastics, and 6.15% for the 5 mm microplastics. This new method will enable the research community to predict microplastic distribution and transport patterns, as well as implement more timely strategies for mitigating pollution.
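Once the detector yields per-frame centroids, the velocity computation reduces to a linear fit of vertical position against time. A minimal sketch with a synthetic track; the frame rate and pixel scale are placeholder values, and the paper's pipeline may fit only the terminal (constant-velocity) portion of the track:

```python
import numpy as np

def settling_velocity(y_pixels, fps, m_per_px):
    """Fit a line to the tracked centroid's vertical position over frames;
    the slope is the settling velocity in m/s."""
    t = np.arange(len(y_pixels)) / fps
    slope_px_per_s = np.polyfit(t, y_pixels, 1)[0]
    return slope_px_per_s * m_per_px

# Synthetic track: 30 fps, 0.2 mm/pixel, particle falling 45 px/s.
y = 100 + 45 * np.arange(60) / 30.0
print(settling_velocity(y, fps=30, m_per_px=2e-4))  # ~0.009 m/s
```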

36 pages, 20759 KB  
Article
Autonomous UAV Landing and Collision Avoidance System for Unknown Terrain Utilizing Depth Camera with Actively Actuated Gimbal
by Piotr Łuczak and Grzegorz Granosik
Sensors 2025, 25(19), 6165; https://doi.org/10.3390/s25196165 - 5 Oct 2025
Viewed by 2090
Abstract
Autonomous landing capability is crucial for fully autonomous UAV flight. Currently, most solutions use color imaging from a downward-pointing camera, lidar sensors, dedicated landing spots, beacons, or a combination of these approaches. Classical strategies can be limited by the absence of color data when lidar is used, limited obstacle perception when only color imaging is used, the low field of view of a single RGB-D sensor, or the requirement that the landing spot be prepared in advance. In this paper, a new approach is proposed in which an RGB-D camera is mounted on a gimbal: the gimbal is actively actuated to counteract the limited field of view while the RGB-D camera provides color images and depth information. Furthermore, a combined UAV-and-gimbal-motion strategy is proposed to counteract the low maximum range of depth perception, providing static obstacle detection and avoidance while preserving safe operating conditions for low-altitude flight near potential obstacles. The system is built on a PX4 flight stack, a CubeOrange flight controller, and a Jetson Nano onboard computer. It was flight-tested in simulation and statically tested on a real vehicle. Results confirm the system architecture and the feasibility of deployment in real conditions.
(This article belongs to the Special Issue UAV-Based Sensing and Autonomous Technologies)
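Evaluating a candidate landing spot from depth data commonly comes down to a flatness test: fit a plane to the depth patch and score the residual. This sketch is a generic criterion of that kind, not the authors' pipeline; the pixel pitch and threshold interpretation are assumptions:

```python
import numpy as np

def plane_fit_residual(depth_patch, px_pitch=0.01):
    """Least-squares plane fit over a depth patch; the RMS residual serves
    as a simple flatness score for a candidate landing spot."""
    h, w = depth_patch.shape
    ys, xs = np.mgrid[0:h, 0:w] * px_pitch
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    z = depth_patch.ravel()
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return float(np.sqrt(np.mean((A @ coef - z) ** 2)))

flat = np.full((20, 20), 2.0)
bumpy = flat + 0.05 * np.random.default_rng(1).standard_normal((20, 20))
print(plane_fit_residual(flat), plane_fit_residual(bumpy))  # low vs high
```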

11 pages, 826 KB  
Article
A Novel Virtual Reality-Based System for Measuring Deviation Angle in Strabismus: A Prospective Study
by Jhih-Yi Lu, Yin-Cheng Liu, Jui-Bang Lu, Ming-Han Tsai, Wen-Ling Liao, I-Ming Wang, Hui-Ju Lin and Yu-Te Huang
Diagnostics 2025, 15(18), 2402; https://doi.org/10.3390/diagnostics15182402 - 20 Sep 2025
Viewed by 884
Abstract
Background/Objectives: To develop a new Virtual Reality (VR) software system for measuring ocular deviation in strabismus patients. Methods: This prospective study included subjects with basic-type exotropia (XT) and non-refractive accommodative esotropia (ET). Ocular deviation was measured using the alternate prism cover test (APCT) and two VR-based methods: target offset (TO) and a newly developed camera rotation (CR) method. Results: A total of 28 subjects were recruited (5 cases were excluded for preliminary testing and 5 for not meeting inclusion criteria). Among the 18 included patients, 10 (66.7%) had XT and 5 (33.3%) had ET. The median age was 21.5 years (IQR 17 to 25). The mean age was 22.3 years (range: 9–46), with 5 (27.8%) having manifest strabismus and 12 (61.1%) measured while wearing glasses. VR-based methods (TO and CR) showed results comparable to APCT for deviation angle measurements (p = 0.604). Subgroup analysis showed no significant differences in ET patients (all p > 0.05). In XT patients, both TO and CR underestimated deviation angles compared to APCT (p = 0.008 and p = 0.001, respectively), but no significant difference was observed between the two methods (p = 0.811). Linear regression showed that CR correlated more strongly with APCT than TO did (R2 = 0.934 vs. 0.874). Conclusions: The newly developed VR software, incorporating the CR method, provides a reliable approach for measuring ocular deviation. By shifting the entire visual scene rather than just the target, it lays a strong foundation for immersive diagnostic and therapeutic VR applications.
(This article belongs to the Special Issue New Insights into the Diagnosis and Prognosis of Eye Diseases)
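Whatever the measurement method, the reported deviation conversion is standard: 1 prism diopter (PD) equals 1 cm of displacement at 1 m, i.e. 100·tan(θ). A sketch of that conversion; how the VR software maps camera rotation to an equivalent fixation offset is the paper's own contribution and is not reproduced here:

```python
import math

def deviation_prism_diopters(offset_m, distance_m):
    """Convert a fixation offset at a given viewing distance into prism
    diopters: PD = 100 * tan(theta), theta = atan(offset / distance)."""
    theta = math.atan2(offset_m, distance_m)
    return 100.0 * math.tan(theta)

# e.g. a 0.35 m scene shift at a 2 m virtual test distance:
print(deviation_prism_diopters(0.35, 2.0))  # ~17.5 PD
```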

17 pages, 25008 KB  
Article
apex Mk.2/Mk.3: Secure Live Transmission of the First Flight of Trichoplax adhaerens in Space Based on Components Off-the-Shelf
by Nico Maas, Jean-Pierre de Vera, Moritz Jonathan Schmidt, Pia Reimann, Jason G. Randall, Sebastian Feles, Ruth Hemmersbach, Bernd Schierwater and Jens Hauslage
Eng 2025, 6(9), 241; https://doi.org/10.3390/eng6090241 - 12 Sep 2025
Cited by 3 | Viewed by 1322
Abstract
After the successful flight of the first Advanced Processors, Encryption, and Security Experiment (apex) Commercial Off-the-Shelf (COTS) On-Board Computer (OBC) during the Propulsion Technologies and Components of Launcher Stages (ATEK)/Material Physics Experiments Under Microgravity (MAPHEUS)-8 sounding rocket campaign, a second generation of COTS OBCs was built, leveraging the knowledge gained; this paper presents the new concept and its improvements. The Mk.2 Science Camera Platform (SCP) carries an instrumented high-definition science camera to study the behavior of small organisms such as Trichoplax adhaerens under challenging gravity conditions, while the Mk.3 Student Experiment Sensorboard (SES) is an Arduino-like board that interfaces directly with the MAPHEUS Service Module and allows rapid development of new sensor solutions on sounding rocket systems. Both experiments were flown successfully on MAPHEUS-10, including a biological system as a proof of concept, and paved the way for an even more capable third generation of apex OBCs. This study is part one of a three-part series describing the apex Mk.2/Mk.3 experiments, the open-source ground segment, and the service module simulator.
(This article belongs to the Special Issue Interdisciplinary Insights in Engineering Research)
