Search Results (344)

Search Parameters:
Keywords = visible light camera

45 pages, 7679 KB  
Article
Conquering the Urban Firefighting Challenge: A Deep Q-Network Approach for Autonomous UAV Navigation
by Shafiqul Alam Khan, Damian Valles, Marcelo M. Carvalho and Wenquan Dong
Inventions 2026, 11(2), 35; https://doi.org/10.3390/inventions11020035 - 2 Apr 2026
Viewed by 333
Abstract
Firefighters must locate victims reliably to carry out rescue operations within burning structures during urban firefighting events. Low visibility, reduced oxygen levels, weakened structural rigidity, and dense smoke make it difficult to locate victims. In addition to these challenges, victims may be unconscious and unable to report their locations to firefighters. This research work explores the Double Deep Q-Network (Double DQN), Dueling Deep Q-Network (Dueling DQN), and Dueling Double Deep Q-Network (D3QN) agents for an unmanned aerial vehicle (UAV) to navigate around a structure and locate trapped victims within it. The UAV’s position, Light Detection and Ranging (LiDAR), and infrared camera data are utilized as inputs for the Deep Q-Networks. Prioritized Experience Replay (PER) is used to store transitions and sample them according to priority for training. Python’s Pygame library is used in this research to create a simulated environment in which infrared camera and LiDAR data are simulated. The performance of the UAV agent is evaluated using cumulative maximum reward, reward distribution histogram, Temporal Difference (TD) error over time, and number of successful episodes. Among the three DQN UAV agents, the Dueling DQN and Double DQN have potential for real-world applications in firefighting.
(This article belongs to the Special Issue Unmanned Aerial Vehicles (UAVs): Innovations and Applications)
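As a reading aid, the prioritized experience replay (PER) buffer named in the abstract can be sketched in a few lines; the proportional variant below, with illustrative capacity and alpha values, is a generic textbook version rather than the authors' implementation.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Proportional prioritized experience replay (illustrative sketch).

    Transitions are sampled with probability p_i^alpha / sum_j p_j^alpha,
    where p_i is the absolute TD error of transition i plus a small epsilon.
    """

    def __init__(self, capacity=50_000, alpha=0.6, eps=1e-3):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.data, self.priorities, self.pos = [], np.zeros(capacity), 0

    def add(self, transition):
        # New transitions get the current maximum priority so they are
        # guaranteed to be replayed at least once.
        max_p = self.priorities.max() if self.data else 1.0
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size=32):
        p = self.priorities[:len(self.data)] ** self.alpha
        probs = p / p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        return idx, [self.data[i] for i in idx]

    def update(self, idx, td_errors):
        # Epsilon keeps every transition at a nonzero sampling probability.
        self.priorities[idx] = np.abs(td_errors) + self.eps

# Hypothetical usage with a dummy (state, action, reward, next_state, done) tuple.
buf = PrioritizedReplayBuffer()
buf.add(("state", 0, 0.5, "next_state", False))
idx, batch = buf.sample(batch_size=1)
buf.update(idx, td_errors=np.array([0.7]))
```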

20 pages, 37476 KB  
Article
In-Orbit MapAnything: An Enhanced Feed-Forward Metric Framework for 3D Reconstruction of Non-Cooperative Space Targets Under Complex Lighting
by Yinxi Lu, Hongyuan Wang, Qianhao Ning, Ziyang Liu, Yunzhao Zang, Zhen Liao and Zhiqiang Yan
Sensors 2026, 26(7), 2026; https://doi.org/10.3390/s26072026 - 24 Mar 2026
Viewed by 364
Abstract
Precise 3D reconstruction of non-cooperative space targets is a prerequisite for active debris removal and on-orbit servicing. However, this task is impeded by severe environmental challenges. Specifically, the limited dynamic range of visible light cameras leads to frequent overexposure or underexposure under extreme space lighting. Compounded by sparse textures and strong specular reflections, these factors significantly constrain reconstruction accuracy. While existing general-purpose feed-forward models such as MapAnything offer efficient inference, their geometric recovery capabilities degrade sharply when facing significant domain shifts. To address these issues, this paper proposes an enhanced 3D reconstruction framework tailored for the space environment named In-Orbit MapAnything. First, to mitigate data scarcity, we construct a high-quality space target dataset incorporating extreme illumination characteristics, which provides comprehensive auxiliary modalities including accurate camera poses and dense point clouds. Second, we propose the SatMap-Adapter module to mitigate feature degradation caused by severe specular reflections. This architecture employs a hierarchical cascade sampling strategy to align multi-level backbone features and utilizes a lightweight adaptive fusion module to dynamically integrate shallow photometric cues, intermediate structural information, and deep semantic features. Finally, we employ a weight-decomposed low-rank adaptation strategy to achieve parameter-efficient fine-tuning while strictly freezing the pre-trained backbone. Experimental results demonstrate that the proposed method decreases the absolute relative error and Chamfer distance by 15.23% and 20.02%, respectively, compared to the baseline MapAnything model, while maintaining a rapid inference speed. The proposed approach effectively suppresses reconstruction noise on metallic surfaces and recovers fine geometric structures, validating the effectiveness of our feature-enhanced framework in extreme space environments.
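The weight-decomposed low-rank adaptation the abstract describes is widely known as DoRA; a minimal PyTorch sketch of the idea on a single frozen linear layer follows. The rank, shapes, and initialization here are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class DoRALinear(nn.Module):
    """Weight-decomposed low-rank adaptation of a frozen linear layer (sketch).

    The frozen weight W is split into a per-row magnitude m and a direction;
    only m and the low-rank update B @ A are trained, keeping the number of
    trainable parameters small while the backbone stays frozen.
    """

    def __init__(self, base: nn.Linear, r: int = 8):
        super().__init__()
        self.weight = base.weight.detach()            # frozen W, shape (out, in)
        out_f, in_f = self.weight.shape
        self.m = nn.Parameter(self.weight.norm(dim=1, keepdim=True))
        self.A = nn.Parameter(torch.randn(r, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, r))  # zero init -> starts at W

    def forward(self, x):
        w = self.weight + self.B @ self.A              # W plus low-rank delta
        w = self.m * w / w.norm(dim=1, keepdim=True)   # re-impose learned magnitude
        return x @ w.t()

# Hypothetical usage: wrap one layer of a pre-trained backbone.
layer = DoRALinear(nn.Linear(256, 128))
y = layer(torch.randn(4, 256))
```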

15 pages, 2132 KB  
Article
Anatomical Changes in the Peel of Sun-Damaged Pomegranates (Punica granatum L. cv. Hicaznar)
by Keziban Yazıcı, Muhammad Tanveer Altaf and Lami Kaynak
Plants 2026, 15(6), 987; https://doi.org/10.3390/plants15060987 - 23 Mar 2026
Viewed by 363
Abstract
Pomegranate (Punica granatum L.) is a major fruit crop in tropical and subtropical regions, but changing climatic conditions—especially rising temperatures and intense solar radiation—are increasing physiological disorders. Sunburn, a key heat- and light-induced disorder, causes peel discoloration and tissue damage, resulting in significant yield loss and reduced fruit quality. The objective of this study was to characterize sunburn-induced anatomical changes in the widely grown, highly sensitive Hicaznar cultivar in Türkiye, and to identify the optimal phenological stage for the application of sunburn-preventive practices. For this purpose, pomegranate fruit peels were fixed in FAA (Formalin–Acetic Acid–Alcohol) solution, embedded in paraffin blocks, and sectioned at a thickness of 5–7 µm. The sections were stained using the hematoxylin–eosin method and examined under a light microscope. Analysis of the images captured with a digital camera revealed that sunburn damage in the pomegranate peel first appears in the cuticle layer, followed by disruption and fragmentation of the cutaneous and epidermal layers beneath it, and ultimately leads to damage of the parenchyma cells. Furthermore, light microscopy showed that before visible discoloration, cells near the epidermis undergo phenolic accumulation, cell-wall thickening, and lignification, which are early indicators of sunburn. These microscopic changes provide early diagnostic features for detecting sunburn damage before external symptoms manifest. The study concluded that anatomical changes begin before the visible symptoms of sunburn appear on the fruit and identified the most appropriate timing for applying preventive measures against sunburn.
(This article belongs to the Special Issue Plant Fruit Development and Abiotic Stress)

17 pages, 13209 KB  
Article
The Circular Return: Scenographic Practice in Virtual Production
by Natalie Beak
Arts 2026, 15(3), 54; https://doi.org/10.3390/arts15030054 - 11 Mar 2026
Viewed by 477
Abstract
This practice-led research examines how virtual production represents a circular return to scenographic practice, reactivating integrated modes of spatial authorship that have long underpinned screen storytelling but were obscured by industrial fragmentation. Drawing on a single-day intensive workshop at the Australian Film, Television and Radio School (AFTRS), the study analyses how spatial authorship emerged through embodied, collaborative engagement with an LED volume environment. Grounded in scenographic theory and concepts of distributed cognition and situated authorship, the article reframes virtual production as a condition that renders visible pre-digital, collaborative modes of making within contemporary screen production. The LED volume functions simultaneously as scenic environment, lighting instrument, and compositional partner, requiring participants to negotiate space, light, movement, and camera as a unified spatial event. Analysis identifies how scenographic understanding emerged through virtual scouting, world-responsive storytelling, physical-digital integration, and embodied realisation. The findings extend production design theory by challenging ocular-centric models of mise-en-scène and positioning scenographic integration as screen practice—an epistemic mode of enacting through collective, materially grounded spatial experimentation. While situated within an educational context, the study points to broader implications for how spatial authorship and collective practice are understood in contemporary screen production.

23 pages, 2333 KB  
Article
Measurement of Metal Surface Temperature Based on Visible Light Images: A Strategy for On-Site Image Acquisition
by Xingwang Li, Wenhua Wu, Chengxiang Lei, Yang Chen, Zheng Tian and Qizheng Ye
Appl. Sci. 2026, 16(5), 2556; https://doi.org/10.3390/app16052556 - 6 Mar 2026
Viewed by 217
Abstract
Based on the mechanism of thermally modulated reflected light, visible light images combined with machine learning methods can be used to estimate the surface temperature of metal equipment at ambient temperature under sunlight conditions. However, the surface conditions of on-site equipment and camera imaging parameters vary greatly across different scenarios, leading to poor generalization of models trained solely on laboratory image databases. To address this, the original laboratory database must be updated with on-site images and the model retrained accordingly. However, since most on-site equipment operates normally, few images capture fault-induced high temperatures; even after updating and retraining with on-site images, this data imbalance can still cause significant measurement errors on high-temperature images. This study investigates image database update schemes that address both the multi-scenario and data imbalance problems and demonstrates that retraining with as little as 5% scenario-specific images or 1% high-temperature images significantly improves temperature prediction accuracy, which was validated through on-site experiments at a substation. By comparing four machine learning algorithms (random forest regression (RFR), gradient boosted regression trees, decision trees, and k-nearest neighbors), this study reveals that RFR yields the best performance. These findings enhance the practical applicability of visible light image-based temperature measurement models in engineering contexts.
(This article belongs to the Special Issue Applied Computer Vision and Deep Learning)
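The database-update scheme, folding a small fraction of scenario-specific images into the laboratory set before retraining, can be illustrated with scikit-learn. The features, arrays, and the 5% share below are placeholders standing in for the paper's image databases.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Placeholder data: per-image feature vectors -> surface temperature (deg C).
X_lab,  y_lab  = rng.normal(size=(2000, 16)), rng.uniform(20, 120, 2000)
X_site, y_site = rng.normal(size=(400, 16)),  rng.uniform(20, 120, 400)

# Fold a small share (here 5%) of on-site images into the lab database.
n_add = int(0.05 * len(X_site))
X_train = np.vstack([X_lab, X_site[:n_add]])
y_train = np.concatenate([y_lab, y_site[:n_add]])

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Evaluate on the remaining on-site images.
pred = model.predict(X_site[n_add:])
print("MAE on held-out on-site images:", mean_absolute_error(y_site[n_add:], pred))
```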

23 pages, 7177 KB  
Article
Automated Object Detection and Change Quantification in Underground Mines Using LiDAR Point Clouds and 360° Image Processing
by Ana Fabiola Patricia Tejada Peralta, Roya Bakzadeh, Sina Siahidouzazar and Pedram Roghanchi
Appl. Sci. 2026, 16(5), 2337; https://doi.org/10.3390/app16052337 - 27 Feb 2026
Viewed by 371
Abstract
Underground mining environments pose significant challenges for automated hazard detection due to low illumination, restricted visibility, and the absence of Global Navigation Satellite System (GNSS) coverage. These factors limit situational awareness and delay inspection efforts, particularly after disruptive events when rapid assessment is essential for safety. This study addresses this problem by developing a dual-pipeline framework for 2D–3D detection that uses 360° imaging and LiDAR-based machine learning to identify people, vehicles, and positional changes in underground settings without requiring personnel to re-enter hazardous areas. The objective was to create a system capable of recognizing objects and monitoring spatial changes under real underground mine conditions. The 2D component used a Ricoh Theta Z1 camera to collect panoramic images, and a YOLO (You Only Look Once) v8n model was fine-tuned using datasets representing low light, shadowed underground scenes. The 3D component employed an Ouster OS1-070-64 LiDAR sensor, and point clouds were processed through denoising, ICP alignment, surface reconstruction, manual annotation, and 2D projection. A YOLO-based model was then trained to detect objects and measure displacement between LiDAR scans. Results demonstrated strong performance for both components. The fine-tuned YOLOv8n model reliably detected personnel and vehicles despite challenging lighting and visual clutter, while the 3D pipeline localized objects in the registered LiDAR frame and quantified vehicle displacement between consecutive scans by comparing 3D bounding-box centroids after ICP alignment (displacement vector and magnitude). These findings indicate that the combined 2D–3D system can effectively support automated hazard recognition and environmental monitoring in GNSS-denied underground spaces.
(This article belongs to the Special Issue The Application of Deep Learning in Image Processing)
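The displacement quantification step, comparing 3D bounding-box centroids of the same object across two ICP-aligned scans, reduces to a short vector computation; the corner coordinates below are invented for illustration.

```python
import numpy as np

def bbox_centroid(corners: np.ndarray) -> np.ndarray:
    """Centroid of a 3D bounding box given its 8 corner points, shape (8, 3)."""
    return corners.mean(axis=0)

# Hypothetical corners of the same vehicle detected in two aligned scans.
box_scan_a = np.array([[0, 0, 0], [2, 0, 0], [2, 1, 0], [0, 1, 0],
                       [0, 0, 1], [2, 0, 1], [2, 1, 1], [0, 1, 1]], float)
box_scan_b = box_scan_a + np.array([0.8, -0.2, 0.0])   # shifted in scan B

displacement = bbox_centroid(box_scan_b) - bbox_centroid(box_scan_a)
print("displacement vector:", displacement)             # [ 0.8 -0.2  0. ]
print("magnitude (m):", np.linalg.norm(displacement))    # ~0.825
```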

22 pages, 39829 KB  
Article
Dual-Detector Vision and Depth-Aware Back-Projection for Accurate Apple Detection and 3D Localisation for Robotic Harvesting
by Tagor Hossain, Peng Shi and Levente Kovacs
Robotics 2026, 15(2), 47; https://doi.org/10.3390/robotics15020047 - 22 Feb 2026
Viewed by 580
Abstract
Accurate apple detection and precise three-dimensional (3D) localisation are essential for autonomous robotic harvesting in orchard environments, where occlusion, illumination variation, depth noise, and the similar colour appearance of fruits and surrounding leaves present significant challenges. This paper proposes a dual-detector vision framework combined with depth-aware back-projection to achieve robust apple detection and metric 3D localisation in real time. The method integrates the complementary strengths of YOLOv8 and Mask R-CNN through confidence-weighted fusion of bounding boxes and pixel-wise union of segmentation masks, producing stabilised two-dimensional (2D) apple representations under visually ambiguous conditions. The fusion results are converted into dense 3D representations through depth-guided projection within the camera coordinate system representing the visible fruit surface. A depth-consistency weighting strategy assigns higher influence to depth-reliable pixels during centroid computation, thereby suppressing noisy or occluded depth measurements and improving the stability of 3D fruit centre estimation. Local intensity normalisation standardises neighbourhood-level pixel intensities to reduce the impact of shadows, highlights, and uneven lighting, enabling more consistent segmentation and detection across varying illumination conditions. Experimental results demonstrate an accuracy of 98.9%, an mAP of 94.2%, an F1-score of 93.3%, and a recall of 92.8%, while achieving real-time performance at 86.42 FPS, confirming the suitability of the proposed method for robotic harvesting in challenging orchard environments.
(This article belongs to the Special Issue Perception and AI for Field Robotics)
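A minimal NumPy sketch of the depth-guided back-projection and a depth-consistency-weighted centroid follows; the intrinsics, the Gaussian-on-median weighting rule, and the pixel values are assumptions, since the paper's exact weighting is not reproduced here.

```python
import numpy as np

def backproject(us, vs, depth, fx, fy, cx, cy):
    """Back-project pixels (u, v) with depth d (metres) into camera coordinates."""
    x = (us - cx) * depth / fx
    y = (vs - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

def weighted_fruit_centre(us, vs, depth, K, sigma=0.02):
    """3D centroid of a fruit mask with depth-consistency weighting.

    Pixels whose depth sits far from the mask's median depth get a low
    weight, damping noisy or occluded measurements. The Gaussian rule on
    the deviation from the median is an illustrative choice.
    """
    fx, fy, cx, cy = K
    pts = backproject(us, vs, depth, fx, fy, cx, cy)
    w = np.exp(-((depth - np.median(depth)) ** 2) / (2 * sigma ** 2))
    return (pts * w[:, None]).sum(axis=0) / w.sum()

# Hypothetical mask pixels and intrinsics; the last pixel is a depth outlier.
us = np.array([310., 312., 315.])
vs = np.array([240., 242., 238.])
d = np.array([0.52, 0.53, 0.90])
print(weighted_fruit_centre(us, vs, d, K=(600., 600., 320., 240.)))
```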

21 pages, 5131 KB  
Article
Design and Characterization of a Hyperspectral Colposcope Based on Dual-LCTF VNIR Narrow-Band Illumination
by Carlos Vega, Raquel Leon, Norberto Medina, Himar Fabelo, Alicia Martín and Gustavo M. Callico
Sensors 2026, 26(4), 1255; https://doi.org/10.3390/s26041255 - 14 Feb 2026
Viewed by 330
Abstract
Early detection of precancerous cervical lesions is critical for improving patient management and clinical outcomes. Hyperspectral imaging has emerged as a promising non-invasive, label-free imaging modality for rapid medical diagnosis. This work presents the development of a liquid-crystal-tunable-filter-based hyperspectral colposcopy system covering the visible and near-infrared spectral ranges. The proposed system integrates two tunable filters into an existing Optomic OP-C5 clinical colposcope, enabling hyperspectral acquisition from 460 to 1000 nm with 130 spectral bands at 5 nm resolution using a panchromatic camera. Two alternative acquisition strategies were investigated: (i) filtering the light received by the system, or (ii) filtering the light emitted toward the sample. In addition, wavelength-dependent exposure control was studied to compensate for reduced system sensitivity and improve the signal-to-noise ratio in low-efficiency spectral regions. The system was benchmarked against a previous custom hyperspectral implementation based on a commercial camera. The comparative analysis highlights the advantages and limitations of both approaches, demonstrating the proposed system’s suitability for integration into clinical workflows and its potential for early detection of precancerous cervical lesions during routine colposcopic examinations.
(This article belongs to the Special Issue Advanced Sensing Techniques in Biomedical Signal Processing)
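Wavelength-dependent exposure control amounts to lengthening the integration time of bands where the system is less sensitive so that all bands reach a comparable signal level; the sensitivity curve, spectral grid, and timing limits in this sketch are invented, since the real curve must be measured for the instrument.

```python
import numpy as np

# Illustrative spectral grid and relative system sensitivity: high in the
# visible, falling toward the NIR tail (a real curve would be calibrated).
wl = np.arange(460, 1001, 5).astype(float)
sensitivity = np.clip(1.0 - 0.8 * (wl - 460) / (1000 - 460), 0.1, None)

base_exposure_ms = 20.0   # exposure where sensitivity is nominal
max_exposure_ms = 400.0   # cap to keep total acquisition time bounded

# Longer integration where the system is less sensitive, capped at the limit.
exposure_ms = np.minimum(base_exposure_ms / sensitivity, max_exposure_ms)
for w, t in list(zip(wl, exposure_ms))[::27]:
    print(f"{w:.0f} nm -> {t:.1f} ms")
```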

8 pages, 1026 KB  
Proceeding Paper
IoT-Based Sensor Technologies for Object Detection in Low-Visibility Environments: Development and Validation of a Functional Prototype
by Pedro Escudero-Villa and Cristian Escudero
Eng. Proc. 2026, 124(1), 28; https://doi.org/10.3390/engproc2026124028 - 12 Feb 2026
Viewed by 574
Abstract
In emergency scenarios where visibility is compromised, rapid and accurate object detection becomes critical. This study addresses that challenge through the development and implementation of an IoT-based sensorized robotic system designed to detect objects in low-visibility environments, with a focus on supporting search and rescue missions through autonomous sensing and real-time data communication. The system aims to enhance search and rescue operations by identifying potential human presence in areas with limited access due to smoke, darkness, or hazardous conditions. The platform integrates distance sensors, a thermal camera (AMG8833), a PIR motion sensor, and wireless communication through the Arduino MKR1000 and ESP32-CAM boards. The mobile robot is equipped with obstacle avoidance, person detection, and IoT communication modules, allowing data to be sent to the cloud via ThingSpeak and enabling remote commands through TalkBack. A structured methodology was followed, including technology selection, hardware/software design, and testing under various lighting and opacity conditions. Experimental results showed the effectiveness of the system in identifying obstacles and detecting heat signatures representing a human body, with optimal performance observed at a 15 cm detection threshold. The system demonstrated robust operation in simulated rescue environments, providing real-time data transmission and remote-control capabilities.
(This article belongs to the Proceedings of The 6th International Electronic Conference on Applied Sciences)
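The heat-signature test on an AMG8833-style 8x8 thermal grid can be sketched as a simple thresholding rule; the temperature window, pixel count, and synthetic frame below are assumptions, and a real deployment would read the sensor over I2C through a driver library.

```python
import numpy as np

BODY_MIN_C, BODY_MAX_C = 28.0, 38.0   # plausible skin-temperature window
MIN_HOT_PIXELS = 3                    # ignore single-pixel noise

def person_detected(frame_8x8: np.ndarray) -> bool:
    """Flag a possible person when enough pixels fall inside the body range."""
    hot = (frame_8x8 >= BODY_MIN_C) & (frame_8x8 <= BODY_MAX_C)
    return int(hot.sum()) >= MIN_HOT_PIXELS

# Synthetic frame: cool background with a warm blob.
frame = np.full((8, 8), 21.5)
frame[3:5, 2:5] = 33.0
print(person_detected(frame))   # True
```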

12 pages, 3595 KB  
Article
A Deep Learning-Enhanced MIMO C-OOK Scheme for Optical Camera Communication in Internet of Things Networks
by Duy Thong Nguyen, Trang Nguyen, Minh Duc Thieu and Huy Nguyen
Photonics 2026, 13(2), 163; https://doi.org/10.3390/photonics13020163 - 8 Feb 2026
Viewed by 519
Abstract
Wireless communication systems, which rely on radio frequencies (RFs), are widely utilized in various applications, such as mobile communications, radio frequency identification, marine networks, smart farms, and smart homes. Due to their ease of installation, wireless systems offer advantages over wired alternatives. However, the deployment of high-frequency radio waves for communication can pose potential health risks. To address these concerns, many researchers have explored the use of visible light as a safer alternative to radio frequency communication. In this context, optical camera communication has emerged as a promising alternative to RF systems. Meanwhile, artificial intelligence (AI) is reshaping industries and human life by solving complex problems, enabling intelligent automation, and driving advancements in technologies such as smart farms, smart homes, and future internet of things systems. In this study, we propose a Multiple-Input Multiple-Output Camera On–Off Keying (MIMO C-OOK) modulation scheme that integrates a YOLOv11 model for light source detection and tracking and a deep learning-based decoder algorithm, optimized for long-range and mobile communication scenarios. The proposed approach enhances the conventional C-OOK system by increasing the data rate and transmission range while reducing errors at the receiver. Implementation results show that the proposed approach can achieve reliable communication up to 10 m with minimal errors, even under mobility conditions (3 m/s, equivalent to walking speed), by optimizing camera parameters and employing forward error correction (FEC).
(This article belongs to the Special Issue Optical Wireless Communications (OWC) for Internet-of-Things (IoT))
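At its core, camera OOK maps bits onto light on/off states framed by a known preamble so the camera-side decoder can find symbol boundaries; the toy encoder below ignores the paper's MIMO, YOLOv11, and FEC components, and its preamble pattern is made up.

```python
PREAMBLE = [1, 0, 1, 0, 1, 1, 0, 0]   # hypothetical start-of-frame pattern

def cook_encode(payload: bytes, samples_per_bit: int = 4) -> list[int]:
    """Encode a payload as an on-off keyed light waveform (1 = LED on).

    Each bit is held for several samples so that a camera sampling the
    light source at its frame rate can still resolve the symbol.
    """
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    waveform = []
    for b in PREAMBLE + bits:
        waveform.extend([b] * samples_per_bit)
    return waveform

wave = cook_encode(b"\xA5")
print(len(wave), wave[:16])   # (8 preamble + 8 data bits) * 4 samples each
```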

23 pages, 6932 KB  
Article
RocSync: Millisecond-Accurate Temporal Synchronization for Heterogeneous Camera Systems
by Jaro Meyer, Frédéric Giraud, Joschua Wüthrich, Marc Pollefeys, Philipp Fürnstahl and Lilian Calvet
Sensors 2026, 26(3), 1036; https://doi.org/10.3390/s26031036 - 5 Feb 2026
Viewed by 521
Abstract
Accurate spatiotemporal alignment of multi-view video streams is essential for a wide range of dynamic-scene applications such as multi-view 3D reconstruction, pose estimation, and scene understanding. However, synchronizing multiple cameras remains a significant challenge, especially in heterogeneous setups combining professional- and consumer-grade devices, visible and infrared sensors, or systems with and without audio, where common hardware synchronization capabilities are often unavailable. This limitation is particularly evident in real-world environments, where controlled capture conditions are not feasible. In this work, we present a low-cost, general-purpose synchronization method that achieves millisecond-level temporal alignment across diverse camera systems while supporting both visible (RGB) and infrared (IR) modalities. The proposed solution employs a custom-built LED Clock that encodes time through red and infrared LEDs, allowing visual decoding of the exposure window (start and end times) from recorded frames for millisecond-level synchronization. We benchmark our method against hardware synchronization and achieve a residual error of 1.34 ms RMSE across multiple recordings. In further experiments, our method outperforms light-, audio-, and timecode-based synchronization approaches and directly improves downstream computer vision tasks, including multi-view pose estimation and 3D reconstruction. Finally, we validate the system in large-scale surgical recordings involving over 25 heterogeneous cameras spanning both IR and RGB modalities. This solution simplifies and streamlines the synchronization pipeline and expands access to advanced vision-based sensing in unconstrained environments, including industrial and clinical applications.
(This article belongs to the Section Sensing and Imaging)
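Once the LED clock lets the exposure window of each frame be decoded, aligning a camera to the global clock reduces to fitting an offset between the decoded mid-exposure times and the camera's own timestamps; the numbers in this sketch are invented, and the paper's pipeline may fit a richer timing model.

```python
import numpy as np

# Hypothetical decoded LED-clock times (ms) at mid-exposure for a few
# frames of one camera, plus the camera's own frame timestamps (ms).
clock_ms  = np.array([1000.0, 1033.4, 1066.7, 1100.1])
camera_ms = np.array([ 250.0,  283.3,  316.7,  350.0])

# Constant-offset model: camera time + offset = global LED-clock time.
offset = np.mean(clock_ms - camera_ms)
residual_rmse = np.sqrt(np.mean((camera_ms + offset - clock_ms) ** 2))
print(f"offset = {offset:.2f} ms, residual RMSE = {residual_rmse:.3f} ms")
```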

24 pages, 2006 KB  
Article
HiRo-SLAM: A High-Accuracy and Robust Visual-Inertial SLAM System with Precise Camera Projection Modeling and Adaptive Feature Selection
by Yujuan Deng, Liang Tian, Xiaohui Hou, Xin Liu, Yonggang Wang, Xingchao Liu and Chunyuan Liao
Sensors 2026, 26(2), 711; https://doi.org/10.3390/s26020711 - 21 Jan 2026
Viewed by 1048
Abstract
HiRo-SLAM is a visual-inertial SLAM system developed to achieve high accuracy and enhanced robustness. To address critical limitations of conventional methods, including systematic biases from imperfect camera models, uneven spatial feature distribution, and the impact of outliers, we propose a unified optimization framework that integrates four key innovations. First, Precise Camera Projection Modeling (PCPM) embeds a fully differentiable camera model in nonlinear optimization, ensuring accurate handling of camera intrinsics and distortion to prevent error accumulation. Second, Visibility Pyramid-based Adaptive Non-Maximum Suppression (P-ANMS) quantifies feature point contribution through a multi-scale pyramid, providing uniform visual constraints in weakly textured or repetitive regions. Third, Robust Optimization Using Graduated Non-Convexity (GNC) suppresses outliers through dynamic weighting, preventing convergence to local minima. Finally, the Point-Line Feature Fusion Frontend combines XFeat point features with SOLD2 line features, leveraging multiple geometric primitives to improve perception in challenging environments, such as those with weak textures or repetitive structures. Comprehensive evaluations on the EuRoC MAV, TUM-VI, and OIVIO benchmarks show that HiRo-SLAM outperforms state-of-the-art visual-inertial SLAM methods. On the EuRoC MAV dataset, HiRo-SLAM achieves a 30.0% reduction in absolute trajectory error compared to strong baselines and attains millimeter-level accuracy on specific sequences under controlled conditions. However, while HiRo-SLAM demonstrates state-of-the-art performance in scenarios with moderate texture and minimal motion blur, its effectiveness may be reduced in highly dynamic environments with severe motion blur or extreme lighting conditions.
(This article belongs to the Section Navigation and Positioning)
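The precise-projection ingredient keeps the full intrinsics-plus-distortion model inside the optimization residual; below is a plain NumPy pinhole projection with two radial distortion terms, with placeholder coefficient values. Because every step is a smooth elementary operation, the same code is differentiable when written in an autodiff framework, which is what lets such a model sit inside a nonlinear least-squares optimizer.

```python
import numpy as np

def project(X_cam, fx, fy, cx, cy, k1, k2):
    """Pinhole projection with radial distortion coefficients k1, k2.

    X_cam: (N, 3) points in the camera frame; returns (N, 2) pixel coords.
    """
    x = X_cam[:, 0] / X_cam[:, 2]
    y = X_cam[:, 1] / X_cam[:, 2]
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2      # radial distortion factor
    u = fx * d * x + cx
    v = fy * d * y + cy
    return np.stack([u, v], axis=-1)

# Hypothetical points and intrinsics (values are placeholders, not the paper's).
pts = np.array([[0.1, -0.05, 2.0], [0.3, 0.2, 4.0]])
print(project(pts, fx=458.0, fy=457.0, cx=367.0, cy=248.0, k1=-0.28, k2=0.07))
```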

13 pages, 4845 KB  
Article
Efficient Solid-State Far-Field Macroscopic Fourier Ptychographic Imaging via Programmable Illumination and Camera Array
by Di You, Ge Ren and Haotong Ma
Photonics 2026, 13(1), 73; https://doi.org/10.3390/photonics13010073 - 14 Jan 2026
Viewed by 305
Abstract
Macroscopic Fourier ptychography (FP) is regarded as a highly promising approach for creating a synthetic aperture for macro visible imaging to achieve sub-diffraction-limited resolution. However, most existing macro FP techniques rely on high-precision translation stages to drive laser or camera scanning, thereby increasing system complexity and bulk. Meanwhile, the scanning process is slow and time-consuming, hindering rapid imaging. In this paper, we introduce an innovative illumination scheme that employs a spatial light modulator to achieve precise programmable variable-angle illumination at a relatively long distance; it can also freely adjust the illumination spot size through phase coding to avoid the issues of a limited field of view and excessive dispersion of illumination energy. Coupled with a camera array, this can significantly reduce the number of shots taken by the imaging system and enable a lightweight and highly efficient solid-state macro FP imaging system with a large equivalent aperture. The effectiveness of the method is experimentally validated using various optically rough diffuse objects and a USAF target at laboratory-scale distances.

14 pages, 10595 KB  
Article
Light Sources in Hyperspectral Imaging Simultaneously Influence Object Detection Performance and Vase Life of Cut Roses
by Yong-Tae Kim, Ji Yeong Ham and Byung-Chun In
Plants 2026, 15(2), 215; https://doi.org/10.3390/plants15020215 - 9 Jan 2026
Cited by 1 | Viewed by 560
Abstract
Hyperspectral imaging (HSI) is a noncontact camera-based technique that enables deep learning models to learn various plant conditions by detecting light reflectance under illumination. In this study, we investigated the effects of four light sources—halogen (HAL), incandescent (INC), fluorescent (FLU), and light-emitting diodes (LED)—on the quality of spectral images and the vase life (VL) of cut roses, which are vulnerable to abiotic stresses. Cut roses ‘All For Love’ and ‘White Beauty’ were used to compare cultivar-specific visible reflectance characteristics associated with contrasting petal pigmentation. HSI was performed at four time points, yielding 640 images per light source from 40 cut roses. The results revealed that the light source strongly affected both the image quality (mAP@0.5 60–80%) and VL (0–3 d) of cut roses. The HAL lamp produced high-quality spectral images across wavelengths (WL) ranging from 480 to 900 nm and yielded the highest object detection performance (ODP), reaching mAP@0.5 of 85% in ‘All For Love’ and 83% in ‘White Beauty’ with the YOLOv11x models. However, it increased petal temperature by 2.7–3 °C, thereby stimulating leaf transpiration and consequently shortening the VL of the flowers by 1–2.5 d. In contrast, INC produced unclear images with low spectral signals throughout the WL and consequently resulted in lower ODP, with mAP@0.5 of 74% and 69% in ‘All For Love’ and ‘White Beauty’, respectively. The INC only slightly increased petal temperature (1.2–1.3 °C) and shortened the VL by 1 d in both cultivars. Although FLU and LED had only minor effects on petal temperature and VL, these illuminations generated transient spectral peaks in the WL range of 480–620 nm, resulting in decreased ODP (mAP@0.5 60–75%). Our results revealed that HAL provided reliable, high-quality spectral image data and high object detection accuracy, but simultaneously had negative effects on flower quality. Our findings suggest an alternative two-phase approach for illumination applications that uses HAL during the initial exploration of spectra corresponding to specific symptoms of interest, followed by LED for routine plant monitoring. Optimizing illumination in HSI will improve the accuracy of deep learning-based prediction and thereby contribute to the development of an automated quality sorting system that is urgently required in the cut flower industry.
(This article belongs to the Special Issue Application of Optical and Imaging Systems to Plants)

14 pages, 3893 KB  
Article
High-Speed X-Ray Imager ‘Hayaka’ and Its Application for Quick Imaging XAFS and in Coquendo 4DCT Observation
by Akio Yoneyama, Midori Yasuda, Wataru Yashiro, Hiroyuki Setoyama, Satoshi Takeya and Masahide Kawamoto
Sensors 2026, 26(2), 434; https://doi.org/10.3390/s26020434 - 9 Jan 2026
Viewed by 612
Abstract
A lens-coupled high-speed X-ray camera, “Hayaka”, was developed for quick imaging of X-ray absorption fine structure (XAFS) and time-resolved high-speed computed tomography (CT) using synchrotron radiation (SR). The camera comprises a scintillator, an imaging lens system, and a high-speed visible-light sCMOS sensor, and is capable of imaging with a minimum exposure time of 1 μs and a maximum frame rate of 5000 frames/s (fps). A feasibility study using white and monochromatic SR at beamline BL07 of the SAGA Light Source showed that fine X-ray images with a spatial resolution of 77 μm can be captured with an exposure time of 10 μs. Furthermore, quick imaging XAFS, combined with high-speed energy scanning of a small Ge double-crystal monochromator at the same beamline, enabled spectral image data to be acquired near the Cu K-edge in a minimum of 0.5 s. Additionally, an in coquendo 4DCT observation (time-resolved 3D observation of cooking processes) combined with a high-speed rotation table revealed the boiling process of Japanese somen noodles over 150 s with a time resolution of 0.5 s.
(This article belongs to the Special Issue Recent Advances in X-Ray Sensing and Imaging)
