Search Results (1,487)

Search Parameters:
Keywords = 2D optical image

18 pages, 24765 KB  
Article
Field-Transformation-Based Light-Field Hologram Generation from a Single RGB Image
by Xiaoming Chen, Xiaoyu Jiang, Yingqing Huang, Xi Wang and Chaoqun Ma
Photonics 2026, 13(5), 407; https://doi.org/10.3390/photonics13050407 - 22 Apr 2026
Abstract
We propose a field-transformation-based framework for generating phase-only light-field holograms from a single RGB image. The method establishes an explicit pipeline from monocular scene inference to holographic wavefront synthesis, without requiring multi-view capture or task-specific hologram-network training. First, we construct a layered occlusion RGB-D model from the input image using monocular depth estimation, connectivity-based layer decomposition, and occlusion-aware inpainting, which provides a lightweight 3D prior for sparse-view rendering in the small-parallax regime. Second, we transform the rendered sparse RGB-D light field into a target complex wavefront on the recording plane through local frequency mapping, thereby bridging explicit scene geometry and wave-optical field construction. Third, we optimize the phase-only hologram under multi-plane amplitude constraints using a geometrically consistent initial phase and an error-driven adaptive depth-sampling strategy, which improves convergence stability and reconstruction quality under a limited computational budget. Numerical experiments show that the proposed method achieves better depth continuity, occlusion fidelity, and lower speckle noise than representative layer-based and point-based methods, and improves the average PSNR and SSIM by approximately 3 dB and 0.15, respectively, over Hogel-Free Holography. Optical experiments further confirm the physical feasibility and robustness of the proposed framework. Full article
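The multi-plane phase-only optimization described in this abstract belongs to the family of alternating-projection methods. The sketch below is not the authors' pipeline (which adds a geometrically consistent initial phase and adaptive depth sampling); it is a minimal single-plane Gerchberg-Saxton illustration of fitting a phase-only hologram to a target amplitude, with a plain FFT standing in for the true propagation kernel:

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=50, seed=0):
    """Single-plane Gerchberg-Saxton phase retrieval: alternate between a
    phase-only hologram plane and an image plane whose amplitude is
    constrained to the target."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(n_iter):
        # Propagate the phase-only field to the image plane (plain FFT as a
        # stand-in for an angular-spectrum or Fresnel kernel).
        img_field = np.fft.fft2(np.exp(1j * phase))
        # Enforce the target amplitude, keep the arrived phase.
        img_field = target_amp * np.exp(1j * np.angle(img_field))
        # Back-propagate and keep only the phase (phase-only constraint).
        phase = np.angle(np.fft.ifft2(img_field))
    return phase

# Illustrative target: a bright square on a dark background.
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
phase = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * phase)))
```

After a few dozen iterations the reconstructed amplitude becomes clearly correlated with the target; real hologram pipelines replace the FFT with a physical diffraction kernel and, as in the paper, add per-depth amplitude constraints.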

17 pages, 5384 KB  
Review
Hyperspectral Sensing Enabled by Optics-Free Sensor Architectures
by Yicheng Wang, Xueyi Wang, Xintong Guo and Yining Mu
Nanomanufacturing 2026, 6(2), 8; https://doi.org/10.3390/nanomanufacturing6020008 - 20 Apr 2026
Abstract
Hyperspectral sensing allows for the capture of spatially resolved spectral data, a capability critical for applications spanning from remote sensing to biomedical diagnostics. Nevertheless, the widespread adoption of this technology is hindered by the bulk and complexity of traditional systems based on diffractive optics. To overcome these hurdles, substantial research efforts have been dedicated to system miniaturization via component scaling and computational imaging. This review outlines the technological progression of compact hyperspectral imaging, ranging from miniaturized dispersive elements and tunable filters to computational snapshot designs using optical multiplexing. Although these approaches decrease system volume, they generally treat the sensor as a passive intensity recorder requiring external encoding. Therefore, we focus here on the rising paradigm of sensor-level integration made possible by nanomanufacturing. We examine optics-free architectures where spectral discrimination is embedded directly into the pixel, distinguishing between pixel-level nanophotonic filtering and intrinsic material-based selectivity. We specifically highlight emerging platforms such as compositionally engineered and cavity-enhanced perovskites, as well as electrically tunable organic or two-dimensional (2D) material heterostructures. To conclude, this review discusses persistent challenges regarding fabrication uniformity and stability, providing an outlook on the future of scalable and fully integrated hyperspectral vision systems. Full article

24 pages, 1651 KB  
Article
An Integrated Tunable-Focus Light Field Imaging System for 3D Seed Phenotyping: From Co-Optimized Optical Design to Computational Reconstruction
by Jingrui Yang, Qinglei Zhao, Shuai Liu, Meihua Xia, Jing Guo, Yinghong Yu, Chao Li, Xiao Tang, Shuxin Wang, Qinglong Hu, Fengwei Guan, Qiang Liu, Mingdong Zhu and Qi Song
Photonics 2026, 13(4), 385; https://doi.org/10.3390/photonics13040385 - 17 Apr 2026
Abstract
Three-dimensional seed phenotyping requires imaging systems capable of achieving micron-level resolution across a centimeter-level field of view (FOV), a goal constrained by the resolution–FOV trade-off in conventional light field architectures. This paper presents a hardware–software co-optimized framework that integrates a reconfigurable optical system with computational imaging pipelines to address this limitation. At the hardware level, we develop a tunable-focus lens module that enables flexible adjustment of the effective focal length, combined with a custom-designed microlens array (MLA). A mathematical model is established to analyze the interdependencies among FOV, lateral resolution, depth of field (DOF), and system configuration, guiding the design of individual optical components. On the computational side, we propose a hybrid aberration correction strategy: first, a co-calibration of lens and MLA aberrations based on line-feature detection; second, a conditional generative adversarial network (cGAN) with attention-guided residual learning to enhance sub-aperture images, achieving a PSNR of 34.63 dB and an SSIM of 0.9570 on seed datasets. Experimentally, the system achieves a resolution of 6.2 lp/mm at MTF50 over a 2–3 cm FOV, representing a 307% improvement over the initial configuration (1.52 lp/mm). The reconstruction pipeline combines epipolar plane image (EPI) analysis with multi-view consistency constraints to generate dense 3D point clouds at a density of approximately 1.5 × 10⁴ points/cm² while preserving spectral and textural features. Validation on bitter melon and rice seeds demonstrates accurate 3D reconstruction and reliable extraction of morphological parameters across a large area. By integrating optical and computational design, this work establishes a reconfigurable imaging framework that overcomes the resolution–FOV limitations of conventional light field systems.
The proposed architecture is also applicable to robotic vision and biomedical imaging. Full article
(This article belongs to the Special Issue Optical Imaging and Measurements: 2nd Edition)
21 pages, 8107 KB  
Article
Lens Alternatives to Microscope Objectives in Optical Coherence Microscopy for Ultra-High-Resolution Imaging
by Xinjie Zhu, Zijian Zhang, Samuel Lawman, Xingyu Yang, Yalin Zheng and Yaochun Shen
Photonics 2026, 13(4), 384; https://doi.org/10.3390/photonics13040384 - 17 Apr 2026
Abstract
Ultrahigh lateral resolution (UHLR) optical coherence tomography (OCT) technology, also called optical coherence microscopy (OCM), has gained popularity, especially in the field of biomedical imaging. In these systems, high-numerical-aperture (NA) microscope objectives (MOs) are employed to offer better than 3 µm lateral resolution. However, in the implemented broadband OCM configuration, the use of complex multi-element microscope objectives can reduce the detected returned signal compared with a simpler imaging lens configuration. This reduction in detected returned signal can become an important practical limitation in many OCM applications, particularly for biomedical imaging when high imaging speed is crucial. This study investigates whether a single off-the-shelf lens can provide a practical alternative to conventional MOs, achieving higher throughput while maintaining reasonable spatial resolution. We systematically evaluated 14 commercial lenses using Zemax OpticStudio simulations, identifying an aspherized achromatic lens (Edmund Optics #85302) that best met these key criteria. To validate its feasibility for OCM, performance was tested in both Full-Field Time-Domain OCM (FF-TD-OCM) and Line-Field Spectral-Domain OCM (LF-SD-OCM) configurations. Using a broadband composite Superluminescent Diode (SLD) source (750–920 nm), we quantified the resolvable features, axial resolution, and overall light transmission. The validated system demonstrated near-diffraction-limited performance. In the LF-SD-OCM setup, it successfully resolved features as fine as Group 8, Element 6, corresponding to a 2.2 µm line pair pitch (~1.1 µm line width) and achieved a 2.86 µm axial resolution in air. A through-focus comparison further showed practically useful contrast retention around focus.
Additional imaging of onion epidermal tissue and ex vivo porcine corneal tissue demonstrated that the proposed lens could provide interpretable structural images on representative biological samples. Under the tested LF-SD-OCM detection configuration, the selected lens delivered approximately 2.0 dB higher returned signal than the Mitutoyo MY10X-823 objective, corresponding to a 1.59× larger received signal. Full article
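The quoted figures are mutually consistent: treating the 1.59× received-signal figure as a power (intensity) ratio, the decibel conversion 10·log10(ratio) reproduces the reported gain of about 2.0 dB:

```python
import math

ratio = 1.59                      # received-signal (power) ratio
gain_db = 10 * math.log10(ratio)  # power ratio to decibels
print(round(gain_db, 2))          # 2.01, i.e. ~2.0 dB as reported
```

(Had 1.59× been an amplitude ratio, 20·log10 would give about 4.0 dB, so the power convention is evidently the one used.)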

21 pages, 10403 KB  
Article
Composition-Dependent Mechanical and Thermal Behavior of TPU-Modified PLA and ABS Filaments for FDM Applications
by Burak Demirtas, Caglar Sevim and Munise Didem Demirbas
Polymers 2026, 18(8), 949; https://doi.org/10.3390/polym18080949 - 13 Apr 2026
Abstract
Although polylactic acid (PLA) and acrylonitrile–butadiene–styrene (ABS) are among the most widely used polymers in material extrusion, their limited toughness and energy-absorption capacity often restrict the structural performance of 3D-printed functional components. To address the limited comparative understanding of how thermoplastic polyurethane (TPU) modifies the deformation behavior and phase characteristics of these two polymer systems, this study presents a multi-analytical evaluation of TPU-reinforced PLA and ABS blends. To this end, both polymers were blended with TPU at 10–50 wt% and processed into filaments via single-screw extrusion. The resulting filaments were used to fabricate ASTM D638 Type I tensile specimens via material extrusion under matrix-specific, but internally consistent, printing parameters. For each composition, five specimens were tested to obtain representative values of tensile strength, elongation at break, and toughness. In addition to conventional tensile testing, the evolution of strain during deformation was monitored using digital image correlation (DIC), enabling full-field characterization of local deformation behavior. To ensure experimental reliability, specimen masses were carefully controlled, and the datasets were analyzed using MATLAB. Thermal properties were investigated by differential scanning calorimetry (DSC) to determine the influence of TPU on glass transition, melting behavior, and phase mobility, and to relate these thermal characteristics to the mechanical response of the blends. The incorporation of TPU significantly increased ductility and energy absorption in both polymer matrices, although the magnitude of improvement differed. ABS/TPU blends exhibited the highest toughness enhancement, reaching 221.4% at 30 wt% TPU, while PLA/TPU systems showed nearly a twofold increase at 20 wt% TPU. 
DIC analysis further revealed a transition from localized brittle deformation in neat polymers to more distributed plastic deformation with increasing TPU content. DSC results indicated reduced crystallinity in PLA-rich blends and enhanced segmental mobility in ABS-based systems, consistent with the observed mechanical behavior. Overall, the combined mechanical, optical, and thermal analyses demonstrate that the optimal TPU content is matrix-dependent, providing practical guidelines for tailoring PLA- and ABS-based filaments to achieve a controlled balance between stiffness, ductility, and energy absorption in material extrusion applications. Full article

27 pages, 49307 KB  
Article
Enhancing Soil Salinity Mapping by Integrating PolSAR Scattering Components and Spectral Indices in a 2D Feature Space Using RADARSAT-2 and Landsat-8 Imagery
by Bilali Aizezi, Ilyas Nurmemet, Aihepa Aihaiti, Yu Qin, Meimei Zhang, Ru Feng, Yixin Zhang and Yang Xiang
Remote Sens. 2026, 18(8), 1153; https://doi.org/10.3390/rs18081153 - 13 Apr 2026
Abstract
Soil salinization in arid oases constrains soil functioning and crop production, making spatially explicit monitoring important for land management. Multispectral optical remote sensing enables large-area salinity assessment, but in oasis environments such as the Keriya Oasis, its performance can be limited by spectral confusion between salt crusts and bright bare soils, sparse vegetation cover, and strong surface heterogeneity. Synthetic aperture radar (SAR), by contrast, provides all-weather imaging capability and sensitivity to surface scattering and dielectric-related conditions, but its salinity interpretation is often affected by surface complexity and environmental coupling. To address these limitations, a spectral index–polarimetric scattering integration framework that combines RADARSAT-2 and Landsat-8 OLI features within a simple two-dimensional (2D) feature space was developed. Two groups of models were constructed from variables selected through a data-driven screening process: (1) polarimetric feature space models based on combinations such as VanZyl volume scattering with Pauli odd-bounce or Touzi alpha scattering; and (2) multi-source feature space models that integrate the optimal polarimetric component with key spectral indicators such as SI4 and MSAVI. Among all tested models, VanZyl_vol-SI4 achieved the best performance (fitting: R² = 0.749, RMSE = 5.798 dS m⁻¹, MAE = 4.086 dS m⁻¹; validation: R² = 0.716, RMSE = 5.566 dS m⁻¹, MAE = 4.528 dS m⁻¹). The results indicate that integrating PolSAR scattering information with optical indices can improve salinity mapping relative to single-source feature spaces in the Keriya Oasis. The proposed 2D framework provides a concise way to compare different feature combinations and supports regional identification of salt-affected soils. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
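The fitting and validation scores reported for the salinity models (R², RMSE, and MAE in dS/m) follow the standard regression definitions; a minimal sketch with illustrative numbers, not the paper's data:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Standard goodness-of-fit scores: R^2, RMSE and MAE."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    ss_res = np.sum(resid ** 2)                     # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean(resid ** 2))
    mae = np.mean(np.abs(resid))
    return r2, rmse, mae

# Illustrative soil-EC values (dS/m).
r2, rmse, mae = regression_metrics([10.0, 20.0, 30.0, 40.0],
                                   [12.0, 18.0, 31.0, 39.0])
```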

22 pages, 4667 KB  
Article
Self-Assembly of Curved Photonic Heterostructures by the Hanging Drop Method
by Ion Sandu, Claudiu Teodor Fleaca, Florian Dumitrache, Iuliana Urzica, Iulia Antohe and Marius Dumitru
Polymers 2026, 18(8), 924; https://doi.org/10.3390/polym18080924 - 9 Apr 2026
Abstract
By combining hanging-drop self-assembly with melt infiltration and selective inversion, we fabricate millimetric and free-standing curved photonic heterostructures that integrate infiltrated-opal, inverse-opal, embossed, and white-scattering 2.5D metasurface domains within a single continuous body. These architectures enable configurations inaccessible to planar fabrication, including naturally formed concavities within convex inverse-opal films and alternating ordered/single-layer regions that preserve local coherence while introducing disorder at larger scales. Across these heterogeneous curved landscapes, we observe optical phenomena absent in flat photonic structures—spectrally selected lateral collimation, geometry-shifted ghost images, and transmission-derived valleys shaped by curvature-mediated Bragg extraction. Their origin lies in the geometric constraints inherent to curved assemblies, where spatially varying normals, non-parallel lattice orientations, and topologically required defects couple order and disorder into a distributed-coherence regime. This coupling expands the accessible photonic state space, establishing curvature as an active functional degree of freedom rather than a geometric constraint, positioning the self-assembled photonic heterostructures as a scalable route toward multifunctional 3D metasurfaces and new regimes of light–matter interaction. Full article
(This article belongs to the Special Issue Advances in Polymer Materials for Sensors and Flexible Electronics)

14 pages, 258 KB  
Article
Management of Complex CNS Tumours: Impact of Multiple Tumour Board Review
by Chalina Huynh, Pavanpreet Metley, Kent Powell, Matthew Larocque, Keith Aronyk and Alysa Fairchild
Radiation 2026, 6(2), 14; https://doi.org/10.3390/radiation6020014 - 7 Apr 2026
Abstract
Background. Patients with malignant or benign central nervous system (CNS) tumours are evaluated for suitability of treatment modality based on multiple clinical and tumour-related factors. To obtain multidisciplinary consensus, a patient’s file and imaging are commonly reviewed by a tumour board (TB). There are three relevant weekly TB venues at our institute—gamma knife stereotactic radiosurgery (SRS) intake rounds, CNS rounds, and stereotactic body radiotherapy (SBRT) rounds—which are attended by non-overlapping clinician teams. We explored the clinical parameters prompting multiple TB reviews in patients with complex CNS tumours. Methods. Data were retrospectively obtained from electronic medical records. Patients referred for discussion at SRS rounds (November 2017–June 2020) were cross-referenced with those reviewed in CNS rounds and SBRT rounds. The cohort of interest included patients who underwent review at more than one TB for the same indication. Patient, tumour, and treatment factors were abstracted, and descriptive statistics were calculated. A sub-cohort of patients with pre-plans created for both SRS and conventionally fractionated external beam radiotherapy (EBRT) was identified. Dosimetric data were analyzed. Results. Of 1091 patients, 87 (8.0%) were discussed at more than one TB. 59/87 (67.8%) patients were reviewed at two TBs pertaining to the same CNS lesion and comprised the study cohort. The most common tumour type was meningioma (20/59), and the most common reason for multiple discussions was proximity to optic structures (19/59). After TB discussions, 25/59 patients were seen in consultation by one specialist, 29/59 by two, and 5/59 by none. Overall, the final treatment decisions were conventional EBRT in 21/59; SRS in 18/59; surveillance in 12/59; surgery in 3/59; systemic therapy in 3/59; proton referral in 1/59; and SBRT in 1/59. A total of 20/59 patients were treated with palliative intent. 
Among all patients who ultimately received radiotherapy, median interval between the first TB discussion and the first RT treatment was 56 days (IQR 7.5–65.5 d). The pre-plan sub-cohort consisted of four patients, all of whom were ultimately treated with conventional EBRT. Conclusions. Evidence to support optimal treatment for some complex CNS tumours can be limited. Multiple radiotherapy modalities may be equally favourable (or unfavourable) options. Proximity to the optic apparatus and previous CNS irradiation are common reasons for clinical equipoise. Tumour board review is an essential tool in formulating a multidisciplinary care plan; however, attention should be paid to ensuring that subsequent consultations and treatment initiation are not unduly delayed. Full article
18 pages, 535 KB  
Review
Artificial Intelligence in Intraoperative Imaging and Navigation for Spine Surgery: A Narrative Review
by Mina Girgis, Allison Kelliher, Michael S. Pheasant, Alex Tang, Siddharth Badve and Tan Chen
J. Clin. Med. 2026, 15(7), 2779; https://doi.org/10.3390/jcm15072779 - 7 Apr 2026
Abstract
Artificial intelligence (AI) is increasingly transforming spine surgery, with expanding applications in diagnostics, intraoperative imaging, and surgical navigation. As the field advances toward greater precision and safety, machine learning (ML) and deep learning technologies are being integrated to augment surgeon expertise and optimize operative workflows. In particular, AI-driven innovations in image acquisition and navigation are reshaping intraoperative decision-making and technical execution. This narrative review provides an overview of AI applications relevant to intraoperative imaging and navigation in spine surgery. We begin by defining key concepts in AI, ML, and deep learning and briefly outline the historical evolution of AI within spine practice. We then examine current capabilities in image recognition and automated pathology detection, emphasizing their clinical relevance. Given the central role of imaging accuracy in modern navigation-assisted procedures, we review conventional acquisition platforms, including intraoperative computed tomography (CT) systems (e.g., O-arm, GE, Airo), surface-based registration to preoperative CT (Stryker, Medtronic), and optical surface mapping technologies (e.g., 7D Surgical). Emerging AI-optimized advancements are subsequently discussed, including low-dose intraoperative CT protocols, expanded scan windows, metal artifact reduction algorithms, integration of 2D fluoroscopy with preoperative CT datasets, and 3D reconstruction derived from 2D imaging. These developments aim to improve image quality, reduce radiation exposure, and enhance navigational accuracy. By synthesizing current evidence and technological progress, this review highlights how AI-enhanced imaging systems are redefining intraoperative spine surgery and shaping the future of precision-based care. 
The primary purpose of this review is to outline the applications of AI and its potential for perioperative and intraoperative optimization, including radiation exposure reduction, workflow streamlining, preoperative planning, robot-assisted surgery, and navigation. The secondary purpose is to define AI, machine learning, and deep learning within the medical context, describe image and pathology recognition, and provide a historical overview of AI in orthopedic spine surgery. Full article
(This article belongs to the Special Issue Spine Surgery: Current Practice and Future Directions)

13 pages, 3660 KB  
Article
Prediction of Visual Field Progression in Myopic Normal Tension Glaucoma Using a Nomogram-Based Model
by Ji Eun Song, Eun Ji Lee and Tae-Woo Kim
J. Clin. Med. 2026, 15(7), 2709; https://doi.org/10.3390/jcm15072709 - 3 Apr 2026
Abstract
Background/Objectives: This study aimed to develop a nomogram-based prediction tool to estimate visual field (VF) progression in patients with bilateral myopic normal-tension glaucoma (mNTG) by integrating key structural and vascular parameters. Methods: This retrospective cohort study included 150 eyes from 75 treatment-naïve patients with mNTG. All subjects were followed for at least five years with at least six reliable VF examinations. Key structural features, including the lamina cribrosa steepness index (LCSI) via enhanced-depth imaging optical coherence tomography (OCT) and choroidal microvascular dropout (cMvD) via OCT angiography (OCTA), were evaluated. VF progression was determined by event-based glaucoma progression analysis (GPA). To construct the predictive nomogram, clustered logistic regression with forward selection and 1000 bootstrap iterations was used to identify independent predictors. Results: Of the 150 eyes, 58 (38.7%) exhibited VF progression. Multivariable analysis identified steeper LCSI and the presence of parapapillary cMvD at baseline as significant independent predictors of progression. The resulting nomogram demonstrated excellent predictive accuracy, with an AUC of 0.922 and a C-index of approximately 0.92, indicating strong discriminative ability. Conclusions: This nomogram, incorporating structural (LCSI) and vascular (cMvD) markers, may offer a useful individualized tool for predicting VF progression in mNTG. This tool could assist in the early identification of high-risk patients and support personalized treatment planning to optimize long-term visual outcomes. Full article

23 pages, 2950 KB  
Article
Multi-View Camera-Based UAV 3D Trajectory Reconstruction Using an Optical Imaging Geometric Model
by Chen Ji, Yiyue Wang, Junfan Yi, Xiangtian Zheng, Wanxuan Geng and Liang Cheng
Electronics 2026, 15(7), 1425; https://doi.org/10.3390/electronics15071425 - 30 Mar 2026
Abstract
In low-altitude complex environments, accurately reconstructing the three-dimensional (3D) flight trajectories of small unmanned aerial vehicles (UAVs) without onboard positioning modules remains challenging. To address this issue, this paper proposes a multi-view ground camera-based UAV 3D trajectory detection method founded on an optical imaging geometric model. Multiple ground cameras are used to synchronously observe UAV flight, enabling stable 3D trajectory reconstruction without relying on an onboard Global Navigation Satellite System (GNSS) receiver. At the two-dimensional (2D) observation level, a lightweight object detection model is employed for rapid UAV detection. Foreground segmentation is further introduced to extract accurate UAV contours, and geometric centroids are computed to obtain precise image plane coordinates. At the 3D reconstruction stage, camera extrinsic parameters are estimated using a back intersection method with ground control points, and the UAV spatial position in the world coordinate system is recovered via multi-view forward intersection. Field experiments demonstrate that the proposed method achieves stable 3D trajectory reconstruction in real urban environments, with a median error of 4.93 m and a mean error of 5.83 m. The mean errors along the X, Y, and Z axes are 2.28 m, 4.58 m, and 1.09 m, respectively, confirming its effectiveness for low-cost UAV trajectory monitoring. Full article
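The multi-view forward intersection used here is, in its linear form, classic direct-linear-transform (DLT) triangulation: each calibrated view contributes two homogeneous equations, and the 3D point is recovered as the null vector of the stacked system. A minimal sketch with two synthetic cameras (the projection matrices are illustrative, not the paper's calibration):

```python
import numpy as np

def triangulate(points_2d, proj_mats):
    """Linear (DLT) forward intersection: recover a 3D point from its
    projections in two or more views with known 3x4 projection matrices."""
    rows = []
    for (u, v), P in zip(points_2d, proj_mats):
        rows.append(u * P[2] - P[0])  # u*(p3 . X) - (p1 . X) = 0
        rows.append(v * P[2] - P[1])  # v*(p3 . X) - (p2 . X) = 0
    _, _, vt = np.linalg.svd(np.stack(rows))
    X = vt[-1]                        # null vector = homogeneous 3D point
    return X[:3] / X[3]               # dehomogenize

def project(P, X):
    """Pinhole projection of a 3D point to image-plane coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two illustrative cameras: identity intrinsics, 2 m baseline along X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-2.0], [0.0], [0.0]])])
X_true = np.array([1.0, 2.0, 10.0])
X_hat = triangulate([project(P1, X_true), project(P2, X_true)], [P1, P2])
```

With noisy detections the same system is solved in a least-squares sense (the singular vector with the smallest singular value), and additional views simply append more rows.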

15 pages, 1771 KB  
Article
Deep Learning-Based Generation of Retinal Nerve Fibre Layer Thickness Maps from Fundus Photographs: A Comparative Analysis of U-Net Architectures for Accessible Glaucoma Assessment
by Kyoung Ohn, Harin Jun, Yong-Sik Kim and Woong-Joo Whang
Life 2026, 16(4), 559; https://doi.org/10.3390/life16040559 - 29 Mar 2026
Abstract
Introduction: Optical coherence tomography (OCT) is the gold standard for retinal nerve fibre layer (RNFL) assessment; however, its high cost and limited accessibility hinder widespread use. This study aims to develop deep learning models that generate RNFL thickness maps from fundus images, providing a cost-effective alternative to OCT. Methods: A dataset of 5000 fundus-OCT image pairs from 5000 unique glaucoma patients was used to train and compare the following four U-Net-based deep learning models: ResU-Net, R2U-Net, Nested U-Net, and Dense U-Net. All models were trained for up to 1000 epochs with early stopping (patience = 50 epochs). Performance was evaluated using Mean Squared Error (MSE), Mean Absolute Error (MAE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Fréchet Inception Distance (FID). Results: ResU-Net demonstrated the best performance, achieving MSE = 0.00061, MAE = 0.01877, SSIM = 0.9163, PSNR = 32.19 dB, and FID = 30.08. These results represent a 108% improvement in SSIM and a 67% improvement in PSNR compared to a previously published benchmark for this task. Conclusions: This study demonstrates that deep learning models, particularly ResU-Net, can generate high-fidelity RNFL thickness maps from fundus photographs, substantially outperforming prior published benchmarks. This approach represents a potential contribution toward accessible glaucoma assessment, contingent upon prospective clinical validation and regulatory evaluation. Full article
(This article belongs to the Special Issue Vision Science and Optometry: 2nd Edition)
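The PSNR reported above can be cross-checked against the reported MSE: for images normalized to [0, 1] (an assumption; the abstract does not state the normalization), PSNR = 10·log10(1/MSE). A minimal sketch of this check:

```python
import numpy as np

def psnr(pred, target, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB for images scaled to [0, data_range]."""
    mse = np.mean((np.asarray(pred, float) - np.asarray(target, float)) ** 2)
    return 10.0 * np.log10(data_range**2 / mse)

# Synthetic image pair whose MSE equals the reported 0.00061:
target = np.zeros((64, 64))
pred = np.full((64, 64), np.sqrt(0.00061))
print(round(psnr(pred, target), 2))  # ~32.15 dB
```

The value is close to the reported 32.19 dB; the small gap may simply reflect per-image averaging of PSNR rather than one global MSE.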

31 pages, 9451 KB  
Article
Quantitative Microstructure Characterization in Additively Manufactured Nickel Alloy 625 Using Image Segmentation and Deep Learning
by Tuğrul Özel, Sijie Ding, Amit Ramasubramanian, Franco Pieri and Doruk Eskicorapci
Machines 2026, 14(4), 366; https://doi.org/10.3390/machines14040366 - 26 Mar 2026
Abstract
Laser Powder Bed Fusion for metals (PBF-LB/M) is a complex additive manufacturing process in which metal powder is selectively melted layer-by-layer to fabricate 3D parts. Process parameters critically influence the resulting microstructure in nickel alloys, with features such as melt pool marks, grain size and orientation, porosity, and cracks serving as key process signatures. These features are typically analyzed post-process to identify suboptimal conditions. This research aims to develop automated post-process measurement and analysis techniques using image processing, pattern recognition, and statistical learning to correlate process parameters with part quality. Optical microscopy images of build surfaces are analyzed using machine learning algorithms to evaluate porosity, grain size, and relative density in fabricated test coupons. Effect plots are generated to identify trends related to increasing energy density. A novel deep learning approach based on Mask R-CNN is used to detect and segment melt pool regions in optical microscopy images. From the segmented regions, melt pool dimensions, such as width, depth, and area, are extracted using bounding geometry coordinates. Manually labeled images (Type I and Type II) are used to train the model. A comparison between ResNet-50 and ResNet-101 backbones shows that the ResNet-50-based model (Model 2) achieves lower training loss (0.1781 vs. 0.1907) and validation loss (8.6140 vs. 9.4228), whereas quantitative evaluation using the Jaccard index, precision, and recall shows that the ResNet-101 backbone achieves about 4% higher mean Intersection-over-Union, with values of 0.85 for Type I and 0.82 for Type II melt pools; Type I is detected more accurately owing to its more regular morphology and clearer boundaries. By extending Faster R-CNN with a mask prediction branch, the method allows for precise melt pool measurements, providing valuable insights into process quality and dimensional accuracy, and aiding in the detection of defects in PBF-LB-fabricated parts. Full article
(This article belongs to the Special Issue Artificial Intelligence in Mechanical Engineering Applications)
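The Jaccard index used above to score the segmented melt pools is straightforward to compute from binary masks; a minimal sketch (mask shapes and the toy masks are illustrative, not from the paper's code):

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Intersection-over-Union (Jaccard index) of two boolean segmentation masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0  # two empty masks agree perfectly

# Two overlapping "melt pool" masks on a 4x4 grid:
a = np.zeros((4, 4), bool); a[:2, :] = True   # rows 0-1
b = np.zeros((4, 4), bool); b[1:3, :] = True  # rows 1-2
print(jaccard(a, b))  # 4 shared pixels / 12 pixels in the union = 1/3
```

In practice the per-instance IoU between a predicted and a ground-truth mask is averaged over the test set, which is presumably how the reported 0.85 / 0.82 means were obtained.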

11 pages, 1331 KB  
Communication
2D Perovskite All-Optical Synapses for Visual Perception Learning
by Fei Lv, Ruochen Li and Qing Hou
Photonics 2026, 13(4), 318; https://doi.org/10.3390/photonics13040318 - 25 Mar 2026
Abstract
This study presents an all-optical artificial synapse based on 2D perovskite materials for neuromorphic visual simulation. While conventional optoelectronic synapses, which integrate memory and processing, are prevalent in this field, their inherent optical-to-electrical conversion during signal processing incurs significant energy costs. In contrast, the proposed device operates purely in the optical domain. Under ultraviolet–visible light control, changes in the device's light transmittance can emulate key biological synaptic plasticity behaviors, including paired-pulse facilitation and learning ability. By integrating these devices into a 28 × 28 synaptic array, we constructed an artificial neural network that mimics the experience-driven enhancement characteristic of human visual perceptual learning. Under light-responsive regulation, the system optimized its image recognition learning behavior, and after multiple training sessions the recognition accuracy stabilized above 97%. Based on two-dimensional perovskite materials, this work provides a new material platform for realizing intelligent visual systems with adaptive learning capabilities. Full article
(This article belongs to the Section Optoelectronics and Optical Materials)
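Paired-pulse facilitation, one of the plasticity behaviors mentioned above, can be illustrated with a toy model in which each light pulse adds an exponentially decaying response, so residue from the first pulse boosts the second. The time constant and weight below are illustrative assumptions, not measured device parameters:

```python
import numpy as np

def response(t, pulse_times, tau=0.5, w=1.0):
    """Toy facilitating synapse: superposed exponential decays, one per pulse."""
    t = np.asarray(t, float)
    r = np.zeros_like(t)
    for tp in pulse_times:
        r += w * np.exp(-(t - tp) / tau) * (t >= tp)  # each pulse decays with tau
    return r

t = np.linspace(0.0, 2.0, 2001)              # time grid (arbitrary units)
r = response(t, pulse_times=(t[500], t[700]))  # two pulses 0.2 units apart
ppf = r[700] / r[500]                          # amplitude after 2nd / after 1st pulse
print(round(ppf, 3))  # 1 + exp(-0.2/0.5) ≈ 1.670 > 1: facilitation
```

A paired-pulse ratio above 1, growing as the inter-pulse interval shrinks, is the signature the device's transmittance change is reported to reproduce.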

21 pages, 3469 KB  
Article
Three-Dimensional Imaging Based on Refractive Camera Model and Error Calibration for Risley-Prism Imaging System
by Wenjie Luo, Shumin Yang, Duanhao Huang, Feng Huang and Pengfei Wang
Sensors 2026, 26(7), 2013; https://doi.org/10.3390/s26072013 - 24 Mar 2026
Abstract
Three-dimensional (3D) reconstruction technology has found widespread applications across various domains, including intelligent driving and underwater exploration. However, existing imaging systems and methods still fall short in reconstruction accuracy, detection distance, and system volume. This paper therefore presents a three-dimensional detection and reconstruction method based on a compact Risley-prism 3D imaging system that achieves multi-viewpoint imaging by rotating the Risley prism to adjust the camera's optical axis. A refractive camera model that integrates the pinhole camera model with the vector form of Snell's law is established to precisely describe the beam trajectory. A forward projection method suitable for refractive interfaces is developed based on Fermat's principle, and the influence of systematic errors on the reconstruction is analyzed in detail through simulation. Furthermore, a new 3D reconstruction method combining error calibration with optimization-based iteration is introduced to suppress the influence of these errors and improve reconstruction quality. Experimental results demonstrate that the proposed approach markedly enhances 3D reconstruction accuracy, reducing the Normalized Root Mean Square Error (NRMSE) from 0.9076 to 0.0207. Full article
(This article belongs to the Section Sensing and Imaging)
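The vector form of Snell's law on which the refractive camera model builds can be sketched as follows; the sign convention for the surface normal is an assumption here, and the paper's exact formulation may differ:

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n
    (n points back toward the incident medium). Returns the refracted
    unit direction, or None on total internal reflection."""
    d, n = np.asarray(d, float), np.asarray(n, float)
    eta = n1 / n2
    cos_i = -np.dot(d, n)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# 45° incidence from air (n1 = 1.0) into glass (n2 = 1.5):
d = np.array([np.sin(np.pi / 4), 0.0, np.cos(np.pi / 4)])
t = refract(d, np.array([0.0, 0.0, -1.0]), 1.0, 1.5)
print(t)  # transverse component equals sin(45°)/1.5, as Snell's law requires
```

Chaining one such refraction per prism face, for each rotation angle of the Risley prism, is the kind of beam-trajectory bookkeeping the refractive camera model formalizes.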
