Search Results (1,016)

Search Parameters:
Keywords = wide field-of-view

14 pages, 17389 KiB  
Article
A Distortion Image Correction Method for Wide-Angle Cameras Based on Track Visual Detection
by Quanxin Liu, Xiang Sun and Yuanyuan Peng
Photonics 2025, 12(8), 767; https://doi.org/10.3390/photonics12080767 - 30 Jul 2025
Viewed by 148
Abstract
To address the distortion correction problem of the large-field-of-view wide-angle cameras commonly used in railway visual inspection systems, this paper proposes a novel online calibration method that requires no specially made cooperative calibration objects. Based on the radial distortion division model, the spatial coordinates of natural landmark points are first constructed from the known track gauge between the two parallel rails and the known sleeper spacing. Using the image coordinates corresponding to these spatial points, the distortion center is solved from the radial distortion fundamental matrix. A constraint equation is then built from the collinearity constraint on vanishing points in railway images, and the Levenberg–Marquardt algorithm is used to find the radial distortion coefficients. The distortion coefficients and the distortion center are further re-optimized by a least squares method (LSM) that minimizes the distances between points and the fitted straight lines. Finally, on this basis, distortion correction is carried out on the distorted railway images captured by the camera. The experimental results show that the method can efficiently and accurately perform online distortion correction for the large-field-of-view wide-angle cameras used in railway inspection without any specially made cooperative calibration objects. The whole method is simple to implement, achieves high correction accuracy, and is suitable for the rapid distortion correction of camera images in online railway visual inspection. Full article
(This article belongs to the Section Optoelectronics and Optical Materials)
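
As a reading aid, here is a minimal sketch of the forward correction implied by the radial distortion division model named in the abstract, assuming two radial coefficients k1, k2 and a known distortion center (all names and values are illustrative, not the authors' implementation; in the paper the coefficients and center are themselves estimated with the Levenberg–Marquardt algorithm under vanishing-point collinearity constraints):

```python
import numpy as np

def undistort_points(pts, center, k1, k2):
    """Radial distortion division model: x_u = c + (x_d - c) / (1 + k1*r^2 + k2*r^4),
    where r is the distance of the distorted point x_d from the distortion center c."""
    d = pts - center                        # coordinates relative to the distortion center
    r2 = np.sum(d ** 2, axis=1, keepdims=True)
    denom = 1.0 + k1 * r2 + k2 * r2 ** 2    # division-model denominator
    return center + d / denom

# Toy example: after correction, points sampled along a straight rail edge
# should become (more nearly) collinear, which is what the LSM refinement checks.
center = np.array([640.0, 512.0])           # assumed distortion center in pixels
rail_pts = np.array([[100.0, 200.0], [400.0, 230.0], [900.0, 280.0]])
print(undistort_points(rail_pts, center, k1=-1e-7, k2=1e-14))
```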

17 pages, 2000 KiB  
Article
Can 3D Exoscopy-Assisted Surgery Replace the Traditional Endoscopy in Septoplasty? Analysis of Our Two-Year Experience
by Luciano Catalfamo, Alessandro Calvo, Samuele Cicchiello, Antonino La Fauci, Francesco Saverio De Ponte, Calogero Scozzaro and Danilo De Rinaldis
J. Clin. Med. 2025, 14(15), 5279; https://doi.org/10.3390/jcm14155279 - 25 Jul 2025
Viewed by 291
Abstract
Background/Objectives: Septoplasty is a commonly performed surgical procedure aimed at correcting nasal septal deviations, to improve nasal airflow and respiratory function. Traditional approaches to septal correction rely on either direct visualization or endoscopic guidance. Recently, a novel technology known as exoscopy has been introduced into surgical practice. Exoscopy is an “advanced magnification system” that provides an enlarged, three-dimensional view of the operating field. In this article, we present our experience with exoscope-assisted septoplasty, developed over the last two years, and compare it with our extensive experience using the endoscopic approach. Methods: Our case series includes 26 patients, predominantly males and young adults, who underwent exoscope-assisted septoplasty. We discuss the primary advantages of this technique and, most importantly, provide an analysis of its learning curve. The cohort of patients treated using the exoscopic approach was compared with a control group of 26 patients who underwent endoscope-guided septoplasty, randomly selected from our broader clinical database. Finally, we present a representative surgical case that details all phases of the exoscope-assisted procedure. Results: Our surgical experience has demonstrated that exoscopy is a safe and effective tool for performing septoplasty. Moreover, the learning curve associated with this technique exhibits a rapid and progressive improvement. Notably, exoscopy provides a substantial educational benefit for trainees and medical students, as it enables them to share the same visual perspective as the lead surgeon. Conclusions: Although further studies are required to validate this approach, we believe that exoscopy represents a promising advancement for a wide range of head and neck procedures, and certainly for septoplasty. Full article
(This article belongs to the Special Issue Recent Advances in Reconstructive Oral and Maxillofacial Surgery)

18 pages, 12540 KiB  
Article
SS-LIO: Robust Tightly Coupled Solid-State LiDAR–Inertial Odometry for Indoor Degraded Environments
by Yongle Zou, Peipei Meng, Jianqiang Xiong and Xinglin Wan
Electronics 2025, 14(15), 2951; https://doi.org/10.3390/electronics14152951 - 24 Jul 2025
Viewed by 207
Abstract
Solid-state LiDAR systems are widely recognized for their high reliability, low cost, and lightweight design, but they encounter significant challenges in SLAM tasks due to their limited field of view and uneven horizontal scanning patterns, especially in indoor environments with geometric constraints. To address these challenges, this paper proposes SS-LIO, a precise, robust, and real-time LiDAR–Inertial odometry solution designed for solid-state LiDAR systems. SS-LIO uses uncertainty propagation in LiDAR point-cloud modeling and a tightly coupled iterative extended Kalman filter to fuse LiDAR feature points with IMU data for reliable localization. It also employs voxels to encapsulate planar features for accurate map construction. Experimental results from open-source datasets and self-collected data demonstrate that SS-LIO achieves superior accuracy and robustness compared to state-of-the-art methods, with an end-to-end drift of only 0.2 m in indoor degraded scenarios. The detailed and accurate point-cloud maps generated by SS-LIO reflect the smoothness and precision of trajectory estimation, with significantly reduced drift and deviation. These outcomes highlight the effectiveness of SS-LIO in addressing the SLAM challenges posed by solid-state LiDAR systems and its capability to produce reliable maps in complex indoor settings. Full article
(This article belongs to the Special Issue Advancements in Robotics: Perception, Manipulation, and Interaction)
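
As a rough illustration of the voxel-encapsulated planar features mentioned above, the sketch below fits a plane to the points stored in a voxel and evaluates the point-to-plane residual that a tightly coupled iterated Kalman filter would drive toward zero; this is a generic formulation under that assumption, not the SS-LIO code:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a voxel's points: returns (unit normal, centroid).
    The normal is the right singular vector with the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid

def point_to_plane_residual(p, normal, centroid):
    """Signed distance of a new LiDAR point to the voxel's plane (the scalar
    measurement residual used in LiDAR-inertial state updates)."""
    return float(np.dot(normal, p - centroid))

# Toy voxel of nearly coplanar points, plus one point from a new scan.
voxel_pts = np.array([[0.0, 0.0, 0.00], [1.0, 0.0, 0.01],
                      [0.0, 1.0, -0.01], [1.0, 1.0, 0.00]])
normal, centroid = fit_plane(voxel_pts)
print(point_to_plane_residual(np.array([0.5, 0.5, 0.1]), normal, centroid))
```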

17 pages, 610 KiB  
Review
Three-Dimensional Reconstruction Techniques and the Impact of Lighting Conditions on Reconstruction Quality: A Comprehensive Review
by Dimitar Rangelov, Sierd Waanders, Kars Waanders, Maurice van Keulen and Radoslav Miltchev
Lights 2025, 1(1), 1; https://doi.org/10.3390/lights1010001 - 14 Jul 2025
Viewed by 330
Abstract
Three-dimensional (3D) reconstruction has become a fundamental technology in applications ranging from cultural heritage preservation and robotics to forensics and virtual reality. As these applications grow in complexity and realism, the quality of the reconstructed models becomes increasingly critical. Among the many factors that influence reconstruction accuracy, the lighting conditions at capture time remain one of the most influential, yet widely neglected, variables. This review provides a comprehensive survey of classical and modern 3D reconstruction techniques, including Structure from Motion (SfM), Multi-View Stereo (MVS), Photometric Stereo, and recent neural rendering approaches such as Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS), while critically evaluating their performance under varying illumination conditions. We describe how lighting-induced artifacts such as shadows, reflections, and exposure imbalances compromise reconstruction quality and how different approaches attempt to mitigate these effects. Furthermore, we uncover fundamental gaps in current research, including the lack of standardized lighting-aware benchmarks and the limited robustness of state-of-the-art algorithms in uncontrolled environments. By synthesizing knowledge across fields, this review aims to provide a deeper understanding of the interplay between lighting and reconstruction and outlines future research directions, emphasizing the need for adaptive, lighting-robust solutions in 3D vision systems. Full article

12 pages, 10090 KiB  
Article
Adaptive Curved Slicing for En Face Imaging in Optical Coherence Tomography
by Mingxin Li, Phatham Loahavilai, Yueyang Liu, Xiaochen Li, Yang Li and Liqun Sun
Sensors 2025, 25(14), 4329; https://doi.org/10.3390/s25144329 - 10 Jul 2025
Viewed by 330
Abstract
Optical coherence tomography (OCT) employs light to acquire high-resolution 3D images and is widely applied in fields such as ophthalmology and forensic science. A popular technique for visualizing the top view (en face) is to slice the volume with a flat horizontal plane or to apply statistical functions along the depth axis. However, when the target appears as a thin layer, strong reflections from other layers can interfere with it, rendering the flat-plane approach ineffective. We apply Otsu-based thresholding to extract the object's foreground and then use least squares (with Tikhonov regularization) to fit a polynomial surface that describes the sample's structural morphology. This surface is then used to obtain the latent fingerprint image and its residues at different depths from a translucent tape, a sample that cannot be analyzed using conventional en face OCT due to strong reflection from the diffusive surface; the method achieves an FSIM of 0.7020, compared with 0.6445 for the traditional en face approach. The method is also compatible with other signal processing techniques, as demonstrated by a thermal-printed label ink thickness measurement confirmed by a microscopic image. Our approach enables OCT to observe targets embedded in samples with arbitrary postures and morphology and can be easily adapted to various optical imaging technologies. Full article
(This article belongs to the Special Issue Short-Range Optical 3D Scanning and 3D Data Processing)
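
A minimal sketch of the surface-fitting step described above, assuming the foreground depths have already been extracted by Otsu thresholding: a low-order polynomial surface z(x, y) is fitted by Tikhonov-regularized least squares and then evaluated to decide the depth at which to sample the curved en face slice (names and the toy data are illustrative, not the authors' code):

```python
import numpy as np

def fit_poly_surface(x, y, z, order=2, lam=1e-3):
    """Fit z ~ sum_{i+j<=order} c_ij * x^i * y^j by minimizing
    ||A c - z||^2 + lam * ||c||^2 (Tikhonov regularization)."""
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([x ** i * y ** j for i, j in terms])
    c = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ z)
    return terms, c

def eval_surface(terms, c, x, y):
    """Depth of the fitted surface at (x, y): the slicing index for the en face image."""
    return sum(ci * x ** i * y ** j for (i, j), ci in zip(terms, c))

# Toy foreground depths from a gently curved sample surface with noise.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1, 200), rng.uniform(0, 1, 200)
z = 5 + 2 * x - 1.5 * y + 0.5 * x * y + rng.normal(0, 0.05, 200)
terms, c = fit_poly_surface(x, y, z)
print(round(eval_surface(terms, c, 0.5, 0.5), 3))
```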

24 pages, 3878 KiB  
Review
Research Progress and Perspectives on Curved Image Sensors for Bionic Eyes
by Tianlong He, Qiuchun Lu and Xidi Sun
Solids 2025, 6(3), 34; https://doi.org/10.3390/solids6030034 - 10 Jul 2025
Viewed by 312
Abstract
Perovskite bionic eyes have emerged as highly promising candidates for photodetection applications owing to their wide-angle imaging capabilities, high external quantum efficiency (EQE), and low-cost fabrication and integration. Since their initial exploration in 2015, significant advancements have been achieved in this field, with their EQE reaching 27%. Nevertheless, intrinsic challenges such as the oxidation susceptibility of perovskites and difficulties in curved-surface growth hinder their further development. Addressing these issues requires a comprehensive and systematic understanding of the preparation mechanisms of hemispherical perovskites, as well as the development of effective mitigation strategies. This review provides a detailed overview of the research progress in hemispherical perovskite photodetectors, with a particular focus on the fundamental properties and fabrication pathways of hemispherical perovskites. Furthermore, various strategies to enhance the performance of hemispherical perovskites and overcome preparation challenges are thoroughly discussed. Finally, existing challenges and perspectives are presented to further advance the development of eco-friendly hemispherical perovskites. Full article

30 pages, 4582 KiB  
Review
Review on Rail Damage Detection Technologies for High-Speed Trains
by Yu Wang, Bingrong Miao, Ying Zhang, Zhong Huang and Songyuan Xu
Appl. Sci. 2025, 15(14), 7725; https://doi.org/10.3390/app15147725 - 10 Jul 2025
Viewed by 547
Abstract
From the point of view of the intelligent operation and maintenance of high-speed train tracks, this paper examines the recent research status of rail damage detection technology for high-speed trains, summarizes the damage detection methods, and compares and analyzes different detection technologies and application research results. The analysis shows that detection methods for high-speed train rail damage focus mainly on the research and application of non-destructive testing technologies and methods, as well as testing platforms and equipment. Detection platforms and equipment include a new type of eddy current meter, integrated track recording vehicles, laser rangefinders, thermal sensors, laser vision systems, LiDAR, new ultrasonic detectors, rail detection vehicles, rail detection robots, laser on-board rail detection systems, track recorders, self-moving trolleys, etc. The main research and application methods include electromagnetic detection, optical detection, ultrasonic guided wave detection, acoustic emission detection, ray detection, eddy current detection, and vibration detection. In recent years, the most widely studied and applied methods have been rail detection based on LiDAR, ultrasonic, eddy current, and optical detection. The most important optical detection method is machine vision detection. Ultrasonic detection can detect internal damage of the rail. LiDAR detection can detect dirt around the rail and on its surface, but both the equipment and its application are very costly. Future high-speed railway rail damage detection must, first of all, follow the damage standards. In terms of rail geometric parameters, the domestic standard (TB 10754-2018) requires a gauge deviation of ±1 mm, a track direction deviation of 0.3 mm/10 m, and a height deviation of 0.5 mm/10 m; some of these indicators are stricter than the European standard EN 13848. In terms of damage detection, domestic flaw detection vehicles have achieved millimeter-level accuracy in crack detection in rail heads, rail webs, and other parts, with a damage detection rate of over 85%. The accuracy of identifying track components with the drone detection system is 93.6%, and the identification rate of potential safety hazards is 81.8%. A certain gap with international standards remains: standards such as EN 13848 impose stricter requirements on testing cycles and data storage, especially regarding quantified damage detection requirements, real-time damage data, and safety, which will be key research and development directions in the future. Full article

10 pages, 4530 KiB  
Article
A Switchable-Mode Full-Color Imaging System with Wide Field of View for All Time Periods
by Shubin Liu, Linwei Guo, Kai Hu and Chunbo Zou
Photonics 2025, 12(7), 689; https://doi.org/10.3390/photonics12070689 - 8 Jul 2025
Viewed by 257
Abstract
Continuous, single-mode imaging systems fail to deliver true-color high-resolution imagery around the clock under extreme lighting. High-fidelity color and signal-to-noise ratio imaging across the full day–night cycle remains a critical challenge for surveillance, navigation, and environmental monitoring. We present a competitive dual-mode imaging platform that integrates a 155 mm f/6 telephoto daytime camera with a 52 mm f/1.5 large-aperture low-light full-color night-vision camera into a single, co-registered 26 cm housing. By employing a sixth-order aspheric surface to reduce the element count and weight, our system achieves near-diffraction-limited MTF (>0.5 at 90.9 lp/mm) in daylight and sub-pixel RMS blur < 7 μm at 38.5 lp/mm under low-light conditions. Field validation at 0.0009 lux confirms high-SNR, full-color capture from bright noon to the darkest nights, enabling seamless switching between long-range, high-resolution surveillance and sensitive, low-light color imaging. This compact, robust design promises to elevate applications in security monitoring, autonomous navigation, wildlife observation, and disaster response by providing uninterrupted, color-faithful vision in all lighting regimes. Full article
(This article belongs to the Special Issue Research on Optical Materials and Components for 3D Displays)
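
For context on the "near-diffraction-limited" figures quoted above, here is a quick back-of-the-envelope check of the incoherent diffraction cutoff frequency for the two channels, assuming a 550 nm reference wavelength (the abstract does not state the wavelength used for the MTF values, so this is only an order-of-magnitude sanity check):

```python
# Incoherent diffraction cutoff frequency: f_c = 1 / (wavelength * F-number), in line pairs per mm.
wavelength_mm = 550e-6   # assumed 550 nm reference wavelength, expressed in mm

for name, f_number, quoted_lp_mm in [("155 mm f/6 daytime channel", 6.0, 90.9),
                                     ("52 mm f/1.5 low-light channel", 1.5, 38.5)]:
    f_cutoff = 1.0 / (wavelength_mm * f_number)
    print(f"{name}: cutoff ~ {f_cutoff:.0f} lp/mm (quoted spec at {quoted_lp_mm} lp/mm)")
# ~303 lp/mm and ~1212 lp/mm, so both quoted spatial frequencies sit well inside the passband.
```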

12 pages, 7213 KiB  
Article
Planar Wide-Angle Imaging System with a Single-Layer SiC Metalens
by Yiyang Liu, Qiangbo Zhang, Changwei Zhang, Mengguang Wang and Zhenrong Zheng
Nanomaterials 2025, 15(13), 1046; https://doi.org/10.3390/nano15131046 - 5 Jul 2025
Viewed by 400
Abstract
Optical systems with wide field-of-view (FOV) imaging capabilities are crucial for applications ranging from biomedical diagnostics to remote sensing, yet conventional wide-angle optics face integration challenges in compact platforms. Here, we present the design and experimental demonstration of a single-layer silicon carbide (SiC) metalens achieving a 90° total FOV, whose planar structure and small footprint address these challenges. The design is driven by a gradient-based numerical optimization strategy, Gradient-Optimized Phase Profile Shaping (GOPP), which optimizes the phase profile to accommodate the angle-dependent requirements. Combined with a front aperture, the GOPP-generated phase profile enables off-axis aberration control within a planar structure. Operating at 803 nm with a focal length of 1 mm (NA = 0.25), the fabricated metalens demonstrated focusing capabilities across the wide FOV, enabling effective wide-angle imaging. This work demonstrates the feasibility of using numerical optimization to realize single-layer metalenses with challenging wide-FOV capabilities, offering a promising route towards highly compact imagers for applications such as endoscopy and dermoscopy. Full article
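
The abstract does not spell out the GOPP objective, so as a point of reference the sketch below evaluates the textbook hyperbolic phase profile of an ideal on-axis metalens at the stated 803 nm wavelength and 1 mm focal length; a wide-FOV design such as the one reported must deviate from this profile, which is what the gradient-based optimization searches for (purely illustrative, not the GOPP result):

```python
import numpy as np

def hyperbolic_phase(r_mm, wavelength_mm, focal_mm):
    """Ideal on-axis metalens phase in radians:
    phi(r) = -(2*pi/lambda) * (sqrt(r^2 + f^2) - f)."""
    return -(2 * np.pi / wavelength_mm) * (np.sqrt(r_mm ** 2 + focal_mm ** 2) - focal_mm)

wavelength = 803e-6            # 803 nm expressed in mm
focal = 1.0                    # 1 mm focal length; NA = 0.25 implies roughly a 0.26 mm aperture radius
r = np.linspace(0.0, 0.26, 6)  # radial samples across the assumed semi-aperture
print(np.round(hyperbolic_phase(r, wavelength, focal), 1))
```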

31 pages, 56365 KiB  
Article
The Quiet Architecture of Informality: Negotiating Space Through Agency
by Rim Mrani, Jérôme Chenal, Hassan Radoine and Hassan Yakubu
Buildings 2025, 15(13), 2357; https://doi.org/10.3390/buildings15132357 - 4 Jul 2025
Viewed by 291
Abstract
Housing informality in Morocco has taken root within Rabat’s formal neighborhoods, quietly reshaping façades, extending plot lines, and redrawing the texture of entire blocks. This ongoing transformation runs up against the rigidity of official planning frameworks, producing tension between state enforcement and tacit tolerance, as residents navigate persistent legal and economic ambiguities. Prior Moroccan studies are neighborhood-specific or socio-economic; the field lacks a city-wide, multi-class analysis linking everyday tactics to long-term governance dilemmas and policy design. The paper, therefore, asks how and why residents and architects across affordable, middle-class, and affluent districts craft unapproved modifications, and what urban order emerges from their cumulative effects. A mixed qualitative design triangulates (i) five resident focus groups and two architect focus groups, (ii) 50 short, structured interviews, and (iii) 500 geo-referenced façade photographs and observational field notes, thematically coded and compared across housing types. In addition to deciphering informality methods and impacts, the results reveal that informal modifications are shaped by both reactive needs—such as accommodating family growth and enhancing security—and proactive drivers, including esthetic expression and real estate value. Despite their legal ambiguity, these modifications are socially normalized and often viewed by residents as value-adding improvements rather than infractions. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)

19 pages, 2374 KiB  
Article
Tracking and Registration Technology Based on Panoramic Cameras
by Chao Xu, Guoxu Li, Ye Bai, Yuzhuo Bai, Zheng Cao and Cheng Han
Appl. Sci. 2025, 15(13), 7397; https://doi.org/10.3390/app15137397 - 1 Jul 2025
Viewed by 286
Abstract
Augmented reality (AR) has become a research focus in computer vision and graphics, with growing applications driven by advances in artificial intelligence and the emergence of the metaverse. Panoramic cameras offer new opportunities for AR due to their wide field of view but also pose significant challenges for camera pose estimation because of severe distortion and complex scene textures. To address these issues, this paper proposes a lightweight, unsupervised deep learning model for panoramic camera pose estimation. The model consists of a depth estimation sub-network and a pose estimation sub-network, both optimized for efficiency using network compression, multi-scale rectangular convolutions, and dilated convolutions. A learnable occlusion mask is incorporated into the pose network to mitigate errors caused by complex relative motion. Furthermore, a panoramic view reconstruction model is constructed to obtain effective supervisory signals from the predicted depth, pose information, and corresponding panoramic images and is trained using a designed spherical photometric consistency loss. The experimental results demonstrate that the proposed method achieves competitive accuracy while maintaining high computational efficiency, making it well-suited for real-time AR applications with panoramic input. Full article
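
A minimal sketch of the geometry underlying a spherical photometric consistency loss: mapping equirectangular panorama pixels to unit rays and back, which is the conversion needed to warp one panoramic view into another from predicted depth and pose (generic equirectangular math, not the paper's network or loss implementation):

```python
import numpy as np

def pixel_to_ray(u, v, width, height):
    """Equirectangular pixel -> unit direction vector on the viewing sphere."""
    lon = (u / width) * 2.0 * np.pi - np.pi        # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi       # latitude in [-pi/2, pi/2]
    return np.array([np.cos(lat) * np.sin(lon), np.sin(lat), np.cos(lat) * np.cos(lon)])

def ray_to_pixel(d, width, height):
    """Unit direction -> equirectangular pixel (inverse of pixel_to_ray)."""
    lon = np.arctan2(d[0], d[2])
    lat = np.arcsin(np.clip(d[1], -1.0, 1.0))
    return ((lon + np.pi) / (2.0 * np.pi) * width, (np.pi / 2.0 - lat) / np.pi * height)

# In a photometric loss, each pixel would be lifted to 3D with its predicted depth,
# transformed by the predicted pose, and re-projected with ray_to_pixel into the other view.
W, H = 1024, 512
print(ray_to_pixel(pixel_to_ray(300.0, 200.0, W, H), W, H))   # round trip -> (300.0, 200.0)
```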

15 pages, 5442 KiB  
Review
A Global Perspective on Ecotourism Marketing Trends: A Review
by Kaitano Dube and Precious Chikezie Ezeh
Sustainability 2025, 17(13), 6035; https://doi.org/10.3390/su17136035 - 1 Jul 2025
Viewed by 869
Abstract
As many sectors around the world grapple with sustainability challenges, there is an urgent need to find sustainable ways of dealing with these global challenges. Ecotourism has been seen as an avenue for addressing some of the sustainability challenges facing the tourism industry, and most tourism enterprises have adopted ecotourism principles. This study examines the evolution of ecotourism marketing to identify the key concepts and critical debates within this terrain, and it also seeks to identify knowledge gaps and future research directions. Using bibliometric data from Web of Science-indexed publications between 2003 and 2025, this study found that ecotourism marketing has been a growing field of research that is highly cited across disciplines. The study found that ecotourism marketing covers a wide range of aspects, including digital marketing, destination branding, sustainable marketing, and demand-side considerations. Ecotourism marketing is, in many respects, equally concerned with how ecotourism establishments embrace the current challenges of climate change from a mitigation, adaptation, and resilience perspective to ensure sustainability. Several research gaps and future directions remain, such as examining the role of new technologies, social influencers, and funding in ecotourism marketing. There is also a need to understand how various generations view the whole concept of green tourism, to inform segmentation and better market positioning. Full article

20 pages, 741 KiB  
Article
Long-Endurance Collaborative Search and Rescue Based on Maritime Unmanned Systems and Deep-Reinforcement Learning
by Pengyan Dong, Jiahong Liu, Hang Tao, Yang Zhao, Zhijie Feng and Hanjiang Luo
Sensors 2025, 25(13), 4025; https://doi.org/10.3390/s25134025 - 27 Jun 2025
Viewed by 317
Abstract
Maritime vision sensing can be applied to maritime unmanned systems to perform search and rescue (SAR) missions in complex marine environments, as multiple unmanned aerial vehicles (UAVs) and unmanned surface vehicles (USVs) are able to conduct vision sensing from the air, the water surface, and underwater. However, in these vision-based maritime SAR systems, collaboration between UAVs and USVs is a critical issue for successful SAR operations. To address this challenge, in this paper we propose a long-endurance collaborative SAR scheme that exploits the complementary strengths of maritime unmanned systems. In this scheme, a swarm of UAVs leverages a multi-agent reinforcement-learning (MARL) method and probability maps to perform a cooperative first-phase search, exploiting the UAVs' high altitude and wide field of view. Multiple USVs then conduct precise real-time second-phase operations by refining the probability map. To deal with the energy constraints of UAVs and sustain long-endurance collaborative SAR missions, a multi-USV charging scheduling method based on MARL is proposed to prolong the UAVs' flight time. Through extensive simulations, the experimental results verify the effectiveness of the proposed scheme and its long-endurance search capability. Full article
(This article belongs to the Special Issue Underwater Vision Sensing System: 2nd Edition)
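
As an illustration of the probability maps used in the first-phase search, the sketch below applies a standard Bayesian update to a grid of target-presence probabilities after the cells inside a UAV's field of view report no detection, given an assumed per-look detection probability; this bookkeeping underlies the kind of map the MARL policy refines, but it is not the paper's method itself:

```python
import numpy as np

def update_after_miss(prob_map, observed_mask, p_detect=0.9):
    """Bayes update for cells that were observed but produced no detection:
    P(target | miss) = (1 - p_detect) * P(target) / P(miss)."""
    p = prob_map.copy()
    num = (1.0 - p_detect) * p[observed_mask]
    den = num + (1.0 - p[observed_mask])   # P(miss) = P(miss|target)P(target) + 1 * P(no target)
    p[observed_mask] = num / den
    return p / p.sum()                     # renormalize under a single-target assumption

grid = np.full((4, 4), 1.0 / 16.0)         # uniform prior over a toy 4x4 search area
seen = np.zeros((4, 4), dtype=bool)
seen[:2, :2] = True                        # cells covered by the UAV's wide field of view
print(np.round(update_after_miss(grid, seen), 3))
```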

11 pages, 6080 KiB  
Article
Single-Shot Femtosecond Raster-Framing Imaging with High Spatio-Temporal Resolution Using Wavelength/Polarization Time Coding
by Yang Yang, Yongle Zhu, Xuanke Zeng, Dong He, Li Gu, Zhijian Wang and Jingzhen Li
Photonics 2025, 12(7), 639; https://doi.org/10.3390/photonics12070639 - 24 Jun 2025
Viewed by 294
Abstract
This paper introduces a single-shot ultrafast imaging technique termed wavelength and polarization time-encoded ultrafast raster imaging (WP-URI). By integrating raster imaging principles with wavelength- and polarization-based temporal encoding, the system uses a spatial raster mask and time–space mapping to aggregate multiple two-dimensional temporal raster images onto a single detector plane, thereby enabling the effective spatial separation and extraction of target information. Finally, the target dynamics are recovered using a reconstruction algorithm based on the Nyquist–Shannon sampling theorem. Numerical simulations demonstrate the single-shot acquisition of four dynamic frames at 25 trillion frames per second (Tfps) with an intrinsic spatial resolution of 50 line pairs per millimeter (lp/mm) and a wide field of view. The WP-URI technique achieves unparalleled spatio-temporal resolution and frame rates, offering significant potential for investigating ultrafast phenomena such as matter interactions, carrier dynamics in semiconductor devices, and femtosecond laser–matter processes. Full article
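
The reconstruction step above is described only as being "based on the Nyquist–Shannon sampling theorem"; as a generic illustration of that idea (not the WP-URI algorithm itself), the sketch below recovers a band-limited signal at arbitrary points from its uniform raster samples by Whittaker–Shannon sinc interpolation:

```python
import numpy as np

def sinc_reconstruct(samples, t_samples, t_eval):
    """Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc((t - n*T) / T),
    exact for signals band-limited below the Nyquist frequency 1/(2T)."""
    T = t_samples[1] - t_samples[0]         # uniform sampling interval
    return np.array([np.sum(samples * np.sinc((t - t_samples) / T)) for t in t_eval])

T = 0.05                                    # 20 samples per unit time, well above Nyquist for a 2 Hz tone
t_s = np.arange(0.0, 2.0, T)
x_s = np.sin(2 * np.pi * 2.0 * t_s)         # band-limited test signal sampled on the raster
t_query = np.array([0.3210, 0.7777, 1.2345])
print(np.round(sinc_reconstruct(x_s, t_s, t_query), 4))
print(np.round(np.sin(2 * np.pi * 2.0 * t_query), 4))   # ground truth for comparison
```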

16 pages, 1058 KiB  
Article
Multi-Scale Context Enhancement Network with Local–Global Synergy Modeling Strategy for Semantic Segmentation on Remote Sensing Images
by Qibing Ma, Hongning Liu, Yifan Jin and Xinyue Liu
Electronics 2025, 14(13), 2526; https://doi.org/10.3390/electronics14132526 - 21 Jun 2025
Cited by 1 | Viewed by 316
Abstract
Semantic segmentation of remote sensing images is a fundamental task in geospatial analysis and Earth observation research and has a wide range of applications in urban planning, land cover classification, and ecological monitoring. In complex geographic scenes, low target-background discriminability in overhead views (e.g., indistinct boundaries, ambiguous textures, and low contrast) significantly complicates local–global information modeling and results in blurred boundaries and classification errors in model predictions. To address this issue, this paper proposes a novel Multi-Scale Local–Global Mamba Feature Pyramid Network (MLMFPN) built around a local–global information synergy modeling strategy, which guides and enhances cross-scale contextual information interaction during feature fusion to obtain high-quality semantic features as cues for precise semantic reasoning. The proposed MLMFPN comprises two core components: Local–Global Align Mamba Fusion (LGAMF) and the Context-Aware Cross-attention Interaction Module (CCIM). Specifically, LGAMF introduces a local-enhanced global information modeling scheme that uses asymmetric convolution for synergistic modeling of the receptive fields in the vertical and horizontal directions, and further introduces the Vision Mamba structure to facilitate local–global information fusion. CCIM introduces positional encoding and cross-attention mechanisms to enrich the global-spatial semantic representation during multi-scale context information interaction, thereby achieving refined segmentation. The proposed method is evaluated on the ISPRS Potsdam and Vaihingen datasets, and the superior results verify its effectiveness. Full article
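
A minimal PyTorch sketch of the asymmetric-convolution idea attributed to LGAMF above: a 1×k and a k×1 convolution model the horizontal and vertical receptive fields separately and are then fused back into the feature map (layer names, kernel size, and the residual fusion are illustrative assumptions, not the MLMFPN code):

```python
import torch
import torch.nn as nn

class AsymmetricConvBlock(nn.Module):
    """Captures horizontal and vertical context with 1xk and kx1 convolutions,
    then fuses both directions with a 1x1 projection and a residual connection."""
    def __init__(self, channels, k=7):
        super().__init__()
        self.horizontal = nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2))
        self.vertical = nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0))
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.act = nn.GELU()

    def forward(self, x):
        h = self.horizontal(x)              # wide horizontal receptive field
        v = self.vertical(x)                # tall vertical receptive field
        return x + self.act(self.fuse(torch.cat([h, v], dim=1)))

# Toy feature map: batch of 2, 64 channels, 32x32 spatial grid.
feat = torch.randn(2, 64, 32, 32)
print(AsymmetricConvBlock(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```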
