Search Results (982)

Search Parameters:
Keywords = color camera

27 pages, 2292 KB  
Article
Source Camera Identification via Explicit Content–Fingerprint Decoupling with a Dual-Branch Deep Learning Framework
by Zijuan Han, Yang Yang, Jiaxuan Lu, Jian Sun, Yunxia Liu and Ngai-Fong Bonnie Law
Appl. Sci. 2026, 16(3), 1245; https://doi.org/10.3390/app16031245 - 26 Jan 2026
Viewed by 113
Abstract
In this paper, we propose a source camera identification method based on disentangled feature modeling, aiming to achieve robust extraction of camera fingerprint features under complex imaging and post-processing conditions. To address the severe coupling between image content and camera fingerprint features in existing methods, which makes content interference difficult to suppress, we develop a dual-branch deep learning framework guided by imaging physics. By introducing physical consistency constraints, the proposed framework explicitly separates image content representations from device-related fingerprint features in the feature space, thereby enhancing the stability and robustness of source camera identification. The proposed method adopts two parallel branches: a content modeling branch and a fingerprint feature extraction branch. The content branch is built upon an improved U-Net architecture to reconstruct scene and color information, and further incorporates texture refinement and multi-scale feature fusion to reduce residual content interference in fingerprint modeling. The fingerprint branch employs ResNet-50 as the backbone network to learn discriminative global features associated with the camera imaging pipeline. Based on these branches, fingerprint information dominated by sensor noise is explicitly extracted by computing the residual between the input image and the reconstructed content, and is further encoded through noise analysis and feature fusion for joint camera model classification. Experimental results on multiple public-source camera forensics datasets demonstrate that the proposed method achieves stable and competitive identification performance in same-brand camera discrimination, complex imaging conditions, and post-processing scenarios, validating the effectiveness of the proposed disentangled modeling and physical consistency constraint strategy for source camera identification. Full article
(This article belongs to the Special Issue New Development in Machine Learning in Image and Video Forensics)
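
As a rough illustration of the decoupling idea described in the abstract, the sketch below reconstructs content with one branch and classifies the residual with another; the tiny modules are placeholders for the paper's improved U-Net and ResNet-50 branches, not the authors' implementation.

```python
# Rough sketch of the content/fingerprint decoupling idea: a content branch
# reconstructs the scene, and the noise-dominated residual (input minus
# reconstruction) is classified by a fingerprint branch. The tiny modules
# below are placeholders for the paper's improved U-Net and ResNet-50 branches.
import torch
import torch.nn as nn

class ContentBranch(nn.Module):               # stand-in for the improved U-Net
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 3, 3, padding=1))

    def forward(self, x):
        return self.net(x)                    # reconstructed scene/color content

class FingerprintBranch(nn.Module):           # stand-in for the ResNet-50 backbone
    def __init__(self, num_cameras=10):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(32, num_cameras))

    def forward(self, residual):
        return self.net(residual)             # camera-model logits

content, fingerprint = ContentBranch(), FingerprintBranch()
image = torch.rand(1, 3, 256, 256)
residual = image - content(image)             # explicit decoupling via the residual
logits = fingerprint(residual)
```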

20 pages, 4015 KB  
Article
High-Speed Image Restoration Based on a Dynamic Vision Sensor
by Paul K. J. Park, Junseok Kim, Juhyun Ko and Yeoungjin Chang
Sensors 2026, 26(3), 781; https://doi.org/10.3390/s26030781 - 23 Jan 2026
Viewed by 226
Abstract
We report on a post-capture, on-demand deblurring technique based on a Dynamic Vision Sensor (DVS). Motion blur inherently causes photographic defects in most use cases of mobile cameras. To compensate for motion blur in mobile photography, we use a fast event-based vision sensor. However, we found severe artifacts resulting in image quality degradation, caused by color ghosts, event noise, and discrepancies between conventional image sensors and event-based sensors. To overcome these inevitable artifacts, we propose and demonstrate event-based compensation techniques such as cross-correlation optimization, contrast maximization, resolution mismatch compensation (event upsampling for alignment), and disparity matching. The results show that the deblurring performance can be improved dramatically in terms of metrics such as the Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Spatial Frequency Response (SFR). Thus, we expect that the proposed event-based image restoration technique can be widely deployed in mobile cameras. Full article
(This article belongs to the Special Issue Advances in Optical Sensing, Instrumentation and Systems: 2nd Edition)

21 pages, 6960 KB  
Article
First-Stage Algorithm for Photo-Identification and Location of Marine Species
by Rosa Isela Ramos-Arredondo, Francisco Javier Gallegos-Funes, Blanca Esther Carvajal-Gámez, Guillermo Urriolagoitia-Sosa, Beatriz Romero-Ángeles, Alberto Jorge Rosales-Silva and Erick Velázquez-Lozada
Animals 2026, 16(2), 281; https://doi.org/10.3390/ani16020281 - 16 Jan 2026
Viewed by 139
Abstract
Marine species photo-identification and location for tracking are crucial for understanding the characteristics and patterns that distinguish each marine species. However, challenges in camera data acquisition and the unpredictability of animal movements have restricted progress in this field. To address these challenges, we present a novel algorithm for the first stage of marine species photo-identification and location methods. For marine species photo-identification applications, a color index-based thresholding segmentation method is proposed. This method is based on the characteristics of the GMR (Green Minus Red) color index and the proposed empirical BMG (Blue Minus Green) color index. These color indexes are modified to provide better information about the color of regions, such as marine animals, the sky, and land found in scientific sighting images, enabling an effective thresholding segmentation method. In the case of marine species location, a SURF (Speeded-Up Robust Features)-based supervised classifier is used to obtain the location of the marine animal in the sighting image, from which its track can be derived. The tests were performed with the Kaggle happywhale public database; the precision obtained ranges from 0.77 to 0.98 using the proposed indexes. Finally, the proposed method could be used in real-time marine species tracking, with a processing time of 0.33 s for images of 645 × 376 pixels using a standard PC. Full article
(This article belongs to the Section Aquatic Animals)
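
A minimal sketch of the two color indexes named above, assuming images normalized to [0, 1]; the threshold values are illustrative placeholders rather than the paper's empirical settings.

```python
# Illustrative computation of the two color indexes named in the abstract
# (GMR = Green Minus Red, BMG = Blue Minus Green) followed by simple
# thresholding; the threshold values are placeholders, not the paper's settings.
import numpy as np

def color_index_masks(rgb, gmr_thresh=0.05, bmg_thresh=0.05):
    """rgb: float array in [0, 1] with shape (H, W, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    gmr = g - r                               # separates land/animal-like regions
    bmg = b - g                               # separates sky/water-like regions
    return gmr > gmr_thresh, bmg > bmg_thresh

rgb = np.random.rand(376, 645, 3)             # stand-in for a sighting image
gmr_mask, bmg_mask = color_index_masks(rgb)
```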

28 pages, 6605 KB  
Article
A New Method of Evaluating Multi-Color Ellipsometric Mapping on Big-Area Samples
by Sándor Kálvin, Berhane Nugusse Zereay, György Juhász, Csaba Major, Péter Petrik, Zoltán György Horváth and Miklós Fried
Sci 2026, 8(1), 17; https://doi.org/10.3390/sci8010017 - 13 Jan 2026
Viewed by 257
Abstract
Ellipsometric mapping measurements and Bayesian evaluation were performed with a non-collimated, imaging ellipsometer using an LCD monitor as a light source. In such a configuration, the polarization state of the illumination and the local angle of incidence vary spatially and spectrally, rendering conventional spectroscopic ellipsometry inversion methods hardly applicable. To address these limitations, a multilayer optical forward model is augmented with instrument-specific correction parameters describing the polarization state of the monitor and the angle-of-incidence map. These parameters are determined through a Bayesian calibration procedure using well-characterized Si-SiO2 reference wafers. The resulting posterior distribution is explored by global optimization based on simulated annealing, yielding a maximum a posteriori estimate, followed by marginalization to quantify uncertainties and parameter correlations. The calibrated correction parameters are subsequently incorporated as informative priors in the Bayesian analysis of unknown samples, including polycrystalline–silicon layers deposited on Si-SiO2 substrates and additional Si-SiO2 wafers outside the calibration set. The approach allows consistent propagation of calibration uncertainties into the inferred layer parameters and provides credible intervals and correlation information that cannot be obtained from conventional least-squares methods. The results demonstrate that, despite the broadband nature of the RGB measurement and the limited number of analyzer orientations, reliable layer thicknesses can be obtained with quantified uncertainties for a wide range of technologically relevant samples. The proposed Bayesian framework enables a transparent interpretation of the measurement accuracy and limitations, providing a robust basis for large-area ellipsometric mapping of multilayer structures. Full article

25 pages, 10750 KB  
Article
LHRSI: A Lightweight Spaceborne Imaging Spectrometer with Wide Swath and High Resolution for Ocean Color Remote Sensing
by Bo Cheng, Yongqian Zhu, Miao Hu, Xianqiang He, Qianmin Liu, Chunlai Li, Chen Cao, Bangjian Zhao, Jincai Wu, Jianyu Wang, Jie Luo, Jiawei Lu, Zhihua Song, Yuxin Song, Wen Jiang, Zi Wang, Guoliang Tang and Shijie Liu
Remote Sens. 2026, 18(2), 218; https://doi.org/10.3390/rs18020218 - 9 Jan 2026
Viewed by 230
Abstract
Global water environment monitoring urgently requires remote sensing data with high temporal resolution and wide spatial coverage. However, current space-borne ocean color spectrometers still face a significant trade-off among spatial resolution, swath width, and system compactness, which limits the large-scale deployment of satellite constellations. To address this challenge, this study developed a lightweight high-resolution spectral imager (LHRSI) with a total mass of less than 25 kg and power consumption below 80 W. The visible (VIS) camera adopts an interleaved dual-field-of-view and detectors splicing fusion design, while the shortwave infrared (SWIR) camera employs a transmission-type focal plane with staggered detector arrays. Through the field-of-view (FOV) optical design, the instrument achieves swath widths of 207.33 km for the VIS bands and 187.8 km for the SWIR bands at an orbital altitude of 500 km, while maintaining spatial resolutions of 12 m and 24 m, respectively. On-orbit imaging results demonstrate that the spectrometer achieves excellent performance in both spatial resolution and swath width. In addition, preliminary analysis using index-based indicators illustrates LHRSI’s potential for observing chlorophyll-related features in water bodies. This research not only provides a high-performance, miniaturized spectrometer solution but also lays an engineering foundation for developing low-cost, high-revisit global ocean and water environment monitoring constellations. Full article
(This article belongs to the Section Ocean Remote Sensing)

20 pages, 6958 KB  
Article
Bird Detection in the Field with the IA-Mask-RCNN
by Yassine Sohbi, Lucie Zgainski and Christophe Sausse
Appl. Sci. 2026, 16(2), 584; https://doi.org/10.3390/app16020584 - 6 Jan 2026
Viewed by 222
Abstract
In recent times, field crop damage caused by birds, such as corvids and pigeons, has become a critical problem for many farmers. Damage can be as serious as the loss of a large part of the harvest. Several solutions have been proposed, but none are effective. An example is the use of scarecrows, but birds eventually adapt to them over time, and so they become ineffective. To study bird behavior and to propose a bird deterrent that would adapt to the presence of birds, we set up an experimental image-taking system on several plots of land over a period of 4–5 years. Around fifteen terabytes of images taken in the field were acquired. Our aim was to automatically detect these birds using deep learning methods and then to activate a real-time scarer. This work meets two challenges: the first is agroecological, as bird damage has become a major issue, and the second is computational, as birds are difficult to detect in the field: the individuals appear small because they are far from the camera lens, and field conditions are often less than optimal (darkness, confusion between the pigeons’ colors and the ground, etc.). The Mask-RCNN in its original configuration is not suited to detecting small individuals. We mainly focused on the model’s hyperparameters to better adapt it to our study context. As a result, we improved the detection of small individuals using, among other things, an appropriate anchor-scale design and image processing techniques. At the same time, we built an original dataset focused on small individuals called BirdyDataset. The model can detect corvids and pigeons with an accuracy of 78% under real field conditions. Full article
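
To make the anchor-scale point concrete, here is one way to shrink the RPN anchors for small, distant birds, using torchvision's Mask R-CNN as a stand-in; the sizes, aspect ratios, and class count are illustrative assumptions, not the authors' exact configuration.

```python
# One way to shrink RPN anchor scales for small, distant birds, using
# torchvision's Mask R-CNN as a stand-in; the sizes, aspect ratios, and class
# count are illustrative assumptions, not the authors' exact configuration.
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.anchor_utils import AnchorGenerator

# Smaller anchors per FPN level than the defaults, biased toward small objects.
anchors = AnchorGenerator(sizes=((8,), (16,), (32,), (64,), (128,)),
                          aspect_ratios=((0.5, 1.0, 2.0),) * 5)
model = maskrcnn_resnet50_fpn(weights=None, num_classes=3,   # background, corvid, pigeon
                              rpn_anchor_generator=anchors)
```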

21 pages, 6253 KB  
Article
Design of an Afocal Telescope System Integrated with Digital Imaging for Enhanced Optical Performance
by Yi-Lun Su, Wen-Shing Sun, Chuen-Lin Tien, Yen-Cheng Lin and Yi-Hong Liu
Micromachines 2026, 17(1), 62; https://doi.org/10.3390/mi17010062 - 31 Dec 2025
Viewed by 432
Abstract
This study presents the design and optimization of a digital-imaging afocal telescope system that integrates an afocal telescope architecture with an imaging optical subsystem. The proposed system employs a combination of spherical and aspherical optical elements to enhance imaging flexibility, reduce aberrations, and ensure effective system coupling. Proper pupil matching is achieved by aligning the exit pupil of the afocal telescope with the entrance pupil of the imaging system, ensuring minimal vignetting and optimal energy transfer. Circular apertures and lens elements are used throughout the system to simplify alignment and minimize pupil-matching errors. The complete system comprises three imaging optical subsystems and a digital camera module, each independently optimized to ensure balanced optical performance. The design achieves an overall magnification of 16×, with near-diffraction-limited quality confirmed by an RMS wavefront error of 0.0474λ and a Strehl ratio of 0.915. The modulation transfer function (MTF) reaches 0.42 at 80 lp/mm, while the distortion remains below 4.87%. Chromatic performance is well controlled, with maximum lateral color deviations of 1.007 µm (short-to-long wavelength) and 1.52 µm (short-to-reference wavelength), evaluated at 656 nm, 587 nm, and 486 nm. The results demonstrate that the proposed digital-imaging afocal telescope system provides high-resolution, low-aberration imaging suitable for precision optical applications. Full article
(This article belongs to the Special Issue Emerging Trends in Optoelectronic Device Engineering, 2nd Edition)

8 pages, 1586 KB  
Proceeding Paper
On the Development of an Advanced Fatigue Testing Machine for Three-Point Bending of Polymer Matrix Composites
by Nikolaos Davaris, George-Christopher Vosniakos, Evangelos Tzimas and Emmanouil Stathatos
Eng. Proc. 2025, 119(1), 39; https://doi.org/10.3390/engproc2025119039 - 23 Dec 2025
Viewed by 229
Abstract
A crank press is converted into a smart fatigue testing machine for 3-point bending of polymer matrix composite specimens. The press is retrofitted with a load cell base for work holding, which monitors the bending force applied by the ram, a cycle counter recording the number of loading cycles, and a camera recording snapshots of the specimen area where failure is expected. Convolutional and ResNet neural networks are trained to recognize failure as an area of color change in camera images. A drop in the load-cell signal indicates failure onset, triggering monitoring by the camera and execution of the neural network. Acceptable proof-of-concept results encourage further automation of the setup. Full article

32 pages, 14384 KB  
Article
CSPC-BRS: An Enhanced Real-Time Multi-Target Detection and Tracking Algorithm for Complex Open Channels
by Wei Li, Xianpeng Zhu, Aghaous Hayat, Hu Yuan and Xiaojiang Yang
Electronics 2025, 14(24), 4942; https://doi.org/10.3390/electronics14244942 - 16 Dec 2025
Viewed by 253
Abstract
Ensuring worker safety compliance and secure cargo transportation in complex port environments is critical for modern logistics hubs. However, conventional supervision methods, including manual inspection and passive video monitoring, suffer from limited coverage, poor real-time responsiveness, and low robustness under frequent occlusion, scale variation, and cross-camera transitions, leading to unstable target association and missed risk events. To address these challenges, this paper proposes CSPC-BRS, a real-time multi-object detection and tracking framework for open-channel port scenarios. CSPC (Coordinated Spatial Perception Cascade) enhances the YOLOv8 backbone by integrating CASAM, SPPELAN-DW, and CACC modules to improve feature representation under cluttered backgrounds and degraded visual conditions. Meanwhile, BRS (Bounding Box Reduction Strategy) mitigates scale distortion during tracking, and a Multi-Dimensional Re-identification Scoring (MDRS) mechanism fuses six perceptual features—color, texture, shape, motion, size, and time—to achieve stable cross-camera identity consistency. Experimental results demonstrate that CSPC-BRS outperforms the YOLOv8-n baseline by improving the mAP@0.5:0.95 by 9.6% while achieving a real-time speed of 132.63 FPS. Furthermore, in practical deployment, it reduces the false capture rate by an average of 59.7% compared to the YOLOv8 + Bot-SORT tracker. These results confirm that CSPC-BRS effectively balances detection accuracy and computational efficiency, providing a practical and deployable solution for intelligent safety monitoring in complex industrial logistics environments. Full article
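
The fusion step can be pictured as a weighted sum over the six per-feature similarities listed above; the weights, similarity values, and acceptance threshold in this sketch are assumptions, not the paper's MDRS definition.

```python
# Schematic of fusing the six per-feature similarities into one association
# score; the weights, similarity values, and acceptance threshold are
# placeholders, not the paper's MDRS definition.
WEIGHTS = {"color": 0.25, "texture": 0.20, "shape": 0.15,
           "motion": 0.15, "size": 0.15, "time": 0.10}

def mdrs_score(similarities):
    """similarities: dict mapping feature name to a similarity in [0, 1]."""
    return sum(WEIGHTS[name] * similarities[name] for name in WEIGHTS)

score = mdrs_score({"color": 0.90, "texture": 0.80, "shape": 0.70,
                    "motion": 0.95, "size": 0.85, "time": 0.99})
same_identity = score > 0.80                   # placeholder acceptance threshold
```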

33 pages, 9178 KB  
Article
Automated Image-to-BIM Using Neural Radiance Fields and Vision-Language Semantic Modeling
by Mohammad H. Mehraban, Shayan Mirzabeigi, Mudan Wang, Rui Liu and Samad M. E. Sepasgozar
Buildings 2025, 15(24), 4549; https://doi.org/10.3390/buildings15244549 - 16 Dec 2025
Viewed by 733
Abstract
This study introduces a novel, automated image-to-BIM (Building Information Modeling) workflow designed to generate semantically rich and geometrically useful BIM models directly from RGB images. Conventional scan-to-BIM often relies on specialized, costly, and time-intensive equipment, specifically if LiDAR is used to generate point clouds (PCs). Typical workflows are followed by a separate post-processing step for semantic segmentation recently performed by deep learning models on the generated PCs. Instead, the proposed method integrates vision language object detection (YOLOv8x-World v2) and vision based segmentation (SAM 2.1) with Neural Radiance Fields (NeRF) 3D reconstruction to generate segmented, color-labeled PCs directly from images. The key novelty lies in bypassing post-processing on PCs by embedding semantic information at the pixel level in images, preserving it through reconstruction, and encoding it into the resulting color labeled PC, which allows building elements to be directly identified and geometrically extracted based on color labels. Extracted geometry is serialized into a JSON format and imported into Revit to automate BIM creation for walls, windows, and doors. Experimental validation on BIM models generated from Unmanned Aerial Vehicle (UAV)-based exterior datasets and standard camera-based interior datasets demonstrated high accuracy in detecting windows and doors. Spatial evaluations yielded up to 0.994 precision and 0.992 Intersection over Union (IoU). NeRF and Gaussian Splatting models, Nerfacto, Instant-NGP, and Splatfacto, were assessed. Nerfacto produced the most structured PCs suitable for geometry extraction and Splatfacto achieved the highest image reconstruction quality. The proposed method removes dependency on terrestrial surveying tools and separate segmentation processes on PCs. It provides a low-cost and scalable solution for generating BIM models in aging or undocumented buildings and supports practical applications such as renovation, digital twin, and facility management. Full article
(This article belongs to the Special Issue Artificial Intelligence in Architecture and Interior Design)

21 pages, 4172 KB  
Article
OCC-Based Positioning Method for Autonomous UAV Navigation in GNSS-Denied Environments: An Offshore Wind Farm Simulation Study
by Ju-Hyun Kim and Sung-Yoon Jung
Sensors 2025, 25(24), 7569; https://doi.org/10.3390/s25247569 - 12 Dec 2025
Viewed by 554
Abstract
Precise positioning is critical for autonomous uncrewed aerial vehicle (UAV) navigation, especially in GNSS-denied environments where radio-based signals are unreliable. This study presents an optical camera communication (OCC)-based positioning method that enables real-time 3D coordinate estimation using aviation obstruction light-emitting diodes (LEDs) as optical transmitters and a UAV-mounted camera as the receiver. In the proposed system, absolute positional identifiers are encoded into color-shift-keying-modulated optical signals emitted by fixed LEDs and captured by the UAV camera. The UAV’s 3D position is estimated by integrating the decoded LED information with geometric constraints through the Perspective-n-Point algorithm, eliminating the need for satellite or RF-based localization infrastructure. A virtual offshore wind farm, developed in Unreal Engine, was used to experimentally evaluate the feasibility and accuracy of the method. Results demonstrate submeter localization precision over a 50,000 cm flight path, confirming the system’s capability for reliable, real-time positioning. These findings indicate that OCC-based positioning provides a cost-effective and robust alternative for UAV navigation in complex or communication-restricted environments. The offshore wind farm inspection scenario further highlights the method’s potential for industrial operation and maintenance tasks and underscores the promise of integrating optical wireless communication into autonomous UAV systems. Full article
(This article belongs to the Special Issue Smart Sensor Systems for Positioning and Navigation)
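
A minimal sketch of the Perspective-n-Point step with OpenCV, assuming the LED world coordinates have already been recovered from the decoded identifiers; all coordinates and camera intrinsics below are made-up placeholders.

```python
# Minimal Perspective-n-Point step with OpenCV: known LED world positions plus
# their decoded 2D detections yield the camera (UAV) pose. All coordinates and
# intrinsics below are made-up placeholders.
import numpy as np
import cv2

object_points = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                          [10.0, 10.0, 0.0], [0.0, 10.0, 0.0]])  # LED positions (m)
image_points = np.array([[310.0, 220.0], [420.0, 225.0],
                         [415.0, 330.0], [305.0, 325.0]])        # decoded detections (px)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                                  # camera intrinsics

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)
uav_position = -R.T @ tvec                     # UAV camera position in the LED frame
```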

20 pages, 4879 KB  
Article
A Multi-Phenotype Acquisition System for Pleurotus eryngii Based on RGB and Depth Imaging
by Yueyue Cai, Zhijun Wang, Ziqin Liao, Yujie Li, Weijie Shi, Peijie Huang, Bingzhi Chen, Jie Pang, Xiangzeng Kong and Xuan Wei
Agriculture 2025, 15(24), 2566; https://doi.org/10.3390/agriculture15242566 - 11 Dec 2025
Viewed by 395
Abstract
High-throughput phenotypic acquisition and analysis allow us to accurately quantify trait expressions, which is essential for developing intelligent breeding strategies. However, there is still much potential to explore in the field of high-throughput phenotyping for edible fungi. In this study, we developed a portable multi-phenotypic acquisition system for Pleurotus eryngii using RGB and RGB-D cameras. We designed an innovative Unet-based semantic segmentation model by integrating the ASPP structure with the VGG16 architecture, which allows for precise segmentation of the cap, gills and stem of the fruiting body. By leveraging depth images from RGB-D cameras, we can effectively collect phenotypic information about Pleurotus eryngii. By combining K-means clustering with Lab color space thresholds, we are able to achieve more precise automatic classification of Pleurotus eryngii cap colors. Moreover, AlexNet is utilized to classify the shapes of the fruiting bodies. The Aspp-VGGUnet network demonstrates remarkable performance with a mean Intersection over Union (mIoU) of 96.47% and a mean pixel accuracy (mPA) of 98.53%. These results represent improvements of 3.03% and 2.23%, respectively, over the standard Unet model. The average error in size phenotype measurement is just 0.15 ± 0.03 cm. The accuracy for cap color classification reaches 91.04%, while fruiting body shape classification achieves 97.90%. The proposed multi-phenotype acquisition system reduces the measurement time per sample from an average of 76 s (manual method) to about 2 s, substantially increasing data acquisition throughput and providing robust support for scalable phenotyping workflows in breeding research. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
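
As an illustration of the cap-color step, the sketch below clusters pixels in Lab space with K-means and tests the dominant cluster against a lightness threshold; the cluster count and threshold are assumptions, not the paper's calibrated values.

```python
# Toy illustration of the cap-color step: pixels are clustered in Lab space
# with K-means, and the dominant cluster is compared against a lightness
# threshold; the cluster count and threshold are assumptions, not the paper's
# calibrated values.
import numpy as np
from skimage import color
from sklearn.cluster import KMeans

rgb = np.random.rand(128, 128, 3)              # stand-in for a segmented cap crop
lab = color.rgb2lab(rgb).reshape(-1, 3)        # Lab is roughly perceptually uniform
km = KMeans(n_clusters=3, n_init=10).fit(lab)
dominant = km.cluster_centers_[np.bincount(km.labels_).argmax()]
cap_is_dark = dominant[0] < 50.0               # L* threshold (placeholder)
```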

15 pages, 3346 KB  
Article
HDR Merging of RAW Exposure Series for All-Sky Cameras: A Comparative Study for Circumsolar Radiometry
by Paul Matteschk, Max Aragón, Jose Gomez, Jacob K. Thorning, Stefanie Meilinger and Sebastian Houben
J. Imaging 2025, 11(12), 442; https://doi.org/10.3390/jimaging11120442 - 11 Dec 2025
Viewed by 448
Abstract
All-sky imagers (ASIs) used in solar energy meteorology face an extreme intra-image dynamic range, with the circumsolar neighborhood orders of magnitude brighter than the diffuse dome. Many operational ASI pipelines address this gap with high-dynamic-range (HDR) bracketing inside the camera’s image signal processor (ISP), i.e., after demosaicing and color processing in a nonlinear 8-bit RGB domain. Near the Sun, such ISP-domain HDR can down-weight the shortest exposure, retain clipped or near-clipped samples from longer frames, and compress highlight contrast, thereby increasing circumsolar saturation and flattening aureole gradients. A radiance-linear HDR fusion in the sensor/RAW domain (RAW–HDR) is therefore contrasted with the vendor ISP-based HDR mode (ISP–HDR). Solar-based geometric calibration enables Sun-centered analysis. Paired, interleaved acquisitions under clear-sky and broken-cloud conditions are evaluated using two circumsolar performance criteria per RGB channel: (i) saturated-area fraction in concentric rings and (ii) a median-based radial gradient in defined arcs. All quantitative analyses operate on the radiance-linear HDR result; post-merge tone mapping is only used for visualization. Across conditions, ISP–HDR exhibits roughly double the near-saturation within 0–4° of the Sun and about a three- to fourfold weaker circumsolar radial gradient within 0–6° relative to RAW–HDR. These findings indicate that radiance-linear fusion in the RAW domain better preserves circumsolar structure than the examined ISP-domain HDR mode and thus provides more suitable input for downstream tasks such as cloud–edge detection, aerosol retrieval, and irradiance estimation. Full article
(This article belongs to the Special Issue Techniques and Applications of Sky Imagers)
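
A simplified, radiance-linear merge of a RAW bracket in the spirit of the RAW–HDR path compared above: each frame is linearized, divided by its exposure time, and averaged with weights that exclude near-saturated samples. Black level, white level, and the weighting rule are placeholders, not the paper's processing chain.

```python
# Simplified radiance-linear merge of a RAW exposure bracket: linearize each
# frame, divide by its exposure time, and average with weights that exclude
# near-saturated samples. Black level, white level, and the weighting rule are
# placeholders, not the paper's processing chain.
import numpy as np

def merge_raw_hdr(raw_frames, exposure_times, black=64, white=1023):
    num = np.zeros(raw_frames[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for raw, t in zip(raw_frames, exposure_times):
        lin = np.clip((raw.astype(np.float64) - black) / (white - black), 0.0, 1.0)
        weight = (raw < 0.95 * white).astype(np.float64)   # drop clipped pixels
        num += weight * lin / t                            # per-frame radiance estimate
        den += weight
    return num / np.maximum(den, 1e-9)                     # relative radiance map

frames = [np.random.randint(64, 1024, (480, 640), dtype=np.uint16) for _ in range(3)]
hdr = merge_raw_hdr(frames, exposure_times=[1 / 4000, 1 / 1000, 1 / 250])
```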

45 pages, 59804 KB  
Article
Multi-Threshold Art Symmetry Image Segmentation and Numerical Optimization Based on the Modified Golden Jackal Optimization
by Xiaoyan Zhang, Zuowen Bao, Xinying Li and Jianfeng Wang
Symmetry 2025, 17(12), 2130; https://doi.org/10.3390/sym17122130 - 11 Dec 2025
Cited by 1 | Viewed by 383
Abstract
To address the issues of uneven population initialization, insufficient individual information interaction, and passive boundary handling in the standard Golden Jackal Optimization (GJO) algorithm, while improving the accuracy and efficiency of multilevel thresholding in artistic image segmentation, this paper proposes an improved Golden Jackal Optimization algorithm (MGJO) and applies it to this task. MGJO introduces a high-quality point set for population initialization, ensuring a more uniform distribution of initial individuals in the search space and better adaptation to the complex grayscale characteristics of artistic images. A dual crossover strategy, integrating horizontal and vertical information exchange, is designed to enhance individual information sharing and fine-grained dimensional search, catering to the segmentation needs of artistic image textures and color layers. Furthermore, a global-optimum-based boundary handling mechanism is constructed to prevent information loss when boundaries are exceeded, thereby preserving the boundary details of artistic images. The performance of MGJO was evaluated on the CEC2017 (dim = 30, 100) and CEC2022 (dim = 10, 20) benchmark suites against seven algorithms, including GWO and IWOA. Population diversity analysis, exploration–exploitation balance assessment, Wilcoxon rank-sum tests, and Friedman mean-rank tests all demonstrate that MGJO significantly outperforms the comparison algorithms in optimization accuracy, stability, and statistical reliability. In multilevel thresholding for artistic image segmentation, using Otsu’s between-class variance as the objective function, MGJO achieves higher fitness values (approaching Otsu’s optimal values) across various artistic images with complex textures and colors, as well as benchmark images such as Baboon, Camera, and Lena, in 4-, 6-, 8-, and 10-level thresholding tasks. The resulting segmented images exhibit superior peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and feature similarity (FSIM) compared to other algorithms, more precisely preserving brushstroke details and color layers. Friedman average rankings consistently place MGJO in the lead. These experimental results indicate that MGJO effectively overcomes the performance limitations of the standard GJO, demonstrating excellent performance in both numerical optimization and multilevel thresholding artistic image segmentation. It provides an efficient solution for high-dimensional complex optimization problems and practical demands in artistic image processing. Full article
(This article belongs to the Section Engineering and Materials)
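
For reference, the between-class variance objective mentioned above can be written compactly; the example thresholds and histogram in this sketch are arbitrary placeholders, and the paper's MGJO optimizer, which searches over such thresholds, is not reproduced.

```python
# Compact version of the objective named in the abstract: Otsu's between-class
# variance for a set of thresholds, which an optimizer such as the paper's MGJO
# would maximize; the example thresholds and histogram are placeholders.
import numpy as np

def between_class_variance(hist, thresholds):
    """hist: 256-bin grayscale histogram; thresholds: sorted ints in (0, 255)."""
    p = hist / hist.sum()
    levels = np.arange(256)
    mu_total = (p * levels).sum()
    variance = 0.0
    for lo, hi in zip([0, *thresholds], [*thresholds, 256]):
        w = p[lo:hi].sum()                     # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            variance += w * (mu - mu_total) ** 2
    return variance

hist = np.random.randint(1, 500, 256).astype(float)        # stand-in histogram
score = between_class_variance(hist, thresholds=[60, 120, 180, 220])
```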

24 pages, 3139 KB  
Article
Detection of Red, Yellow, and Purple Raspberry Fruits Using YOLO Models
by Kamil Buczyński, Magdalena Kapłan and Zbigniew Jarosz
Agriculture 2025, 15(24), 2530; https://doi.org/10.3390/agriculture15242530 - 6 Dec 2025
Viewed by 796
Abstract
This study presents a comprehensive evaluation of recent YOLO architectures, YOLOv8s, YOLOv9s, YOLOv10s, YOLO11s, and YOLO12s, for the detection of red, yellow, and purple raspberry fruits under field conditions. Images were collected using a smartphone camera under varying illumination, weather, and occlusion conditions. Each model was trained and evaluated using standard object detection metrics (Precision, Recall, mAP50, mAP50:95, F1-score), while inference performance was benchmarked on both high-performance (NVIDIA RTX 5080) and embedded (NVIDIA Jetson Orin NX) platforms. All models achieved high and consistent detection accuracy across fruits of different colors, confirming the robustness of the YOLO algorithm design. Compact variants provided the best trade-off between accuracy and computational cost, whereas deeper architectures yielded marginal improvements at higher latency. TensorRT optimization on the Jetson device further enhanced real-time inference, particularly for embedded deployment. The results indicate that modern YOLO architectures have reached a level of architectural maturity, where advances are driven by optimization and specialization rather than structural redesign. These findings underline the strong potential of YOLO-based detectors as core components of intelligent, edge-deployable systems for precision agriculture and automated fruit detection. Full article
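
For context, the model family compared above is commonly driven through the Ultralytics API; the calls below sketch a plausible train/evaluate/export loop, with the dataset YAML, epochs, and image size as assumptions rather than the study's actual configuration.

```python
# Plausible train/evaluate/export calls with the Ultralytics API for one of the
# compared variants; the dataset YAML, epochs, and image size are assumptions,
# not the study's actual configuration.
from ultralytics import YOLO

model = YOLO("yolov8s.pt")                                  # one compared variant
model.train(data="raspberry.yaml", epochs=100, imgsz=640)   # hypothetical dataset
metrics = model.val()                                       # Precision/Recall/mAP
results = model("field_image.jpg")                          # detect fruits in an image
model.export(format="engine")                               # TensorRT engine for Jetson
```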