Search Results (8)

Search Parameters:
Keywords = vignetting removal

19 pages, 4847 KB  
Article
High-Precision Detection of Cells and Amyloid-β Using Multi-Frame Brightfield Imaging and Quantitative Analysis
by Mengyu Li, Masahiro Kuragano, Stefan Baar, Mana Endo, Kiyotaka Tokuraku and Shinya Watanabe
Electronics 2025, 14(17), 3418; https://doi.org/10.3390/electronics14173418 - 27 Aug 2025
Viewed by 590
Abstract
This study presents a novel method for high-precision detection and quantitative evaluation of the spatial relationship between cells and amyloid-β (Aβ) in time-lapse brightfield microscopy images. Achieving accurate detection of non-fluorescent cells and Aβ deposits requires high-quality video images free from noise, distortion, and frame-to-frame luminance flicker. To this end, we employ a robust preprocessing pipeline that combines multi-frame integration with vignetting correction to enhance image quality and reduce luminance variability across frames. Key preprocessing steps include background correction via two-dimensional polynomial fitting, temporal smoothing of luminance fluctuations, histogram matching for luminance normalization, and dust artifact removal based on intensity thresholds. This enhanced imaging approach enables accurate identification of Aβ aggregates, which typically appear as jelly-like structures and are difficult to detect under standard brightfield conditions. Furthermore, we introduce a quantitative index to assess the spatial relationship between cells and Aβ concentrations, facilitating detailed analysis under varying Aβ levels.
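The background-correction step mentioned above, fitting a two-dimensional polynomial to the slowly varying illumination and dividing it out, can be sketched as follows. This is a minimal illustration rather than the authors' implementation; the polynomial degree and the division-based flattening are assumptions.

```python
import numpy as np

def fit_polynomial_background(image, degree=2):
    """Fit a 2-D polynomial surface to an image and return the surface.

    The surface models slowly varying background/vignetting; dividing the
    image by it flattens the illumination.
    """
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w]
    # Normalise coordinates to [0, 1] to keep the design matrix well conditioned.
    xn, yn = x / (w - 1), y / (h - 1)

    # Design matrix with all monomials x^i * y^j, i + j <= degree.
    terms = [(xn ** i) * (yn ** j)
             for i in range(degree + 1)
             for j in range(degree + 1 - i)]
    A = np.stack([t.ravel() for t in terms], axis=1)

    coeffs, *_ = np.linalg.lstsq(A, image.ravel().astype(float), rcond=None)
    return (A @ coeffs).reshape(h, w)

def flatten_illumination(image):
    background = fit_polynomial_background(image)
    # Divide by the normalised background so the mean intensity is preserved.
    return image / (background / background.mean())

if __name__ == "__main__":
    # Synthetic frame: flat scene multiplied by a radial fall-off plus noise.
    h, w = 256, 256
    y, x = np.mgrid[0:h, 0:w]
    r2 = ((x - w / 2) ** 2 + (y - h / 2) ** 2) / (w / 2) ** 2
    frame = 200 * (1 - 0.4 * r2) + np.random.normal(0, 2, (h, w))
    corrected = flatten_illumination(frame)
    print(corrected.std(), frame.std())  # corrected frame should be much flatter
```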

15 pages, 27251 KB  
Article
Single-Frame Vignetting Correction for Post-Stitched-Tile Imaging Using VISTAmap
by Anthony A. Fung, Ashley H. Fung, Zhi Li and Lingyan Shi
Nanomaterials 2025, 15(7), 563; https://doi.org/10.3390/nano15070563 - 7 Apr 2025
Cited by 2 | Viewed by 1504
Abstract
Stimulated Raman Scattering (SRS) nanoscopy imaging offers unprecedented insights into tissue molecular architecture but often requires stitching multiple high-resolution tiles to capture large fields of view. This process is time-consuming and frequently introduces vignetting artifacts, grid-like intensity fluctuations that degrade image quality and hinder downstream quantitative analyses and processing such as super-resolution deconvolution. We present VIgnetted Stitched-Tile Adjustment using Morphological Adaptive Processing (VISTAmap), a simple tool that corrects these shading artifacts directly on the final stitched image. VISTAmap automatically detects the tile grid configuration by analyzing intensity frequency variations and then applies sequential morphological operations to homogenize the image. In contrast to conventional approaches that require increased tile overlap or pre-acquisition background sampling, VISTAmap offers a pragmatic post-processing solution that does not require the separate individual tile images. This work delivers a robust, efficient strategy for enhancing mosaic image uniformity in modern nanoscopy, where the smallest details make a tremendous impact.
(This article belongs to the Special Issue New Advances in Applications of Nanoscale Imaging and Nanoscopy)
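VISTAmap's exact operations are not reproduced here, but the general idea of estimating grid-like shading with large-scale morphological operations and dividing it out of the stitched image can be sketched as follows, assuming the tile size is already known (VISTAmap detects it automatically from intensity frequency variations). The structuring-element and smoothing sizes are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def estimate_shading(mosaic, tile_size):
    """Estimate grid-like shading in a stitched mosaic with morphology.

    Grey-scale opening with a structuring element comparable to the tile size
    suppresses small bright structures and leaves the slowly varying per-tile
    illumination, which is then smoothed. Generic sketch, not VISTAmap itself.
    """
    background = ndimage.grey_opening(mosaic, size=(tile_size // 4, tile_size // 4))
    background = ndimage.uniform_filter(background, size=tile_size // 2)
    return background

def correct_mosaic(mosaic, tile_size):
    shading = estimate_shading(mosaic.astype(float), tile_size)
    shading = np.clip(shading, 1e-6, None)        # avoid division by zero
    return mosaic / (shading / shading.mean())    # keep overall brightness

if __name__ == "__main__":
    # Toy 2x2 mosaic of 128-pixel tiles, each carrying its own vignetting.
    tile = 128
    y, x = np.mgrid[0:tile, 0:tile]
    vign = 1 - 0.3 * (((x - tile / 2) ** 2 + (y - tile / 2) ** 2) / (tile / 2) ** 2)
    mosaic = np.tile(100 * vign, (2, 2))
    print(correct_mosaic(mosaic, tile).std(), mosaic.std())
```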

19 pages, 4410 KB  
Article
Development of a Low-Cost Luminance Imaging Device with Minimal Equipment Calibration Procedures for Absolute and Relative Luminance
by Daniel Bishop and J. Geoffrey Chase
Buildings 2023, 13(5), 1266; https://doi.org/10.3390/buildings13051266 - 12 May 2023
Cited by 7 | Viewed by 3241
Abstract
Luminance maps are information-dense measurements that can be used to directly evaluate and derive a number of important lighting measures and to improve lighting design and practice. However, cost barriers have limited the uptake of luminance imaging devices. This study presents a low-cost custom luminance imaging device developed from a Raspberry Pi microcomputer and camera module; the work may also be extended to other low-cost imaging devices. Two calibration procedures, for absolute and relative luminance, are presented, both requiring minimal equipment. To remove calibration equipment limitations, novel procedures were developed to characterize sensor linearity and vignetting; accurate characterization of sensor linearity allows the use of lower-cost, highly non-linear sensors. Overall, the resultant device has an average absolute luminance error of 6.4% and an average relative luminance error of 6.2%. The device has accuracy and performance comparable to other custom devices that use higher-cost technologies and more expensive calibration equipment, and it significantly reduces the cost barrier to luminance imaging and the better lighting it enables.
(This article belongs to the Special Issue Lighting in Buildings)
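One common way to characterize a non-linear sensor from simple paired measurements is to fit a power-law (gamma-like) response curve and invert it, as sketched below. This is an illustrative assumption, not the calibration procedure from the paper, and the measurement values are made up.

```python
import numpy as np

# Hypothetical calibration data: relative scene luminance (e.g. produced by
# varying exposure with a fixed source) and the camera's mean pixel value.
relative_luminance = np.array([0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
pixel_value = np.array([28.0, 42.0, 64.0, 97.0, 123.0, 146.0, 166.0])

# Model a power-law response: pixel = a * luminance ** g.
# Fitting in log-log space turns this into a straight line.
g, log_a = np.polyfit(np.log(relative_luminance), np.log(pixel_value), 1)
a = np.exp(log_a)

def linearize(raw):
    """Invert the fitted response so the output is proportional to luminance."""
    return (np.asarray(raw, dtype=float) / a) ** (1.0 / g)

if __name__ == "__main__":
    print(f"fitted response: pixel = {a:.1f} * L^{g:.2f}")
    # Linearized values should scale proportionally with the true luminance.
    print(linearize(pixel_value) / relative_luminance)
```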

16 pages, 4092 KB  
Article
Nonuniform Correction of Ground-Based Optical Telescope Image Based on Conditional Generative Adversarial Network
by Xiangji Guo, Tao Chen, Junchi Liu, Yuan Liu, Qichang An and Chunfeng Jiang
Sensors 2023, 23(3), 1086; https://doi.org/10.3390/s23031086 - 17 Jan 2023
Cited by 1 | Viewed by 2169
Abstract
Ground-based telescopes are often affected by vignetting, stray light, and detector nonuniformity when acquiring space images. This paper presents a space-image nonuniformity correction method based on a conditional generative adversarial network (CGAN). First, we create a training dataset by combining a physical vignetting model with a simulation polynomial that reproduces the nonuniform background. Second, we develop a robust CGAN for learning the nonuniform background, with an improved generator network structure. The method was evaluated on both a simulated dataset and real space images. It effectively removes the nonuniform background of space images, achieving a Mean Square Error (MSE) of 4.56 on the simulated dataset and improving the target's signal-to-noise ratio (SNR) by 43.87% on real images.
(This article belongs to the Section Remote Sensors)
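A common way to synthesize the kind of nonuniform background described above, combining a physical cos⁴ vignetting model with a low-order simulation polynomial, is sketched below. The functional forms and parameter values are assumptions for illustration, not the authors' exact dataset-generation code.

```python
import numpy as np

def cos4_vignetting(h, w, focal_px):
    """Physical cos^4 vignetting: fall-off with field angle theta = atan(r / f)."""
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(x - w / 2, y - h / 2)
    theta = np.arctan(r / focal_px)
    return np.cos(theta) ** 4

def polynomial_gradient(h, w, coeffs):
    """Low-order polynomial background: c0 + c1*x + c2*y + c3*x*y in normalised coords."""
    y, x = np.mgrid[0:h, 0:w]
    xn, yn = x / (w - 1), y / (h - 1)
    c0, c1, c2, c3 = coeffs
    return c0 + c1 * xn + c2 * yn + c3 * xn * yn

def make_training_pair(clean, focal_px=800.0, coeffs=(1.0, 0.15, -0.1, 0.05)):
    """Return a (degraded, clean) pair for training a correction network."""
    h, w = clean.shape
    degraded = clean * cos4_vignetting(h, w, focal_px) * polynomial_gradient(h, w, coeffs)
    return degraded, clean

if __name__ == "__main__":
    clean = np.full((256, 256), 150.0)  # stand-in for a clean space image
    degraded, _ = make_training_pair(clean)
    print(degraded.min(), degraded.max())
```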

17 pages, 8218 KB  
Article
A Handheld Grassland Vegetation Monitoring System Based on Multispectral Imaging
by Aiwu Zhang, Shaoxing Hu, Xizhen Zhang, Taipei Zhang, Mengnan Li, Haiyu Tao and Yan Hou
Agriculture 2021, 11(12), 1262; https://doi.org/10.3390/agriculture11121262 - 13 Dec 2021
Cited by 10 | Viewed by 4248
Abstract
Monitoring grassland vegetation growth is of vital importance to scientific grazing and grassland management, and users increasingly expect to do so at any time with a portable device such as a mobile phone. In this paper, we propose a handheld grassland vegetation monitoring system to meet this need. The system has two parts. The hardware unit is a hand-held, smartphone-based multispectral imaging tool named ASQ-Discover with six bands: three visible bands (450 nm, 550 nm, 650 nm), a red-edge band (750 nm), and two near-infrared bands (850 nm, 960 nm); each band image is 5120 × 3840 pixels with 8-bit depth. The software unit improves image quality through vignetting removal, radiometric calibration, and misalignment correction, and then estimates and analyzes spectral traits of grassland vegetation (Fresh Grass Ratio (FGR), NDVI, NDRE, BNDVI, GNDVI, OSAVI, and TGI) that indicate vegetation growth. We describe the hardware and software units in detail and report experiments in five pastures located in Haiyan County, Qinghai Province. The results show that the handheld system has the potential to transform grassland monitoring by letting operators assess vegetation growth in the field with a hand-held tool.
(This article belongs to the Special Issue Image Analysis Techniques in Agriculture)
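Once the band images have been vignetting-corrected, calibrated, and co-registered as described, the listed ratio indices are straightforward per-pixel computations. The sketch below shows the standard formulas for several of them (TGI in its simplified visible-band form; FGR and OSAVI are omitted); it is illustrative rather than the system's actual software.

```python
import numpy as np

def _ratio_index(a, b, eps=1e-6):
    """Generic normalised-difference index (a - b) / (a + b)."""
    a = a.astype(float)
    b = b.astype(float)
    return (a - b) / (a + b + eps)

def vegetation_indices(blue, green, red, red_edge, nir):
    """Standard ratio indices from co-registered, calibrated band images.

    Bands correspond to the ASQ-Discover wavelengths described above
    (450, 550, 650, 750 and 850/960 nm).
    """
    return {
        "NDVI":  _ratio_index(nir, red),
        "NDRE":  _ratio_index(nir, red_edge),
        "BNDVI": _ratio_index(nir, blue),
        "GNDVI": _ratio_index(nir, green),
        # Simplified TGI: approximates chlorophyll from the visible bands only.
        "TGI":   green.astype(float) - 0.39 * red - 0.61 * blue,
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bands = {name: rng.uniform(0.05, 0.6, (100, 100)) for name in
             ("blue", "green", "red", "red_edge", "nir")}
    indices = vegetation_indices(**bands)
    print({k: float(v.mean()) for k, v in indices.items()})
```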

14 pages, 13380 KB  
Article
Detection of Chilling Injury in Pickling Cucumbers Using Dual-Band Chlorophyll Fluorescence Imaging
by Yuzhen Lu and Renfu Lu
Foods 2021, 10(5), 1094; https://doi.org/10.3390/foods10051094 - 14 May 2021
Cited by 19 | Viewed by 5182
Abstract
Pickling cucumbers are susceptible to chilling injury (CI) during postharvest refrigerated storage, which results in quality degradation and economic loss. It is therefore desirable to remove the defective fruit before they are marketed as fresh products or processed into pickled products. Chlorophyll fluorescence is sensitive to CI in green fruits because exposure to chilling temperatures induces detectable alterations in the chlorophyll of affected tissues. This study evaluated the feasibility of using a dual-band chlorophyll fluorescence imaging (CFI) technique for detecting CI-affected pickling cucumbers. Chlorophyll fluorescence images at 675 nm and 750 nm were acquired from pickling cucumbers under ultraviolet-blue excitation. The raw images were corrected for vignetting through bi-dimensional empirical mode decomposition and subsequent image reconstruction. The fluorescence images were effective for identifying CI-affected tissues, which appeared as dark areas in the images. Support vector machine models were developed to classify pickling cucumbers into two or three classes using features extracted from the fluorescence images. Fusing the features of the 675 nm and 750 nm fluorescence images resulted in overall accuracies of 96.9% and 91.2% for two-class (normal and injured) and three-class (normal, mildly injured, and severely injured) classification, respectively, statistically significantly better than single-wavelength features, especially for the three-class task. Furthermore, a subset of features selected with the neighborhood component feature selection technique achieved the highest accuracies, 97.4% and 91.3%, for the two-class and three-class classification, respectively. This study demonstrated that dual-band CFI is an effective modality for CI detection in pickling cucumbers.
(This article belongs to the Special Issue Nondestructive Optical Sensing for Food Quality and Safety Inspection)
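The classification step, fusing features from the 675 nm and 750 nm images and training a support vector machine, can be outlined with scikit-learn as below. The features here are random stand-ins, and the kernel choice and the neighborhood component feature selection step are not reproduced; this is only a sketch of the fusion-and-classify workflow.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in feature tables: one row per cucumber, columns are image features
# (e.g. intensity statistics) extracted from each fluorescence band.
n_samples = 300
features_675 = rng.normal(size=(n_samples, 8))
features_750 = rng.normal(size=(n_samples, 8))
labels = rng.integers(0, 3, size=n_samples)   # 0 normal, 1 mild CI, 2 severe CI

# Feature-level fusion: concatenate the two bands' feature vectors.
fused = np.hstack([features_675, features_750])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.3, random_state=0, stratify=labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("three-class accuracy on random stand-in data:", clf.score(X_test, y_test))
```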

19 pages, 5976 KB  
Article
Analysis and Evaluation of the Image Preprocessing Process of a Six-Band Multispectral Camera Mounted on an Unmanned Aerial Vehicle for Winter Wheat Monitoring
by Jiale Jiang, Hengbiao Zheng, Xusheng Ji, Tao Cheng, Yongchao Tian, Yan Zhu, Weixing Cao, Reza Ehsani and Xia Yao
Sensors 2019, 19(3), 747; https://doi.org/10.3390/s19030747 - 12 Feb 2019
Cited by 34 | Viewed by 6242
Abstract
Unmanned aerial vehicle (UAV)-based multispectral sensors have great potential in crop monitoring due to their high flexibility, high spatial resolution, and ease of operation. Image preprocessing, however, is a prerequisite for making full use of the acquired high-quality data in practical applications. Most crop monitoring studies have focused on specific procedures or applications, and there has been little attempt to examine the accuracy of the data preprocessing steps. This study focuses on the preprocessing process of a six-band multispectral camera (Mini-MCA6) mounted on UAVs. First, we quantified and analyzed the components of sensor error, including noise, vignetting, and lens distortion. Next, different methods of spectral band registration and radiometric correction were evaluated. Then, an appropriate image preprocessing process was proposed. Finally, the applicability and potential for crop monitoring were assessed in terms of accuracy by estimating the leaf area index (LAI) and leaf biomass under variable growth conditions during five critical growth stages of winter wheat. The results show that noise and vignetting could be effectively removed via correction coefficients in image processing. The widely used Brown model was suitable for lens distortion correction of the Mini-MCA6. Band registration based on ground control points (GCPs) (Root-Mean-Square Error, RMSE = 1.02 pixels) was superior to that using PixelWrench2 (PW2) software (RMSE = 1.82 pixels). For radiometric correction, the accuracy of the empirical linear correction (ELC) method was significantly higher than that of the light intensity sensor correction (ILSC) method. The multispectral images processed using the optimal correction methods were demonstrated to be reliable for estimating LAI and leaf biomass. This study provides a feasible and semi-automatic image preprocessing process for a UAV-based Mini-MCA6, which also serves as a reference for other array-type multispectral sensors. Moreover, the high-quality data generated in this study may stimulate increased interest in high-efficiency remote monitoring of crop growth status.
(This article belongs to the Section Remote Sensors)
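The empirical linear correction (ELC) found to be most accurate above amounts to fitting, per band, a linear relation between digital numbers and the known reflectances of calibration targets, then applying it to the whole image. A minimal sketch follows; the panel reflectances and digital numbers are hypothetical.

```python
import numpy as np

# Hypothetical calibration targets: known reflectance and the mean digital
# number (DN) measured over each target in one spectral band of the image.
panel_reflectance = np.array([0.03, 0.12, 0.36, 0.56])
panel_dn          = np.array([410.0, 1480.0, 4350.0, 6690.0])

# Empirical linear correction: reflectance = gain * DN + offset, fitted per band.
gain, offset = np.polyfit(panel_dn, panel_reflectance, 1)

def dn_to_reflectance(dn_image):
    """Apply the fitted per-band linear model to a whole band image."""
    return gain * np.asarray(dn_image, dtype=float) + offset

if __name__ == "__main__":
    print(f"gain = {gain:.3e}, offset = {offset:.4f}")
    band = np.full((10, 10), 3000.0)          # stand-in band image (DNs)
    print(dn_to_reflectance(band).mean())     # estimated reflectance
```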

32 pages, 15527 KB  
Article
Sensor Correction of a 6-Band Multispectral Imaging Sensor for UAV Remote Sensing
by Joshua Kelcey and Arko Lucieer
Remote Sens. 2012, 4(5), 1462-1493; https://doi.org/10.3390/rs4051462 - 18 May 2012
Cited by 205 | Viewed by 29011
Abstract
Unmanned aerial vehicles (UAVs) represent a quickly evolving technology, broadening the availability of remote sensing tools to small-scale research groups across a variety of scientific fields. Development of UAV platforms requires broad technical skills covering platform development, data post-processing, and image analysis, and is constrained by the need to balance technological accessibility, flexibility in application, and image data quality. In this study, the quality of UAV imagery acquired by a miniature 6-band multispectral imaging sensor was improved through the application of practical image-based sensor correction techniques. Three major components of sensor correction were addressed: noise reduction, sensor-based modification of incoming radiance, and lens distortion. Sensor noise was reduced through the use of dark offset imagery. The sensor-based modification of incoming radiance, arising from filter transmission rates, the relative monochromatic efficiency of the sensor, and vignetting, was removed through a combination of spatially and spectrally dependent correction factors. Lens distortion was reduced through the implementation of the Brown–Conrady model. Data post-processing serves dual roles: improving data quality and identifying platform limitations and sensor idiosyncrasies. The proposed corrections improve the quality of the raw multispectral imagery, facilitating subsequent quantitative image analysis.
(This article belongs to the Special Issue Unmanned Aerial Vehicles (UAVs) based Remote Sensing)
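The dark-offset and vignetting corrections described above follow the general dark-frame/flat-field pattern sketched below; the per-pixel gain map and synthetic inputs are illustrative assumptions, and the Brown–Conrady lens-distortion step is omitted (in practice it is handled by a dedicated camera-calibration library).

```python
import numpy as np

def per_pixel_correction_factor(flat_frames, dark_frames):
    """Derive a per-pixel gain map from averaged flat-field and dark images.

    Pixels darkened by vignetting or lower sensor efficiency get a factor
    above 1 so that a uniform target maps back to a uniform image.
    """
    flat = np.mean(flat_frames, axis=0) - np.mean(dark_frames, axis=0)
    flat = np.clip(flat, 1e-6, None)
    return flat.mean() / flat

def correct_band(raw, dark_frames, gain_map):
    """Dark-offset subtraction followed by per-pixel gain correction."""
    dark = np.mean(dark_frames, axis=0)
    return (raw.astype(float) - dark) * gain_map

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    h, w = 240, 320
    y, x = np.mgrid[0:h, 0:w]
    vign = 1 - 0.35 * (((x - w / 2) ** 2 + (y - h / 2) ** 2) / (w / 2) ** 2)

    darks = rng.normal(20, 1, size=(10, h, w))               # dark-offset stack
    flats = 150 * vign + rng.normal(20, 1, size=(10, h, w))  # uniform-target stack
    raw   = 120 * vign + 20                                   # one raw band image

    gain = per_pixel_correction_factor(flats, darks)
    corrected = correct_band(raw, darks, gain)
    print(corrected.std(), raw.std())   # corrected image should be nearly flat
```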