Search Results (15)

Search Parameters:
Keywords = stepwise image fusion

26 pages, 10480 KB  
Article
Monitoring Chlorophyll Content of Brassica napus L. Based on UAV Multispectral and RGB Feature Fusion
by Yongqi Sun, Jiali Ma, Mengting Lyu, Jianxun Shen, Jianping Ying, Skhawat Ali, Basharat Ali, Wenqiang Lan, Yiwa Hu, Fei Liu, Weijun Zhou and Wenjian Song
Agronomy 2025, 15(8), 1900; https://doi.org/10.3390/agronomy15081900 - 7 Aug 2025
Viewed by 463
Abstract
Accurate prediction of chlorophyll content in Brassica napus L. (rapeseed) is essential for monitoring plant nutritional status and for precision agricultural management. However, most existing studies focus on a single cultivar, which limits general applicability. This study used unmanned aerial vehicle (UAV)-based RGB and multispectral imagery to evaluate the chlorophyll content of six rapeseed cultivars across mixed growth stages, including the seedling, bolting, and initial flowering stages. ExG-ExR threshold segmentation was applied to remove background interference. Subsequently, color and spectral indices were extracted from the segmented images and ranked according to their correlations with measured chlorophyll content. Partial Least Squares Regression (PLSR), Multiple Linear Regression (MLR), and Support Vector Regression (SVR) models were independently established using subsets of the top-ranked features, and model performance was assessed by comparing prediction accuracy (R2 and RMSE). The results demonstrated significant accuracy improvements following background removal, especially for the SVR model: relative to data without background removal, accuracy increased by 8.0% (R2p improved from 0.683 to 0.763) for color indices and by 3.1% (R2p from 0.835 to 0.866) for spectral indices. Additionally, stepwise fusion of spectral and color indices further improved prediction accuracy. Optimal results were obtained by fusing the top seven color features ranked by correlation with chlorophyll content, achieving an R2p of 0.878 and an RMSE of 52.187 μg/g. These findings highlight the effectiveness of background removal and feature fusion in enhancing chlorophyll prediction accuracy.
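
As a rough illustration of the background-removal step described above, the ExG-ExR criterion of Meyer and Neto can be computed from normalized chromatic coordinates. The sketch below assumes a float RGB array and the standard index definitions (ExG = 2g - r - b, ExR = 1.4r - g); it is not the authors' exact implementation.

```python
import numpy as np

def exg_exr_mask(rgb):
    """Vegetation mask via ExG - ExR > 0; rgb is an HxWx3 float array."""
    s = rgb.sum(axis=2) + 1e-8                 # per-pixel channel sum
    r, g, b = rgb[..., 0] / s, rgb[..., 1] / s, rgb[..., 2] / s
    exg = 2 * g - r - b                        # excess green
    exr = 1.4 * r - g                          # excess red
    return (exg - exr) > 0                     # True = vegetation pixel
```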

14 pages, 48905 KB  
Article
RSM-Optimizer: Branch Optimization for Dual- or Multi-Branch Semantic Segmentation Networks
by Xiaohong Zhang, Wenwen Zong and Yaning Jiang
Electronics 2025, 14(6), 1109; https://doi.org/10.3390/electronics14061109 - 11 Mar 2025
Cited by 1 | Viewed by 812
Abstract
Semantic segmentation is a crucial task in computer vision, with important applications in areas such as autonomous driving, medical image analysis, and remote sensing image analysis. Dual-branch and multi-branch semantic segmentation networks that leverage deep learning technologies can enhance both segmentation accuracy and speed. These networks typically contain a detail branch and a semantic branch. However, the feature maps in the detail branch are limited to a single type of receptive field, which limits a model's ability to perceive objects at different scales. During feature map fusion, low-resolution feature maps from the semantic branch are upsampled by a large factor to match the feature maps in the detail branch, and these upsampling operations inevitably introduce noise. To address these issues, we propose several improvements to optimize the detail and semantic branches. We first design a receptive-field-driven feature enhancement module to enrich the receptive fields of feature maps in the detail branch. Then, we propose a stepwise upsampling and fusion module to reduce the noise introduced during the upsampling stage of feature fusion. Finally, we introduce a pyramid mixed pooling module (PMPM) to improve models' abilities to perceive objects of different shapes. Considering the diversity of objects in terms of scale, shape, and category in urban street scenes, we carried out experiments on the Cityscapes and CamVid datasets. The experimental results on both datasets validate the effectiveness and efficiency of the proposed improvements.
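
To make the idea of stepwise upsampling concrete, here is a minimal PyTorch sketch that replaces one large-factor upsample with repeated 2x interpolation plus light refinement. The module name, layer choices, and the residual fusion with the detail branch are illustrative assumptions, not the paper's actual RSM-Optimizer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StepwiseUpsampleFuse(nn.Module):
    """Upsample the low-res semantic map in repeated 2x steps, refining after
    each step, instead of one large-factor jump that amplifies noise."""
    def __init__(self, channels, steps=3):
        super().__init__()
        self.refine = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(steps))

    def forward(self, semantic, detail):
        x = semantic
        for conv in self.refine:          # e.g. 1/32 -> 1/16 -> 1/8 -> 1/4
            x = F.interpolate(x, scale_factor=2, mode='bilinear',
                              align_corners=False)
            x = torch.relu(conv(x))       # light refinement per step
        return x + detail                 # fuse with the detail branch
```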

24 pages, 2671 KB  
Article
Multi-View Feature Fusion and Rich Information Refinement Network for Semantic Segmentation of Remote Sensing Images
by Jiang Liu, Shuli Cheng and Anyu Du
Remote Sens. 2024, 16(17), 3184; https://doi.org/10.3390/rs16173184 - 28 Aug 2024
Viewed by 2096
Abstract
Semantic segmentation is currently a hot topic in remote sensing image processing, with extensive applications in land planning and surveying. Many current studies combine Convolutional Neural Networks (CNNs), which extract local information, with Transformers, which capture global information, to obtain richer features. However, the fused feature information is often not sufficiently rich and lacks detailed refinement. To address this issue, we propose a novel method called the Multi-View Feature Fusion and Rich Information Refinement Network (MFRNet). Our model is equipped with a Multi-View Feature Fusion Block (MAFF) to merge various types of information, including local, non-local, channel, and positional information. Within MAFF, we introduce two innovative components: Sliding Heterogeneous Multi-Head Attention (SHMA), which extracts local, non-local, and positional information using a sliding window, and Multi-Scale Hierarchical Compressed Channel Attention (MSCA), which leverages bar-shaped pooling kernels and stepwise compression to obtain reliable channel information. Additionally, we introduce the Efficient Feature Refinement Module (EFRM), which enhances segmentation accuracy through interaction between the results of a Long-Range Information Perception Branch and a Local Semantic Information Perception Branch. We evaluated our model on the ISPRS Vaihingen and Potsdam datasets, conducting extensive comparison experiments with state-of-the-art models, and verified that MFRNet outperforms them.
(This article belongs to the Special Issue Image Enhancement and Fusion Techniques in Remote Sensing)
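
The bar-shaped pooling idea behind MSCA can be sketched as a toy channel-attention module that pools along Hx1 and 1xW strips before producing per-channel weights. Everything here (layer sizes, the two-layer MLP, the final averaging) is a hedged guess at the spirit of the design, not the published MSCA.

```python
import torch
import torch.nn as nn

class StripPoolChannelAttention(nn.Module):
    """Toy channel attention driven by bar-shaped (Hx1 and 1xW) pooling."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.h_pool = nn.AdaptiveAvgPool2d((None, 1))   # Hx1 strips
        self.w_pool = nn.AdaptiveAvgPool2d((1, None))   # 1xW strips
        self.fc = nn.Sequential(
            nn.Linear(2 * channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        h = self.h_pool(x).mean(dim=2).flatten(1)       # (B, C) from Hx1 strips
        w = self.w_pool(x).mean(dim=3).flatten(1)       # (B, C) from 1xW strips
        weights = self.fc(torch.cat([h, w], dim=1))     # per-channel weights
        return x * weights.view(b, c, 1, 1)
```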

18 pages, 7977 KB  
Article
Integration of Unmanned Aerial Vehicle Spectral and Textural Features for Accurate Above-Ground Biomass Estimation in Cotton
by Maoguang Chen, Caixia Yin, Tao Lin, Haijun Liu, Zhenyang Wang, Pingan Jiang, Saif Ali, Qiuxiang Tang and Xiuliang Jin
Agronomy 2024, 14(6), 1313; https://doi.org/10.3390/agronomy14061313 - 18 Jun 2024
Cited by 8 | Viewed by 1751
Abstract
Timely and accurate estimation of above-ground biomass (AGB) in cotton is essential for precise production monitoring. The study was conducted in Shaya County, Aksu Region, Xinjiang, China. It employed an unmanned aerial vehicle (UAV) as a low-altitude monitoring platform to capture multispectral images of the cotton canopy. Spectral and textural features were then extracted, feature selection was conducted using Pearson's correlation (P), Principal Component Analysis (PCA), Multivariate Stepwise Regression (MSR), and the ReliefF algorithm (RfF), and the selected features were combined with machine learning algorithms to construct estimation models of cotton AGB. The results indicate that, among the textural features, the mean (MEA) correlates with AGB in a manner highly consistent with the corresponding spectral bands. Moreover, fusing spectral and textural features proved more stable than models using spectral or textural features alone. Both the RfF algorithm and the ANN model had an optimizing effect on the features, and their combination effectively reduced data redundancy while improving model performance. The RfF-ANN-AGB model constructed on the fused spectral and textural features worked best: using the features SIPI2, RESR, G_COR, and RE_DIS, it achieved a test-set R2 of 0.86, an RMSE of 0.23 kg·m−2, an MAE of 0.16 kg·m−2, and an nRMSE of 0.39. The findings offer a comprehensive modeling strategy for precise and rapid estimation of cotton AGB.
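
A minimal example of the correlation-based screening step: rank candidate spectral/textural features by absolute Pearson correlation with measured AGB and keep the top few. This stands in for the Pearson (P) option only; ReliefF, PCA, and MSR would be separate implementations.

```python
import numpy as np

def rank_features_by_pearson(X, y, top_k=4):
    """Rank features by |Pearson r| with AGB.
    X: (n_samples, n_features), y: (n_samples,). Returns top_k column indices."""
    Xc = X - X.mean(axis=0)                    # centre each feature
    yc = y - y.mean()                          # centre the target
    r = (Xc * yc[:, None]).sum(0) / (
        np.sqrt((Xc ** 2).sum(0) * (yc ** 2).sum()) + 1e-12)
    return np.argsort(-np.abs(r))[:top_k]      # strongest correlations first
```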

23 pages, 3139 KB  
Article
Multi-Level Feature-Refinement Anchor-Free Framework with Consistent Label-Assignment Mechanism for Ship Detection in SAR Imagery
by Yun Zhou, Sensen Wang, Haohao Ren, Junyi Hu, Lin Zou and Xuegang Wang
Remote Sens. 2024, 16(6), 975; https://doi.org/10.3390/rs16060975 - 10 Mar 2024
Cited by 15 | Viewed by 1908
Abstract
Deep learning-based ship-detection methods have recently achieved impressive results in the synthetic aperture radar (SAR) community. However, numerous challenging issues affecting ship detection, such as the multi-scale characteristics of ships, clutter interference, and densely arranged ships in complex inshore scenes, have not yet been well solved. Therefore, this article puts forward a novel SAR ship-detection method, a multi-level feature-refinement anchor-free framework with a consistent label-assignment mechanism, which is capable of boosting ship-detection performance in complex scenes. First, considering that SAR ship detection is susceptible to complex background interference, we develop a stepwise feature-refinement backbone network to refine the position and contour of the ship object. Next, we devise an adjacent feature-refined pyramid network following the backbone network. It consists of a sub-pixel sampling-based adjacent feature-fusion sub-module and an adjacent feature-localization enhancement sub-module, which improve the detection of multi-scale objects by mitigating multi-scale high-level semantic loss and enhancing low-level localization features. Finally, to address the imbalance between positive and negative samples and the detection of densely arranged ships, we propose a consistent label-assignment mechanism based on consistent feature scale constraints to assign more appropriate and consistent labels to samples. Extensive qualitative and quantitative experiments on three public datasets, i.e., the SAR Ship-Detection Dataset (SSDD), the High-Resolution SAR Image Dataset (HRSID), and the SAR-Ship-Dataset, illustrate that the proposed method is superior to many state-of-the-art SAR ship-detection methods.
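
The sub-pixel sampling-based adjacent feature fusion can be illustrated with PixelShuffle, which rearranges channels into space instead of interpolating. The sketch below is an assumption about the sub-module's shape, not the authors' exact design.

```python
import torch
import torch.nn as nn

class SubPixelAdjacentFusion(nn.Module):
    """Fuse an adjacent coarser pyramid level into a finer one using sub-pixel
    (pixel-shuffle) upsampling rather than interpolation."""
    def __init__(self, channels):
        super().__init__()
        self.expand = nn.Conv2d(channels, channels * 4, 1)  # prep for 2x shuffle
        self.shuffle = nn.PixelShuffle(2)                   # C*4 -> C, 2x spatial
        self.fuse = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, coarse, fine):
        up = self.shuffle(self.expand(coarse))  # (B, C, 2H, 2W), no interpolation
        return self.fuse(up + fine)             # blend with the finer level
```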

16 pages, 3860 KB  
Article
A Comparison of Different Data Fusion Strategies’ Effects on Maize Leaf Area Index Prediction Using Multisource Data from Unmanned Aerial Vehicles (UAVs)
by Junwei Ma, Pengfei Chen and Lijuan Wang
Drones 2023, 7(10), 605; https://doi.org/10.3390/drones7100605 - 26 Sep 2023
Cited by 6 | Viewed by 1839
Abstract
The leaf area index (LAI) is an important indicator for crop growth monitoring. This study aims to analyze the effects of different data fusion strategies on the performance of LAI prediction models using multisource images from unmanned aerial vehicles (UAVs). For this purpose, maize field experiments were conducted to obtain plants with different growth statuses, and LAI and the corresponding multispectral (MS) and RGB images were collected at different maize growth stages. Based on these data, different model design scenarios were created, including single-source image scenarios, pixel-level multisource data fusion scenarios, and feature-level multisource data fusion scenarios. Stepwise multiple linear regression (SMLR) was then used to design the LAI prediction models. Comparing the performance of the models showed that (i) combining spectral and texture features to predict LAI performs better than using only spectral or texture information; (ii) compared with using single-source images, a multisource data fusion strategy improves model performance; and (iii) among the multisource data fusion strategies, feature-level fusion performed better than pixel-level fusion. Thus, a feature-level data fusion strategy is recommended for creating maize LAI prediction models from multisource UAV images.
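
A compact sketch of stepwise multiple linear regression in the forward direction, using scikit-learn: greedily add the feature that most improves cross-validated R2 and stop when nothing helps. The CV criterion and stopping rule are illustrative choices; classical SMLR typically uses F-tests or information criteria.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def forward_stepwise(X, y, max_feats=5):
    """Greedy forward selection for a linear LAI model."""
    selected, remaining = [], list(range(X.shape[1]))
    best_score = -np.inf
    while remaining and len(selected) < max_feats:
        # Score each candidate feature added to the current set
        scores = [(cross_val_score(LinearRegression(), X[:, selected + [j]], y,
                                   cv=5, scoring='r2').mean(), j)
                  for j in remaining]
        score, j = max(scores)
        if score <= best_score:
            break                     # no candidate improves the model; stop
        best_score = score
        selected.append(j)
        remaining.remove(j)
    return selected
```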

16 pages, 3139 KB  
Article
A Lightweight Feature Distillation and Enhancement Network for Super-Resolution Remote Sensing Images
by Feng Gao, Liangliang Li, Jiawen Wang, Kaipeng Sun, Ming Lv, Zhenhong Jia and Hongbing Ma
Sensors 2023, 23(8), 3906; https://doi.org/10.3390/s23083906 - 12 Apr 2023
Cited by 4 | Viewed by 2323
Abstract
Deep network-based super-resolution (SR) has achieved great results in recent years, but the large number of parameters involved is not conducive to use on equipment with limited capabilities in real life. Therefore, we propose a lightweight feature distillation and enhancement network (FDENet). Specifically, we propose a feature distillation and enhancement block (FDEB) containing two parts: a feature-distillation part and a feature-enhancement part. First, the feature-distillation part uses a stepwise distillation operation to extract layered features; the proposed stepwise fusion mechanism (SFM) fuses the features retained after stepwise distillation to promote information flow, and a shallow pixel attention block (SRAB) is used to extract information. Second, the feature-enhancement part, composed of well-designed bilateral bands, enhances the extracted features: the upper sideband enhances the features, while the lower sideband extracts the complex background information of remote sensing images. Finally, we fuse the features of the upper and lower sidebands to enhance their expressive ability. Extensive experiments show that the proposed FDENet both uses fewer parameters and performs better than most existing advanced models.
(This article belongs to the Section Remote Sensors)
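
Loosely in the spirit of the stepwise distillation plus SFM fusion described above, the following PyTorch sketch retains the feature map produced at each distillation step and fuses the retained maps with a 1x1 convolution. The channel handling and depth are assumptions, not FDENet's actual FDEB.

```python
import torch
import torch.nn as nn

class StepwiseDistillBlock(nn.Module):
    """Retain a feature map at each distillation step, then fuse all of them."""
    def __init__(self, channels, steps=3):
        super().__init__()
        self.steps = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(steps))
        self.fuse = nn.Conv2d(channels * steps, channels, 1)  # SFM-style fusion

    def forward(self, x):
        retained = []
        for conv in self.steps:
            x = torch.relu(conv(x))
            retained.append(x)          # feature retained at this step
        return self.fuse(torch.cat(retained, dim=1))
```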

22 pages, 2706 KB  
Article
An AR Map Virtual–Real Fusion Method Based on Element Recognition
by Zhangang Wang
ISPRS Int. J. Geo-Inf. 2023, 12(3), 126; https://doi.org/10.3390/ijgi12030126 - 14 Mar 2023
Cited by 5 | Viewed by 2881
Abstract
The application of AR to explore augmented map representation has become a research hotspot, driven by the growing use of AR with maps and geographic information and by the rising demand for automated map interpretation. Taking the AR map as its research object, this paper focuses on AR map tracking and registration and on a virtual–real fusion method based on element recognition, aiming to establish a new geographic information visualization interface and application model. AR technology is applied to the augmented representation of 2D planar maps. A stepwise identification and extraction method for unmarked map elements is designed, based on an analysis of the characteristics of planar map elements. The method combines the spatial and attribute characteristics of point-like and line-like elements, extracts the color, geometric features, and spatial distribution of map elements through computer vision methods, and completes their identification and automatic extraction. A multi-target image recognition and extraction method based on template and contour matching, and a line-element recognition and extraction method based on color space and region growing, are introduced in detail. Then, 3D tracking and registration are used to realize markerless tracking and registration of planar map element images, and an AR map virtual–real fusion algorithm is proposed. Experimental results and analysis of the stepwise identification and extraction of unmarked map elements and of map virtual–real fusion show that the proposed approach is effective. Analysis of the stepwise recognition efficiency and recognition rate demonstrates that the method is fast, that its recognition efficiency meets AR real-time requirements, and that its recognition accuracy is high.
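
The color-space and region-growing extraction of line elements can be sketched as a breadth-first flood fill in HSV space: starting from a seed pixel on the element, accept neighbours whose color stays within per-channel tolerances. The seed selection and tolerance values are hypothetical; the paper's full pipeline also uses template and contour matching.

```python
import numpy as np
from collections import deque

def region_grow(hsv, seed, tol=(10, 60, 60)):
    """Grow a line-element region from a seed pixel in an HxWx3 HSV image,
    accepting 4-neighbours within per-channel tolerances of the seed colour."""
    h, w, _ = hsv.shape
    mask = np.zeros((h, w), bool)
    ref = hsv[seed].astype(int)          # seed colour as the reference
    q = deque([seed])
    while q:
        y, x = q.popleft()
        if not (0 <= y < h and 0 <= x < w) or mask[y, x]:
            continue
        if np.any(np.abs(hsv[y, x].astype(int) - ref) > tol):
            continue                      # colour too far from the seed
        mask[y, x] = True
        q.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return mask
```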

18 pages, 5011 KB  
Article
IPD-Net: Infrared Pedestrian Detection Network via Adaptive Feature Extraction and Coordinate Information Fusion
by Lun Zhou, Song Gao, Simin Wang, Hengsheng Zhang, Ruochen Liu and Jiaming Liu
Sensors 2022, 22(22), 8966; https://doi.org/10.3390/s22228966 - 19 Nov 2022
Cited by 9 | Viewed by 2646
Abstract
Infrared pedestrian detection has important theoretical research value and a wide range of application scenarios. Because of their special imaging method, infrared images can be used for pedestrian detection at night and in severe weather conditions. However, the lack of pedestrian feature information in infrared images and the small scale of pedestrian objects make it difficult for detection networks to extract feature information and accurately detect small-scale pedestrians. To address these issues, this paper proposes an infrared pedestrian detection network based on YOLOv5, named IPD-Net. First, an adaptive feature extraction module (AFEM) is designed in the backbone network, in which a residual structure with a stepwise selective kernel enables the model to better extract feature information under receptive fields of different sizes. Second, a coordinate attention feature pyramid network (CA-FPN) is designed to enhance the deep feature maps with location information through the coordinate attention module, giving the network better object localization capability. Finally, shallow information is introduced into the feature fusion network to improve the detection accuracy of weak and small objects. Experimental results on the large infrared image dataset ZUT show that the mean Average Precision (mAP50) of our model is improved by 3.6% compared with YOLOv5s. In addition, IPD-Net shows varying degrees of accuracy improvement compared with other strong methods.
(This article belongs to the Special Issue Infrared Sensing and Target Detection)
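
For reference, a minimal PyTorch rendering of coordinate attention in the style of Hou et al., which CA-FPN builds on: global pooling is factorized into two 1D pools so the resulting weights keep positional information along height and width. The reduction ratio and layer details here are common defaults, not necessarily IPD-Net's.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Attention weights from two 1D pools that preserve H and W positions."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.act = nn.ReLU()
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                       # (B, C, H, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (B, C, W, 1)
        y = self.act(self.conv1(torch.cat([x_h, x_w], dim=2)))  # shared encode
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                   # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (B, C, 1, W)
        return x * a_h * a_w
```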

15 pages, 2309 KB  
Article
Surface Soil Moisture Inversion and Distribution Based on Spatio-Temporal Fusion of MODIS and Landsat
by Sinan Wang, Wenjun Wang, Yingjie Wu and Shuixia Zhao
Sustainability 2022, 14(16), 9905; https://doi.org/10.3390/su14169905 - 10 Aug 2022
Cited by 8 | Viewed by 2023
Abstract
Soil moisture plays an important role in hydrology, climate, agriculture, and ecology, and remote sensing is one of the most important tools for estimating soil moisture over large areas. Soil moisture calculated by remote sensing inversion is affected by the uneven distribution of vegetation, so the results cannot accurately reflect the spatial distribution of soil moisture in a study area. This study analyzes the soil moisture under different vegetation covers in the Wushen Banner of Inner Mongolia, recorded in 2016, using multispectral bands fused from Landsat and MODIS images. First, we compared the ability of the visible and shortwave infrared drought index (VSDI), the normalized difference infrared index (NDII), and the shortwave infrared water stress index (SIWSI) to monitor soil moisture under different vegetation covers. Second, we used stepwise multiple regression analysis to correlate the fused multispectral bands with the field-measured soil water content and established a soil moisture inversion model based on the fused bands. The results show a strong correlation between the established model and the measured soil water content under the different vegetation covers: R2 was 0.86 for bare soil, 0.84 for partially vegetated soil, and 0.87 for highly vegetated soil. This shows that the established model can better reflect the actual condition of surface soil moisture under different vegetation covers.
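
The drought/water indices compared above are simple band arithmetic. A sketch with the published formulas follows, assuming reflectance arrays; SIWSI shares the normalized-difference form of NDII but is defined on the MODIS NIR and SWIR bands.

```python
import numpy as np

def ndii(nir, swir):
    """NDII (same form as SIWSI): (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir + 1e-8)

def vsdi(blue, red, swir):
    """VSDI: 1 - [(SWIR - blue) + (red - blue)]; lower values mean drier soil."""
    return 1.0 - ((swir - blue) + (red - blue))
```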

23 pages, 10976 KB  
Article
Stepwise Fusion of Hyperspectral, Multispectral and Panchromatic Images with Spectral Grouping Strategy: A Comparative Study Using GF5 and GF1 Images
by Leping Huang, Zhongwen Hu, Xin Luo, Qian Zhang, Jingzhe Wang and Guofeng Wu
Remote Sens. 2022, 14(4), 1021; https://doi.org/10.3390/rs14041021 - 20 Feb 2022
Cited by 14 | Viewed by 3807
Abstract
Since hyperspectral satellite images (HSIs) usually have low spatial resolution, improving their spatial resolution is an effective way to explore their potential for remote sensing applications such as land cover mapping over urban and coastal areas. Fusing HSIs with high-spatial-resolution multispectral images (MSIs) and panchromatic (PAN) images is one solution. To address the challenging task of fusing HSIs, MSIs, and PAN images, a novel, easy-to-implement stepwise fusion approach is proposed in this study. The fusion of HSIs and MSIs is decomposed into a set of simple image fusion tasks through a spectral grouping strategy, and the HSI, MSI, and PAN images are fused step by step using existing image fusion algorithms. According to the fusion order, two strategies ((HSI+MSI)+PAN and HSI+(MSI+PAN)) are proposed. Using simulated and real Gaofen-5 (GF-5) HSIs together with MSI and PAN images from the Gaofen-1 (GF-1) PMS sensor as experimental data, we compared the proposed stepwise fusion strategies with the traditional fusion strategy (HSI+PAN) and compared the performance of six fusion algorithms under the three strategies. We comprehensively evaluated the fused results in three respects: spectral fidelity, spatial fidelity, and computational efficiency. The results showed that (1) the spectral fidelity of the fused images obtained by the stepwise fusion strategies was better than that of the traditional strategy; (2) the proposed stepwise strategies achieved spatial fidelity better than or comparable to the traditional strategy; (3) the stepwise strategies did not significantly increase time complexity compared with the traditional strategy; and (4) we provide suggestions for selecting image fusion algorithms under the proposed strategies. The study provides a reference for selecting fusion strategies and algorithms in different application scenarios, as well as an easy-to-implement solution and useful references for fusing HSI, MSI, and PAN images.
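
A sketch of the spectral grouping step that decomposes HSI+MSI fusion into per-group tasks: assign each hyperspectral band to the multispectral band with the nearest spectral range, then sharpen each group against its MSI band, here with a crude Brovey-like ratio and assuming the HSI group is already resampled to the MSI grid. The study itself plugs in six published fusion algorithms instead of this toy fuser.

```python
import numpy as np

def group_bands(hsi_wavelengths, msi_ranges):
    """Assign each HSI band index to the nearest MSI band by centre wavelength.
    msi_ranges: dict like {'green': (520, 590), 'red': (630, 690)} in nm."""
    groups = {name: [] for name in msi_ranges}
    for i, wl in enumerate(hsi_wavelengths):
        name = min(msi_ranges, key=lambda n: abs(wl - np.mean(msi_ranges[n])))
        groups[name].append(i)
    return groups

def fuse_group(hsi_group, msi_band):
    """Brovey-like ratio sharpening of one spectral group (bands, H, W) by its
    co-registered high-resolution MSI band (H, W)."""
    mean = hsi_group.mean(axis=0) + 1e-8
    return hsi_group * (msi_band / mean)[None, ...]
```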

17 pages, 3564 KB  
Article
Research on Classification Model of Panax notoginseng Taproots Based on Machine Vision Feature Fusion
by Yinlong Zhu, Fujie Zhang, Lixia Li, Yuhao Lin, Zhongxiong Zhang, Lei Shi, Huan Tao and Tao Qin
Sensors 2021, 21(23), 7945; https://doi.org/10.3390/s21237945 - 28 Nov 2021
Cited by 12 | Viewed by 2838
Abstract
Existing classification methods for Panax notoginseng taproots suffer from low accuracy, low efficiency, and poor stability. In this study, a classification model based on image feature fusion is established for Panax notoginseng taproots. The taproot images collected in the experiment are preprocessed by Gaussian filtering, binarization, and morphological methods. Then, a total of 40 features are extracted, including size and shape features, HSV and RGB color features, and texture features. Using BP neural network, extreme learning machine (ELM), and support vector machine (SVM) models, the importance of color, texture, and fused features for classifying the taproots is verified. Among the three models, the SVM model performs best, achieving an accuracy of 92.037% on the prediction set. Next, iteratively retained informative variables (IRIV), the variable iterative space shrinkage approach (VISSA), and stepwise regression analysis (SRA) are used to reduce the dimensionality of the features. Finally, a traditional machine learning SVM model based on feature selection and a deep learning model based on semantic segmentation are established. With a model size of only 125 KB and a training time of 3.4 s, the IRIV-SVM model achieves an accuracy of 95.370% on the test set, so IRIV-SVM is selected as the taproot classification model for Panax notoginseng. After optimization with the grey wolf optimizer, the IRIV-GWO-SVM model achieves the highest classification accuracy of 98.704% on the test set. These results provide a basis for developing online methods for classifying Panax notoginseng of different grades in actual production.
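
The SVM stage can be reproduced in outline with scikit-learn. The sketch below uses placeholder random data in place of the 40 extracted features, with four hypothetical grade labels; the RBF kernel and C value are illustrative, not the tuned IRIV-GWO-SVM.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))        # placeholder for the 40 fused features
y = rng.integers(0, 4, size=200)      # placeholder taproot grade labels

model = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=10.0))
scores = cross_val_score(model, X, y, cv=5)
print('CV accuracy: %.3f +/- %.3f' % (scores.mean(), scores.std()))
```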

12 pages, 851 KB  
Letter
Ramie Yield Estimation Based on UAV RGB Images
by Hongyu Fu, Chufeng Wang, Guoxian Cui, Wei She and Liang Zhao
Sensors 2021, 21(2), 669; https://doi.org/10.3390/s21020669 - 19 Jan 2021
Cited by 18 | Viewed by 4874
Abstract
Timely and accurate crop growth monitoring and yield estimation are important for field management. The traditional sampling method for estimating ramie yield is destructive, so this study proposed a new method for estimating ramie yield based on field phenotypic data obtained from unmanned aerial vehicle (UAV) images. A UAV platform carrying an RGB camera was employed to collect ramie canopy images over the whole growth period. Vegetation indices (VIs), plant number, and plant height were extracted from the UAV-based images, and these data were then combined to establish yield estimation models. Among all of the UAV-based image data, the structural features (plant number and plant height) reflected ramie yield better than the spectral features, and among the structural features, plant number was the most useful index for monitoring yield, with a correlation coefficient of 0.6. By fusing multiple characteristic parameters, the yield estimation model based on multiple linear regression was markedly more accurate than the stepwise linear regression model, with a determination coefficient of 0.66 and a relative root mean square error of 1.592 kg. Our study shows that it is feasible to monitor crop growth from UAV images and that fusing phenotypic data can improve the accuracy of yield estimation.
(This article belongs to the Section Remote Sensors)
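
A toy version of the multiple-linear-regression yield model, with synthetic stand-ins for the plant number, plant height, and vegetation index features (the real study fits field-measured phenotypes):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
n = 120
plant_number = rng.normal(300, 40, n)    # placeholder structural feature
plant_height = rng.normal(1.2, 0.2, n)   # placeholder structural feature
vi = rng.normal(0.5, 0.1, n)             # placeholder vegetation index
yield_kg = 0.02 * plant_number + 2.0 * plant_height + rng.normal(0, 0.5, n)

X = np.column_stack([plant_number, plant_height, vi])
model = LinearRegression().fit(X, yield_kg)
pred = model.predict(X)
print('R2 = %.2f, RMSE = %.3f kg' % (r2_score(yield_kg, pred),
                                     mean_squared_error(yield_kg, pred) ** 0.5))
```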

21 pages, 4839 KB  
Article
STAIR 2.0: A Generic and Automatic Algorithm to Fuse Modis, Landsat, and Sentinel-2 to Generate 10 m, Daily, and Cloud-/Gap-Free Surface Reflectance Product
by Yunan Luo, Kaiyu Guan, Jian Peng, Sibo Wang and Yizhi Huang
Remote Sens. 2020, 12(19), 3209; https://doi.org/10.3390/rs12193209 - 1 Oct 2020
Cited by 19 | Viewed by 8425
Abstract
Remote sensing datasets with both high spatial and high temporal resolution are critical for monitoring and modeling the dynamics of land surfaces. However, no current satellite sensor can simultaneously achieve both high spatial resolution and high revisiting frequency. Therefore, integrating different sources of satellite data into a fusion product has become a popular way to address this challenge. Many methods have been proposed to generate synthetic images with rich spatial detail and high temporal frequency by combining two types of satellite datasets, usually frequent coarse-resolution images (e.g., MODIS) and sparse fine-resolution images (e.g., Landsat). In this paper, we introduce STAIR 2.0, a new fusion method that extends the previous STAIR fusion framework to fuse three types of satellite datasets: MODIS, Landsat, and Sentinel-2. In STAIR 2.0, input images are first processed with a gap-filling algorithm to impute missing-value pixels caused by clouds or sensor mechanical issues. The multiple refined time series are then integrated stepwise, from coarse to fine and high resolution, ultimately providing synthetic daily, high-resolution surface reflectance observations. We applied STAIR 2.0 to generate a 10 m, daily, cloud-/gap-free time series covering the 2017 growing season of Saunders County, Nebraska. Moreover, the framework is generic and can be extended to integrate more types of satellite data sources, further improving the quality of the fusion product.
(This article belongs to the Section Remote Sensing Image Processing)
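
The gap-filling step can be illustrated per pixel as simple temporal interpolation over cloud-masked observations; STAIR's actual algorithm is more elaborate, so treat this as a placeholder for the idea.

```python
import numpy as np

def fill_gaps(series, dates):
    """Linearly interpolate one pixel's reflectance time series over time;
    np.nan marks cloud- or sensor-gap observations."""
    ok = ~np.isnan(series)
    return np.interp(dates, dates[ok], series[ok])

# Example: three cloudy observations filled from their temporal neighbours
t = np.arange(10.0)
refl = np.array([0.10, np.nan, 0.14, 0.15, np.nan, np.nan,
                 0.20, 0.21, np.nan, 0.25])
print(fill_gaps(refl, t))
```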

15 pages, 6748 KB  
Article
Influence of Artificially Generated Interocular Blur Difference on Fusion Stability Under Vergence Stress
by Miroslav Dostalek, Jan Hejda, Karel Fliegel, Michaela Duchackova, Ladislav Dusek, Jiri Hozman, Tomas Lukes and Rudolf Autrata
J. Eye Mov. Res. 2019, 12(4), 1-15; https://doi.org/10.16910/jemr.12.4.4 - 11 Sep 2019
Cited by 2 | Viewed by 174
Abstract
The stability of binocular fusion was evaluated by its breakage when interocular blur differences were presented to healthy subjects under vergence demand. We presumed that these blur differences cause suppression of the more blurred image (interocular blur suppression, IOBS), disrupting binocular fusion so that the suppressed eye leaves its forced vergent position. During dichoptic presentation of static grayscale images of natural scenes, the luminance contrast (mode B), the higher-spatial-frequency content (mode C), or both (mode A) were reduced stepwise in the image presented to the non-dominant eye. We studied the effect of these types of blur on fusion stability at various levels of vergence demand. Under divergence demand, fusion was disrupted by approximately half the blur needed under convergence. The various modes of blur influenced fusion differently: mode C (isolated reduction of higher-spatial-frequency content) broke fusion under the lowest vergence demand significantly more than isolated or combined reduction of luminance contrast (modes B and A). According to our results, an image's details (i.e., its higher-spatial-frequency content) protect binocular fusion from disruption at the lowest vergence demand.
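
The three degradation modes can be emulated on a grayscale image with contrast scaling (mode B), a Gaussian low-pass that removes higher spatial frequencies (mode C), or both (mode A). The SciPy sketch below is an assumption about the stimulus manipulation, not the authors' stimulus code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(img, contrast=1.0, sigma=0.0):
    """Reduce luminance contrast (mode B), higher-SF content (mode C), or both
    (mode A). img: 2D grayscale array; contrast in (0, 1]; sigma in pixels."""
    out = img.mean() + contrast * (img - img.mean())  # contrast scaling about mean
    if sigma > 0:
        out = gaussian_filter(out, sigma)             # low-pass removes fine detail
    return out
```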
