Search Results (99)

Search Parameters:
Keywords = GeoEye

19 pages, 13882 KiB  
Article
Effect of CdO on the Structural and Spectroscopic Properties of Germanium–Tellurite Glass
by Iveth Viridiana García Amaya, David Alejandro Rodríguez Carvajal, Josefina Alvarado-Rivera, R. Lozada-Morales, Paula Cristina Santos-Munguía, Juan José Palafox Reyes, Pedro Hernández-Abril, Gloria Alicia Limón Reynosa and Ma. Elena Zayas
Materials 2025, 18(8), 1739; https://doi.org/10.3390/ma18081739 - 10 Apr 2025
Viewed by 510
Abstract
New glasses in the xCdO-(90 − x)TeO2-10GeO2 system were obtained by the conventional melt-quenching process at 900 °C. The glasses were transparent to the naked eye. The diffraction patterns indicate that the samples were mostly amorphous, except for the CdO-rich glasses, in which the formation of nanocrystals of CdO and Cd3TeO6 was identified. Raman spectroscopy analysis of the samples displayed the existence of TeO3, TeO3+1, TeO4, and GeO4 structural units within the glass matrix. The optical band gap of the glass samples was determined by optical absorption spectroscopy using the Tauc method. Depending on the relative content of TeO2, their values varied in the range of 2.32–2.86 eV. The refractive index was obtained from the band gap values. The XPS measurements showed that the Ge 3d and O 1s peaks and the Te 3d5/2/Te 3d3/2 and Cd 3d5/2/Cd 3d3/2 doublets shifted to higher binding energy values as the amount of TeO2 was increased. The binding energy values of the Te 3d doublet are related to the TeO4 and TeO3 groups.
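For reference, the Tauc analysis named in the abstract extracts the optical band gap from the absorption edge; the sketch below states the commonly used form for indirect allowed transitions together with one widely used empirical band-gap-to-refractive-index relation (Dimitrov–Sakka). The specific transition exponent and the refractive-index relation actually used by the authors are assumptions here, not confirmed by the abstract.

```latex
% Tauc relation (indirect allowed transitions assumed; the exponent is 2 for direct transitions):
\[
  (\alpha h\nu)^{1/2} = B\,(h\nu - E_g),
\]
% so E_g is read off where the linear fit of (\alpha h\nu)^{1/2} versus h\nu crosses zero.
% One common empirical estimate of the refractive index n from E_g (Dimitrov--Sakka):
\[
  \frac{n^2 - 1}{n^2 + 2} = 1 - \sqrt{\frac{E_g}{20\,\mathrm{eV}}}.
\]
```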

29 pages, 8824 KiB  
Article
Toward Reliable Post-Disaster Assessment: Advancing Building Damage Detection Using You Only Look Once Convolutional Neural Network and Satellite Imagery
by César Luis Moreno González, Germán A. Montoya and Carlos Lozano Garzón
Mathematics 2025, 13(7), 1041; https://doi.org/10.3390/math13071041 - 23 Mar 2025
Viewed by 778
Abstract
Natural disasters continuously threaten populations worldwide, with hydrometeorological events standing out due to their unpredictability, rapid onset, and significant destructive capacity. However, developing countries often face severe budgetary constraints and rely heavily on international support, limiting their ability to implement optimal disaster response strategies. This study addresses these challenges by developing and implementing YOLOv8-based deep learning models trained on high-resolution satellite imagery from the Maxar GeoEye-1 satellite. Unlike prior studies, we introduce a manually labeled dataset, consisting of 1400 undamaged and 1200 damaged buildings, derived from pre- and post-Hurricane Maria imagery. This dataset has been publicly released, providing a benchmark for future disaster assessment research. Additionally, we conduct a systematic evaluation of optimization strategies, comparing SGD with momentum, RMSProp, Adam, AdaMax, NAdam, and AdamW. Our results demonstrate that SGD with momentum outperforms Adam-based optimizers in training stability, convergence speed, and reliability across higher confidence thresholds, leading to more robust and consistent disaster damage predictions. To enhance usability, we propose deploying the trained model via a REST API, enabling real-time damage assessment with minimal computational resources, making it a low-cost, scalable tool for government agencies and humanitarian organizations. These findings contribute to machine learning-based disaster response, offering an efficient, cost-effective framework for large-scale damage assessment and reinforcing the importance of model selection, hyperparameter tuning, and optimization functions in critical real-world applications.
(This article belongs to the Special Issue Mathematical Methods and Models Applied in Information Technology)
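As a rough illustration of the optimizer comparison described above, the sketch below configures an Ultralytics YOLOv8 training run with SGD plus momentum on a building-damage dataset. The dataset YAML, checkpoint size, and hyperparameter values are placeholders, not the authors' settings.

```python
# Minimal sketch (assumed setup): fine-tune YOLOv8 with SGD + momentum on a
# two-class (undamaged/damaged building) dataset described by a hypothetical YAML.
from ultralytics import YOLO

model = YOLO("yolov8m.pt")  # pretrained checkpoint; model size is a placeholder

model.train(
    data="hurricane_buildings.yaml",  # hypothetical dataset file (train/val paths, 2 classes)
    epochs=100,
    imgsz=640,
    optimizer="SGD",   # the abstract reports SGD with momentum outperforming Adam variants
    lr0=0.01,          # placeholder initial learning rate
    momentum=0.937,    # placeholder momentum value
)

# Evaluate at a higher confidence threshold, where the abstract reports SGD was more reliable.
metrics = model.val(conf=0.5)
print(metrics.box.map50)  # mAP@0.5 on the validation split
```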

28 pages, 28459 KiB  
Article
Multi-Temporal Remote Sensing Satellite Data Analysis for the 2023 Devastating Flood in Derna, Northern Libya
by Roman Shults, Ashraf Farahat, Muhammad Usman and Md Masudur Rahman
Remote Sens. 2025, 17(4), 616; https://doi.org/10.3390/rs17040616 - 11 Feb 2025
Viewed by 1537
Abstract
Floods are considered to be among the most dangerous and destructive geohazards, leading to human victims and severe economic outcomes. Yearly, many regions around the world suffer from devastating floods. The estimation of flood aftermaths is one of the high priorities for the global community. One such flood took place in northern Libya in September 2023. The presented study is aimed at evaluating the flood aftermath for Derna city, Libya, using high-resolution GeoEye-1 and Sentinel-2 satellite imagery in the Google Earth Engine environment. The primary task is obtaining and analyzing data that provide high accuracy and detail for the study region. The main objective of the study is to explore the capabilities of different algorithms and remote sensing datasets for quantitative change estimation after the flood. Different supervised classification methods were examined, including random forest, support vector machine, naïve Bayes, and classification and regression tree (CART), with various sets of classification hyperparameters considered. The high-resolution GeoEye-1 images were used for precise change detection using image differencing (pixel-to-pixel comparison) and geographic object-based image analysis (GEOBIA) for extracting buildings, whereas Sentinel-2 data were employed for the classification and further change detection based on the classified images. Object-based image analysis (OBIA) was also performed for the extraction of building footprints from very high-resolution GeoEye images to quantify the buildings that collapsed due to the flood. The first stage of the study was the development of a workflow for data analysis, which includes three parallel processes. High-resolution GeoEye-1 images of Derna city were investigated with change detection algorithms. In addition, different indices (normalized difference vegetation index (NDVI), soil-adjusted vegetation index (SAVI), transformed NDVI (TNDVI), and normalized difference moisture index (NDMI)) were calculated to facilitate the recognition of damaged regions. In the final stage, the analysis results were fused to obtain the damage estimation for the studied region. As the main output, the area changes for the primary classes and the maps that portray these changes were obtained. Recommendations for data usage and further processing in Google Earth Engine were developed.
(This article belongs to the Special Issue Image Processing from Aerial and Satellite Imagery)
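To make the index and classification steps concrete, the sketch below computes NDVI and NDMI from a Sentinel-2 composite and trains a random forest classifier in the Google Earth Engine Python API. The area box, band choices, date window, and training asset are placeholders; the paper's actual workflow (including the GeoEye-1 change detection and GEOBIA steps) is more involved.

```python
# Minimal Google Earth Engine sketch (assumed assets and dates, not the paper's exact setup).
import ee

ee.Initialize()

aoi = ee.Geometry.Rectangle([22.55, 32.72, 22.70, 32.80])  # rough box around Derna (placeholder)

s2 = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
      .filterBounds(aoi)
      .filterDate("2023-09-15", "2023-10-15")
      .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
      .median())

ndvi = s2.normalizedDifference(["B8", "B4"]).rename("NDVI")   # (NIR - Red) / (NIR + Red)
ndmi = s2.normalizedDifference(["B8", "B11"]).rename("NDMI")  # (NIR - SWIR1) / (NIR + SWIR1)

stack = s2.select(["B2", "B3", "B4", "B8", "B11"]).addBands(ndvi).addBands(ndmi)

# 'training_points' would be a FeatureCollection of labeled samples with a 'class' property.
training_points = ee.FeatureCollection("users/your_account/derna_training")  # hypothetical asset
samples = stack.sampleRegions(collection=training_points, properties=["class"], scale=10)

classifier = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
    features=samples, classProperty="class", inputProperties=stack.bandNames())

classified = stack.classify(classifier)  # per-pixel land cover map used for change detection
```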

24 pages, 9850 KiB  
Article
RTAPM: A Robust Top-View Absolute Positioning Method with Visual–Inertial Assisted Joint Optimization
by Pengfei Tong, Xuerong Yang, Xuanzhi Peng and Longfei Wang
Drones 2025, 9(1), 37; https://doi.org/10.3390/drones9010037 - 7 Jan 2025
Viewed by 1111
Abstract
In challenging environments such as disaster aid or forest rescue, unmanned aerial vehicles (UAVs) have been hampered by inconsistent or even denied global navigation satellite system (GNSS) signals, resulting in UAVs becoming incapable of operating normally. Currently, there is no UAV positioning method capable of substituting for or temporarily replacing GNSS positioning. This study proposes a reliable UAV top-down absolute positioning method (RTAPM) based on a monocular RGB camera that employs joint optimization and visual–inertial assistance. The proposed method uses a bird’s-eye view monocular RGB camera to estimate the UAV’s moving position. By comparing real-time aerial images with pre-existing satellite images of the flight area, using components such as template geo-registration, UAV motion constraints, point–line image matching, and joint state estimation, the method provides a substitute for satellite positioning and obtains short-term absolute positioning information for UAVs in challenging and dynamic environments. Based on two open-source datasets and real-time flight experimental tests, the method proposed in this study has significant advantages in positioning accuracy and system robustness over existing typical UAV absolute positioning methods, and it can temporarily replace GNSS for application in challenging environments such as disaster aid or forest rescue.
(This article belongs to the Special Issue Autonomous Drone Navigation in GPS-Denied Environments)
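The template geo-registration step described above can be pictured with a simplified OpenCV sketch: a downward-looking UAV frame is matched against a georeferenced satellite tile, and the best-matching offset is converted to a coarse absolute position. This is a toy stand-in for the paper's joint visual–inertial optimization; the tile georeferencing values and file names are assumptions.

```python
# Simplified illustration (not the authors' RTAPM pipeline): template matching of a UAV
# bird's-eye frame against a georeferenced satellite reference tile.
import cv2

sat = cv2.imread("satellite_tile.png", cv2.IMREAD_GRAYSCALE)   # hypothetical reference tile
uav = cv2.imread("uav_frame.png", cv2.IMREAD_GRAYSCALE)        # hypothetical rescaled UAV frame

# Assume the UAV frame has already been rotated/scaled to the reference resolution.
result = cv2.matchTemplate(sat, uav, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

# Placeholder georeferencing of the tile: upper-left corner (lon, lat) and degrees per pixel.
tile_origin = (22.60, 32.78)
deg_per_px = 2.7e-6

col, row = max_loc  # top-left corner of the best match in the satellite tile
center_px = (col + uav.shape[1] / 2.0, row + uav.shape[0] / 2.0)
lon = tile_origin[0] + center_px[0] * deg_per_px
lat = tile_origin[1] - center_px[1] * deg_per_px

print(f"match score {max_val:.2f}, estimated position lon={lon:.6f}, lat={lat:.6f}")
```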

19 pages, 15853 KiB  
Article
Combining OpenStreetMap with Satellite Imagery to Enhance Cross-View Geo-Localization
by Yuekun Hu, Yingfan Liu and Bin Hui
Sensors 2025, 25(1), 44; https://doi.org/10.3390/s25010044 - 25 Dec 2024
Cited by 1 | Viewed by 2019
Abstract
Cross-view geo-localization (CVGL) aims to determine the capture location of street-view images by matching them with corresponding 2D maps, such as satellite imagery. While recent bird’s eye view (BEV)-based methods have advanced this task by addressing viewpoint and appearance differences, the existing approaches typically rely solely on either OpenStreetMap (OSM) data or satellite imagery, limiting localization robustness due to single-modality constraints. This paper presents a novel CVGL method that fuses OSM data with satellite imagery, leveraging their complementary strengths to enhance localization robustness. We integrate the semantic richness and structural information from OSM with the high-resolution visual details of satellite imagery, creating a unified 2D geospatial representation. Additionally, we employ a transformer-based BEV perception module that utilizes attention mechanisms to construct fine-grained BEV features from street-view images for matching with fused map features. Compared to state-of-the-art methods that utilize only OSM data, our approach achieves substantial improvements, with 12.05% and 12.06% recall enhancements on the KITTI benchmark for lateral and longitudinal localization within a 1-m error, respectively.
(This article belongs to the Section Navigation and Positioning)
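The evaluation metric referenced above (recall within a 1 m error, reported separately for the lateral and longitudinal directions) can be sketched as follows; the array names and the assumption that errors are already expressed in the vehicle frame are illustrative, not taken from the paper.

```python
# Minimal sketch of recall@1 m for lateral/longitudinal localization errors.
import numpy as np

def recall_within(errors_m: np.ndarray, threshold_m: float = 1.0) -> float:
    """Fraction of queries whose absolute localization error is below the threshold."""
    return float(np.mean(np.abs(errors_m) < threshold_m))

# Hypothetical per-query errors (metres) in the vehicle frame.
lateral_err = np.array([0.3, 1.4, 0.8, 0.2, 2.1])
longitudinal_err = np.array([0.9, 0.5, 1.8, 0.4, 0.7])

print("lateral recall@1m:", recall_within(lateral_err))
print("longitudinal recall@1m:", recall_within(longitudinal_err))
```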

29 pages, 138770 KiB  
Article
Regional-Scale Detection of Palms Using VHR Satellite Imagery and Deep Learning in the Guyanese Rainforest
by Matthew J. Drouillard and Anthony R. Cummings
Remote Sens. 2024, 16(24), 4642; https://doi.org/10.3390/rs16244642 - 11 Dec 2024
Cited by 1 | Viewed by 1059
Abstract
Arecaceae (palms) play a crucial role for native communities and wildlife in the Amazon region. This study presents a first-of-its-kind regional-scale spatial cataloging of palms using remotely sensed data for the country of Guyana. Using very high-resolution satellite images from the GeoEye-1 and WorldView-2 sensor platforms, which collectively cover an area of 985 km2, a total of 472,753 individual palm crowns are detected with F1 scores of 0.76 and 0.79, respectively, using a convolutional neural network (CNN) instance segmentation model. An example of CNN model transference between images is presented, emphasizing the limitations and practical applications of this approach. A method is presented to optimize precision and recall using the confidence of the detection features; this results in decreases of 45% and 31% in false positive detections, with a moderate increase in false negative detections. The sensitivity of the CNN model to the size of the training set is evaluated, showing that comparable metrics could be achieved with approximately 50% of the samples used in this study. Finally, the diameter of the palm crown is calculated based on the polygon identified by mask detection, resulting in an average of 7.83 m, a standard deviation of 1.05 m, and a range of 4.62–13.90 m for the GeoEye-1 image. Similarly, for the WorldView-2 image, the average diameter is 8.08 m, with a standard deviation of 0.70 m and a range of 4.82–15.80 m.
(This article belongs to the Special Issue Deep Learning Techniques Applied in Remote Sensing)
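One way to read the crown-diameter step above: each detected mask polygon can be reduced to an equivalent-area circle whose diameter summarizes crown size. The sketch below uses Shapely for this; treating the diameter as the equivalent-circle diameter (rather than, say, the longest axis) is an assumption, as are the coordinates.

```python
# Sketch: equivalent-circle crown diameter from an instance-segmentation mask polygon.
# Assumes polygon vertices are in a projected CRS with metre units.
import math
from shapely.geometry import Polygon

def crown_diameter_m(vertices):
    """Diameter of the circle with the same area as the crown polygon."""
    area = Polygon(vertices).area  # m^2
    return 2.0 * math.sqrt(area / math.pi)

# Hypothetical crown polygon (UTM-style coordinates in metres).
crown = [(500000.0, 200000.0), (500006.5, 200001.0), (500007.0, 200007.5), (500000.5, 200006.8)]
print(f"estimated crown diameter: {crown_diameter_m(crown):.2f} m")
```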

21 pages, 57724 KiB  
Article
MDSCNN: Remote Sensing Image Spatial–Spectral Fusion Method via Multi-Scale Dual-Stream Convolutional Neural Network
by Wenqing Wang, Fei Jia, Yifei Yang, Kunpeng Mu and Han Liu
Remote Sens. 2024, 16(19), 3583; https://doi.org/10.3390/rs16193583 - 26 Sep 2024
Cited by 2 | Viewed by 1923
Abstract
Pansharpening refers to enhancing the spatial resolution of multispectral images through panchromatic images while preserving their spectral features. However, existing traditional methods or deep learning methods always have certain distortions in the spatial or spectral dimensions. This paper proposes a remote sensing spatial–spectral fusion method based on a multi-scale dual-stream convolutional neural network, which includes feature extraction, feature fusion, and image reconstruction modules for each scale. For feature fusion, we propose a multi-cascade module to better fuse image features. We also design a new loss function aimed at enhancing the consistency between fused images and reference images in terms of spatial details and spectral information. To validate its effectiveness, we conduct thorough experimental analyses on two widely used remote sensing datasets: GeoEye-1 and Ikonos. Compared with nine leading pansharpening techniques, the proposed method demonstrates superior performance in multiple key evaluation metrics.
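As an illustration of the kind of loss described above (joint spatial-detail and spectral-consistency terms), here is a small PyTorch sketch combining an L1 spectral term with a gradient-difference spatial term; the exact terms and weighting in the paper are not given in the abstract, so this formulation and the weight `alpha` are assumptions.

```python
# Sketch of a combined spectral + spatial pansharpening loss (assumed form, not the paper's).
import torch
import torch.nn.functional as F

def spatial_spectral_loss(fused: torch.Tensor, reference: torch.Tensor, alpha: float = 0.5):
    """fused, reference: (N, C, H, W) tensors. Returns weighted spectral + spatial loss."""
    # Spectral consistency: per-band L1 difference to the reference image.
    spectral = F.l1_loss(fused, reference)

    # Spatial detail: L1 difference of horizontal and vertical image gradients.
    def grads(x):
        return x[..., :, 1:] - x[..., :, :-1], x[..., 1:, :] - x[..., :-1, :]

    fx, fy = grads(fused)
    rx, ry = grads(reference)
    spatial = F.l1_loss(fx, rx) + F.l1_loss(fy, ry)

    return spectral + alpha * spatial

# Example usage with random tensors standing in for network output and reference.
fused = torch.rand(2, 4, 64, 64, requires_grad=True)
reference = torch.rand(2, 4, 64, 64)
loss = spatial_spectral_loss(fused, reference)
loss.backward()
```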

20 pages, 8709 KiB  
Article
Automatic Fine Co-Registration of Datasets from Extremely High Resolution Satellite Multispectral Scanners by Means of Injection of Residues of Multivariate Regression
by Luciano Alparone, Alberto Arienzo and Andrea Garzelli
Remote Sens. 2024, 16(19), 3576; https://doi.org/10.3390/rs16193576 - 25 Sep 2024
Cited by 3 | Viewed by 1217
Abstract
This work presents two pre-processing patches to automatically correct the residual local misalignment of datasets acquired by very/extremely high resolution (VHR/EHR) satellite multispectral (MS) scanners: one for, e.g., GeoEye-1 and Pléiades, featuring two separate instruments for MS and panchromatic (Pan) data, and the other for WorldView-2/3, featuring three instruments, two of which are visible and near-infrared (VNIR) MS scanners. The misalignment arises because the two/three instruments onboard GeoEye-1/WorldView-2 (four onboard WorldView-3) share the same optics and, thus, cannot have parallel optical axes. Consequently, they image the same swath area from different positions along the orbit. Local height changes (hills, buildings, trees, etc.) cause local shifts among corresponding points in the datasets. The latter would be accurately aligned only if the digital elevation surface model were known with sufficient spatial resolution, which is hardly feasible everywhere because of the extremely high resolution, with Pan pixels of less than 0.5 m. The refined co-registration is achieved by injecting the residue of the multivariate linear regression of each scanner towards lowpass-filtered Pan. Experiments with two and three instruments show that an almost perfect alignment is achieved. MS pansharpening is also shown to greatly benefit from the improved alignment. The proposed alignment procedures are real-time, fully automated, and do not require any additional or ancillary information, but rely solely on the unimodality of the MS and Pan sensors.
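The residue-injection idea can be sketched in NumPy under a simplified reading of the abstract: the lowpass-filtered Pan is regressed as a multivariate linear combination of the co-located MS bands, and the regression residue is then injected back into the bands. The regression direction, the residue weighting, and the toy data are all assumptions; the paper's actual procedure may differ.

```python
# Heavily simplified reading of the abstract (assumed direction of regression): fit the
# lowpass-filtered Pan as a multivariate linear combination of the co-located MS bands,
# then inject the regression residue back into each MS band to refine co-registration.
import numpy as np

def inject_regression_residue(ms: np.ndarray, pan_lowpass: np.ndarray) -> np.ndarray:
    """ms: (bands, H, W) resampled to the Pan grid; pan_lowpass: (H, W)."""
    bands, h, w = ms.shape
    X = np.column_stack([ms.reshape(bands, -1).T, np.ones(h * w)])  # MS samples + bias term
    y = pan_lowpass.ravel()
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)                  # multivariate regression
    residue = (y - X @ coeffs).reshape(h, w)                        # detail not explained by MS
    return ms + residue[None, :, :]                                 # inject residue into each band

# Toy data standing in for one instrument's MS bands and the lowpass-filtered Pan.
rng = np.random.default_rng(0)
ms = rng.random((4, 128, 128))
pan_lp = 0.25 * ms.sum(axis=0) + 0.05 * rng.random((128, 128))
ms_refined = inject_regression_residue(ms, pan_lp)
```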

19 pages, 25201 KiB  
Technical Note
Disparity Refinement for Stereo Matching of High-Resolution Remote Sensing Images Based on GIS Data
by Xuanqi Wang, Liting Jiang, Feng Wang, Hongjian You and Yuming Xiang
Remote Sens. 2024, 16(3), 487; https://doi.org/10.3390/rs16030487 - 26 Jan 2024
Cited by 5 | Viewed by 2463
Abstract
With the emergence of the Smart City concept, the rapid advancement of urban three-dimensional (3D) reconstruction becomes imperative. While current developments in the field of 3D reconstruction have enabled the generation of 3D products such as Digital Surface Models (DSM), challenges persist in accurately reconstructing shadows, handling occlusions, and addressing low-texture areas in very-high-resolution remote sensing images. These challenges often lead to difficulties in calculating satisfactory disparity maps using existing stereo matching methods, thereby reducing the accuracy of 3D reconstruction. This issue is particularly pronounced in urban scenes, which contain numerous super high-rise and densely distributed buildings, resulting in large disparity values and occluded regions in stereo image pairs, and further leading to a large number of mismatched points in the obtained disparity map. In response to these challenges, this paper proposes a method to refine the disparity in urban scenes based on open-source GIS data. First, we register the GIS data with the epipolar-rectified images, since there always exist non-negligible geolocation errors between them. Specifically, buildings with different heights present different offsets in GIS data registration; thus, we perform multi-modal matching for each building and merge the results into the final building mask. Subsequently, a two-layer optimization process is applied to the initial disparity map based on the building mask, encompassing both global and local optimization. Finally, we perform a post-correction on the building facades to obtain the final refined disparity map that can be employed for high-precision 3D reconstruction. Experimental results on SuperView-1, GaoFen-7, and GeoEye satellite images show that the proposed method is able to correct the occluded and mismatched areas in the initial disparity map generated by both hand-crafted and deep-learning stereo matching methods. The DSM generated by the refined disparity reduces the average height error from 2.2 m to 1.6 m, which demonstrates superior performance compared with other disparity refinement methods. Furthermore, the proposed method improves the integrity of the target structure and presents steeper building facades and complete roofs, which are conducive to subsequent 3D model generation.
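As a toy stand-in for the refinement idea (not the paper's two-layer global/local optimization), the sketch below fills invalid disparities inside each GIS-derived building footprint with the median of that building's valid disparities; the connected-component treatment of the mask and the NaN convention for mismatches are assumptions.

```python
# Toy stand-in for the refinement idea: inside each building footprint from the GIS-derived
# mask, replace invalid/mismatched disparities with the median of that building's valid values.
import numpy as np
from scipy import ndimage

def refine_disparity(disparity: np.ndarray, building_mask: np.ndarray) -> np.ndarray:
    """disparity: (H, W) float map with NaN for mismatched pixels; building_mask: (H, W) bool."""
    refined = disparity.copy()
    labels, n = ndimage.label(building_mask)          # connected components = individual buildings
    for b in range(1, n + 1):
        region = labels == b
        valid = region & ~np.isnan(refined)
        if valid.any():
            fill = np.median(refined[valid])          # robust per-building disparity estimate
            refined[region & np.isnan(refined)] = fill
    return refined

# Hypothetical inputs.
disp = np.random.rand(100, 100) * 50
disp[40:60, 40:60] = np.nan                           # a mismatched/occluded patch
mask = np.zeros((100, 100), dtype=bool)
mask[35:65, 35:65] = True                             # building footprint from GIS data
disp_refined = refine_disparity(disp, mask)
```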

20 pages, 6425 KiB  
Article
Evaluating the Usability of a Gaze-Adaptive Approach for Identifying and Comparing Raster Values between Multilayers
by Changbo Zhang, Hua Liao, Yongbo Huang and Weihua Dong
ISPRS Int. J. Geo-Inf. 2023, 12(10), 412; https://doi.org/10.3390/ijgi12100412 - 8 Oct 2023
Cited by 1 | Viewed by 2238
Abstract
Raster maps provide intuitive visualizations of remote sensing data representing various phenomena on the Earth’s surface. Reading raster maps with intricate information requires a high cognitive workload, especially when it is necessary to identify and compare values between multiple layers. In traditional methods, users need to repeatedly move their mouse and switch their visual focus between the map content and the legend to interpret various grid value meanings. Such methods are ineffective and may lead to the loss of visual context for users. In this research, we explore the potential benefits and drawbacks of gaze-adaptive interactions when interpreting raster maps, focusing on the usability of low-cost eye trackers for gaze-based interaction. We designed two gaze-adaptive methods, gaze-fixed and gaze-dynamic adaptation, for identifying and comparing raster values between multilayers. In both methods, the grid content of different layers is adaptively adjusted depending on the user’s visual focus. We then conducted a user experiment comparing these adaptation methods with a mouse dynamic adaptation method and a traditional method. Thirty-one participants (n = 31) were asked to complete a series of single-layer identification and multilayer comparison tasks. The results indicated that although gaze interaction with adaptive legends confused participants in single-layer identification, it improved multilayer comparison efficiency and effectiveness. The gaze-adaptive approach was well received by the participants overall, but was also perceived to be distracting and insensitive. By analyzing the participants’ eye movement data, we found that the different methods exhibited significant differences in visual behaviors. The results are helpful for future gaze-driven adaptation research in (geo)visualization.

26 pages, 11195 KiB  
Article
Assessing and Enhancing Predictive Efficacy of Machine Learning Models in Urban Land Dynamics: A Comparative Study Using Multi-Resolution Satellite Data
by Mohammadreza Safabakhshpachehkenari and Hideyuki Tonooka
Remote Sens. 2023, 15(18), 4495; https://doi.org/10.3390/rs15184495 - 12 Sep 2023
Cited by 5 | Viewed by 3017
Abstract
Reliable and accurate land-use/land-cover maps are vital for monitoring and mitigating urbanization impacts. This necessitates evaluating machine learning simulations and incorporating valuable insights. We used four primary models, logistic regression (LR), support vector machine, random decision forests, and artificial neural network (ANN), to simulate land cover maps for Tsukuba City, Japan. We incorporated an auxiliary input that used multinomial logistic regression to enhance the ANN and obtained a fifth model (the ANN was run twice, with and without the new input). Additionally, we developed a sixth simulation by integrating the predictions of the ANN and LR using a fuzzy overlay, wherein the ANN had the additional new input alongside the driving forces. The study thus employed six models, using classified maps at three different resolutions: 15 m (ASTER), covering a study area of 114.8 km2, and 5 m and 0.5 m (derived from WorldView-2 and GeoEye-1), covering a study area of 14.8 km2; the models were then evaluated. Due to a synergistic effect, the sixth simulation demonstrated the highest kappa for all three datasets: 86.39%, 72.65%, and 70.65%, respectively. The results indicate that stand-alone machine learning-based simulations achieved satisfactory accuracy, and minimalistic approaches can be employed to improve their performance.
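The fuzzy overlay used to combine the ANN and LR predictions can be illustrated with the common fuzzy gamma operator, sketched below in NumPy; whether the paper uses the gamma operator (and which gamma value) is not stated in the abstract, so both are assumptions.

```python
# Sketch of a fuzzy gamma overlay of two class-membership (probability) rasters.
# FuzzySum = 1 - (1 - a)(1 - b); FuzzyProduct = a * b; gamma blends the two.
import numpy as np

def fuzzy_gamma(a: np.ndarray, b: np.ndarray, gamma: float = 0.9) -> np.ndarray:
    """a, b: membership rasters in [0, 1] (e.g., ANN and LR urban probabilities)."""
    fuzzy_sum = 1.0 - (1.0 - a) * (1.0 - b)
    fuzzy_product = a * b
    return fuzzy_sum ** gamma * fuzzy_product ** (1.0 - gamma)

# Hypothetical per-pixel urban probabilities from the two models.
p_ann = np.random.rand(256, 256)
p_lr = np.random.rand(256, 256)
combined = fuzzy_gamma(p_ann, p_lr, gamma=0.9)
urban_mask = combined > 0.5  # threshold is also an assumption
```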

15 pages, 6543 KiB  
Article
Intelligent Identification Method of Geographic Origin for Chinese Wolfberries Based on Color Space Transformation and Texture Morphological Features
by Jiawang He, Tianshu Wang, Hui Yan, Sheng Guo, Kongfa Hu, Xichen Yang, Chenlu Ma and Jinao Duan
Foods 2023, 12(13), 2541; https://doi.org/10.3390/foods12132541 - 29 Jun 2023
Cited by 11 | Viewed by 1725
Abstract
Geographic origins play a vital role in traditional Chinese medicinal materials, as using the geo-authentic crude drug can improve the curative effect. The main producing areas of Chinese wolfberry include Ningxia, Gansu, and Qinghai. The geographic origin of Chinese wolfberry can affect its texture, shape, color, smell, nutrients, etc. However, the traditional method for identifying the geographic origin of Chinese wolfberries still relies on visual inspection. To efficiently identify Chinese wolfberries from different origins, this paper presents an intelligent identification method for Chinese wolfberries based on color space transformation and texture morphological features. The first step is to prepare the Chinese wolfberry samples and collect the image data. The images are then preprocessed, and the texture and morphology features of single-wolfberry images are extracted. Finally, the random forest algorithm is employed to establish a model of the geographic origin of Chinese wolfberries. The proposed method can accurately predict the origin information of a single wolfberry image and has the advantages of low cost, fast recognition speed, high recognition accuracy, and no damage to the sample.
(This article belongs to the Section Food Analytical Methods)
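A minimal version of the final classification step, assuming the per-image texture/morphology features have already been extracted into a table, is sketched below with scikit-learn; the feature count, class labels, and split are placeholders.

```python
# Sketch: random forest origin classifier on precomputed wolfberry image features.
# Feature extraction (color space transformation, texture, morphology) is assumed done upstream.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.random((300, 12))            # placeholder: 12 texture/morphology features per berry image
y = rng.integers(0, 3, size=300)     # placeholder origins: 0 = Ningxia, 1 = Gansu, 2 = Qinghai

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```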

21 pages, 11132 KiB  
Article
The Effectiveness of Pan-Sharpening Algorithms on Different Land Cover Types in GeoEye-1 Satellite Images
by Emanuele Alcaras and Claudio Parente
J. Imaging 2023, 9(5), 93; https://doi.org/10.3390/jimaging9050093 - 30 Apr 2023
Cited by 5 | Viewed by 3029
Abstract
In recent years, the demand for very high geometric resolution satellite images has increased significantly. Pan-sharpening techniques, which are part of the data fusion techniques, enable an increase in the geometric resolution of multispectral images using panchromatic imagery of the same scene. However, choosing a suitable pan-sharpening algorithm is not trivial: many exist, none is universally recognized as the best for every type of sensor, and they can produce different results depending on the investigated scene. This article focuses on the latter aspect: analyzing pan-sharpening algorithms in relation to different land covers. A dataset of GeoEye-1 images is selected, from which four study areas (frames) are extracted: one natural, one rural, one urban, and one semi-urban. The type of study area is determined by the quantity of vegetation it includes, based on the normalized difference vegetation index (NDVI). Nine pan-sharpening methods are applied to each frame, and the resulting pan-sharpened images are compared by means of spectral and spatial quality indicators. Multicriteria analysis makes it possible to identify the best-performing method for each specific area, as well as the most suitable one when different land covers co-occur in the analyzed scene. The fast Brovey transformation supplies the best results among the methods analyzed in this study.
(This article belongs to the Special Issue Image Processing and Computer Vision: Algorithms and Applications)
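For orientation, the Brovey transformation mentioned in the abstract rescales each upsampled multispectral band by the ratio of the panchromatic band to an intensity derived from the multispectral bands; a NumPy sketch of this classical form follows. The paper's fast variant and any band weighting are not described in the abstract, so this is the textbook formulation only.

```python
# Classical Brovey transform pansharpening (textbook form; the paper's fast variant may differ).
import numpy as np

def brovey(ms_up: np.ndarray, pan: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """ms_up: (bands, H, W) multispectral bands upsampled to the Pan grid; pan: (H, W)."""
    intensity = ms_up.mean(axis=0)                     # simple intensity from the MS bands
    ratio = pan / (intensity + eps)                    # spatial detail injection factor
    return ms_up * ratio[None, :, :]

# Toy inputs standing in for a GeoEye-1 MS/Pan pair already co-registered and upsampled.
ms_up = np.random.rand(4, 512, 512)
pan = np.random.rand(512, 512)
sharpened = brovey(ms_up, pan)
```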

13 pages, 2021 KiB  
Article
Comparison of Satellite Imagery for Identifying Seagrass Distribution Using a Machine Learning Algorithm on the Eastern Coast of South Korea
by Liadira Kusuma Widya, Chang-Hwan Kim, Jong-Dae Do, Sung-Jae Park, Bong-Chan Kim and Chang-Wook Lee
J. Mar. Sci. Eng. 2023, 11(4), 701; https://doi.org/10.3390/jmse11040701 - 24 Mar 2023
Cited by 5 | Viewed by 3787
Abstract
Seagrass is an essential component of coastal ecosystems because of its capability to absorb blue carbon and its involvement in sustaining marine biodiversity. In this study, support vector machine (SVM) techniques with corrected satellite imagery data were applied to identify the distribution of seagrasses. Observations of seagrasses were obtained from GeoEye-1, Sentinel-2 MSI Level-1C, and Landsat-8 OLI satellite imagery. Very high-resolution imagery from Google Earth was also obtained and used for both training and testing of the classification method. The optical satellite imagery had to be processed for image classification, during which radiometric correction, sunglint correction, and water column adjustments were applied. We restricted the study area to a maximum depth of 10 m because light does not penetrate beyond this level. When classifying the distribution of seagrasses in the research region, the SVM technique achieved overall accuracy values of up to 92% (GeoEye-1), 88% (Sentinel-2 MSI Level-1C), and 83% (Landsat-8 OLI). The overall accuracy values are also used to evaluate the classification models.
(This article belongs to the Special Issue Ecology and Physiology of Seaweeds and Their Response to Changes)
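The SVM classification step can be pictured with the short scikit-learn sketch below, which classifies pixels of a corrected multispectral image into seagrass/non-seagrass from labeled samples; the band count, labels, and RBF kernel settings are placeholders rather than the study's configuration.

```python
# Sketch: pixel-wise SVM classification of a corrected multispectral image (placeholder data).
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

bands, h, w = 4, 200, 200
image = np.random.rand(bands, h, w)               # corrected reflectance bands (placeholder)

# Labeled training pixels: rows of band values with a class label (1 = seagrass, 0 = other).
X_train = np.random.rand(500, bands)
y_train = np.random.randint(0, 2, size=500)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)

# Classify every pixel by flattening the image to (n_pixels, bands).
pixels = image.reshape(bands, -1).T
seagrass_map = clf.predict(pixels).reshape(h, w)
```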

16 pages, 3055 KiB  
Communication
Satellite-Derived Bathymetry with Sediment Classification Using ICESat-2 and Multispectral Imagery: Case Studies in the South China Sea and Australia
by Shaoyu Li, Xiao Hua Wang, Yue Ma and Fanlin Yang
Remote Sens. 2023, 15(4), 1026; https://doi.org/10.3390/rs15041026 - 13 Feb 2023
Cited by 16 | Viewed by 3905
Abstract
Achieving coastal and shallow-water bathymetry is essential for understanding the marine environment and for coastal management. Bathymetric data in shallow sea areas can currently be obtained using satellite-derived bathymetry (SDB) with multispectral satellites based on depth inversion models. In situ bathymetric data are crucial for validating empirical models but are currently limited in remote and unapproachable areas. In this paper, instead of using measured water depth data, ICESat-2 (Ice, Cloud, and Land Elevation Satellite-2) ATL03 bathymetric points at different acquisition dates and multispectral imagery from Sentinel-2/GeoEye-1 were used to train and evaluate water depth inversion empirical models in two study regions: Shanhu Island in the South China Sea, and Heron Island in the Great Barrier Reef (GBR) in Australia. Because different sediment types also influence the SDB results, three types of sediments (sand, reef, and coral/algae) were analyzed for Heron Island, and four types (sand, reef, rubble, and coral/algae) were analyzed for Shanhu Island. The results show that accuracy generally improved when sediment classification information was considered in both study areas. For Heron Island, the sand sediments showed the best performance in both models compared to the other sediments, with mean R2 and RMSE values of 0.90 and 1.52 m, respectively, representing a 5.6% improvement in the latter metric. For Shanhu Island, the rubble sediments showed the best accuracy in both models, with average R2 and RMSE values of 0.97 and 0.65 m, respectively, indicating an RMSE improvement of 15.5%. Finally, bathymetric maps were generated for the two regions based on the sediment classification results.
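A common empirical depth-inversion model of the kind referenced above is the Stumpf log-band-ratio model, in which depth is regressed linearly on the log ratio of two visible bands and calibrated here against ICESat-2 ATL03 bathymetric points. Whether this is the exact model form used in the paper is not stated in the abstract, and the band choice, scaling constant n, and placeholder data are assumptions.

```python
# Sketch of a Stumpf-style log-ratio SDB model calibrated with ICESat-2 depths (placeholder data).
import numpy as np
from sklearn.linear_model import LinearRegression

n = 1000.0  # scaling constant commonly used to keep the log ratio positive (assumption)

# Placeholder surface reflectances for blue and green bands at ICESat-2 bathymetric points.
blue = np.random.uniform(0.02, 0.12, 500)
green = np.random.uniform(0.02, 0.12, 500)
icesat2_depth = np.random.uniform(0.5, 15.0, 500)     # reference depths from ATL03 photons (m)

ratio = (np.log(n * blue) / np.log(n * green)).reshape(-1, 1)
model = LinearRegression().fit(ratio, icesat2_depth)  # depth ~ m1 * ratio - m0

# Applying the fitted model to the same ratio computed over a full image yields a bathymetric map.
predicted_depth = model.predict(ratio)
print("R^2 on calibration points:", model.score(ratio, icesat2_depth))
```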