Search Results (9)

Search Parameters:
Keywords = Hisea-1

23 pages, 7457 KiB  
Article
An Efficient Ship Target Integrated Imaging and Detection Framework (ST-IIDF) for Space-Borne SAR Echo Data
by Can Su, Wei Yang, Yongchen Pan, Hongcheng Zeng, Yamin Wang, Jie Chen, Zhixiang Huang, Wei Xiong, Jie Chen and Chunsheng Li
Remote Sens. 2025, 17(15), 2545; https://doi.org/10.3390/rs17152545 - 22 Jul 2025
Viewed by 324
Abstract
Due to the sparse distribution of ship targets in wide-area offshore scenarios, the typical cascade mode of imaging followed by detection for space-borne Synthetic Aperture Radar (SAR) echo data consumes substantial computational time and resources, severely affecting the timeliness of ship target information acquisition. We therefore propose a ship target integrated imaging and detection framework (ST-IIDF) for SAR oceanic region data. A two-step filtering structure is added to the SAR imaging process to extract potential ship target areas, which accelerates the whole pipeline. First, an improved peak-valley detection method based on one-dimensional scattering characteristics locates the range gate units containing ship targets. Second, a dynamic quantization method is applied to the imaged range gate units to further determine the azimuth region. Finally, a lightweight YOLO neural network eliminates false alarm areas and obtains accurate ship positions. In experiments on Hisea-1 and Pujiang-2 data over sparse target scenes, the framework maintains over 90% ship detection accuracy while accelerating processing by an average factor of 35.95. The framework suits ship detection tasks with strict timeliness requirements and provides an effective solution for real-time onboard processing.
(This article belongs to the Special Issue Efficient Object Detection Based on Remote Sensing Images)
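
For illustration, the sketch below shows what a peak-style prescreen of a range-compressed energy profile might look like. It is a minimal Python sketch, not the paper's algorithm: the adaptive threshold (median plus scaled MAD), the window size `win`, and the window-merging rule are all our assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def candidate_range_gates(profile, win=64, k=3.0):
    """Flag candidate ship range-gate windows in a 1-D range-energy profile.

    Hypothetical sketch: the threshold (median + k * scaled MAD) and the
    merging of overlapping windows are assumptions, not the paper's rule.
    """
    med = np.median(profile)
    mad = np.median(np.abs(profile - med)) + 1e-12
    peaks, _ = find_peaks(profile, height=med + k * 1.4826 * mad)
    gates = []
    for p in peaks:
        lo, hi = max(0, p - win // 2), min(len(profile), p + win // 2)
        if gates and lo <= gates[-1][1]:
            gates[-1] = (gates[-1][0], hi)  # merge overlapping windows
        else:
            gates.append((lo, hi))
    return gates

# Synthetic profile: Rayleigh sea-clutter floor plus two bright ship returns.
rng = np.random.default_rng(0)
profile = rng.rayleigh(1.0, 4096)
profile[1200:1210] += 25.0
profile[3000:3006] += 18.0
print(candidate_range_gates(profile))
```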

12 pages, 4351 KiB  
Communication
Automatic Estimation of Tropical Cyclone Centers from Wide-Swath Synthetic-Aperture Radar Images of Miniaturized Satellites
by Yan Wang, Haihua Fu, Lizhen Hu, Xupu Geng, Shaoping Shang, Zhigang He, Yanshuang Xie and Guomei Wei
Appl. Sci. 2024, 14(16), 7047; https://doi.org/10.3390/app14167047 - 11 Aug 2024
Cited by 1 | Viewed by 1579
Abstract
Synthetic-Aperture Radar (SAR) has emerged as an important tool for monitoring tropical cyclones (TCs) due to its high spatial resolution and cloud-penetrating capability. Recent advances in SAR technology have led to smaller and lighter satellites, yet few studies have evaluated their effectiveness in TC monitoring. This paper employs an algorithm for automatic TC center location involving three stages: coarse estimation from the whole SAR image; precise estimation from a sub-image; and final identification of the center as the location of the lowest Normalized Radar Cross-Section (NRCS) value within a smaller sub-image. Using three wide-swath miniaturized SAR images of TC Noru (2022) and TCs Doksuri and Koinu (2023), the algorithm's accuracy was validated by comparing estimated center positions with visually located references. The stage-wise distances were 21.42 km, 14.39 km, and 8.19 km for TC Noru; 14.36 km, 20.48 km, and 17.10 km for TC Doksuri; and 47.82 km, 31.59 km, and 5.42 km for TC Koinu. The results demonstrate the potential of miniaturized SAR in TC monitoring.
(This article belongs to the Topic Radar Signal and Data Processing with Applications)
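
A minimal sketch of the coarse-to-fine idea follows: each stage restricts the search to a sub-image around the current estimate and takes the location of the lowest smoothed NRCS. The window sizes, the smoothing width, and the synthetic test field are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def refine_tc_center(nrcs, center, half_size, smooth=9):
    """One coarse-to-fine stage: within a sub-image around the current
    center estimate, return the location of the lowest smoothed NRCS."""
    r, c = center
    r0, r1 = max(0, r - half_size), min(nrcs.shape[0], r + half_size)
    c0, c1 = max(0, c - half_size), min(nrcs.shape[1], c + half_size)
    sub = uniform_filter(nrcs[r0:r1, c0:c1], size=smooth)
    dr, dc = np.unravel_index(np.argmin(sub), sub.shape)
    return (r0 + dr, c0 + dc)

# Synthetic NRCS field with a calm (low-backscatter) eye at (320, 210).
yy, xx = np.mgrid[0:500, 0:500]
nrcs = 1.0 - 0.8 * np.exp(-((yy - 320) ** 2 + (xx - 210) ** 2) / 400.0)
center = (250, 250)            # coarse whole-image estimate
for half in (200, 80, 30):     # successively smaller sub-images
    center = refine_tc_center(nrcs, center, half)
print(center)                  # converges near (320, 210)
```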

12 pages, 5498 KiB  
Communication
Assessment of Sea-Surface Wind Retrieval from C-Band Miniaturized SAR Imagery
by Yan Wang, Yan Li, Yanshuang Xie, Guomei Wei, Zhigang He, Xupu Geng and Shaoping Shang
Sensors 2023, 23(14), 6313; https://doi.org/10.3390/s23146313 - 11 Jul 2023
Cited by 3 | Viewed by 1962
Abstract
Synthetic aperture radar (SAR) has been widely used for observing sea-surface wind fields (SSWFs), and many scholars have evaluated the performance of SAR in SSWF retrieval. Because of the large size and high cost of traditional SAR systems, development has trended toward smaller and more cost-effective SAR platforms. To date, however, the SSWF retrieval performance of miniaturized SAR systems has not been evaluated. This study used 1053 HiSea-1 and Chaohu-1 miniaturized SAR images covering the Southeast China Sea to retrieve SSWFs. After a quality control procedure, the retrieved winds were compared with ERA5, buoy, and ASCAT data. Against these three references, the retrieved wind speeds showed root mean square errors (RMSEs) of 2.42 m/s, 1.64 m/s, and 3.29 m/s, respectively, with mean bias errors (MBEs) of −0.44 m/s, 1.08 m/s, and −1.65 m/s. The retrieved wind directions exhibited RMSEs of 11.5°, 36.8°, and 41.7°, with corresponding MBEs of −1.3°, 2.4°, and −8.8°. The results indicate that the HiSea-1 and Chaohu-1 SAR satellites are practical for SSWF retrieval, validating the technical indicators and performance requirements set during the satellites' design phase.
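
The RMSE and MBE statistics quoted above are straightforward to compute once retrievals are collocated with a reference. A minimal sketch follows, with a helper for wrapping wind-direction differences into ±180°; collocation and quality control are assumed to be done already, and the sample values are made up.

```python
import numpy as np

def rmse_mbe(retrieved, reference):
    """RMSE and mean bias error of collocated samples (QC assumed done)."""
    d = np.asarray(retrieved, float) - np.asarray(reference, float)
    return float(np.sqrt(np.mean(d ** 2))), float(np.mean(d))

def direction_error(retrieved_deg, reference_deg):
    """Wrap wind-direction differences into [-180, 180) degrees, so that
    359 deg versus 1 deg counts as a 2 deg error, not 358 deg."""
    d = np.asarray(retrieved_deg, float) - np.asarray(reference_deg, float)
    return (d + 180.0) % 360.0 - 180.0

# Illustrative, made-up collocated wind speeds (m/s).
sar = np.array([6.1, 8.4, 12.9, 5.2])
era5 = np.array([5.8, 9.0, 11.5, 5.9])
rmse, mbe = rmse_mbe(sar, era5)
print(f"RMSE = {rmse:.2f} m/s, MBE = {mbe:.2f} m/s")
```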

19 pages, 9475 KiB  
Article
First Ocean Wave Retrieval from HISEA-1 SAR Imagery through an Improved Semi-Automatic Empirical Model
by Haiyang Sun, Xupu Geng, Lingsheng Meng and Xiao-Hai Yan
Remote Sens. 2023, 15(14), 3486; https://doi.org/10.3390/rs15143486 - 11 Jul 2023
Cited by 4 | Viewed by 2188
Abstract
The HISEA-1 synthetic aperture radar (SAR) minisatellite has been orbiting for over two years since its launch in 2020, acquiring numerous high-resolution images independent of weather and daylight. A typical and important application is the observation of ocean waves, essential ocean dynamical phenomena. Here, we propose a new semi-automatic empirical method to retrieve ocean wave parameters from HISEA-1 images. We first apply automated processing to remove non-wave information and artifacts, which greatly improves efficiency and robustness. We then develop an empirical model to retrieve significant wave height (SWH) from the dependence of SWH on the azimuth cut-off, wind speed, and information extracted from the cross-spectrum. Comparisons with Wavewatch III (WW3) data show that the proposed model significantly outperforms the previous semi-empirical model: the root mean square error, correlation, and scatter index are 0.45 m (versus 0.63 m), 0.87 (0.75), and 18% (26%), respectively. Our results also agree well with altimeter measurements, and further case studies show that the new model is reliable even under typhoon conditions. This work provides the first accurate ocean-wave products from HISEA-1 SAR data and demonstrates the satellite's ability to perform high-resolution observation of coasts and oceans.
(This article belongs to the Section Ocean Remote Sensing)
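
The abstract names the predictors (azimuth cut-off, wind speed, a cross-spectrum quantity) but not the functional form of the empirical model, so the sketch below assumes a simple linear combination fitted by least squares; the coefficients and synthetic training data are illustrative only.

```python
import numpy as np

def fit_swh_model(cutoff, wind, spec_feat, swh):
    """Least-squares fit of a linear empirical SWH model (assumed form):
    SWH = a*cutoff + b*wind + c*spec_feat + d."""
    A = np.column_stack([cutoff, wind, spec_feat, np.ones_like(swh)])
    coef, *_ = np.linalg.lstsq(A, swh, rcond=None)
    return coef

def predict_swh(coef, cutoff, wind, spec_feat):
    return coef[0] * cutoff + coef[1] * wind + coef[2] * spec_feat + coef[3]

# Synthetic training samples standing in for real matchups with WW3.
rng = np.random.default_rng(1)
cutoff = rng.uniform(100, 300, 200)   # azimuth cut-off (m)
wind = rng.uniform(2, 18, 200)        # wind speed (m/s)
feat = rng.uniform(0, 1, 200)         # cross-spectrum feature (arbitrary)
swh = 0.01 * cutoff + 0.1 * wind + 0.5 * feat + rng.normal(0, 0.1, 200)
coef = fit_swh_model(cutoff, wind, feat, swh)
print(predict_swh(coef, 200.0, 10.0, 0.5))
```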

17 pages, 7401 KiB  
Article
Multisource Remote Sensing Data-Based Flood Monitoring and Crop Damage Assessment: A Case Study on the 20 July 2021 Extraordinary Rainfall Event in Henan, China
by Minghui Zhang, Di Liu, Siyuan Wang, Haibing Xiang and Wenxiu Zhang
Remote Sens. 2022, 14(22), 5771; https://doi.org/10.3390/rs14225771 - 15 Nov 2022
Cited by 18 | Viewed by 4623
Abstract
On 20 July 2021, an extraordinary rainfall event occurred in Henan Province, China, resulting in heavy waterlogging, flooding, hundreds of fatalities, and considerable property damage. Because the damaged area is a major grain-producing region of China, assessing the crop production losses caused by this event is very important. Since the region follows a crop rotation system with two crops per year, it is valuable to accurately identify the crop types affected by the event and to assess their production losses separately; existing approaches, however, remain inadequate for this purpose. In this study, we used China's first commercial synthetic aperture radar (SAR) data source, Hisea-1, together with open-source and widely used remote sensing data (Sentinel-1 and Sentinel-2), to monitor this catastrophic flood. Both the modified normalized difference water index (MNDWI) and the Sentinel-1 dual-polarized water index (SDWI) were calculated, and an unsupervised classification (k-means) method was adopted for rapid water body extraction. From the multisource time-series datasets, we derived four flooding characteristics: the flooded area, the flood duration, and the start and end times of flooding. Based on these characteristics, we analyzed the damage to flooded farmland more precisely. We used the Google Earth Engine (GEE) platform to obtain normalized difference vegetation index (NDVI) time series for the disaster year and for normal years, and overlaid the flooded areas to identify the affected crop species. Using statistics from previous years, we calculated the areas and types of damaged crops and the corresponding yield reductions. Our results showed that (1) the study area endured two floods, in July and September 2021; (2) the maximum areas affected by these two flooding events were 380.2 km² and 215.6 km², respectively; (3) the floods significantly affected winter wheat and summer grain (maize or soybean), over areas of 106.4 km² and 263.3 km², respectively; and (4) the crop production reductions in the affected area were 18,708 t for winter wheat and 160,000 t for maize or soybean. These findings indicate that temporal information, rather than only the traditional affected area and yield per unit area, is essential for accurately estimating damaged crop types and yield reductions. Time-series remote sensing data, especially SAR data with their ability to penetrate clouds and rain, play an important role in remotely sensed disaster monitoring. Hisea-1 data, with high spatial resolution and demonstrated flood-monitoring capability, show their value in this study and hold potential for further uses such as urban flooding research. As such, the proposed approach is worth extending to other applications, such as water resource management and studies of lake and wetland hydrological changes.
(This article belongs to the Special Issue Environmental Health Diagnosis Based on Remote Sensing)
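
A minimal sketch of the water-extraction step follows: MNDWI from optical bands (Xu, 2006), SDWI from dual-polarized SAR backscatter, and a two-cluster k-means split of the index image. The ln(10·VV·VH) form of SDWI (in linear power, not dB) and the cluster-labeling convention are assumptions on our part.

```python
import numpy as np
from sklearn.cluster import KMeans

def mndwi(green, swir):
    """Modified Normalized Difference Water Index (Xu, 2006)."""
    return (green - swir) / (green + swir + 1e-12)

def sdwi(vv, vh):
    """Sentinel-1 Dual-polarized Water Index; the ln(10*VV*VH) form,
    applied to linear-power backscatter, is an assumption here."""
    return np.log(10.0 * vv * vh + 1e-12)

def water_mask(index, water_is_high=True):
    """Two-cluster k-means split of an index image. Which cluster is
    water depends on the index: high for MNDWI, low for SAR power."""
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(index.reshape(-1, 1))
    centers = km.cluster_centers_.ravel()
    label = int(np.argmax(centers)) if water_is_high else int(np.argmin(centers))
    return (km.labels_ == label).reshape(index.shape)

# Synthetic SAR scene: a smooth (low-backscatter) flooded patch.
rng = np.random.default_rng(2)
vv, vh = rng.gamma(4, 0.05, (64, 64)), rng.gamma(4, 0.01, (64, 64))
vv[20:40, 10:50] *= 0.05
vh[20:40, 10:50] *= 0.05
mask = water_mask(sdwi(vv, vh), water_is_high=False)
print(mask.sum())  # roughly the 20 x 40 flooded pixels
```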

18 pages, 6178 KiB  
Article
High-Performance Segmentation for Flood Mapping of HISEA-1 SAR Remote Sensing Images
by Suna Lv, Lingsheng Meng, Deanna Edwing, Sihan Xue, Xupu Geng and Xiao-Hai Yan
Remote Sens. 2022, 14(21), 5504; https://doi.org/10.3390/rs14215504 - 1 Nov 2022
Cited by 32 | Viewed by 5572
Abstract
Floods are among the most frequent and common natural disasters, causing numerous casualties and extensive property losses worldwide every year. Since flooded areas are often accompanied by cloudy and rainy weather, synthetic aperture radar (SAR), with its day-and-night, all-weather imaging capability, is one of the most powerful sensors for flood monitoring. However, SAR images are prone to strong speckle noise, shadows, and distortions, which degrade the accuracy of water body segmentation. To address this issue, we propose a novel Modified DeepLabv3+ model that exploits the powerful feature extraction ability of convolutional neural networks for flood mapping from HISEA-1 SAR remote sensing images. Specifically, a lightweight MobileNetv2 encoder improves floodwater detection efficiency, atrous convolutions with small, staggered dilation rates capture small-scale features and improve pixel utilization, and additional upsampling layers refine the segmented water body boundaries. The Modified DeepLabv3+ model is then applied to two severe flooding events in China and the United States. Results show that it outperforms competing semantic segmentation models (SegNet, U-Net, and DeepLabv3+) in both accuracy and efficiency of floodwater extraction, with average accuracy, F1, and mIoU scores of 95.74%, 89.31%, and 87.79%, respectively. Further analysis reveals that the model accurately distinguishes water feature shapes and boundaries despite complicated background conditions, while retaining the highest efficiency, covering 1140 km² in 5 min. These results demonstrate that the model is a valuable tool for flood monitoring and emergency management.
(This article belongs to the Special Issue Deep Learning in Remote Sensing Application)
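
As a sketch of the small-dilation-rate atrous convolution idea, the PyTorch module below runs parallel 3×3 convolutions with small dilation rates and fuses them by a 1×1 projection. The rates (1, 2, 4), the module name, and the fusion scheme are our assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SmallRateASPP(nn.Module):
    """Parallel 3x3 atrous convolutions with small dilation rates,
    fused by a 1x1 projection (hypothetical configuration)."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        # Each branch sees a different receptive field at the same resolution.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 64, 32, 32)
print(SmallRateASPP(64, 64)(x).shape)  # torch.Size([1, 64, 32, 32])
```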

27 pages, 9659 KiB  
Article
CRTransSar: A Visual Transformer Based on Contextual Joint Representation Learning for SAR Ship Detection
by Runfan Xia, Jie Chen, Zhixiang Huang, Huiyao Wan, Bocai Wu, Long Sun, Baidong Yao, Haibing Xiang and Mengdao Xing
Remote Sens. 2022, 14(6), 1488; https://doi.org/10.3390/rs14061488 - 19 Mar 2022
Cited by 131 | Viewed by 9533
Abstract
Synthetic-aperture radar (SAR) image target detection is widely used in military, civilian, and other fields. However, existing detection methods suffer from low accuracy because of the strong scattering of SAR targets, unclear edge contour information, multiple scales, strong sparseness, background interference, and other characteristics. In response, this paper combines the global contextual perception of transformers with the local feature representation capabilities of convolutional neural networks (CNNs) and proposes a visual transformer framework based on contextual joint-representation learning for SAR target detection, referred to as CRTransSar. First, we adopt the Swin Transformer as the basic architecture. Next, we incorporate the CNN's local information capture and design a backbone, CRbackbone, based on contextual joint representation learning, to extract richer contextual feature information while strengthening SAR target feature attributes. Furthermore, we design a new cross-resolution attention-enhancement neck, CAENeck, to enhance the characterization of multiscale SAR targets. Our method attains 97.0% mAP on the SSDD dataset, reaching state-of-the-art performance. In addition, based on the HISEA-1 commercial SAR satellite, which has been launched into orbit and in whose development our research group participated, we release a larger-scale SAR multiclass target detection dataset, SMCDD, which verifies the effectiveness of our method.
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR) Meets Deep Learning)
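
The sketch below illustrates the general pattern of joining a local CNN branch with a global self-attention branch; the block structure, its name, and fusion by addition are our assumptions, not CRTransSar's actual CRbackbone design.

```python
import torch
import torch.nn as nn

class ContextJointBlock(nn.Module):
    """Hypothetical fusion of local CNN features with global
    self-attention context (channels must divide by num_heads)."""
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)   # (B, H*W, C) token sequence
        ctx, _ = self.attn(seq, seq, seq)    # global context per token
        ctx = self.norm(seq + ctx).transpose(1, 2).reshape(b, c, h, w)
        return self.local(x) + ctx           # joint local + global feature

x = torch.randn(1, 64, 16, 16)
print(ContextJointBlock(64)(x).shape)  # torch.Size([1, 64, 16, 16])
```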

9 pages, 5362 KiB  
Communication
HISEA-1: The First C-Band SAR Miniaturized Satellite for Ocean and Coastal Observation
by Sihan Xue, Xupu Geng, Lingsheng Meng, Ting Xie, Lei Huang and Xiao-Hai Yan
Remote Sens. 2021, 13(11), 2076; https://doi.org/10.3390/rs13112076 - 25 May 2021
Cited by 27 | Viewed by 6387
Abstract
On 22 December 2020, HISEA-1, the first C-band SAR small satellite for ocean remote sensing, was launched from the coastal Wenchang launch site. Though the satellite is light in weight, its images have a high spatial resolution of 1 m and a large observation swath of 100 km. The first batch of images, obtained within a week of launch, confirmed the rich information in the data, including sea ice, wind, waves, rip currents, vortexes, ships, and oil film on the sea, as well as landmark buildings. Furthermore, geometric characteristics of sea ice, wind vectors, ocean wave parameters, 3D features of buildings, and some air-sea interface phenomena appearing as dark spots could be detected after relevant processing. All of this indicates that HISEA-1 can be a reliable and powerful instrument for observing the ocean and land.
(This article belongs to the Section Satellite Missions for Earth and Planetary Exploration)

18 pages, 6420 KiB  
Article
On-Board Real-Time Ship Detection in HISEA-1 SAR Images Based on CFAR and Lightweight Deep Learning
by Pan Xu, Qingyang Li, Bo Zhang, Fan Wu, Ke Zhao, Xin Du, Cankun Yang and Ruofei Zhong
Remote Sens. 2021, 13(10), 1995; https://doi.org/10.3390/rs13101995 - 19 May 2021
Cited by 76 | Viewed by 7295
Abstract
Synthetic aperture radar (SAR) satellites produce large quantities of remote sensing images that are unaffected by weather conditions and are therefore widely used in marine surveillance. However, because of the latency of satellite-ground communication and the massive volume of remote sensing imagery, rapid analysis is not possible and real-time information for emergency situations is limited. To solve this problem, this paper proposes an on-board ship detection scheme based on the traditional constant false alarm rate (CFAR) method and lightweight deep learning. The scheme runs on the SAR satellite's on-board computing platform to achieve near real-time image processing and data transmission. First, we use CFAR for initial ship detection and then apply the You Only Look Once version 4 (YOLOv4) method to obtain more accurate final results. We built a ground verification system to assess the feasibility of the scheme. With the help of a highly integrated embedded Graphics Processing Unit (GPU), our method achieved 85.9% precision on the experimental data, and the processing time was nearly half that required by traditional methods.
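
A minimal sketch of a CFAR prescreening stage follows: a pixel is flagged if it exceeds the local background mean by k local standard deviations, with the background ring approximated by subtracting guard-window sums from background-window sums. The window sizes and k are assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def cfar_prescreen(img, bg=41, guard=9, k=4.0):
    """Two-parameter CFAR: flag pixels exceeding the local background
    mean by k local standard deviations, estimated over the ring
    between a bg x bg window and a guard x guard window."""
    n_bg, n_gd = bg * bg, guard * guard
    sum_bg = uniform_filter(img, bg) * n_bg
    sum_gd = uniform_filter(img, guard) * n_gd
    sqsum_bg = uniform_filter(img ** 2, bg) * n_bg
    sqsum_gd = uniform_filter(img ** 2, guard) * n_gd
    n = n_bg - n_gd
    mean = (sum_bg - sum_gd) / n
    var = np.maximum((sqsum_bg - sqsum_gd) / n - mean ** 2, 0.0)
    return img > mean + k * np.sqrt(var)

# Synthetic scene: a bright ship on Rayleigh sea clutter.
rng = np.random.default_rng(3)
sea = rng.rayleigh(1.0, (256, 256))
sea[100:104, 120:140] += 20.0
print(cfar_prescreen(sea).sum())  # number of candidate pixels
```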
