Article

A Novel Shipyard Production State Monitoring Method Based on Satellite Remote Sensing Images

1 School of Geography and Information Engineering, China University of Geosciences (Wuhan), Wuhan 430074, China
2 Institute for Natural Disaster Risk Prevention and Emergency Management, China University of Geosciences (Wuhan), Wuhan 430074, China
3 Remote Sensing Monitoring Department of Sea Area and Islands, National Satellite Ocean Application Service, Beijing 100081, China
4 Key Laboratory of Space Ocean Remote Sensing and Application, Beijing 100081, China
5 Qilu Aerospace Information Research Institute, Jinan 250101, China
6 College of Global Change and Earth System Science, Faculty of Geographical Science, Beijing Normal University, Beijing 100875, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(20), 4958; https://doi.org/10.3390/rs15204958
Submission received: 3 August 2023 / Revised: 9 October 2023 / Accepted: 10 October 2023 / Published: 13 October 2023

Abstract

Monitoring the shipyard production state is of great significance to shipbuilding industry development and coastal resource utilization. In this article, satellite remote sensing (RS) data are used for the first time to monitor the shipyard production state dynamically and efficiently, complementing the traditional mode of collecting production state data. According to the imaging characteristics of optical remote sensing images of shipyards in different production states, these characteristics are analyzed to establish reliable production state evidence. Firstly, to obtain production state characteristics from optical remote sensing data, high-level semantic information in the shipyard is extracted by transfer-learned convolutional neural networks (CNNs). Secondly, for conflicting evidence from the core sites of the shipyard, an improved DS evidence fusion method is proposed, which constructs a correlation metric to measure the degree of conflict in the evidence and designs a similarity metric to measure the credibility of the evidence. Thirdly, the weight of all the evidence is calculated according to the similarity metric to correct the conflicting evidence. Because the fusion result aligns more closely with the expected result, an iterative scheme is introduced to correct the fusion result. This method effectively resolves evidence conflict and improves the monitoring accuracy of the shipyard production state. In the experiments, the Yangtze River Delta and the Bohai Rim are selected to verify that the proposed method can accurately recognize the shipyard production state, which reveals the potential of satellite RS images in shipyard production state monitoring and also provides a new research perspective for monitoring other industrial production states.


1. Introduction

The shipbuilding industry plays a crucial role in national defense security, transportation, and marine development [1]. Effective monitoring of the shipyard production state is essential for the timely understanding of profitability, thereby facilitating the healthy upgrading and transformation of the shipbuilding industry structure. Additionally, it enables the orderly reorganization of uncompetitive shipyards during prolonged periods of losses to prevent wastage of coastal resources. Satellite remote sensing (RS) data offer an efficient means to accurately monitor the state of shipyard production from both spatial and time series perspectives, thus compensating for the limitations associated with traditional methods used for collecting order data.
With the development of sensor technology, RS data have found extensive applications in disaster monitoring [2,3,4,5,6,7], precision agriculture [8,9], environmental risk assessment [10,11], and oil spill detection [12]. Furthermore, leveraging the repeated earth observations facilitated by satellites, multi-temporal RS data have been utilized to enhance land cover classification accuracy [13,14,15,16,17,18,19,20,21], object recognition [22,23,24], and disaster monitoring and assessment [25,26,27]. In recent years, high-resolution RS (HRS) data, together with thermal infrared and night-light data, have been effectively employed for monitoring human economic activities from the global scale down to local regions [28,29,30,31,32,33,34]. However, there is a dearth of research specifically addressing the shipyard production state.
The fundamental purpose of utilizing satellite remote sensing data for monitoring the production status of shipyards lies in determining whether the shipyard is engaged in production activities at the time of imaging, based on its distinctive characteristics. In simpler scenarios, discerning the state of a scene typically involves integrating relatively straightforward middle- and low-level semantics with machine learning techniques [35,36,37]. In more complex scenes, two processing approaches can be employed. The first approach entails extracting research objects from intricate backgrounds through image processing or object recognition to eliminate their influence before conducting state recognition on these research objects [38]. The second approach utilizes deep learning methods to extract and classify middle- and high-level semantic features of the scene [7,39,40]. In recent years, the continuous improvement in optical RS image quality has led to a more pronounced distinction in texture features of ground objects depicted in these images. This advancement enables the provision of richer semantic information and enhances interpretability. For extensive areas such as shipyards, employing optical RS images is suitable for identifying their production state.
The shipyard scene is intricate, comprising multiple core work sites. By leveraging high-level semantic information and low/middle-level features extracted from RS imagery, it becomes possible to establish evidence of the shipyard production state. Convolutional neural networks (CNNs), such as AlexNet [41,42,43], VGG [44,45], ResNet [46,47,48,49,50,51], and Inception [20,52,53], among others, can effectively derive high-level semantic information [39,54,55,56] for extracting optical production state features from optical HRS data.
Given the lengthy shipbuilding procedure, this study proposes the utilization of multi-temporal RS data to effectively monitor the production state of shipyards. This approach aims to enhance the quantity of observation data and mitigate errors that may arise from relying solely on single-phase data.
In the shipyard scene, the dock, berth, material stacking area, and assembly area are closely related to the production state of the shipyard, and they are the core categories in production state monitoring. Therefore, the production state of the shipyard can be inferred by monitoring the dock, berth, assembly area, and material stacking area in satellite RS images. Different core sites may point to different production states, owing to the varying manufacturing cycles of shipyards and ship types, the preparation stage of the shipyard, and the preparation work in the outfitting stage of the dock. Hence, a fusion method is needed to resolve the conflicts that arise when different core sites suggest different production states for the same shipyard.
Current research focuses predominantly on pixel-level and feature-level fusion, with less attention given to the decision level. However, decision-level fusion exhibits higher accuracy than standard multi-source data fusion algorithms and has the potential to overcome limitations of multi-source data fusion technology [57,58,59]. For instance, at other fusion levels the loss of one data source can cause system failure, whereas decision-level fusion systems remain operational. Moreover, this approach outperforms the other levels in terms of real-time performance [57]. Voting [60], Bayes inference [61], evidence theory [62], fuzzy integrals, ICA and the support vector machine (SVM) [63], as well as various other specific methods [64,65,66], are important examples of algorithms at this level [57]. The DS evidence theory [67,68], which captures the uncertainty of evidence without requiring prior conditional probability densities and thus holds advantages over other fusion methods [69], has been widely used with multi-source and multi-temporal data [70,71,72,73,74]. However, traditional DS evidence fusion fails to address the highly conflicting evidence arising from the ambiguity of different core sites within shipyards and the uncertainty inherent in different shipbuilding stages during production state monitoring.
In order to achieve precise monitoring outcomes, a shipyard production state monitoring approach based on multi-temporal RS images is proposed. The primary contributions encompass the following: (1) The utilization of a transfer learning network is proposed to extract high-level semantic information from optical RS images in shipyards located in coastal areas or on river coasts and map it to the evidence value of their production state. (2) To address the limitation of single-phase observation data in fully reflecting the production state across different shipbuilding stages, a multi-phase data-driven approach is suggested for monitoring the production state of shipyards. (3) Aiming at the problem of evidence conflict in different core sites, the correlation metric and similarity metric are constructed, and the monitoring framework of the shipyard production state based on CNN and improved DS evidence theory is proposed. The problem of evidence conflict between different core sites in the shipyard production state monitoring is solved, and the accuracy of shipyard production state monitoring is improved.

2. Data

2.1. Experimental Area and Data

The Yangtze River Delta (26.6°N–35.3°N, 116.7°E–122.8°E) and the Bohai Rim (34.4°N–43.4°N, 113.5°E–125.8°E) regions of China, where the shipyards are densely located, are selected for this study.
Considering the shipbuilding period and the capacity to acquire RS data, quarterly and semi-annual monitoring periods are adopted. Based on information from the China Shipbuilding Industry Yearbook, the International Ship Network, and RS images, the true shipyard production states are collected as a reference (see Figure 1 and Figure 2). The production states of shipyards are divided into two categories: normal and abnormal. A normal production state means the shipyard runs well, with clear signs of production in the RS images. An abnormal production state means that the shipyard is in a state of shutdown, without clear production signs in the RS images.
ZY-3, GF-1 and Google Earth data are used in experiments, with parameters shown in Table 1.
Due to weather factors in the coastal area and along the river area, effective optical HRS imagery can be acquired quarterly. The satellite RS data used in the experimental area are shown in Table 2.

2.2. Characteristics of Shipyards in Different Production States

In Figure 3, the shipyard scene includes the docks, berths, assembly sites and material storage sites which are defined as core sites for production state monitoring. The docks/berths are defined as shipbuilding core sites (SCSs) and the material storage areas/assembly areas are defined as material utilization core sites (MUCSs).
In the normal production state shipyard scene, there are generally some hulls, whole ships or multiple ships under construction in the shipbuilding site, and there are usually block materials in the material utilization core site, which are arranged more regularly. In the shipyard scene with an abnormal production state, there are no hulls in the shipbuilding site, and there is no material stacking in the material utilization core site, which is similar to bare land. The optical images of core sites in different production states are shown in Table 3.

2.3. Optical Sample Datasets

The quantity and quality of the sample datasets are important for the training and detection accuracy of CNNs. Training samples of SCSs and MUCSs in different production states are constructed from ZY-3 and Google Earth data acquired from 2018 to 2020 (see Table 4). There are 3096 SCS training samples and 888 MUCS training samples, and in both datasets the ratio of normal to abnormal production state samples is 1:1.

3. Methods

This paper proposes to apply multi-temporal RS data to shipyard production state monitoring. Firstly, high-level semantic information is extracted by the fine-tuned convolutional neural network to obtain the optical production state evidence. Secondly, a correlation metric and similarity metric are used to solve the highly conflicting evidence, calculate the weight of all evidence, correct the conflicting evidence, fuse all the evidence and iteratively correct the fusion results. Finally, the improved DS evidence fusion method is applied to obtain the production state of the shipyard. The flow chart is as follows (see Figure 4).

3.1. Neural Network

There are various types of docks and berths, such as dry docks, water injection docks, and floating docks; berths include inclined berths and horizontal berths. However, the number of nationwide samples available for interpretation in China is limited.
For the complex shipyard optical remote sensing scene composed of multiple land types, Inception v3 [75] and ResNet101 [76] are selected for training based on the characteristics of CNNs and the number of training samples.
Inception v3 and ResNet101, pre-trained on the ImageNet dataset [77], are fine-tuned using core site samples with different production states. No custom architecture was developed during the pre-training phase. The training samples undergo data augmentation, including random rotation, flipping, cropping, and scaling, to enlarge the sample size and enhance sample diversity. Parameters are adjusted manually based on the loss curves of the validation and training sets, and hyperparameters are determined through testing. In training, a stochastic gradient descent method with momentum and a cross-entropy loss function are used. The learning rate is multiplied by 0.1 every 20 epochs. The remaining training parameters are listed in Table 5.
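As an illustration, the following is a minimal PyTorch sketch of the fine-tuning setup described above. The dataset path, folder layout, augmentation magnitudes, normalization statistics, and auxiliary-loss weight are illustrative assumptions rather than settings reported here; only the optimizer, momentum, batch size, epochs, initial learning rate, and learning rate schedule follow Table 5.

```python
# Hedged sketch of transfer learning for core-site production state classification.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Augmentation roughly matching the paper: random rotation, flipping, cropping, scaling.
train_tf = transforms.Compose([
    transforms.RandomRotation(30),
    transforms.RandomHorizontalFlip(),
    transforms.RandomResizedCrop(299, scale=(0.8, 1.0)),        # Inception v3 input size
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("samples/scs/train", transform=train_tf)  # hypothetical path
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

# ImageNet-pretrained Inception v3 with a two-class head (normal / abnormal).
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)          # SGDM, Table 5
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)  # x0.1 every 20 epochs

for epoch in range(100):
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        logits, aux_logits = model(images)       # Inception v3 returns main and auxiliary logits
        loss = criterion(logits, labels) + 0.4 * criterion(aux_logits, labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```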

3.2. Optical Evidence

The fine-tuned CNNs are applied to compute the production state probabilities of the core sites in a shipyard separately. As the shipbuilding period is lengthy, some core sites may not be in use at the satellite imaging moment. Hence, the maximum of the normal production probabilities over the core sites of each category is adopted as evidence, as in Formulas (1) and (2).
$P_{do}^{t_o} = \max(p_1, p_2, p_3, \ldots, p_r)$ (1)
$P_{mo}^{t_o} = \max(q_1, q_2, q_3, \ldots, q_l)$ (2)
Here, $t_o$ is the imaging time of the optical data. In Formula (1), $P_{do}^{t_o}$ is the normal production probability of the SCSs in the optical data, $p_i$ ($1 \le i \le r$) is the normal production probability of the ith SCS, and $r$ is the number of SCSs in the shipyard. In Formula (2), $P_{mo}^{t_o}$ is the normal production probability of the MUCSs in the optical data, $q_j$ ($1 \le j \le l$) is the normal production probability of the jth MUCS, and $l$ is the number of MUCSs in the shipyard.
The optical SCS evidence $P_d$ consists of the single-phase observations $P_{do}^{t_o}$ as in Formula (3). The optical MUCS evidence $P_m$ consists of the single-phase observations $P_{mo}^{t_o}$ as in Formula (4), where $h$ is the number of optical RS acquisitions.
$P_d = [P_{do}^{I}, P_{do}^{II}, \ldots, P_{do}^{t_o}, \ldots, P_{do}^{h}], \quad t_o = I, II, \ldots, h$ (3)
$P_m = [P_{mo}^{I}, P_{mo}^{II}, \ldots, P_{mo}^{t_o}, \ldots, P_{mo}^{h}], \quad t_o = I, II, \ldots, h$ (4)
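To make the mapping from CNN outputs to evidence concrete, the following is a small sketch of Formulas (1)–(4); the function names and the per-site probability values are illustrative.

```python
# Sketch of Formulas (1)-(4): per-phase evidence is the maximum normal-production
# probability over the core sites of one category, collected across the h acquisitions.
from typing import List

def phase_evidence(site_probs: List[float]) -> float:
    """Formula (1)/(2): max normal-production probability over the SCSs (or MUCSs)."""
    return max(site_probs)

def multi_phase_evidence(per_phase_site_probs: List[List[float]]) -> List[float]:
    """Formula (3)/(4): evidence vector P_d (or P_m) over the h acquisitions."""
    return [phase_evidence(probs) for probs in per_phase_site_probs]

# Two acquisitions (t_o = I, II) with three SCSs each; the per-site values are hypothetical.
P_d = multi_phase_evidence([[0.14, 0.07, 0.02], [0.02, 0.01, 0.01]])
print(P_d)  # [0.14, 0.02]
```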

3.3. Decision-Level Fusion

3.3.1. DS Evidence Fusion Basic Theory

The DS evidence theory defines the recognition frame $\Theta$, in which propositions are exhaustive and mutually exclusive. $m(A)$ is the basic probability assignment of proposition $A$, indicating the degree of confidence in proposition $A$, and it satisfies $m(\Phi) = 0$ and $\sum_{A \subseteq \Theta} m(A) = 1$. For $\forall A \subseteq \Theta$, the information fusion rule for evidence $m_1, m_2, \ldots, m_n$ is given in Formula (5).
$(m_1 \oplus \cdots \oplus m_n)(A) = \frac{1}{1-K} \cdot \sum_{A_1 \cap A_2 \cap \cdots \cap A_n = A} m_1(A_1) \cdot m_2(A_2) \cdots m_n(A_n)$ (5)
where
$K = \sum_{A_1 \cap A_2 \cap \cdots \cap A_n = \Phi} m_1(A_1) \cdot m_2(A_2) \cdots m_n(A_n) = 1 - \sum_{A_1 \cap A_2 \cap \cdots \cap A_n \neq \Phi} m_1(A_1) \cdot m_2(A_2) \cdots m_n(A_n)$ (6)
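For reference, the following is a minimal sketch of the standard combination rule in Formulas (5) and (6), with mass functions represented as dictionaries over subsets of $\Theta$; this is the unmodified Dempster rule, not the improved method proposed in Section 3.3.3.

```python
# Plain Dempster combination: propositions are frozensets over the frame, and mass
# assigned to empty intersections (the conflict K) is redistributed by 1 - K.
from functools import reduce
from itertools import product

def combine_pair(m1, m2):
    fused, K = {}, 0.0
    for (A1, v1), (A2, v2) in product(m1.items(), m2.items()):
        inter = A1 & A2
        if inter:
            fused[inter] = fused.get(inter, 0.0) + v1 * v2
        else:
            K += v1 * v2                                   # Formula (6): conflicting mass
    if K >= 1.0:
        raise ValueError("Total conflict: Dempster's rule is undefined.")
    return {A: v / (1.0 - K) for A, v in fused.items()}    # Formula (5): renormalisation

def combine(*masses):
    return reduce(combine_pair, masses)

Normal, Abnormal = frozenset({"Normal"}), frozenset({"Abnormal"})
m1 = {Normal: 0.9, Abnormal: 0.1}
m2 = {Normal: 0.8, Abnormal: 0.2}
print(combine(m1, m2)[Normal])   # ~0.973: agreeing evidence is reinforced
```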

3.3.2. Evidence Analysis

The proposed evidence is derived from two core sites in multi-temporal RS image data, so there may be conflicts between pieces of evidence.
(1)
Because different core sites are used at different stages of shipbuilding, the evidence from the two core sites may conflict at the imaging moment.
(2)
Conflicting evidence may also be caused by changes in the shipbuilding stage when the sensor repeatedly observes the same core site.
The DS evidence fusion theory fails when there is highly conflicting evidence. To address these issues, this paper defines two metrics to measure the degree of conflict and credibility of evidence and constructs an improved DS evidence theory calculation system.
The correlation metric, defined in Formula (7), measures the consistency between pieces of evidence. When the correlation metric $R < 0$, the evidence points in inconsistent directions and the conflict is high. The similarity metric $S_j$ is defined in Formula (8); the more similar the evidence is, the more credible it is.
$R_{dm} = (M_d - 0.5)(M_m - 0.5)$ (7)
$S_j = 1 - \frac{1}{n-1}\sum_{i=1}^{n}\left(m_i(A) - M_j\right)^2$ (8)
$M_j = \mathrm{Avg}(P_j), \quad j \in \{d, m\}$ (9)
The subscripts $d$ and $m$ denote the SCS and MUCS data: $P_d$ is the evidence from the SCSs and $P_m$ is the evidence from the MUCSs. In Formula (7), $R_{dm}$ is the correlation between $P_d$ and $P_m$, and $M_d$ and $M_m$ are the averages of $P_d$ and $P_m$, respectively, as defined in Formula (9). In Formula (8), $m_i(A)$ is an evidence value and $n$ is the number of pieces of evidence.
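The following sketch shows one way to compute Formulas (7)–(9). The reconstruction of the similarity metric, i.e., summing the squared deviations of each source's own multi-phase evidence from its mean without a square root, is an assumption; only the correlation metric follows directly from the text.

```python
# Correlation and similarity metrics for the SCS evidence P_d and MUCS evidence P_m.
def avg(values):
    """Formula (9): M_j = Avg(P_j)."""
    return sum(values) / len(values)

def correlation(P_d, P_m):
    """Formula (7): R_dm = (M_d - 0.5)(M_m - 0.5); R_dm < 0 flags conflicting core sites."""
    return (avg(P_d) - 0.5) * (avg(P_m) - 0.5)

def similarity(P_j):
    """Formula (8), as reconstructed: the more tightly a source's multi-phase evidence
    clusters around its mean, the more credible the source."""
    n, M_j = len(P_j), avg(P_j)
    return 1.0 - sum((m - M_j) ** 2 for m in P_j) / (n - 1)

P_d, P_m = [0.14, 0.02], [0.90, 0.92]   # shipyard (iii) evidence from Table 6
print(correlation(P_d, P_m))            # about -0.17: the SCS and MUCS evidence conflict
print(similarity(P_d), similarity(P_m))
```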

3.3.3. Evidence Fusion

The recognition framework of the improved DS evidence fusion for the shipyard production state is $\Theta = \{\mathrm{Normal}, \mathrm{Abnormal}\}$. We define proposition $A$: {the shipyard production state is normal} and proposition $B$: {the shipyard production state is abnormal}. The evidence combining the SCSs with the MUCSs is given in Formula (10).
$m_0(A) = \{P_d, P_m\}$ (10)
According to the evidence analysis in Section 3.3.2, the following improved DS evidence theory calculation framework is proposed (see Figure 5).
(1)
Calculate the correlation $R_{dm}$ between the evidence of the two core sites, the SCS and the MUCS. If $R_{dm} < 0$, the SCS and MUCS evidence conflict. The similarity metric is then calculated, and the weight of all evidence is computed from the similarity metric (Formula (11)).
(2)
In Formulas (12) and (13), the conflicting evidence is corrected by the weights: the greater the similarity of a piece of evidence, the stronger its credibility and the greater its weight. The modified evidence is substituted into Formula (5) for evidence fusion, and the first fusion result $E_0$ is obtained. The definition of $m_0$ is shown in Formula (14); a short code sketch of this correction step follows Formula (14).
(3)
Since the evidence with greater similarity has stronger credibility and greater weight, the fusion result is biased toward the evidence with the greater weight. Therefore, the correlation metric between the corrected evidence with the larger weight and the fusion result is calculated. If $R_{m_0 E_0} < 0$, a conflict remains; otherwise, the conflict has been resolved and the fusion result is output.
(4)
Considering that the fusion result is closer to the expected output, if $R_{m_0 E_0} < 0$ the evidence weights are re-determined according to the fusion result; that is, an iterative correction of the fusion result is introduced. The ith iteration ($i \ge 1$) calculates the correlation metric and the similarity metric between the modified evidence and the (i−1)th fusion result $E_{i-1}$. If $R_{m_0 E_{i-1}} < 0$, new evidence weights are calculated from the similarity metric, and the conflicting evidence is corrected according to the weights to obtain the ith fusion result $E_i$. The iteration ends and the result is output when $R_{m_0 E_i} > 0$; otherwise, the iteration continues until this condition is satisfied.
$\omega_j = \frac{S_j}{S_d + S_m}, \quad j \in \{d, m\}$ (11)
$P_{do}^{\prime\, t_o} = \omega_d P_{do}^{t_o}, \quad P_{mo}^{\prime\, t_o} = \omega_m P_{mo}^{t_o}$
$P_d^{\prime} = [P_{do}^{\prime\, I}, P_{do}^{\prime\, II}, \ldots, P_{do}^{\prime\, t_o}, \ldots, P_{do}^{\prime\, h}], \quad P_m^{\prime} = [P_{mo}^{\prime\, I}, P_{mo}^{\prime\, II}, \ldots, P_{mo}^{\prime\, t_o}, \ldots, P_{mo}^{\prime\, h}], \quad t_o = I, II, \ldots, h$ (12)
$m_1 = \mathop{\oplus}\limits_{t_o=I}^{n} \omega_d P_{do}^{t_o}, \quad m_2 = \mathop{\oplus}\limits_{t_o=I}^{n} \omega_m P_{mo}^{t_o}$ (13)
$m_0 = \begin{cases} m_1, & \omega_d > \omega_m \\ m_2, & \omega_m > \omega_d \end{cases}$ (14)
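The following sketch illustrates the weight calculation and evidence correction in Formulas (11)–(14). The use of a binary Dempster combination to merge the corrected single-phase values into $m_1$ and $m_2$, and the numeric inputs, are assumptions made for illustration.

```python
# Similarity-based weights, weight-corrected evidence, per-source masses, and the
# higher-weight reference evidence m_0.
import math

def dempster_combine(masses_normal):
    """Binary Dempster combination on {Normal, Abnormal}: inputs are per-phase
    probabilities of 'Normal'; the complement is the mass of 'Abnormal'."""
    a = math.prod(masses_normal)
    b = math.prod(1.0 - m for m in masses_normal)
    return a / (a + b)

def weights(S_d, S_m):
    """Formula (11): normalised similarity weights for the SCS and MUCS evidence."""
    return S_d / (S_d + S_m), S_m / (S_d + S_m)

def correct_evidence(P_j, w_j):
    """Formula (12): scale every single-phase evidence value by its source weight."""
    return [w_j * p for p in P_j]

def reference_mass(m_1, m_2, w_d, w_m):
    """Formula (14): the corrected evidence with the larger weight serves as m_0."""
    return m_1 if w_d > w_m else m_2

# Illustrative values only (not taken from the experiments):
w_d, w_m = weights(S_d=0.90, S_m=0.98)
P_d_corr = correct_evidence([0.20, 0.30], w_d)
P_m_corr = correct_evidence([0.80, 0.90], w_m)
m_1 = dempster_combine(P_d_corr)     # Formula (13), assuming DS combination over phases
m_2 = dempster_combine(P_m_corr)
m_0 = reference_mass(m_1, m_2, w_d, w_m)
```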
After the above treatment, the probability of the shipyard production state is calculated.
$E(A) = (m_1 \oplus m_2 \oplus \cdots \oplus m_n)(A) = \frac{m_1 m_2 \cdots m_n}{m_1 m_2 \cdots m_n + (1 - m_1)(1 - m_2) \cdots (1 - m_n)}$ (15)
$E(B) = 1 - E(A)$
$E(A)$ is the probability that the shipyard is in a normal production state. It takes values in the interval [0, 1]; the closer it is to 1, the more likely the shipyard is in a normal production state. The shipyard production state is determined by Formula (16).
$\text{Production state of the shipyard} = \begin{cases} \mathrm{Normal}, & E(A) \ge 0.5 \\ \mathrm{Abnormal}, & E(A) < 0.5 \end{cases}$ (16)
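As a short sketch of Formulas (15) and (16) on the binary frame, the fused normal-production probability can be computed and thresholded as below; the input mass values are illustrative.

```python
# Binary fusion of corrected evidence and the final production state decision.
import math

def fuse_normal_probability(masses_normal):
    """Formula (15): E(A) for n pieces of binary evidence; E(B) = 1 - E(A)."""
    agree = math.prod(masses_normal)
    disagree = math.prod(1.0 - m for m in masses_normal)
    return agree / (agree + disagree)

def production_state(E_A):
    """Formula (16): threshold the fused normal-production probability at 0.5."""
    return "Normal" if E_A >= 0.5 else "Abnormal"

E_A = fuse_normal_probability([0.62, 0.55])
print(round(E_A, 3), production_state(E_A))   # 0.666 Normal
```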

3.4. Accuracy Evaluation

In order to validate the proposed method, the accuracy, precision, false alarm (FA) rate, recall, missing alarm (MA) rate, and F1-score are assessed as follows:
$\mathrm{Accuracy} = (TA + TN)/(TA + TN + FA + FN)$ (17)
$\mathrm{Precision} = TA/(TA + FN)$ (18)
$FA = FN/(TA + FN)$ (19)
$\mathrm{Recall} = TA/(TA + FA)$ (20)
$MA = FA/(TA + FA)$ (21)
$\mathrm{F1\text{-}score} = 2 \times \mathrm{Precision} \times \mathrm{Recall}/(\mathrm{Precision} + \mathrm{Recall})$ (22)
where TA is the number of abnormal shipyards correctly recognized and TN is the number of normal shipyards correctly recognized. FA is the number of abnormal shipyards whose production states are falsely recognized as normal, and FN is the number of normal shipyards whose production states are falsely recognized as abnormal.
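A brief sketch of Formulas (17)–(22) is given below. The example counts are back-calculated to be consistent with the first row of Table 8 and are therefore an assumption; the raw confusion counts are not reported here.

```python
# Evaluation metrics with the paper's counting convention: TA/TN are correctly
# recognised abnormal/normal shipyards, FA abnormal shipyards labelled normal,
# FN normal shipyards labelled abnormal.
def evaluation_metrics(TA, TN, FA, FN):
    accuracy = (TA + TN) / (TA + TN + FA + FN)           # Formula (17)
    precision = TA / (TA + FN)                           # Formula (18)
    false_alarm = FN / (TA + FN)                         # Formula (19)
    recall = TA / (TA + FA)                              # Formula (20)
    missing_alarm = FA / (TA + FA)                       # Formula (21)
    f1 = 2 * precision * recall / (precision + recall)   # Formula (22)
    return accuracy, precision, false_alarm, recall, missing_alarm, f1

# Hypothetical counts consistent with 99.11% accuracy, 100% precision, 94.12% recall:
print(evaluation_metrics(TA=16, TN=95, FA=1, FN=0))
```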

4. Results

4.1. Presentation of Results

Taking the Inception v3 network as an example, the state evidence is calculated from the optical data, and the improved DS evidence fusion method is used to obtain the shipyard production state. Correctly monitored examples are shown in Table 6.
The proposed method can effectively monitor the shipyard production state. The shipyard production state evidence, $P_{do}$ and $P_{mo}$, is generally consistent with the acquired RS images, as shown for the normal shipyard (i) and the abnormal shipyard (ii).
The improved DS evidence fusion can effectively avoid the uncertainty associated with conflicting evidence. In shipyards (iii) and (iv), the evidence from the two core sites conflicts, with correlation $R_{dm} < 0$. The similarity metric is calculated, the conflicting evidence is corrected, and the fusion result is corrected by iteration, so that these shipyards can be identified as being in the normal production state.
Taking shipyards (iii) and (vi) in Table 6 as examples, we compare the traditional DS evidence fusion, the voting fusion, and the Yager [78] fusion results with the improved DS evidence fusion method. The comparison results are shown in Table 7.
The improved DS evidence fusion method effectively addresses the issue of conflicting evidence from different core sites, which remains unresolved by traditional DS evidence fusion, thereby enabling the accurate identification of the shipyard production state. As demonstrated by the experimental data for Shipyard (iii) and Shipyard (vi) in Table 7, the traditional DS fusion obtains results of 0.47 and 0.36, respectively, both below 0.5, leading to erroneous classification of an abnormal production state. The voting method fails in this case. The fusion results obtained by the Yager method are 0.13 and 0.14, respectively, both below 0.5, which also leads to erroneous classification of an abnormal production state. In contrast, the improved DS evidence fusion method calculates the source credibility to determine weights (as per Formula (8)) and revises the original evidence values (as per Formulas (11)–(13)) to form new evidence. Consequently, the improved fusion results are 0.58 and 0.88, respectively, accurately identifying the normal production state of the shipyards.
The monitoring results of the shipyard’s production state show that the proposed evidence is consistent with the actual situation and can objectively reflect the production state of the core sites in the multi-temporal data. The improved DS evidence fusion method can better deal with the problem of evidence conflict from the core sites.

4.2. Evaluation Results

The method proposed in this paper can effectively realize the semi-annual monitoring of a shipyard production state, and the evaluation results are shown in Table 8.
Through the proposed framework of CNNs and improved DS evidence fusion, the production state of shipyards can be monitored well. The Inception v3 model combined with the improved DS evidence fusion method performs well in the Bohai Rim research area, although the result is slightly worse than that from the Yangtze River Delta study area. The reason is that GF-1 optical RS data are used in the Bohai Rim rather than the ZY-3 and Google Earth data involved in training, and the differences in resolution and imaging quality lead to decreased accuracy.
In addition, using this framework, the traditional DS evidence fusion method, voting method and Yager fusion method are compared with the method proposed in this paper. The evaluation results are shown in Table 9.
Compared with the traditional DS evidence fusion, the improved DS evidence fusion improves the precision and F1-score while maintaining high recall. In the Yangtze River Delta study area, for the selected two CNNs, the accuracy of the proposed method is improved by 0.9% and 2.68%, respectively, compared with the traditional DS evidence fusion method.
For the voting method and the Yager fusion method, although the high recall is also maintained, the overall accuracy, precision and F1-score are significantly reduced.

4.3. Discussion

The proposed method is limited in recognizing a small/micro shipyard production state. The partial failure results of the combination of Inception v3 and the improved DS evidence fusion method are shown in Table 10.
The error reasons of the production state monitoring can be divided into two categories.
The first reason is the impact of small and micro shipyard features. Shipyard (vii) is located on mudflats, and its core sites have similar texture structures under different production states, which causes the CNNs to miss the detection. Shipyard (viii) is in an abnormal production state; however, the gantry cranes and their shadows around the two core sites disturb the optical evidence extraction, which leads to the shipyard in an abnormal state being missed.
The second reason is the insufficient frequency of optical data acquisition and the problem of image quality. The production state monitoring of the experimental shipyard is semi-annual monitoring. The optical data acquisition frequency used in the monitoring is insufficient, and in shipyard (ix), the optical image is affected by the cloud layer, which ultimately leads to shipyard (ix) being mistakenly identified.
The analysis of the monitoring results shows that the method in this paper has high accuracy in the monitoring of the shipyard production state, but there are still some limitations, which need to be further improved in the future. The details are as follows:
(1)
The method in this paper needs to identify the shipyard and extract the core sites before monitoring, which involves a large workload. There are few studies on the extraction of shipyard scenes and the automatic extraction of their internal core sites. In future work, the comprehensive identification of the shipyard scene and its state attributes can be carried out.
(2)
The method in this paper has a weak detection ability for the quarterly production state of small/micro shipyards. Misdetection can be reduced in two ways. Firstly, the training samples of core sites of small/micro enterprises can be increased to improve the reliability of the optical evidence. Secondly, optical images from more time phases can be used to monitor the production state of shipyards, further improving the monitoring accuracy.

5. Conclusions

Under complex and volatile international circumstances, there is significant fluctuation in the demand for new shipbuilding. The utilization of multi-source satellite RS images enables the dynamic monitoring of the shipyard production state from both spatial and temporal perspectives, thereby enhancing the monitoring efficiency and providing comprehensive and timely insights into changes in the shipyard production status. It is of immense significance for industrial development, social stability, resource utilization, and ecological environment restoration.
This paper analyzes the characteristics of shipyards on HRS images and proposes a shipyard production state monitoring framework based on CNNs and improved DS evidence theory.
The innovations of this paper are as follows: (1) It proposes the use of multi-source and multi-temporal satellite RS images to monitor the production state of shipyards, which reflects the potential of HRS data in monitoring the production state of shipyards, and also provides a new research thought perspective for other industrial production state monitoring. (2) A solution strategy is presented that employs a constructed correlation metric and similarity metric to effectively enhance the accuracy of shipyard production state monitoring results through high-conflict evidence fusion.
The proposed monitoring framework for the shipyard production state is implemented in the Yangtze River Delta and Bohai Sea regions. The findings demonstrate that compared to the conventional DS evidence fusion method, the semi-annual monitoring of the shipyard production state can be conducted with greater accuracy using our approach. Furthermore, our proposed method exhibits excellent performance across various types of HRS images.
The article proposes a method that combines the advantages of HRS images and deep learning techniques, which can provide new ideas for the detection and identification of industrial sites and related research in remote sensing. This will broaden new perspectives and application directions for satellite remote sensing applications.

Author Contributions

Conceptualization, W.Q., Y.S., X.Y. and Y.T.; methodology, W.Q., Y.S., X.Y. and Y.T.; validation, W.Q.; formal analysis, W.Q.; data curation, W.Q., X.Y. and Y.T.; investigation, W.Q., X.Y. and Y.T.; software, W.Q., Y.S., X.Y. and Y.T.; funding acquisition, Y.S. and H.Z.; supervision, Y.S. and H.Z.; resources, H.Z.; writing—review and editing, W.Q. and Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Automated Identifying of Environment Changes Using Satellite Time-Series, Dragon 5 Cooperation 2020–2024, under Grant 57971, in part by the Jiangsu Provincial Marine Science and Technology Innovation Project, Research and Application Demonstration of Remote Sensing Monitoring Method for Jiangsu Coastal Zone Resources with Multi-source Remote Sensing Data, under Grant JSZRHYKJ202207, in part by the Integration and Application Demonstration in the Marine Field under Grant 2020010004, in part by the national innovation and entrepreneurship training program for college students, in part by the Fundamental Research Funds for the Central Universities, China University of Geosciences (Wuhan).

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the commercial nature of the data.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Jia, Z. Influencing Factors and Optimization of Ship Energy Efficiency under the Background of Climate Change. IOP Conf. Ser. Earth Environ. Sci. 2021, 647, 012178. [Google Scholar] [CrossRef]
  2. Liu, J.; Wang, Y.; Yan, S.; Zhao, F.; Li, Y.; Dang, L.; Liu, X.; Shao, Y.; Peng, B. Underground Coal Fire Detection and Monitoring Based on Landsat-8 and Sentinel-1 Data Sets in Miquan Fire Area, XinJiang. Remote Sens. 2021, 13, 1141. [Google Scholar] [CrossRef]
  3. Yang, Z.; Wei, J.; Deng, J.; Gao, Y.; Zhao, S.; He, Z. Mapping Outburst Floods Using a Collaborative Learning Method Based on Temporally Dense Optical and SAR Data: A Case Study with the Baige Landslide Dam on the Jinsha River, Tibet. Remote Sens. 2021, 13, 2205. [Google Scholar] [CrossRef]
  4. Absalon, D.; Matysik, M.; Woźnica, A.; Janczewska, N. Detection of Changes in the Hydrobiological Parameters of the Oder River during the Ecological Disaster in July 2022 Based on Multi-Parameter Probe Tests and Remote Sensing Methods. Ecol. Indic. 2023, 148, 110103. [Google Scholar] [CrossRef]
  5. Qian, L.; Chen, S.; Jiang, H.; Dai, X.; Jia, K. Quantitative Monitoring of Sugarcane Typhoon Disaster Based on Multi-Source Remote Sensing Data. In Proceedings of the 2022 3rd International Conference on Geology, Mapping and Remote Sensing (ICGMRS), Zhoushan, China, 22–24 April 2022; pp. 926–930. [Google Scholar]
  6. Zhang, W.; Dong, Y. Research on Flood Remote Sensing Monitoring Based on Multi-Source Remote Sensing Data. In Proceedings of the 2022 3rd International Conference on Geology, Mapping and Remote Sensing (ICGMRS), Zhoushan, China, 22–24 April 2022; pp. 646–649. [Google Scholar]
  7. Lin, Q.; Ci, T.; Wang, L.; Mondal, S.K.; Yin, H.; Wang, Y. Transfer Learning for Improving Seismic Building Damage Assessment. Remote Sens. 2022, 14, 201. [Google Scholar] [CrossRef]
  8. Som-ard, J.; Atzberger, C.; Izquierdo-Verdiguier, E.; Vuolo, F.; Immitzer, M. Remote Sensing Applications in Sugarcane Cultivation: A Review. Remote Sens. 2021, 13, 4040. [Google Scholar] [CrossRef]
  9. Das, K.; Twarakavi, N.; Khiripet, N.; Chattanrassamee, P.; Kijkullert, C. A Machine Learning Framework for Mapping Soil Nutrients with Multi-Source Data Fusion. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 3705–3708. [Google Scholar]
  10. Jin, X.; Wan, J.; Hu, W.; Song, Y.; Lu, B. Retrieval of Green Tide Concentration and Interannual Variation Analysis in Yellow Sea Based on Multi-Source Remote Sensing Monitoring. In Proceedings of the Global Oceans 2020: Singapore—U.S. Gulf Coast, Biloxi, MS, USA, 5–30 October 2020; pp. 1–5. [Google Scholar]
  11. Behkamal, B.; Entezami, A.; De Michele, C.; Arslan, A.N. Elimination of Thermal Effects from Limited Structural Displacements Based on Remote Sensing by Machine Learning Techniques. Remote Sens. 2023, 15, 3095. [Google Scholar] [CrossRef]
  12. Blondeau-Patissier, D.; Schroeder, T.; Suresh, G.; Li, Z.; Diakogiannis, F.I.; Irving, P.; Witte, C.; Steven, A.D.L. Detection of Marine Oil-like Features in Sentinel-1 SAR Images by Supplementary Use of Deep Learning and Empirical Methods: Performance Assessment for the Great Barrier Reef Marine Park. Mar. Pollut. Bull. 2023, 188, 114598. [Google Scholar] [CrossRef] [PubMed]
  13. Bui, D.H.; Mucsi, L. From Land Cover Map to Land Use Map: A Combined Pixel-Based and Object-Based Approach Using Multi-Temporal Landsat Data, a Random Forest Classifier, and Decision Rules. Remote Sens. 2021, 13, 1700. [Google Scholar] [CrossRef]
  14. Grybas, H.; Congalton, R.G. A Comparison of Multi-Temporal RGB and Multispectral UAS Imagery for Tree Species Classification in Heterogeneous New Hampshire Forests. Remote Sens. 2021, 13, 2631. [Google Scholar] [CrossRef]
  15. Abir, F.A.; Saha, R. Assessment of Land Surface Temperature and Land Cover Variability during Winter: A Spatio-Temporal Analysis of Pabna Municipality in Bangladesh. Environ. Chall. 2021, 4, 100167. [Google Scholar] [CrossRef]
  16. Koko, F.; Yue, W.; Abubakar, G.; Hamed, R.; Alabsi, A. Analyzing Urban Growth and Land Cover Change Scenario in Lagos, Nigeria Using Multi-Temporal Remote Sensing Data and GIS to Mitigate Flooding. Geomat. Nat. Hazards Risk 2021, 12, 631–652. [Google Scholar] [CrossRef]
  17. Ru, L.; Du, B.; Wu, C. Multi-Temporal Scene Classification and Scene Change Detection with Correlation Based Fusion. IEEE Trans. Image Process. 2021, 30, 1382–1394. [Google Scholar] [CrossRef] [PubMed]
  18. Xu, J.; Yang, J.; Xiong, X.; Li, H.; Huang, J.; Ting, K.C.; Ying, Y.; Lin, T. Towards Interpreting Multi-Temporal Deep Learning Models in Crop Mapping. Remote Sens. Environ. 2021, 264, 112599. [Google Scholar] [CrossRef]
  19. Zhu, Y.; Geiß, C.; So, E. Image Super-Resolution with Dense-Sampling Residual Channel-Spatial Attention Networks for Multi-Temporal Remote Sensing Image Classification. Int. J. Appl. Earth Obs. Geoinf. 2021, 104, 102543. [Google Scholar] [CrossRef]
  20. Tasci, B.; Acharya, M.R.; Baygin, M.; Dogan, S.; Tuncer, T.; Belhaouari, S.B. InCR: Inception and Concatenation Residual Block-Based Deep Learning Network for Damaged Building Detection Using Remote Sensing Images. Int. J. Appl. Earth Obs. Geoinf. 2023, 123, 103483. [Google Scholar] [CrossRef]
  21. Kalita, I.; Roy, M. Inception Time DCNN for Land Cover Classification by Analyzing Multi-Temporal Remotely Sensed Images. In Proceedings of the IGARSS 2022–2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 5736–5739. [Google Scholar]
  22. Aeberli, A.; Johansen, K.; Robson, A.; Lamb, D.W.; Phinn, S. Detection of Banana Plants Using Multi-Temporal Multispectral UAV Imagery. Remote Sens. 2021, 13, 2123. [Google Scholar] [CrossRef]
  23. Gill, J.; Faisal, K.; Shaker, A.; Yan, W.Y. Detection of Waste Dumping Locations in Landfill Using Multi-Temporal Landsat Thermal Images. Waste Manag. Res. 2019, 37, 386–393. [Google Scholar] [CrossRef]
  24. Wei, X.; Fu, X.; Yun, Y.; Lv, X. Multiscale and Multitemporal Road Detection from High Resolution SAR Images Using Attention Mechanism. Remote Sens. 2021, 13, 3149. [Google Scholar] [CrossRef]
  25. Xia, Z.G.; Motagh, M.; Li, T.; Roessner, S. The June 2020 Aniangzhai landslide in Sichuan Province, Southwest China: Slope instability analysis from radar and optical satellite remote sensing data. Landslides 2022, 19, 313–329. [Google Scholar] [CrossRef]
  26. Refice, A.; D’Addabbo, A.; Lovergine, F.P.; Bovenga, F.; Nutricato, R.; Nitti, D.O. Improving Flood Monitoring Through Advanced Modeling of Sentinel-1 Multi-Temporal Stacks. In Proceedings of the IGARSS 2022–2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 5881–5884. [Google Scholar]
  27. Mehravar, S.; Razavi-Termeh, S.V.; Moghimi, A.; Ranjgar, B.; Foroughnia, F.; Amani, M. Flood Susceptibility Mapping Using Multi-Temporal SAR Imagery and Novel Integration of Nature-Inspired Algorithms into Support Vector Regression. J. Hydrol. 2023, 617, 129100. [Google Scholar] [CrossRef]
  28. Zhao, Y.; Qu, Z.; Zhang, Y.; Ao, Y.; Han, L.; Kang, S.; Sun, Y. Effects of Human Activity Intensity on Habitat Quality Based on Nighttime Light Remote Sensing: A Case Study of Northern Shaanxi, China. Sci. Total Environ. 2022, 851, 158037. [Google Scholar] [CrossRef]
  29. Price, N.; Atkinson, P.M. Global GDP Prediction with Night-Lights and Transfer Learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 7128–7138. [Google Scholar] [CrossRef]
  30. Li, J.; Sun, Q.; Zhang, P.; Sun, D.; Wen, L.; Li, X. A Study of Auxiliary Monitoring in Iron and Steel Plant Based on Multi-Temporal Thermal Infrared Remote Sensing. Remote Sens. Nat. Resour. 2019, 31, 220–228. [Google Scholar]
  31. Sun, S.; Li, L.; Zhao, W.; Wang, L.; Qiu, Y.; Jiang, L.; Zhang, L. Industrial Pollution Emissions Based on Thermal Anomaly Remote Sensing Monitoring: A Case Study of Southern Hebei Urban Agglomerations, China. China Environ. Sci. 2019, 39, 3120–3129. [Google Scholar]
  32. Tao, J.; Fan, M.; Gu, J.; Chen, L. Satellite Observations of Return-to-Work over China during the Period of COVID-19. Natl. Remote Sens. Bull. 2020, 24, 824–836. [Google Scholar] [CrossRef]
  33. Li, J.; Zou, Q.; Luo, S.; Huang, Y.; Wang, J.; Lu, Y. The Temporal and Spatial Analysis of County Poverty Prevention in Ningxia Based on Night Light Remote Sensing Data. In Proceedings of the 2022 4th International Academic Exchange Conference on Science and Technology Innovation (IAECST), Guangzhou, China, 9–11 December 2022; pp. 699–706. [Google Scholar]
  34. He, T.; Song, H.; Chen, W. Recognizing the Transformation Characteristics of Resource-Based Cities Using Night-Time Light Remote Sensing Data: Evidence from 126 Cities in China. Resour. Policy 2023, 85, 104013. [Google Scholar] [CrossRef]
  35. Guo, N.; Jiang, M.; Gao, L.; Tang, Y.; Han, J.; Chen, X. CRABR-Net: A Contextual Relational Attention-Based Recognition Network for Remote Sensing Scene Objective. Sensors 2023, 23, 7514. [Google Scholar] [CrossRef]
  36. Li, R.; Wei, P.; Liu, X.; Li, C.; Ni, J.; Zhao, W.; Zhao, L.; Hou, K. Cutting Tool Wear State Recognition Based on a Channel-Space Attention Mechanism. J. Manuf. Syst. 2023, 69, 135–149. [Google Scholar] [CrossRef]
  37. Gui, Q.; Wang, G.; Wang, L.; Cheng, J.; Fang, H. Road Surface State Recognition Using Deep Convolution Network on the Low-Power-Consumption Embedded Device. Microprocess. Microsyst. 2023, 96, 104740. [Google Scholar] [CrossRef]
  38. Ruichao, S.; Xiaodong, C.; Liming, L.; Qianwen, Z. Fastener State Detection Based on Foreground Segmentation. Railw. Stand. Des. 2021, 65, 28–34. [Google Scholar]
  39. Ciocca, G.; Micali, G.; Napoletano, P. State Recognition of Food Images Using Deep Features. IEEE Access 2020, 8, 32003–32017. [Google Scholar] [CrossRef]
  40. Tu, Y.; Song, Y.; Li, B.; Zhu, Q.; Cui, S.; Zhu, H. A Deformable Spatial Attention Mechanism-Based Method and a Benchmark for Dock Detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 3730–3741. [Google Scholar] [CrossRef]
  41. Manzoor, K.; Majeed, F.; Siddique, A.; Meraj, T.; Rauf, H.T.; El-Meligy, M.A.; Sharaf, M.; Elgawad, A.E.E.A. A Lightweight Approach for Skin Lesion Detection Through Optimal Features Fusion. Comput. Mater. Contin. 2022, 70, 1617–1630. [Google Scholar] [CrossRef]
  42. Eldem, H.; Ülker, E.; Işıklı, O.Y. Alexnet Architecture Variations with Transfer Learning for Classification of Wound Images. Eng. Sci. Technol. Int. J. 2023, 45, 101490. [Google Scholar] [CrossRef]
  43. Win Lwin, L.Y.; Htwe, A.N. Image Classification for Rice Leaf Disease Using AlexNet Model. In Proceedings of the 2023 IEEE Conference on Computer Applications (ICCA), Yangon, Myanmar, 27–28 February 2023; pp. 124–129. [Google Scholar]
  44. Deng, P.; Huang, H.; Xu, K. A Deep Neural Network Combined with Context Features for Remote Sensing Scene Classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  45. Yang, Y.; Li, Z.; Zhang, J. Fire Detection of Satellite Remote Sensing Images Based on VGG Ensemble Classifier. In Proceedings of the 2021 IEEE Conference on Telecommunications, Optics and Computer Science (TOCS), Shenyang, China, 10–11 December 2021; pp. 31–36. [Google Scholar]
  46. Andreasen, D.; Edmund, J.M.; Zografos, V.; Menze, B.H.; Van Leemput, K. Computed Tomography Synthesis from Magnetic Resonance Images in the Pelvis Using Multiple Random Forests and Auto-Context Features. In Proceedings of the SPIE Medical Imaging, San Diego, CA, USA, 27 February–3 March 2016; Volume 9784, p. 978417. [Google Scholar]
  47. Liu, X.; Zhou, Y.; Zhao, J.; Yao, R.; Liu, B.; Ma, D.; Zheng, Y. Multiobjective ResNet Pruning by Means of EMOAs for Remote Sensing Scene Classification. Neurocomputing 2020, 381, 298–305. [Google Scholar] [CrossRef]
  48. Yu, D.; Guo, H.; Xu, Q.; Lu, J.; Zhao, C.; Lin, Y. Hierarchical Attention and Bilinear Fusion for Remote Sensing Image Scene Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 6372–6383. [Google Scholar] [CrossRef]
  49. Zhang, Y.; Chan, S.; Park, V.Y.; Chang, K.-T.; Mehta, S.; Kim, M.J.; Combs, F.J.; Chang, P.; Chow, D.; Parajuli, R.; et al. Automatic Detection and Segmentation of Breast Cancer on MRI Using Mask R-CNN Trained on Non–Fat-Sat Images and Tested on Fat-Sat Images. Acad. Radiol. 2022, 29, S135–S144. [Google Scholar] [CrossRef]
  50. Natya, S.; Manu, C.; Anand, A. Deep Transfer Learning with RESNET for Remote Sensing Scene Classification. In Proceedings of the 2022 IEEE International Conference on Data Science and Information System (ICDSIS), Hassan, India, 29–30 July 2022; pp. 1–6. [Google Scholar]
  51. Cheng, L.; Wang, L.; Feng, R.; Tian, S. A Deep Learning-Based Framework for Urban Active Population Mapping from Remote Sensing Imagery. In Proceedings of the IGARSS 2022–2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 7799–7802. [Google Scholar]
  52. Alanazi, S.A. Melanoma Identification Through X-Ray Modality Using Inception-v3 Based Convolutional Neural Network. Comput. Mater. Contin. 2022, 72, 37–55. [Google Scholar] [CrossRef]
  53. Yang, M.; Wang, H.; Hu, K.; Yin, G.; Wei, Z. IA-Net: An Inception–Attention-Module-Based Network for Classifying Underwater Images from Others. IEEE J. Ocean. Eng. 2022, 47, 704–717. [Google Scholar] [CrossRef]
  54. Aslan, M.F.; Sabanci, K.; Durdu, A.; Unlersen, M.F. COVID-19 Diagnosis Using State-of-the-Art CNN Architecture Features and Bayesian Optimization. Comput. Biol. Med. 2022, 142, 105244. [Google Scholar] [CrossRef] [PubMed]
  55. Wu, B.; Wang, C.; Huang, W.; Huang, D.; Peng, H. Recognition of Student Classroom Behaviors Based on Moving Target Detection. Trait. Du Signal 2021, 38, 215–220. [Google Scholar] [CrossRef]
  56. Guédon, A.C.; Meij, S.E.; Osman, K.N.; Kloosterman, H.A.; van Stralen, K.J.; Grimbergen, M.C.; Eijsbouts, Q.A.; van den Dobbelsteen, J.J.; Twinanda, A.P. Deep Learning for Surgical Phase Recognition Using Endoscopic Videos. Surg. Endosc. 2020, 35, 6150–6157. [Google Scholar] [CrossRef]
  57. Choudhary, G.; Sethi, D. From Conventional Approach to Machine Learning and Deep Learning Approach: An Experimental and Comprehensive Review of Image Fusion Techniques. Arch. Comput. Methods Eng. 2023, 30, 1267–1304. [Google Scholar] [CrossRef]
  58. Benediktsson, J.A.; Kanellopoulos, I. Classification of Multisource and Hyperspectral Data Based on Decision Fusion. IEEE Trans. Geosci. Remote Sens. 1999, 37, 1367–1377. [Google Scholar] [CrossRef]
  59. Gunatilaka, A.H.; Baertlein, B.A. Feature-Level and Decision-Level Fusion of Noncoincidently Sampled Sensors for Land Mine Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 577–589. [Google Scholar] [CrossRef]
  60. Jimenez, L.O.; Morales-Morell, A.; Creus, A. Classification of Hyperdimensional Data Based on Feature and Decision Fusion Approaches Using Projection Pursuit, Majority Voting, and Neural Networks. IEEE Trans. Geosci. Remote Sens. 1999, 37, 1360–1366. [Google Scholar] [CrossRef]
  61. Jeon, B.; Landgrebe, D.A. Decision Fusion Approach for Multitemporal Classification. IEEE Trans. Geosci. Remote Sens. 1999, 37, 1227–1233. [Google Scholar] [CrossRef]
  62. Wang, A.; Jiang, J.; Zhang, H. Multi-Sensor Image Decision Level Fusion Detection Algorithm Based on D-S Evidence Theory. In Proceedings of the 2014 Fourth International Conference on Instrumentation and Measurement, Computer, Communication and Control, Harbin, China, 18–20 September 2014; pp. 620–623. [Google Scholar]
  63. Petrakos, M.; Benediktsson, J.A.; Kanellopoulos, I. The Effect of Classifier Agreement on the Accuracy of the Combined Classifier in Decision Level Fusion. IEEE Trans. Geosci. Remote Sens. 2001, 39, 2539–2546. [Google Scholar] [CrossRef]
  64. Prabhakar, S.; Jain, A.K. Decision-Level Fusion in Fingerprint Verification. Pattern Recognit. 2002, 35, 861–874. [Google Scholar] [CrossRef]
  65. Zhao, Y.; Yin, Y.; Fu, D. Decision-Level Fusion of Infrared and Visible Images for Face Recognition. In Proceedings of the 2008 Chinese Control and Decision Conference, Yantai, China, 2–4 July 2008; pp. 2411–2414. [Google Scholar]
  66. Seal, A.; Bhattacharjee, D.; Nasipuri, M.; Gonzalo-Martin, C.; Menasalvas, E. À-Trous Wavelet Transform-Based Hybrid Image Fusion for Face Recognition Using Region Classifiers. Expert Syst. 2018, 35, e12307. [Google Scholar] [CrossRef]
  67. Yager, R.; Liu, L. Classic Works of the Dempster-Shafer Theory of Belief Functions; Springer: Berlin/Heidelberg, Germany, 2008; Volume 219, ISBN 978-3-540-25381-5. [Google Scholar]
  68. Rota, G.-C. 222 pp Deterministic and Stochastic Optimal Control, W.H. Fleming, R.W. Rishel, Springer (1975). Adv. Math. 1977, 24, 341. [Google Scholar] [CrossRef]
  69. Hermessi, H.; Mourali, O.; Zagrouba, E. Multimodal Medical Image Fusion Review: Theoretical Background and Recent Advances. Signal Process. 2021, 183, 108036. [Google Scholar] [CrossRef]
  70. Ding, B.; Wen, G.; Huang, X.; Ma, C.; Yang, X. Target Recognition in Synthetic Aperture Radar Images via Matching of Attributed Scattering Centers. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3334–3347. [Google Scholar] [CrossRef]
  71. Feng, H.C. Land-Cover Classification of High-Resolution Remote Sensing Image Based on Multi-Classifier Fusion and the Improved Dempster-Shafer Evidence Theory. J. Appl. Remote Sens. 2021, 15, 014506. [Google Scholar] [CrossRef]
  72. Haouas, F.; Solaiman, B.; Ben Dhiaf, Z.; Hamouda, A.; Bsaies, K. Multi-Temporal Image Change Mining Based on Evidential Conflict Reasoning. ISPRS J. Photogramm. Remote Sens. 2019, 151, 59–75. [Google Scholar] [CrossRef]
  73. Wang, Z.; Fang, Z.; Wu, Y.; Liang, J.; Song, X. Multi-Source Evidence Data Fusion Approach to Detect Daily Distribution and Coverage of Ulva Prolifera in the Yellow Sea, China. IEEE Access 2019, 7, 115214–115228. [Google Scholar] [CrossRef]
  74. Zhou, Y.; Song, Y.; Cui, S.; Zhu, H.; Sun, J.; Qin, W. A Novel Change Detection Framework in Urban Area Using Multilevel Matching Feature and Automatic Sample Extraction Strategy. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 3967–3987. [Google Scholar] [CrossRef]
  75. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception architecture for computer vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
  76. He, K.M.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  77. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F.F. ImageNet: A large-scale hierarchical image database. In Proceedings of the CVPR: 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; Volumes 1–4; pp. 248–255. [Google Scholar]
  78. Yager, R.R. On the Dempster-Shafer Framework and New Combination Rules. Inf. Sci. 1987, 41, 93–137. [Google Scholar] [CrossRef]
Figure 1. Reference of the Yangtze River Delta. (a) The 1st quarter; (b) The 2nd quarter; (c) The 1st half-year.
Figure 2. Reference of the Bohai Rim. (a) The 1st quarter; (b) The 2nd quarter; (c) The 1st half-year.
Figure 3. Core sites in the shipyard scene.
Figure 4. The proposed method flowchart.
Figure 5. Evidence fusion flow chart, keyed as follows: $P_d$: the SCS evidence composed of the single-phase observations $P_{do}^{t_o}$ as in Formula (3). $P_m$: the MUCS evidence composed of the single-phase observations $P_{mo}^{t_o}$ as in Formula (4). $\omega_j$: the weight calculated by Formula (11). $P_{do}^{\prime\, t_o}$: the single-phase SCS evidence after weight correction. $P_{mo}^{\prime\, t_o}$: the single-phase MUCS evidence after weight correction. $m_1, m_2$: the corrected SCS and MUCS evidence. $m_0$: the corrected evidence with the greater weight. $E_i$: the ith evidence fusion result.
Table 1. Selected satellite RS data parameters.
Satellite | Sensors | Level | Revisit Cycle/Day | Spatial Resolution/m
ZY-3 | Optical | True Color Image Products | 5 | 2.1
GF-1 | Optical | Level-1A | 4 | 2
Google Earth | Optical | True Color Image Products | / | 2
Table 2. Selected satellite data used in the experimental area #.
Area | Data | Time/Month | Time/Quarter
Yangtze River Delta | Optical (ZY-3) | Jan–Jun | I, II
Bohai Rim | Optical (GF-1) | Jan–Jun | I, II
# ⚪ indicates data are available.
Table 3. Images of core sites in different production states.
Core Site | The Normal Production State (ZY-3 Optical Image) | The Abnormal Production State (ZY-3 Optical Image)
SCS (dock/berth) | [optical images] | [optical images]
MUCS (material storage area/assembly area) | [optical images] | [optical images]
Table 4. Example of training sample datasets.
Core Sites | Production State | Amount | Sample Example
SCS | Normal | 1548 | [image]
SCS | Abnormal | 1548 | [image]
MUCS | Normal | 444 | [image]
MUCS | Abnormal | 444 | [image]
Table 5. Training parameters.
Sample Datasets Ratio (Training:Validation) | Training Optimizer | Momentum Factor | Batch Size | Epoch | Initial Learning Rate
8:2 | SGDM | 0.9 | 64 | 100 | 0.001
Table 6. Positive test results display.
Shipyard | Images | $P_{do}^{t_o=I}$ | $P_{do}^{t_o=II}$ | $P_{mo}^{t_o=I}$ | $P_{mo}^{t_o=II}$ | True State
i | [image] | 0.99 | 0.99 | 0.99 | 0.99 | Normal
ii | [image] | 0.01 | 0.02 | 0.01 | 0.01 | Abnormal
iii | [image] | 0.14 | 0.02 | 0.90 | 0.92 | Normal
iv | [image] | 0.99 | 0.99 | 0.01 | 0.06 | Normal
v | [image] | 0.75 | 0.04 | 0.99 | 0.93 | Normal
vi | [image] | 0.04 | 0.16 | 0.82 | 0.85 | Normal
Table 7. Comparison of fusion results.
Shipyard | $P_{do}^{t_o=I}$ | $P_{do}^{t_o=II}$ | $P_{mo}^{t_o=I}$ | $P_{mo}^{t_o=II}$ | Voting | Yager | DS Evidence Fusion | Improved DS Evidence Fusion | True State
iii | 0.14 | 0.02 | 0.90 | 0.92 | -- | 0.13 | 0.47 | 0.58 | Normal
vi | 0.04 | 0.16 | 0.82 | 0.85 | -- | 0.14 | 0.36 | 0.88 | Normal
Table 8. Evaluation of overall results of production state monitoring.
Area | CNN | Period | Accuracy | Precision | FA | Recall | MA | F1-Score
Yangtze River Delta | Inception v3 | The 1st half-year | 99.11% | 100.00% | 0.00% | 94.12% | 5.88% | 96.97%
Yangtze River Delta | ResNet101 | The 1st half-year | 100.00% | 100.00% | 0.00% | 100.00% | 0.00% | 100.00%
Bohai Rim | Inception v3 | The 1st half-year | 97.67% | 87.50% | 12.50% | 100.00% | 0.00% | 93.33%
Bohai Rim | ResNet101 | The 1st half-year | 95.35% | 85.71% | 14.29% | 85.71% | 14.29% | 85.71%
Table 9. Fusion results of other methods in the study area.
Area | Fusion Method | CNN | Period | Accuracy | Precision | FA | Recall | MA | F1-Score
Yangtze River Delta | Traditional DS evidence fusion | Inception v3 | The 1st half-year | 98.21% | 89.47% | 10.53% | 100.00% | 0.00% | 94.44%
Yangtze River Delta | Traditional DS evidence fusion | ResNet101 | The 1st half-year | 97.32% | 85.00% | 15.00% | 100.00% | 0.00% | 91.89%
Yangtze River Delta | Voting | Inception v3 | The 1st half-year | 90.18% | 60.71% | 39.29% | 100.00% | 0.00% | 75.56%
Yangtze River Delta | Voting | ResNet101 | The 1st half-year | 91.94% | 65.38% | 34.62% | 100.00% | 0.00% | 75.56%
Yangtze River Delta | Yager | Inception v3 | The 1st half-year | 89.29% | 58.62% | 41.38% | 100.00% | 0.00% | 73.91%
Yangtze River Delta | Yager | ResNet101 | The 1st half-year | 88.39% | 56.67% | 43.33% | 100.00% | 0.00% | 72.34%
Bohai Rim | Traditional DS evidence fusion | Inception v3 | The 1st half-year | 97.67% | 87.50% | 12.50% | 100.00% | 0.00% | 93.33%
Bohai Rim | Traditional DS evidence fusion | ResNet101 | The 1st half-year | 97.67% | 87.50% | 12.50% | 100.00% | 0.00% | 93.33%
Bohai Rim | Voting | Inception v3 | The 1st half-year | 88.37% | 58.33% | 41.67% | 100.00% | 0.00% | 73.68%
Bohai Rim | Voting | ResNet101 | The 1st half-year | 90.70% | 63.64% | 36.36% | 100.00% | 0.00% | 77.78%
Bohai Rim | Yager | Inception v3 | The 1st half-year | 86.05% | 53.85% | 46.15% | 100.00% | 0.00% | 70.00%
Bohai Rim | Yager | ResNet101 | The 1st half-year | 86.05% | 53.85% | 46.15% | 100.00% | 0.00% | 70.00%
Table 10. False test results display.
Shipyard | Images | True State
vii | [image] | Abnormal
viii | [image] | Abnormal
ix | [image] | Abnormal


