Search Results (32)

Search Parameters:
Keywords = HRRS images

21 pages, 9571 KiB  
Article
Performance Evaluation of Real-Time Image-Based Heat Release Rate Prediction Model Using Deep Learning and Image Processing Methods
by Joohyung Roh, Sehong Min and Minsuk Kong
Fire 2025, 8(7), 283; https://doi.org/10.3390/fire8070283 - 18 Jul 2025
Viewed by 506
Abstract
Heat release rate (HRR) is a key indicator for characterizing fire behavior, and it is conventionally measured under laboratory conditions. However, this measurement is limited in its widespread application to various fire conditions, due to its high cost, operational complexity, and lack of real-time predictive capability. Therefore, this study proposes an image-based HRR prediction model that uses deep learning and image processing techniques. The flame region in a fire video was segmented using the YOLO-YCbCr model, which integrates YCbCr color-space-based segmentation with YOLO object detection. For comparative analysis, the YOLO segmentation model was used. Furthermore, the fire diameter and flame height were determined from the spatial information of the segmented flame, and the HRR was predicted based on the correlation between flame size and HRR. The proposed models were applied to various experimental fire videos, and their prediction performances were quantitatively assessed. The results indicated that the proposed models accurately captured the HRR variations over time, and applying the average flame height calculation enhanced the prediction performance by reducing fluctuations in the predicted HRR. These findings demonstrate that the image-based HRR prediction model can be used to estimate real-time HRR values in diverse fire environments. Full article
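The final step of the pipeline, estimating HRR from the segmented flame's spatial dimensions, can be illustrated with the classic Heskestad flame-height correlation. The abstract does not state which correlation the authors use, so the coefficients below are the standard textbook values, not necessarily theirs; this is a minimal sketch, not the paper's implementation.

```python
def hrr_from_flame(height_m: float, diameter_m: float) -> float:
    """Estimate heat release rate Q (kW) from mean flame height and fire diameter.

    Inverts the Heskestad correlation L = 0.235 * Q**(2/5) - 1.02 * D,
    with L and D in metres. A standard correlation used for illustration;
    the paper's exact flame-size/HRR relationship may differ.
    """
    if height_m <= 0 or diameter_m <= 0:
        raise ValueError("flame height and diameter must be positive")
    return ((height_m + 1.02 * diameter_m) / 0.235) ** 2.5
```

Averaging the flame height over several frames before calling such a function is what the abstract credits with reducing fluctuations in the predicted HRR.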

21 pages, 3798 KiB  
Article
Nondestructive Detection of Rice Milling Quality Using Hyperspectral Imaging with Machine and Deep Learning Regression
by Zhongjie Tang, Shanlin Ma, Hengnian Qi, Xincheng Zhang and Chu Zhang
Foods 2025, 14(11), 1977; https://doi.org/10.3390/foods14111977 - 3 Jun 2025
Viewed by 537
Abstract
The brown rice rate (BRR), milled rice rate (MRR), and head rice rate (HRR) are important indicators of rice milling quality. The simultaneous detection of these three metrics holds significant economic value for rice milling quality assessments. In this study, hyperspectral imaging was employed to estimate the rice milling quality attributes of two rice varieties (Xiushui121 and Zhehujing26). Partial Least Squares Regression (PLSR), Support Vector Regression (SVR), Convolutional Neural Networks (CNNs), and Backpropagation Neural Networks (BPNNs) were used to establish both single-task and multi-task models for the prediction of milling quality attributes. Most multi-task models demonstrated a higher prediction accuracy compared with their corresponding single-task models. Among single-task models, BPNNs outperformed the others in predicting BRR and HRR, with correlation coefficients (r) up to 0.9. SVR excelled in forecasting the MRR. In multi-task learning, BPNNs exhibited relatively better performance, with r values exceeding 0.81 for all three indicators. SHapley Additive exPlanations (SHAP) analysis was used to explore the relationship between wavelength and rice milling quality attributes. This study confirmed that this nondestructive detection method for rice milling quality using hyperspectral imaging combined with machine learning and deep learning algorithms could effectively assess rice milling quality, thus contributing to breeding and growth management in the industry. Full article
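The multi-task idea, one model predicting BRR, MRR, and HRR jointly from the same spectra, can be sketched with a multi-output linear regressor. The closed-form ridge fit below is a simplified stand-in for the PLSR/BPNN models in the paper, and all names are illustrative.

```python
import numpy as np

def fit_multitask_ridge(X: np.ndarray, Y: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Fit one linear model predicting all quality indicators at once.

    X: (n_samples, n_wavelengths) spectra; Y: (n_samples, n_tasks) targets,
    e.g. columns for BRR, MRR, HRR. Returns weights of shape (n_wavelengths+1, n_tasks).
    """
    n, p = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])          # append bias column
    A = Xb.T @ Xb + alpha * np.eye(p + 1)          # ridge-regularized normal equations
    return np.linalg.solve(A, Xb.T @ Y)

def predict(W: np.ndarray, X: np.ndarray) -> np.ndarray:
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return Xb @ W
```

The point the abstract makes is that sharing one model across correlated targets can beat three separate single-task fits; the linear sketch keeps that structure without the neural-network details.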

18 pages, 2889 KiB  
Article
Experimental Study of Flame Dynamics in a Triple-Injector Swirling Nonpremixed Combustor Under Different Thermoacoustic Self-Excited Instability Modes
by Xiang Zhang, Suofang Wang and Yong Liu
Sensors 2025, 25(3), 850; https://doi.org/10.3390/s25030850 - 30 Jan 2025
Viewed by 906
Abstract
Combustion instability is one of the prominent and unavoidable problems in the design of high-performance propulsion systems. This study investigates the heat release rate (HRR) responses in a triple-nozzle swirling nonpremixed combustor under various thermoacoustic self-excited instability modes. Dynamic pressure sensors and high-speed imaging were employed to capture the pressure oscillations within the combustion chamber and the characteristics of flame dynamics, respectively. The results reveal nonlinear bifurcations in the self-excited thermoacoustic instabilities at different equivalence ratios. Significant differences in flame dynamics were observed across the instability modes. In lower frequency modes, the fluctuations in flame length contribute to the driving force of thermoacoustic instability. In relatively high-frequency modes, HRR fluctuations are dominated by the rolling up and convective processes of wrinkles on the flame surface. Alternating regions of gain and damping are observed on the flame surface. At even higher frequencies, both aforementioned HRR fluctuation patterns are simultaneously observed. These findings provide a deeper understanding of the complex interactions between flame dynamics and thermoacoustic instabilities, offering new insights into the design and optimization of nonpremixed combustion systems. The study underscores the importance of considering the spatial and temporal variations in flame behavior to effectively predict and control thermoacoustic instabilities. Full article
(This article belongs to the Special Issue Sensors Technologies for Measurements and Signal Processing)

32 pages, 10548 KiB  
Article
An Unsupervised Remote Sensing Image Change Detection Method Based on RVMamba and Posterior Probability Space Change Vector
by Jiaxin Song, Shuwen Yang, Yikun Li and Xiaojun Li
Remote Sens. 2024, 16(24), 4656; https://doi.org/10.3390/rs16244656 - 12 Dec 2024
Cited by 2 | Viewed by 1337
Abstract
Change vector analysis in posterior probability space (CVAPS) is an effective change detection (CD) framework that does not require sound radiometric correction and is robust against accumulated classification errors. Based on training samples within target images, CVAPS can generate a uniformly scaled change-magnitude map that is suitable for a global threshold. However, vigorous user intervention is required to achieve optimal performance. Therefore, to eliminate user intervention and retain the merit of CVAPS, an unsupervised CVAPS (UCVAPS) CD method, RFCC, which does not require rigorous user training, is proposed in this study. In the RFCC, we propose an unsupervised remote sensing image segmentation algorithm based on the Mamba model, i.e., RVMamba differentiable feature clustering, which introduces two loss functions as constraints to ensure that RVMamba achieves accurate segmentation results and to supply the CSBN module with high-quality training samples. In the CD module, the fuzzy C-means clustering (FCM) algorithm decomposes mixed pixels into multiple signal classes, thereby alleviating cumulative clustering errors. Then, a context-sensitive Bayesian network (CSBN) model is introduced to incorporate spatial information at the pixel level to estimate the corresponding posterior probability vector. Thus, it is suitable for high-resolution remote sensing (HRRS) imagery. Finally, the UCVAPS framework can generate a uniformly scaled change-magnitude map that is suitable for the global threshold and can produce accurate CD results. The experimental results on seven change detection datasets confirmed that the proposed method outperforms five state-of-the-art competitive CD methods. Full article
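The core CVAPS quantity is simple to state: each pixel's change magnitude is the length of the change vector between its class-posterior probability vectors at the two dates. A minimal sketch (function and array names are mine, not the paper's):

```python
import numpy as np

def cvaps_change_magnitude(p_t1: np.ndarray, p_t2: np.ndarray) -> np.ndarray:
    """Per-pixel change magnitude in posterior probability space (CVAPS).

    p_t1, p_t2: (H, W, C) arrays of class posterior probabilities at two
    dates (each pixel's C values sum to 1). Returns an (H, W) magnitude map.
    Because posteriors are bounded, the magnitude lies in [0, sqrt(2)],
    a uniformly scaled range suited to a single global threshold.
    """
    return np.linalg.norm(p_t2 - p_t1, axis=-1)
```

In the full RFCC pipeline the posteriors come from the CSBN model fed with RVMamba/FCM outputs; the thresholding of this magnitude map then yields the binary change mask.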

16 pages, 2602 KiB  
Article
Multi-Scale and Multi-Network Deep Feature Fusion for Discriminative Scene Classification of High-Resolution Remote Sensing Images
by Baohua Yuan, Sukhjit Singh Sehra and Bernard Chiu
Remote Sens. 2024, 16(21), 3961; https://doi.org/10.3390/rs16213961 - 24 Oct 2024
Cited by 3 | Viewed by 1644
Abstract
The advancement in satellite image sensors has enabled the acquisition of high-resolution remote sensing (HRRS) images. However, interpreting these images accurately and obtaining the computational power needed to do so is challenging due to the complexity involved. This manuscript proposed a multi-stream convolutional neural network (CNN) fusion framework that involves multi-scale and multi-CNN integration for HRRS image recognition. The pre-trained CNNs were used to learn and extract semantic features from multi-scale HRRS images. Feature extraction using pre-trained CNNs is more efficient than training a CNN from scratch or fine-tuning a CNN. Discriminative canonical correlation analysis (DCCA) was used to fuse deep features extracted across CNNs and image scales. DCCA reduced the dimension of the features extracted from CNNs while providing a discriminative representation by maximizing the within-class correlation and minimizing the between-class correlation. The proposed model has been evaluated on NWPU-RESISC45 and UC Merced datasets. The accuracy associated with DCCA was 10% and 6% higher than discriminant correlation analysis (DCA) in the NWPU-RESISC45 and UC Merced datasets. The advantage of DCCA was better demonstrated in the NWPU-RESISC45 dataset due to the incorporation of richer within-class variability in this dataset. While both DCA and DCCA minimize between-class correlation, only DCCA maximizes the within-class correlation and, therefore, attains better accuracy. The proposed framework achieved higher accuracy than all state-of-the-art frameworks involving unsupervised learning and pre-trained CNNs and 2–3% higher than the majority of fine-tuned CNNs. The proposed framework offers computational time advantages, requiring only 13 s for training in NWPU-RESISC45, compared to a day for fine-tuning the existing CNNs. Thus, the proposed framework achieves a favourable balance between efficiency and accuracy in HRRS image recognition. Full article
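The fusion step rests on canonical correlation between two deep-feature sets (features from two CNNs, or from two image scales). The sketch below computes plain CCA's first canonical correlation; DCCA additionally uses class labels to maximize within-class and minimize between-class correlation, which this simplified version omits.

```python
import numpy as np

def _inv_sqrt(S: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Inverse matrix square root of a symmetric PSD matrix (whitening)."""
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T

def first_canonical_correlation(X: np.ndarray, Y: np.ndarray) -> float:
    """First canonical correlation between feature sets X (n, p) and Y (n, q)."""
    n = X.shape[0]
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Sxx, Syy = Xc.T @ Xc / n, Yc.T @ Yc / n
    Sxy = Xc.T @ Yc / n
    # Whiten both sides, then the top singular value is the first correlation
    M = _inv_sqrt(Sxx) @ Sxy @ _inv_sqrt(Syy)
    return float(np.linalg.svd(M, compute_uv=False)[0])
```

The projected (whitened) features from both streams are what get fused into the final discriminative representation; the dimensionality reduction the abstract mentions falls out of keeping only the top canonical directions.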

25 pages, 17970 KiB  
Article
A New Subject-Sensitive Hashing Algorithm Based on Multi-PatchDrop and Swin-Unet for the Integrity Authentication of HRRS Image
by Kaimeng Ding, Yingying Wang, Chishe Wang and Ji Ma
ISPRS Int. J. Geo-Inf. 2024, 13(9), 336; https://doi.org/10.3390/ijgi13090336 - 21 Sep 2024
Viewed by 1068
Abstract
Transformer-based subject-sensitive hashing algorithms exhibit good integrity authentication performance and have the potential to ensure the authenticity and convenience of high-resolution remote sensing (HRRS) images. However, the robustness of Transformer-based subject-sensitive hashing is still not ideal. In this paper, we propose a Multi-PatchDrop mechanism to improve the performance of Transformer-based subject-sensitive hashing. The Multi-PatchDrop mechanism determines different patch dropout values for different Transformer blocks in ViT models. On the basis of Multi-PatchDrop, we propose an improved Swin-Unet for implementing subject-sensitive hashing. In this improved Swin-Unet, Multi-PatchDrop has been integrated, and each Swin Transformer block (except the first one) is preceded by a patch dropout layer. Experimental results demonstrate that the robustness of our proposed subject-sensitive hashing algorithm is stronger than that of both CNN-based and other Transformer-based algorithms. The tampering sensitivity is of the same intensity as that of the AGIM-net- and M-net-based algorithms, and stronger than that of other Transformer-based algorithms. Full article

17 pages, 6521 KiB  
Article
Predict Future Transient Fire Heat Release Rates Based on Fire Imagery and Deep Learning
by Lei Xu, Jinyuan Dong and Delei Zou
Fire 2024, 7(6), 200; https://doi.org/10.3390/fire7060200 - 14 Jun 2024
Cited by 5 | Viewed by 2732
Abstract
The fire heat release rate (HRR) is a crucial parameter for describing the combustion process and its thermal effects. In recent years, some studies have employed fire scene images and deep learning algorithms to predict real-time fire HRR, which has led to the advancement of HRR prediction in terms of both lightweightness and real-time monitoring. Nevertheless, the development of an early-stage monitoring system for fires and the ability to predict future HRR based on current moment data represents a crucial foundation for evaluating the scale of indoor fires and enhancing the capacity to prevent and control such incidents. This paper proposes a deep learning model based on continuous fire scene images (containing both flame and smoke features) and their time-series information to predict the future transient fire HRR. The model (Att-BiLSTM) comprises three bi-directional long- and short-term memory (Bi-LSTM) layers and one attention layer. The model employs a bidirectional feature extraction approach, followed by the introduction of an attention mechanism to highlight the image features that have a critical impact on the prediction results. In this paper, a large-scale dataset is constructed by collecting 27,231 fire scene images with instantaneous HRR annotations from 40 different fire trials from the NIST database. The experimental results demonstrate that Att-BiLSTM is capable of effectively utilizing fire scene image features and temporal information to accurately predict future transient HRR, including those in high-brightness fire environments and complex fire source situations. The research presented in this paper offers novel insights and methodologies for fire monitoring and emergency response. Full article
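The attention layer that follows the Bi-LSTM stack can be reduced to a small, framework-free sketch: score each timestep's hidden state, softmax the scores, and return the weighted sum. The scoring vector `w` and all names below are illustrative, not the paper's parameters.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(H: np.ndarray, w: np.ndarray):
    """Attention pooling over a sequence of per-frame features.

    H: (T, D) hidden states from the recurrent layers; w: (D,) learned
    scoring vector. Returns the context vector (D,) and the attention
    weights (T,), which highlight the frames that drive the prediction.
    """
    scores = H @ w            # one scalar score per timestep
    alpha = softmax(scores)   # attention weights, sum to 1
    return alpha @ H, alpha
```

In the Att-BiLSTM model the context vector would feed a final dense layer that regresses the future transient HRR; the weights `alpha` are also what make the model's per-frame emphasis inspectable.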
(This article belongs to the Special Issue The Use of Remote Sensing Technology for Forest Fire)

23 pages, 3884 KiB  
Article
Cropland Extraction in Southern China from Very High-Resolution Images Based on Deep Learning
by Dehua Xie, Han Xu, Xiliu Xiong, Min Liu, Haoran Hu, Mengsen Xiong and Luo Liu
Remote Sens. 2023, 15(9), 2231; https://doi.org/10.3390/rs15092231 - 23 Apr 2023
Cited by 14 | Viewed by 3249
Abstract
Accurate cropland information is crucial for the assessment of food security and the formulation of effective agricultural policies. Extracting cropland from remote sensing imagery is challenging due to spectral diversity and mixed pixels. Recent advances in remote sensing technology have facilitated the availability of very high-resolution (VHR) remote sensing images that provide detailed ground information. However, VHR cropland extraction in southern China is difficult because of the high heterogeneity and fragmentation of cropland and the insufficient observations of VHR sensors. To address these challenges, we proposed a deep learning-based method for automated high-resolution cropland extraction. The method used an improved HRRS-U-Net model to accurately identify the extent of cropland and explicitly locate field boundaries. The HRRS-U-Net maintained high-resolution details throughout the network to generate precise cropland boundaries. Additionally, the residual learning (RL) and the channel attention mechanism (CAM) were introduced to extract deeper discriminative representations. The proposed method was evaluated over four city-wide study areas (Qingyuan, Yangjiang, Guangzhou, and Shantou) with a diverse range of agricultural systems, using GaoFen-2 (GF-2) images. The cropland extraction results for the study areas had an overall accuracy (OA) ranging from 97.00% to 98.33%, with F1 scores (F1) of 0.830–0.940 and Kappa coefficients (Kappa) of 0.814–0.929. The OA was 97.85%, F1 was 0.915, and Kappa was 0.901 over all study areas. Moreover, our proposed method demonstrated advantages compared to machine learning methods (e.g., RF) and previous semantic segmentation models, such as U-Net, U-Net++, U-Net3+, and MPSPNet. The results demonstrated the generalization ability and reliability of the proposed method for cropland extraction in southern China using VHR remote images. Full article
(This article belongs to the Special Issue Monitoring Agricultural Land-Use Change and Land-Use Intensity Ⅱ)

21 pages, 5414 KiB  
Article
Transformer-Based Subject-Sensitive Hashing for Integrity Authentication of High-Resolution Remote Sensing (HRRS) Images
by Kaimeng Ding, Shiping Chen, Yue Zeng, Yingying Wang and Xinyun Yan
Appl. Sci. 2023, 13(3), 1815; https://doi.org/10.3390/app13031815 - 31 Jan 2023
Cited by 6 | Viewed by 2027
Abstract
The implicit prerequisite for using HRRS images is that the images can be trusted. Otherwise, their value would be greatly reduced. As a new data security technology, subject-sensitive hashing overcomes the shortcomings of existing integrity authentication methods and could realize subject-sensitive authentication of HRRS images. However, shortcomings of the existing algorithm, in terms of robustness, limit its application. For example, the lack of robustness against JPEG compression makes existing algorithms more passive in some applications. To enhance the robustness, we proposed a Transformer-based subject-sensitive hashing algorithm. In this paper, first, we designed a Transformer-based HRRS image feature extraction network by improving Swin-Unet. Next, subject-sensitive features of HRRS images were extracted by this improved Swin-Unet. Then, the hash sequence was generated through a feature coding method that combined mapping mechanisms with principal component analysis (PCA). Our experimental results showed that the robustness of the proposed algorithm was greatly improved in comparison with existing algorithms, especially the robustness against JPEG compression. Full article
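The final coding step, turning extracted features into a hash sequence via "mapping mechanisms with PCA", can be sketched generically: flatten the feature map, project onto a principal direction, and binarize at the median. This is a common feature-hashing pattern, not the paper's exact coding method; all names are illustrative.

```python
import numpy as np

def features_to_hash(feat: np.ndarray, n_bits: int = 64) -> str:
    """Compress a (H, W, C) feature map into an n_bits binary hash string.

    Sketch of a PCA-based coding step: centre the per-location feature
    vectors, project them onto the first principal component (via SVD),
    and threshold at the median so the bits are balanced.
    """
    blocks = feat.reshape(-1, feat.shape[-1])          # (N, C) feature vectors
    centered = blocks - blocks.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = (centered @ vt[0])[:n_bits]                 # scores along the top PC
    bits = (proj > np.median(proj)).astype(int)
    return "".join(map(str, bits))
```

Subject-sensitivity comes from the feature extractor, not this coding step: robust-content-preserving edits leave the features (and hence the bits) nearly unchanged, while subject-level tampering flips them.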
(This article belongs to the Special Issue GeoAI Data and Processing in Applied Sciences)

26 pages, 15130 KiB  
Article
Multi-Level Dynamic Analysis of Landscape Patterns of Chinese Megacities during the Period of 2016–2021 Based on a Spatiotemporal Land-Cover Classification Model Using High-Resolution Satellite Imagery: A Case Study of Beijing, China
by Zhi Li, Yi Lu and Xiaomei Yang
Remote Sens. 2023, 15(1), 74; https://doi.org/10.3390/rs15010074 - 23 Dec 2022
Cited by 3 | Viewed by 2851
Abstract
In today’s accelerating urbanization process, timely and effective monitoring of land-cover dynamics, landscape pattern analysis, and evaluation of built-up urban areas (BUAs) have important research significance and practical value for the sustainable development, planning and management, and ecological protection of cities. High-spatial-resolution remote sensing (HRRS) images have the advantages of high-accuracy Earth observation, large-area coverage, and a short revisit period, and they can objectively and accurately provide fine dynamic spatial information about the land cover in urban built-up areas. However, the complexity and comprehensiveness of the urban structure make it difficult for a single-scale analysis method to accurately and comprehensively reflect the characteristics of the BUA landscape pattern. Therefore, in this study, a joint evaluation method for an urban land-cover spatiotemporal-mapping chain and multi-scale landscape pattern using high-resolution remote sensing imagery was developed. First, a pixel–object–knowledge model with temporal and spatial classifications was proposed for the spatiotemporal mapping of urban land cover. Based on this, a multi-scale district–BUA–city block–land cover type map of the city was established and a joint multi-scale evaluation index was constructed for the multi-scale dynamic analysis of the urban landscape pattern. The accuracies of the land cover in 2016 and 2021 were 91.9% and 90.4%, respectively, and the kappa coefficients were 0.90 and 0.88, respectively, indicating that the method can provide effective and reliable information for spatial mapping and landscape pattern analysis. In addition, the multi-scale analysis of the urban landscape pattern revealed that, during the period of 2016–2021, Beijing maintained the same high urbanization rate in the inner part of the city, while the outer part of the city kept expanding, which also reflects the validity and comprehensiveness of the analysis method developed in this study. Full article
(This article belongs to the Special Issue Applications of AI and Remote Sensing in Urban Systems)

19 pages, 4631 KiB  
Article
A Land Cover Classification Method for High-Resolution Remote Sensing Images Based on NDVI Deep Learning Fusion Network
by Jingzheng Zhao, Liyuan Wang, Hui Yang, Penghai Wu, Biao Wang, Chengrong Pan and Yanlan Wu
Remote Sens. 2022, 14(21), 5455; https://doi.org/10.3390/rs14215455 - 30 Oct 2022
Cited by 18 | Viewed by 4447
Abstract
High-resolution remote sensing (HRRS) images have few spectra, low interclass separability and large intraclass differences, and there are some problems in land cover classification (LCC) of HRRS images that only rely on spectral information, such as misclassification of small objects and unclear boundaries. Here, we propose a deep learning fusion network that effectively utilizes NDVI, called the Dense-Spectral-Location-NDVI network (DSLN). In DSLN, we first extract spatial location information from NDVI data at the same time as remote sensing image data to enhance the boundary information. Then, the spectral features are put into the encoding-decoding structure to abstract the depth features and restore the spatial information. The NDVI fusion module is used to fuse the NDVI information and depth features to improve the separability of land cover information. Experiments on the GF-1 dataset show that the mean OA (mOA) and the mean value of the Kappa coefficient (mKappa) of the DSLN network model reach 0.8069 and 0.7161, respectively, which have good applicability to temporal and spatial distribution. The comparison of the forest area released by Xuancheng Forestry Bureau and the forest area in Xuancheng produced by the DSLN model shows that the former is consistent with the latter. In conclusion, the DSLN network model is effectively applied in practice and can provide more accurate land cover data for regional ESV analysis. Full article
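The auxiliary input that DSLN fuses is the standard Normalized Difference Vegetation Index, computed per pixel from the near-infrared and red bands; a minimal sketch (the small epsilon guarding division by zero is my addition):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), per pixel, in [-1, 1].

    nir, red: reflectance arrays of identical shape (e.g. GF-1 bands).
    eps avoids division by zero over dark pixels.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)
```

In DSLN this map is not just concatenated as a band: it drives both the boundary-enhancing location branch and the dedicated NDVI fusion module applied to the decoder's depth features.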

16 pages, 1801 KiB  
Article
A New Subject-Sensitive Hashing Algorithm Based on MultiRes-RCF for Blockchains of HRRS Images
by Kaimeng Ding, Shiping Chen, Jiming Yu, Yanan Liu and Jie Zhu
Algorithms 2022, 15(6), 213; https://doi.org/10.3390/a15060213 - 17 Jun 2022
Cited by 3 | Viewed by 2849
Abstract
Aiming at the deficiency that blockchain technology is too sensitive to the binary-level changes of high resolution remote sensing (HRRS) images, we propose a new subject-sensitive hashing algorithm specially for HRRS image blockchains. To implement this subject-sensitive hashing algorithm, we designed and implemented a deep neural network model MultiRes-RCF (richer convolutional features) for extracting features from HRRS images. A MultiRes-RCF network is an improved RCF network that borrows the MultiRes mechanism of MultiResU-Net. The subject-sensitive hashing algorithm based on MultiRes-RCF can detect the subtle tampering of HRRS images while maintaining robustness to operations that do not change the content of the HRRS images. Experimental results show that our MultiRes-RCF-based subject-sensitive hashing algorithm has better tamper sensitivity than the existing deep learning models such as RCF, AAU-net, and Attention U-net, meeting the needs of HRRS image blockchains. Full article
(This article belongs to the Special Issue Advances in Blockchain Architecture and Consensus)

19 pages, 1610 KiB  
Article
Dual Modality Collaborative Learning for Cross-Source Remote Sensing Retrieval
by Jingjing Ma, Duanpeng Shi, Xu Tang, Xiangrong Zhang and Licheng Jiao
Remote Sens. 2022, 14(6), 1319; https://doi.org/10.3390/rs14061319 - 9 Mar 2022
Cited by 6 | Viewed by 2907
Abstract
Content-based remote sensing (RS) image retrieval (CBRSIR) is a critical way to organize high-resolution RS (HRRS) images in the current big data era. The increasing volume of HRRS images from different satellites and sensors has drawn more attention to the cross-source CBRSIR (CS-CBRSIR) problem. Due to data drift, one crucial problem in CS-CBRSIR is the modality discrepancy. Most existing methods address this issue by finding a common feature space for various HRRS images, in which similarity relations can be measured directly to obtain the cross-source retrieval results. This approach is feasible and reasonable; however, the specific information corresponding to HRRS images from different sources is always ignored, limiting retrieval performance. To overcome this limitation, we develop a new model for CS-CBRSIR in this paper named dual modality collaborative learning (DMCL). To fully explore the specific information from diverse HRRS images, DMCL first introduces ResNet50 as the feature extractor. Then, a common space mutual learning module is developed to map the specific features into a common space. Here, the modality discrepancy is reduced from the aspects of features and their distributions. Finally, to supplement the specific knowledge to the common features, we develop modality transformation and dual-modality feature learning modules. Their function is to transmit the specific knowledge from different sources mutually and fuse the specific and common features adaptively. Comprehensive experiments were conducted on a public dataset. Compared with many existing methods, DMCL performs more strongly. These encouraging results indicate that the proposed DMCL is useful in CS-CBRSIR tasks. Full article

13 pages, 6574 KiB  
Article
PET Foams Surface Treated with Graphene Nanoplatelets: Evaluation of Thermal Resistance and Flame Retardancy
by Samuele Matta, Laura Giorgia Rizzi and Alberto Frache
Polymers 2021, 13(4), 501; https://doi.org/10.3390/polym13040501 - 6 Feb 2021
Cited by 3 | Viewed by 2881
Abstract
In this work, fire-retardant systems consisting of graphene nanoplatelets (GNPs) and dispersant agents were designed and applied on polyethylene terephthalate (PET) foam. Manual deposition from three different liquid solutions was performed in order to create a protective coating on the specimen’s surface. A very low amount of coating, between 1.5 and 3.5 wt%, was chosen for the preparation of coated samples. Flammability, flame penetration, and combustion tests demonstrated the improvement provided to the foam via coating. In particular, specimens with PSS/GNPs coating, compared to neat foam, were able to interrupt the flame during horizontal and vertical flammability tests and led to longer endurance times during the flame penetration test. Furthermore, during cone calorimetry tests, the time to ignition (TTI) increased and the peak of heat release rate (pHRR) was drastically reduced by up to 60% compared to that of the uncoated PET foam. Finally, ageing for 48 and 115 h at 160 °C was performed on coated specimens to evaluate the effect on flammability and combustion behavior. Scanning electron microscopy (SEM) images proved the morphological effect of the heat treatment on the surface, showing that the coating was uniformly distributed. In this case, fire-retardant properties were enhanced, even if fewer GNPs were used. Full article
(This article belongs to the Special Issue Graphene-Based Polymer Nanocomposites: Recent Advances)

22 pages, 8867 KiB  
Article
Automatic Building Detection from High-Resolution Remote Sensing Images Based on Joint Optimization and Decision Fusion of Morphological Attribute Profiles
by Chao Wang, Yan Zhang, Xiaohui Chen, Hao Jiang, Mithun Mukherjee and Shuai Wang
Remote Sens. 2021, 13(3), 357; https://doi.org/10.3390/rs13030357 - 21 Jan 2021
Cited by 11 | Viewed by 2874
Abstract
High-resolution remote sensing (HRRS) images, when used for building detection, play a key role in urban planning and other fields. Compared with the deep learning methods, the method based on morphological attribute profiles (MAPs) exhibits good performance in the absence of massive annotated samples. MAPs have been proven to have a strong ability for extracting detailed characterizations of buildings with multiple attributes and scales. So far, a great deal of attention has been paid to this application. Nevertheless, the constraints of rational selection of attribute scales and evidence conflicts between attributes should be overcome, so as to establish reliable unsupervised detection models. To this end, this research proposes a joint optimization and fusion building detection method for MAPs. In the pre-processing step, the set of candidate building objects are extracted by image segmentation and a set of discriminant rules. Second, the differential profiles of MAPs are screened by using a genetic algorithm and a cross-probability adaptive selection strategy is proposed; on this basis, an unsupervised decision fusion framework is established by constructing a novel statistics-space building index (SSBI). Finally, the automated detection of buildings is realized. We show that the proposed method is significantly better than the state-of-the-art methods on HRRS images with different groups of different regions and different sensors, and overall accuracy (OA) of our proposed method is more than 91.9%. Full article
