Search Results (74)

Search Parameters:
Keywords = Markov Random Field (MRF)

8 pages, 1425 KiB  
Proceeding Paper
Enhanced Skin Lesion Classification Using Deep Learning, Integrating with Sequential Data Analysis: A Multiclass Approach
by Azmath Mubeen and Uma N. Dulhare
Eng. Proc. 2024, 78(1), 6; https://doi.org/10.3390/engproc2024078006 - 7 Jan 2025
Cited by 3 | Viewed by 1218
Abstract
In dermatological research, accurately identifying different types of skin lesions, such as nodules, is essential for early diagnosis and effective treatment. This study introduces a novel method for classifying skin lesions, including nodules, by combining a unified attention (UA) network with deep convolutional neural networks (DCNNs) for feature extraction. The UA network processes sequential data, such as patient histories, while long short-term memory (LSTM) networks track nodule progression. Additionally, Markov random fields (MRFs) enhance pattern recognition. The integrated system classifies lesions and evaluates whether they are responding to treatment or worsening, achieving 93% accuracy in distinguishing nodules, melanoma, and basal cell carcinoma. This system outperforms existing methods in precision and sensitivity, offering advancements in dermatological diagnostics.

26 pages, 2887 KiB  
Article
Implicit Is Not Enough: Explicitly Enforcing Anatomical Priors inside Landmark Localization Models
by Simon Johannes Joham, Arnela Hadzic and Martin Urschler
Bioengineering 2024, 11(9), 932; https://doi.org/10.3390/bioengineering11090932 - 17 Sep 2024
Cited by 1 | Viewed by 1900
Abstract
The task of localizing distinct anatomical structures in medical image data is an essential prerequisite for several medical applications, such as treatment planning in orthodontics, bone-age estimation, or initialization of segmentation methods in automated image analysis tools. Currently, Anatomical Landmark Localization (ALL) is mainly solved by deep-learning methods, which cannot guarantee robust ALL predictions; there may always be outlier predictions that are far from their ground truth locations due to out-of-distribution inputs. However, these localization outliers are detrimental to the performance of subsequent medical applications that rely on ALL results. The current ALL literature relies heavily on implicit anatomical constraints built into the loss function and network architecture to reduce the risk of anatomically infeasible predictions. However, we argue that in medical imaging, where images are generally acquired in a controlled environment, we should use stronger explicit anatomical constraints to reduce the number of outliers as much as possible. Therefore, we propose the end-to-end trainable Global Anatomical Feasibility Filter and Analysis (GAFFA) method, which uses prior anatomical knowledge estimated from data to explicitly enforce anatomical constraints. GAFFA refines the initial localization results of a U-Net by approximately solving a Markov Random Field (MRF) with a single iteration of the sum-product algorithm in a differentiable manner. Our experiments demonstrate that GAFFA outperforms all other landmark refinement methods investigated in our framework. Moreover, we show that GAFFA is more robust to large outliers than state-of-the-art methods on the studied X-ray hand dataset. We further motivate this claim by visualizing the anatomical constraints used in GAFFA as spatial energy heatmaps, which allowed us to find an annotation error in the hand dataset not previously discussed in the literature.
(This article belongs to the Special Issue Machine Learning-Aided Medical Image Analysis)
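To make the refinement step above more tangible, here is a toy sketch, not GAFFA itself, of a single sum-product pass (forward and backward message passing) over a chain-structured MRF whose nodes are landmarks and whose states are candidate pixel locations; the grid size, the Gaussian offset priors, and the simulated heatmaps are all invented for the example.

```python
# Toy illustration (not the GAFFA implementation): one sum-product message-passing pass
# over a chain of three landmarks. Unary potentials stand in for U-Net heatmaps; pairwise
# potentials encode an assumed Gaussian prior on the offset between neighboring landmarks.
import numpy as np

H = W = 32                                   # heatmap size
coords = np.stack(np.meshgrid(np.arange(H), np.arange(W), indexing="ij"), -1).reshape(-1, 2)

def gaussian_heatmap(center, sigma=2.0):
    d2 = ((coords - center) ** 2).sum(1)
    h = np.exp(-d2 / (2 * sigma ** 2))
    return h / h.sum()

# Unary potentials: heatmaps for a chain of three landmarks, with one outlier mode added.
true_pos = np.array([[8, 8], [16, 16], [24, 20]], float)
unary = np.stack([gaussian_heatmap(p) for p in true_pos])
unary[1] = 0.5 * unary[1] + 0.5 * gaussian_heatmap([28, 4])   # simulate an outlier mode

# Pairwise potentials: Gaussian prior on the offset between consecutive landmarks.
prior_offsets = [np.array([8.0, 8.0]), np.array([8.0, 4.0])]  # assumed anatomical prior
def pairwise(mean_offset, sigma=3.0):
    diff = coords[None, :, :] - coords[:, None, :]            # offset from state i to state j
    d2 = ((diff - mean_offset) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

psis = [pairwise(o) for o in prior_offsets]

# One forward and one backward sum-product pass along the chain, then per-node beliefs.
fwd = [np.ones(H * W) for _ in range(3)]
for i in range(1, 3):
    fwd[i] = psis[i - 1].T @ (unary[i - 1] * fwd[i - 1])
    fwd[i] /= fwd[i].sum()
bwd = [np.ones(H * W) for _ in range(3)]
for i in range(1, -1, -1):
    bwd[i] = psis[i] @ (unary[i + 1] * bwd[i + 1])
    bwd[i] /= bwd[i].sum()

beliefs = unary * np.stack(fwd) * np.stack(bwd)
refined = np.array([coords[b.argmax()] for b in beliefs])
print("refined landmark locations:\n", refined)   # the outlier mode of landmark 1 is suppressed
```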

22 pages, 13810 KiB  
Article
An Underwater Stereo Matching Method: Exploiting Segment-Based Method Traits without Specific Segment Operations
by Xinlin Xu, Huiping Xu, Lianjiang Ma, Kelin Sun and Jingchuan Yang
J. Mar. Sci. Eng. 2024, 12(9), 1599; https://doi.org/10.3390/jmse12091599 - 10 Sep 2024
Viewed by 1421
Abstract
Stereo matching technology, enabling the acquisition of three-dimensional data, holds profound implications for marine engineering. In underwater images, irregular object surfaces and the absence of texture information make it difficult for stereo matching algorithms that rely on discrete disparity values to accurately capture the 3D details of underwater targets. This paper proposes a stereo matching method based on a Markov random field (MRF) energy function with 3D labels to fit the inclined planes of underwater objects. Through the integration of a cross-based patch alignment approach with two label optimization stages, the proposed method exhibits traits akin to segment-based stereo matching methods, enabling it to handle images with sparse textures effectively. Through experiments conducted on both simulated UW-Middlebury datasets and real deteriorated underwater images, our method demonstrates superiority over classical and state-of-the-art methods, as shown by analysis of the acquired disparity maps and the three-dimensional reconstruction of the underwater target.
(This article belongs to the Special Issue Underwater Observation Technology in Marine Environment)
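To illustrate what a 3D label buys over discrete disparities, the following minimal sketch (assumed notation, not the paper's energy function) shows a per-pixel plane label producing a continuous disparity and a truncated smoothness term comparing the planes of neighboring pixels.

```python
# Sketch under assumptions: each pixel p carries a plane label (a_p, b_p, c_p) and its
# disparity is the continuous value d_p(x, y) = a_p*x + b_p*y + c_p, so slanted surfaces
# are not forced onto integer disparity steps. A typical 3D-label MRF energy then adds a
# smoothness term comparing the disparities that neighboring planes predict.
import numpy as np

def plane_disparity(plane, x, y):
    a, b, c = plane
    return a * x + b * y + c

def smoothness(plane_p, plane_q, p_xy, q_xy, lam=1.0, tau=2.0):
    # Penalize neighboring planes that disagree about disparity at both pixel locations;
    # the truncation keeps the penalty bounded at genuine depth discontinuities.
    dp = abs(plane_disparity(plane_p, *p_xy) - plane_disparity(plane_q, *p_xy))
    dq = abs(plane_disparity(plane_p, *q_xy) - plane_disparity(plane_q, *q_xy))
    return lam * min(dp + dq, tau)

# Two neighboring pixels lying on the same inclined surface incur zero penalty.
plane = (0.05, -0.02, 12.0)
print(plane_disparity(plane, 100, 50))                       # continuous disparity 16.0
print(smoothness(plane, plane, (100, 50), (101, 50)))        # 0.0 for identical planes
```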

25 pages, 94594 KiB  
Article
Harbor Detection in Polarimetric SAR Images Based on Context Features and Reflection Symmetry
by Chun Liu, Jie Gao, Shichong Liu, Chao Li, Yongchao Cheng, Yi Luo and Jian Yang
Remote Sens. 2024, 16(16), 3079; https://doi.org/10.3390/rs16163079 - 21 Aug 2024
Cited by 1 | Viewed by 1110
Abstract
The detection of harbors presents difficulties related to their diverse sizes, varying morphology and scattering, and complex backgrounds. To avoid the extraction of unstable geometric features, in this paper, we propose an unsupervised harbor detection method for polarimetric SAR images using context features and polarimetric reflection symmetry. First, the image is segmented into three region types, i.e., water low-scattering regions, strong-scattering urban regions, and other regions, based on a multi-region Markov random field (MRF) segmentation method. Second, by leveraging the fact that harbors are surrounded by water on one side and a large number of buildings on the other, the coastal narrow-band area is extracted from the low-scattering regions, and the harbor regions of interest (ROIs) are determined by extracting the strong-scattering regions from the narrow-band area. Finally, by using the scattering reflection asymmetry of harbor buildings, harbors are identified based on the global threshold segmentation of the horizontal, vertical, and circular co- and cross-polarization correlation powers of the extracted ROIs. The effectiveness of the proposed method was validated with experiments on RADARSAT-2 quad-polarization images of Zhanjiang, Fuzhou, Lingshui, and Dalian, China; San Francisco, USA; and Singapore. The proposed method had high detection rates and low false detection rates in the complex coastal environment scenarios studied, far outperforming the traditional spatial harbor detection method considered for comparison.
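The reflection-symmetry cue behind the final step can be sketched as follows; the local-window estimator, the window size, the percentile threshold, and the synthetic channels are assumptions for illustration and may differ from the authors' exact computation of the co- and cross-polarization correlation powers.

```python
# Sketch of the reflection-symmetry cue (assumed formulation): for reflection-symmetric
# natural scatterers the co-/cross-polarization correlations <S_HH S_HV*> and <S_VV S_HV*>
# are close to zero, while man-made harbor structures tend to break this symmetry, so a
# locally averaged correlation power highlights asymmetric (likely built-up) pixels.
import numpy as np
from scipy.ndimage import uniform_filter

def copol_xpol_power(S_hh, S_hv, S_vv, win=7):
    """Local co-/cross-pol correlation power from single-look complex channels."""
    def local_mean(z):
        return uniform_filter(z.real, win) + 1j * uniform_filter(z.imag, win)
    r1 = local_mean(S_hh * np.conj(S_hv))
    r2 = local_mean(S_vv * np.conj(S_hv))
    return np.abs(r1) + np.abs(r2)

# Synthetic example: reflection-symmetric clutter plus one "asymmetric" patch.
rng = np.random.default_rng(0)
shape = (64, 64)
S_hh = rng.normal(size=shape) + 1j * rng.normal(size=shape)
S_vv = rng.normal(size=shape) + 1j * rng.normal(size=shape)
S_hv = rng.normal(size=shape) + 1j * rng.normal(size=shape)
S_hv[20:30, 20:30] += 0.8 * S_hh[20:30, 20:30]        # correlated co/cross pol -> asymmetry

power = copol_xpol_power(S_hh, S_hv, S_vv)
mask = power > np.percentile(power, 95)                # global threshold (placeholder value)
print("flagged pixels:", int(mask.sum()))
```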

14 pages, 5359 KiB  
Technical Note
Detection of Surface Rocks and Small Craters in Permanently Shadowed Regions of the Lunar South Pole Based on YOLOv7 and Markov Random Field Algorithms in SAR Images
by Tong Xia, Xuancheng Ren, Yuntian Liu, Niutao Liu, Feng Xu and Ya-Qiu Jin
Remote Sens. 2024, 16(11), 1834; https://doi.org/10.3390/rs16111834 - 21 May 2024
Cited by 2 | Viewed by 2270
Abstract
Excluding rough areas with surface rocks and craters is critical for the safety of landing missions, such as China’s Chang’e-7 mission, in the permanently shadowed region (PSR) of the lunar south pole. Binned digital elevation model (DEM) data can describe the undulating surface, but they can hardly reveal surface rocks because of median averaging. High-resolution images from a synthetic aperture radar (SAR) can be used to map discrete rocks and small craters according to their strong backscattering. This study utilizes the You Only Look Once version 7 (YOLOv7) tool to detect craters of varying sizes in SAR images. It also employs the Markov random field (MRF) algorithm to identify surface rocks, which are usually difficult to detect in DEM data. The results are validated against optical images and DEM data in non-PSR areas. With the assistance of the DEM data, regions with slopes larger than 10° are excluded. YOLOv7 and MRF are applied to detect craters and rocky surfaces and to exclude regions with steep slopes in the PSRs of the craters Shoemaker, Slater, and Shackleton. This study demonstrates that SAR images are feasible for selecting landing sites in the PSRs for future missions.
(This article belongs to the Special Issue Planetary Exploration Using Remote Sensing—Volume II)

17 pages, 30409 KiB  
Article
Data Fusion of RGB and Depth Data with Image Enhancement
by Lennard Wunsch, Christian Görner Tenorio, Katharina Anding, Andrei Golomoz and Gunther Notni
J. Imaging 2024, 10(3), 73; https://doi.org/10.3390/jimaging10030073 - 21 Mar 2024
Cited by 2 | Viewed by 3412
Abstract
Since 3D sensors became popular, imaged depth data are easier to obtain in the consumer sector. In applications such as defect localization on industrial objects or mass/volume estimation, precise depth data are important and thus benefit from the use of multiple information sources. However, a combination of RGB images and depth images can not only improve our understanding of objects, providing more information about them, but also enhance data quality. Combining different camera systems using data fusion can enable higher-quality data, since disadvantages can be compensated for. Data fusion itself consists of data preparation and data registration. A challenge in data fusion is the different resolutions of the sensors; therefore, up- and downsampling algorithms are needed. This paper compares multiple up- and downsampling methods, such as different direct interpolation methods, joint bilateral upsampling (JBU), and Markov random fields (MRFs), in terms of their potential to create RGB-D images and improve the quality of depth information. In contrast to the literature, in which imaging systems are adjusted to acquire data of the same section simultaneously, the laboratory setup in this study was based on conveyor-based optical sorting processes, and therefore the data were acquired at different times and at different spatial locations. Data assignment and data cropping were necessary. To evaluate the results, the root mean square error (RMSE), signal-to-noise ratio (SNR), correlation (CORR), universal quality index (UQI), and contour offset were monitored. JBU outperformed the other upsampling methods, achieving a mean RMSE of 25.22, a mean SNR of 32.80, a mean CORR of 0.99, and a mean UQI of 0.97.
(This article belongs to the Section Image and Video Processing)
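For readers unfamiliar with joint bilateral upsampling, the sketch below (grayscale guide image and illustrative parameter values, not the paper's configuration) shows how each high-resolution depth value is formed as a weighted average of nearby low-resolution depth samples, with weights combining a spatial Gaussian and a range Gaussian on the guide image.

```python
# Minimal JBU sketch under assumptions: a grayscale guide and toy parameter values.
import numpy as np

def jbu(depth_lo, guide_hi, scale, radius=2, sigma_s=1.0, sigma_r=0.1):
    H, W = guide_hi.shape
    h, w = depth_lo.shape
    out = np.zeros((H, W))
    for Y in range(H):
        for X in range(W):
            yl, xl = Y / scale, X / scale                  # position in the low-res grid
            acc, norm = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = int(round(yl)) + dy, int(round(xl)) + dx
                    if not (0 <= y < h and 0 <= x < w):
                        continue
                    ws = np.exp(-((y - yl) ** 2 + (x - xl) ** 2) / (2 * sigma_s ** 2))
                    g_nb = guide_hi[min(y * scale, H - 1), min(x * scale, W - 1)]
                    wr = np.exp(-((guide_hi[Y, X] - g_nb) ** 2) / (2 * sigma_r ** 2))
                    acc += ws * wr * depth_lo[y, x]
                    norm += ws * wr
            out[Y, X] = acc / norm if norm > 0 else depth_lo[int(yl), int(xl)]
    return out

# Usage: upsample an 8x8 depth map to 32x32, guided by a step-edge image; the depth edge
# stays aligned with the guide edge instead of being blurred.
guide = np.zeros((32, 32)); guide[:, 16:] = 1.0
depth = np.zeros((8, 8));   depth[:, 4:] = 2.0
print(jbu(depth, guide, scale=4).round(2)[0])
```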

20 pages, 10858 KiB  
Article
PolSAR Image Classification with Active Complex-Valued Convolutional-Wavelet Neural Network and Markov Random Fields
by Lu Liu and Yongxiang Li
Remote Sens. 2024, 16(6), 1094; https://doi.org/10.3390/rs16061094 - 20 Mar 2024
Cited by 4 | Viewed by 1871
Abstract
PolSAR image classification has attracted extensive research in recent decades. Aiming to improve PolSAR classification performance in the presence of speckle noise, this paper proposes an active complex-valued convolutional-wavelet neural network that incorporates the dual-tree complex wavelet transform (DT-CWT) and a Markov random field (MRF). In this approach, the DT-CWT is introduced into the complex-valued convolutional neural network to suppress the speckle noise of PolSAR images and maintain the structures of the learned feature maps. In addition, by applying active learning (AL), we iteratively select the most informative unlabeled training samples of the PolSAR datasets. Moreover, the MRF is utilized to obtain spatial local correlation information, which has been proven to be effective in improving classification performance. The experimental results on three benchmark PolSAR datasets demonstrate that the proposed method achieves a significant classification performance gain, in terms of effectiveness and robustness, beyond some state-of-the-art deep learning methods.
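The active-learning step can be illustrated with a generic uncertainty-sampling sketch; the entropy criterion below is a common choice assumed here for illustration, since the abstract does not state which informativeness measure the authors use.

```python
# Generic uncertainty sampling (assumed criterion, not necessarily the paper's): rank
# unlabeled samples by the entropy of the current classifier's predicted probabilities
# and query the most uncertain ones for labeling.
import numpy as np

def select_most_informative(probs, n_query):
    """probs: (n_samples, n_classes) predicted probabilities for the unlabeled pool."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[::-1][:n_query]            # indices of the most uncertain samples

# Usage: a pool of 6 unlabeled samples, query the 2 the model is least sure about.
probs = np.array([[0.98, 0.01, 0.01],
                  [0.40, 0.35, 0.25],
                  [0.90, 0.05, 0.05],
                  [0.34, 0.33, 0.33],
                  [0.70, 0.20, 0.10],
                  [0.55, 0.30, 0.15]])
print(select_most_informative(probs, n_query=2))           # -> samples 3 and 1
```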

21 pages, 33135 KiB  
Article
A Multi-Scale Graph Based on Spatio-Temporal-Radiometric Interaction for SAR Image Change Detection
by Peijing Zhang, Jinbao Jiang, Peng Kou, Shining Wang and Bin Wang
Remote Sens. 2024, 16(3), 560; https://doi.org/10.3390/rs16030560 - 31 Jan 2024
Cited by 1 | Viewed by 1850
Abstract
Change detection (CD) in remote sensing imagery has found broad applications in ecosystem service assessment, disaster evaluation, urban planning, land utilization, etc. In this paper, we propose a novel graph model-based method for synthetic aperture radar (SAR) image CD. To mitigate the influence of speckle noise on SAR image CD, we opt to compare the structures of multi-temporal images instead of following the conventional approach of directly comparing pixel values, which makes the method more robust to speckle noise. Specifically, we first segment the multi-temporal images into square patches at multiple scales and construct multi-scale K-nearest neighbor (KNN) graphs for each image, and we then develop an effective graph fusion strategy that facilitates the exploitation of multi-scale information within SAR images and offers an enhanced representation of the complex relationships among features in the images. Second, we accomplish the interaction of spatio-temporal-radiometric information between graph models through graph mapping, which can efficiently uncover the connections between multi-temporal images, leading to a more precise extraction of changes between the images. Finally, we use a Markov random field (MRF)-based segmentation method to obtain the binary change map. Through extensive experimentation on real datasets, we demonstrate the superiority of our method in comparison with current state-of-the-art methods.
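As a generic illustration of the final MRF segmentation step (the abstract does not spell out the potentials or the optimizer), the sketch below labels a change-magnitude image as change/no-change with iterated conditional modes; the Gaussian class models and the smoothness weight are assumptions.

```python
# Generic binary MRF segmentation of a change-magnitude image via iterated conditional
# modes (ICM); assumed Gaussian data terms and a Potts-style smoothness term.
import numpy as np

def mrf_binary_segment(diff, beta=1.5, n_iter=10):
    labels = (diff > diff.mean()).astype(int)               # initial change / no-change map
    cnt = np.zeros_like(labels)                              # number of 4-neighbors per pixel
    cnt[1:, :] += 1; cnt[:-1, :] += 1; cnt[:, 1:] += 1; cnt[:, :-1] += 1
    for _ in range(n_iter):
        # Re-estimate the Gaussian parameters of each class from the current labeling.
        mu = [diff[labels == k].mean() if np.any(labels == k) else diff.mean() for k in (0, 1)]
        sd = [diff[labels == k].std() + 1e-6 if np.any(labels == k) else diff.std() for k in (0, 1)]
        # Count "changed" neighbors in a 4-neighborhood.
        nb = np.zeros_like(labels)
        nb[1:, :] += labels[:-1, :]; nb[:-1, :] += labels[1:, :]
        nb[:, 1:] += labels[:, :-1]; nb[:, :-1] += labels[:, 1:]
        energy = []
        for k in (0, 1):
            data = 0.5 * ((diff - mu[k]) / sd[k]) ** 2 + np.log(sd[k])
            disagree = nb if k == 0 else (cnt - nb)          # neighbors holding the other label
            energy.append(data + beta * disagree)
        labels = np.argmin(np.stack(energy), axis=0)
    return labels

# Usage: a noisy difference image with one truly changed block.
rng = np.random.default_rng(1)
diff = rng.normal(0.0, 1.0, (64, 64)); diff[24:40, 24:40] += 3.0
change_map = mrf_binary_segment(diff)
print("changed pixels:", int(change_map.sum()))
```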

13 pages, 675 KiB  
Article
Using Markov Random Field and Analytic Hierarchy Process to Account for Interdependent Criteria
by Jih-Jeng Huang and Chin-Yi Chen
Algorithms 2024, 17(1), 1; https://doi.org/10.3390/a17010001 - 19 Dec 2023
Cited by 12 | Viewed by 2317
Abstract
The Analytic Hierarchy Process (AHP) has been a widely used multi-criteria decision-making (MCDM) method since the 1980s because of its simplicity and rationality. However, the conventional AHP assumes criteria independence, which is not always accurate in realistic scenarios where interdependencies between criteria exist. Several methods have been proposed to relax the assumption of independent criteria in the AHP, e.g., the Analytic Network Process (ANP). However, these methods usually require numerous pairwise comparison matrices (PCMs), which makes them hard to apply to complicated, large-scale problems. This paper presents a groundbreaking approach to address this issue by incorporating discrete Markov Random Fields (MRFs) into the AHP framework. Our method enhances decision making by effectively and sensibly capturing interdependencies among criteria, reflecting actual weights. Moreover, we showcase a numerical example to illustrate the proposed method and compare the results with the conventional AHP and a Fuzzy Cognitive Map (FCM). The findings highlight our method’s ability to influence global priority values and the ranking of alternatives when interdependencies between criteria are considered. These results suggest that the introduced method provides a flexible and adaptable framework for modeling interdependencies between criteria, ultimately leading to more accurate and reliable decision-making outcomes.
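For context, the conventional AHP step that the paper builds on can be sketched as follows: criterion weights are taken as the principal eigenvector of a pairwise comparison matrix and checked with Saaty's consistency ratio. How the proposed method then couples such weights with a discrete MRF over the criteria is not reproduced here.

```python
# Conventional AHP priority derivation (standard formulation, not the paper's MRF coupling).
import numpy as np

def ahp_weights(pcm):
    vals, vecs = np.linalg.eig(pcm)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                                      # normalized criterion weights
    n = pcm.shape[0]
    ci = (vals[k].real - n) / (n - 1)                 # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)     # Saaty's random index (tabulated)
    return w, ci / ri                                  # weights, consistency ratio

# Example: three criteria compared on Saaty's 1-9 scale.
pcm = np.array([[1, 3, 5],
                [1/3, 1, 2],
                [1/5, 1/2, 1]], float)
w, cr = ahp_weights(pcm)
print("weights:", w.round(3), "CR:", round(cr, 3))     # CR well below 0.1 -> consistent
```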

19 pages, 1802 KiB  
Article
Real Aperture Radar Super-Resolution Imaging for Sea Surface Monitoring Based on a Hybrid Model
by Ke Tan, Shengqi Zhou, Xingyu Lu, Jianchao Yang, Weimin Su and Hong Gu
Sensors 2023, 23(23), 9609; https://doi.org/10.3390/s23239609 - 4 Dec 2023
Cited by 1 | Viewed by 1544
Abstract
In recent years, super-resolution imaging techniques have been intensively introduced to enhance the azimuth resolution of real aperture scanning radar (RASR). However, there is a paucity of research on sea surface imaging at small incident angles in complex scenarios. This research explores super-resolution imaging for sea surface monitoring, with a specific emphasis on grounded or shipborne platforms. To tackle the inescapable interference of sea clutter, it was segregated from the imaging objects and modeled alongside I/Q channel noise within the maximum likelihood framework, thus mitigating the clutter’s impact. Simultaneously, to characterize the non-stationary regions of the monitoring scene, we harnessed the Markov random field (MRF) model for its two-dimensional (2D) spatial representational capacity, augmented by a quadratic term to bolster outlier resilience. Subsequently, the maximum a posteriori (MAP) criterion was employed to unite the ML function with the statistical model of the imaging scene. This hybrid model forms the core of our super-resolution methodology. Finally, a fast iterative threshold shrinkage method was applied to solve the objective function, yielding stable estimates of the monitored scene. Through validation with simulation and real-data experiments, the superiority of the proposed approach in recovering the monitored scenes and suppressing clutter has been verified.
(This article belongs to the Special Issue Recent Advancements in Radar Imaging and Sensing Technology II)

6 pages, 2934 KiB  
Proceeding Paper
Comparison between Classic Methods and Deep Learning Approach in Detecting Changes of Waterbodies from Sentinel-1 Images
by Sahand Tahermanesh, Behnam Asghari Beirami and Mehdi Mokhtarzade
Environ. Sci. Proc. 2024, 29(1), 26; https://doi.org/10.3390/ECRS2023-16186 - 28 Nov 2023
Viewed by 891
Abstract
Climate change has directly impacted Earth’s habitats, resulting in various adverse effects, such as the desiccation of water bodies. The process of identifying such changes through field observations is time-consuming and costly. By using remote sensing techniques, it has become easier than ever to monitor changes in the environment. Radar satellites, unlike optical ones, can acquire data in all weather conditions, regardless of the time of day. These data can provide valuable information about the environment and surface roughness. Various methods have been proposed for detecting changes, which can be divided into classic and deep learning methods. Classic methods only use image information, such as radar backscatter, and cannot extract spatial information. Sentinel-1 (S1) is an Earth observation radar sensor that provides free access to SAR (Synthetic Aperture Radar) images. This study aims to compare the performance of two classic methods, a ratio index (RI) and a Markov random field (MRF), with deep learning networks in detecting changes. As a deep network, an Inception CNN (convolutional neural network) is presented as an enhancement of the original CNN to detect the changes. To evaluate the methods, two S1 images of Lake Poopó, located in the Altiplano Mountains in Oruro Department, Bolivia, are used as the primary dataset. The results of the compared models were assessed using three evaluation metrics: Overall Accuracy (O.A), Missed Error (M.E), and Kappa Coefficient (K). Based on the evaluations, the Inception CNN performed exceptionally well in all metrics, with O.A, K, and M.E rates of 97.35%, 90.28%, and 9%, respectively. Meanwhile, the ratio index performed poorly, with 83.27%, 29.05%, and 75.03%, respectively, for O.A, K, and M.E. These results indicated that the Inception CNN could provide better performance in detecting changes from S1 images.
(This article belongs to the Proceedings of ECRS 2023)

26 pages, 45161 KiB  
Article
Polarimetric Synthetic Aperture Radar Image Classification Based on Double-Channel Convolution Network and Edge-Preserving Markov Random Field
by Junfei Shi, Mengmeng Nie, Shanshan Ji, Cheng Shi, Hongying Liu and Haiyan Jin
Remote Sens. 2023, 15(23), 5458; https://doi.org/10.3390/rs15235458 - 22 Nov 2023
Cited by 5 | Viewed by 2461
Abstract
Deep learning methods have gained significant popularity in the field of polarimetric synthetic aperture radar (PolSAR) image classification. These methods aim to extract high-level semantic features from the original PolSAR data to learn the polarimetric information. However, using only original data, these methods cannot learn multiple scattering features and complex structures for extremely heterogeneous terrain objects. In addition, deep learning methods always cause edge confusion due to the high-level features. To overcome these limitations, we propose a novel approach that combines a new double-channel convolutional neural network (CNN) with an edge-preserving Markov random field (MRF) model for PolSAR image classification, abbreviated to “DCCNN-MRF”. Firstly, a double-channel convolution network (DCCNN) is developed to combine complex matrix data and multiple scattering features. The DCCNN consists of two subnetworks: a Wishart-based complex matrix network and a multi-feature network. The Wishart-based complex matrix network focuses on learning the statistical characteristics and channel correlation, and the multi-feature network is designed to learn high-level semantic features well. Then, a unified network framework is designed to fuse two kinds of weighted features in order to enhance advantageous features and reduce redundant ones. Finally, an edge-preserving MRF model is integrated with the DCCNN network. In the MRF model, a sketch map-based edge energy function is designed by defining an adaptive weighted neighborhood for edge pixels. Experiments were conducted on four real PolSAR datasets with different sensors and bands. The experimental results demonstrate the effectiveness of the proposed DCCNN-MRF method.
(This article belongs to the Special Issue Modeling, Processing and Analysis of Microwave Remote Sensing Data)

18 pages, 10495 KiB  
Article
Improving Differential Interferometry Synthetic Aperture Radar Phase Unwrapping Accuracy with Global Navigation Satellite System Monitoring Data
by Hui Wang, Yuxi Cao, Guorui Wang, Peixian Li, Jia Zhang and Yongfeng Gong
Sustainability 2023, 15(17), 13277; https://doi.org/10.3390/su151713277 - 4 Sep 2023
Cited by 2 | Viewed by 1558
Abstract
We developed a GNSS-assisted InSAR phase unwrapping algorithm for large-deformation DInSAR data processing in coal mining areas. Utilizing Markov random field (MRF) theory and simulated annealing, the algorithm derived the energy function using MRF theory, the Gibbs distribution, and the Hammersley–Clifford theorem. It calculated an image probability ratio and unwrapped the phase through iterative calculations of the initial integer perimeter matrix, the interference phase, and the weight matrix. Algorithm reliability was confirmed by combining simulated phases with digital elevation model (DEM) data for deconvolution calculations, showing good agreement with the real phase values (median error: 4.8 × 10⁴). Applied to ALOS-2 data in the Jinfeng mining area, the algorithm transformed the interferometric phase into deformation, obtaining simulated deformation by fitting GNSS monitoring data. It effectively resolved meter-scale deformation between single-period images, particularly for unwrapping problems caused by decoherence. To improve calculation speed, a coherence-based threshold was set: points with high coherence skipped iterative optimization, while points below the threshold underwent it (coherence threshold: 0.32). The algorithm achieved a median error of 30.29 mm and a relative error of 2.5% compared to the GNSS fitting results, meeting the accuracy requirements for mining subsidence monitoring in large mining areas.
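The Gibbs form invoked via the Hammersley–Clifford theorem can be written generically as follows, where phi is the unwrapped-phase field, C the set of cliques, V_c the clique potentials, and Z the partition function; the paper's specific potentials and GNSS-assisted terms are not reproduced here.

```latex
P(\phi) = \frac{1}{Z}\exp\bigl(-U(\phi)\bigr),
\qquad
U(\phi) = \sum_{c \in \mathcal{C}} V_c(\phi)
```

In such a setup, simulated annealing is one generic way to search for a low-energy configuration of the field.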

19 pages, 15585 KiB  
Article
Land Cover Classification of SAR Based on 1DCNN-MRF Model Using Improved Dual-Polarization Radar Vegetation Index
by Yabo Huang, Mengmeng Meng, Zhuoyan Hou, Lin Wu, Zhengwei Guo, Xiajiong Shen, Wenkui Zheng and Ning Li
Remote Sens. 2023, 15(13), 3221; https://doi.org/10.3390/rs15133221 - 21 Jun 2023
Cited by 5 | Viewed by 2894
Abstract
Accurate land cover classification (LCC) is essential for studying global change. Synthetic aperture radar (SAR) has been used for LCC due to its advantage of weather independence. In particular, dual-polarization (dual-pol) SAR data have wider coverage and are easier to obtain, which provides an unprecedented opportunity for LCC. However, dual-pol SAR data have a weak discrimination ability due to their limited polarization information. Moreover, the complex imaging mechanism leads to speckle noise in SAR images, which also decreases the accuracy of SAR LCC. To address the above issues, an improved dual-pol radar vegetation index based on multiple components (DpRVIm) and a new LCC method are proposed for dual-pol SAR data. Firstly, in the DpRVIm, the scattering information of polarization and terrain factors was considered to improve the separability of ground objects for dual-pol data. Then, the Jeffries-Matusita (J-M) distance and a one-dimensional convolutional neural network (1DCNN) algorithm were used to analyze the effect of different dual-pol radar vegetation indexes on LCC. Finally, in order to reduce the influence of speckle noise, a two-stage LCC method, the 1DCNN-MRF, based on the 1DCNN and a Markov random field (MRF), was designed considering the spatial information of ground objects. In this study, the HH-HV mode data of the Gaofen-3 satellite in the Dongting Lake area were used, and the results showed the following: (1) Through the combination of the backscatter coefficient and dual-pol radar vegetation indexes based on the polarization decomposition technique, the accuracy of LCC can be improved compared with using the backscatter coefficient alone. (2) The DpRVIm was more conducive to improving the accuracy of LCC than the classic dual-pol radar vegetation index (DpRVI) and radar vegetation index (RVI), especially for farmland and forest. (3) Compared with the classic machine learning methods K-nearest neighbor (KNN), random forest (RF), and the 1DCNN, the designed 1DCNN-MRF achieved the highest accuracy, with an overall accuracy (OA) score of 81.76% and a Kappa coefficient (Kappa) score of 0.74. This study indicated the application potential of the polarization decomposition technique and DEM in enhancing the separability of different land cover types in SAR LCC. Furthermore, it demonstrated that the combination of deep learning networks and MRF is suitable for suppressing the influence of speckle noise.

17 pages, 2657 KiB  
Article
Combining CNNs and Markov-like Models for Facial Landmark Detection with Spatial Consistency Estimates
by Ahmed Gdoura, Markus Degünther, Birgit Lorenz and Alexander Effland
J. Imaging 2023, 9(5), 104; https://doi.org/10.3390/jimaging9050104 - 22 May 2023
Cited by 6 | Viewed by 3138
Abstract
The accurate localization of facial landmarks is essential for several tasks, including face recognition, head pose estimation, facial region extraction, and emotion detection. Although the number of required landmarks is task-specific, models are typically trained on all available landmarks in the datasets, limiting efficiency. Furthermore, model performance is strongly influenced by scale-dependent local appearance information around landmarks and the global shape information generated by them. To account for this, we propose a lightweight hybrid model for facial landmark detection designed specifically for pupil region extraction. Our design combines a convolutional neural network (CNN) with a Markov random field (MRF)-like process trained on only 17 carefully selected landmarks. The advantage of our model is the ability to run different image scales on the same convolutional layers, resulting in a significant reduction in model size. In addition, we employ an approximation of the MRF that is run on a subset of landmarks to validate the spatial consistency of the generated shape. This validation process is performed against a learned conditional distribution, expressing the location of one landmark relative to its neighbor. Experimental results on popular facial landmark localization datasets such as 300W, WFLW, and HELEN demonstrate the accuracy of our proposed model. Furthermore, our model achieves state-of-the-art performance on a well-defined robustness metric. In conclusion, the results demonstrate the ability of our lightweight model to filter out spatially inconsistent predictions, even with significantly fewer training landmarks.
(This article belongs to the Topic Computer Vision and Image Processing)
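The spatial-consistency validation can be illustrated with a toy sketch: a Gaussian conditional distribution of each landmark's offset relative to its neighbor is learned from synthetic "training" shapes and then used to flag implausible predictions. The shapes, the Gaussian form, and the log-likelihood threshold are assumptions for the example, not the authors' learned model.

```python
# Toy sketch of the spatial-consistency idea (assumed details): learn a Gaussian over the
# offset of each landmark relative to a neighbor, then flag test predictions whose offsets
# have low likelihood under that conditional distribution.
import numpy as np

rng = np.random.default_rng(0)

# "Training" shapes: 3 landmarks whose pairwise offsets vary only a little.
base = np.array([[0, 0], [10, 2], [20, 0]], float)
train = base + rng.normal(0, 0.5, size=(200, 3, 2))

# Learn mean/covariance of the offset between consecutive landmarks.
offsets = train[:, 1:, :] - train[:, :-1, :]                  # shape (200, 2, 2)
mu = offsets.mean(0)
cov = [np.cov(offsets[:, i, :].T) for i in range(2)]

def offset_logpdf(o, mean, c):
    d = o - mean
    return -0.5 * (d @ np.linalg.inv(c) @ d + np.log(np.linalg.det(c)) + 2 * np.log(2 * np.pi))

def consistent(shape, thresh=-15.0):
    """Per-edge check: does each predicted offset look plausible under the learned prior?"""
    return [offset_logpdf(shape[i + 1] - shape[i], mu[i], cov[i]) > thresh for i in range(2)]

good_pred = base + rng.normal(0, 0.5, size=(3, 2))
bad_pred = good_pred.copy(); bad_pred[1] += np.array([15.0, -8.0])   # one outlier landmark
print("good prediction:", consistent(good_pred))               # -> [True, True]
print("outlier prediction:", consistent(bad_pred))              # -> [False, False]
```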
