Search Results (151)

Search Parameters:
Keywords = image co-registration

14 pages, 851 KiB  
Article
Evaluating Accuracy of Smartphone Facial Scanning System with Cone-Beam Computed Tomography Images
by Konstantinos Megkousidis, Elie Amm and Melih Motro
Bioengineering 2025, 12(8), 792; https://doi.org/10.3390/bioengineering12080792 - 23 Jul 2025
Viewed by 279
Abstract
Objectives: Facial soft tissue imaging is crucial in orthodontic treatment planning, and the structured light scanning technology found in the latest iPhone models constitutes a promising method. Currently, studies which evaluate the accuracy of smartphone-based three-dimensional (3D) facial scanners are scarce. This study compares smartphone scans with cone-beam computed tomography (CBCT) images. Materials and Methods: Three-dimensional images of 23 screened patients were captured with the camera of an iPhone 13 Pro Max and processed with the Scandy Pro application; CBCT scans were also taken as a standard of care. After establishing unique image pairs of the same patient, linear and angular measurements were compared between the images to assess the scanner’s two-dimensional trueness. Following the co-registration of the virtual models, a heat map was generated, and root mean square (RMS) deviations were calculated for quantitative assessment of 3D trueness. Precision was determined by comparing consecutive 3D facial scans of five participants, while intraobserver reliability was assessed by repeating measurements on five subjects after a two-week interval. Results: This study found no significant difference in soft tissue measurements between smartphone and CBCT images (p > 0.05). The mean absolute difference was 1.43 mm for the linear and 3.16° for the angular measurements. The mean RMS value was 1.47 mm. Intraobserver reliability and scanner precision were assessed, and the Intraclass Correlation Coefficients were found to be excellent. Conclusions: Smartphone facial scanners offer an accurate and reliable alternative to stereophotogrammetry systems, though clinicians should exercise caution when examining the lateral sections of those images due to inherent inaccuracies. Full article
(This article belongs to the Special Issue Orthodontic Biomechanics)
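As a rough illustration of how a 3D trueness figure such as the RMS deviation reported above can be obtained once two surfaces are co-registered, the sketch below measures nearest-neighbour point-to-point distances between two vertex clouds. The function and variable names are hypothetical; this is not the authors' pipeline.

```python
# Illustrative sketch (not the authors' pipeline): RMS deviation between two
# already co-registered 3D surfaces, approximated by nearest-neighbour
# point-to-point distances between their vertex clouds.
import numpy as np
from scipy.spatial import cKDTree

def rms_deviation(scan_vertices: np.ndarray, reference_vertices: np.ndarray) -> float:
    """Both inputs are (N, 3) arrays of co-registered vertex coordinates in mm."""
    tree = cKDTree(reference_vertices)
    distances, _ = tree.query(scan_vertices)        # distance to the closest reference vertex
    return float(np.sqrt(np.mean(distances ** 2)))  # root mean square deviation

# Hypothetical usage with two mesh vertex arrays:
# rms = rms_deviation(iphone_scan_vertices, cbct_soft_tissue_vertices)
```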

23 pages, 24301 KiB  
Article
Robust Optical and SAR Image Registration Using Weighted Feature Fusion
by Ao Luo, Anxi Yu, Yongsheng Zhang, Wenhao Tong and Huatao Yu
Remote Sens. 2025, 17(15), 2544; https://doi.org/10.3390/rs17152544 - 22 Jul 2025
Viewed by 295
Abstract
Image registration constitutes the fundamental basis for the joint interpretation of synthetic aperture radar (SAR) and optical images. However, robust image registration remains challenging due to significant regional heterogeneity in remote sensing scenes (e.g., co-existing urban and marine areas within a single image). To overcome this challenge, this article proposes a novel optical–SAR image registration method named Gradient and Standard Deviation Feature Weighted Fusion (GDWF). First, a Block-local standard deviation (Block-LSD) operator is proposed to extract block-based feature points with regional adaptability. Subsequently, a dual-modal feature description is developed, constructing both gradient-based descriptors and local standard deviation (LSD) descriptors for the neighborhoods surrounding the detected feature points. To further enhance matching robustness, a confidence-weighted feature fusion strategy is proposed. By establishing a reliability evaluation model for similarity measurement maps, the contribution weights of gradient features and LSD features are dynamically optimized, ensuring adaptive performance under varying conditions. To verify the effectiveness of the method, it is compared with state-of-the-art algorithms (MOGF, CFOG, and FED-HOPC) on diverse optical and SAR datasets. The experimental results demonstrate that the proposed GDWF algorithm achieves the best performance in terms of registration accuracy and robustness among all compared methods, effectively handling optical–SAR image pairs with significant regional heterogeneity. Full article
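The confidence-weighted fusion step can be pictured with the minimal sketch below, which blends a gradient-based and an LSD-based similarity map using a simple peak-sharpness score; that score is an assumed stand-in, not the reliability evaluation model defined in the paper.

```python
# Sketch of confidence-weighted fusion of two similarity maps; the peak-sharpness
# confidence used here is an assumed stand-in for the paper's reliability model.
import numpy as np

def peak_confidence(sim_map: np.ndarray) -> float:
    """Hypothetical confidence: how far the best score stands out from the background."""
    return float((sim_map.max() - sim_map.mean()) / (sim_map.std() + 1e-12))

def fuse_similarity(sim_gradient: np.ndarray, sim_lsd: np.ndarray) -> np.ndarray:
    w_g = peak_confidence(sim_gradient)
    w_l = peak_confidence(sim_lsd)
    total = w_g + w_l
    return (w_g / total) * sim_gradient + (w_l / total) * sim_lsd  # fused similarity surface

# The matching offset is then read from the fused map, e.g.:
# dy, dx = np.unravel_index(np.argmax(fused), fused.shape)
```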

16 pages, 1289 KiB  
Review
The Role of Intravascular Imaging in Coronary Chronic Total Occlusion PCI: Enhancing Procedural Success Through Real-Time Visualization
by Hussein Sliman, Rim Kasem Ali Sliman, Paul Knaapen, Alex Nap, Grzegorz Sobieszek and Maksymilian P. Opolski
J. Pers. Med. 2025, 15(7), 318; https://doi.org/10.3390/jpm15070318 - 15 Jul 2025
Viewed by 346
Abstract
Coronary chronic total occlusions (CTOs) are diagnosed in a significant portion of patients undergoing coronary angiography and represent one of the most complex scenarios in contemporary percutaneous coronary interventions (PCI). This review systematically examines how co-registration of adjunctive imaging modalities—intravascular ultrasound (IVUS), optical coherence tomography (OCT), and coronary computed tomography angiography (CCTA)—enhances the precision and success rates of CTO-PCI during the procedure. The strategic integration of these technologies enables the development of patient-specific intervention strategies tailored to individual vascular architecture and lesion characteristics. This personalized approach marks a transition from standardized protocols to precision interventional cardiology, potentially optimizing procedural success rates while minimizing complications. Full article
(This article belongs to the Special Issue Interventional Cardiology: Latest Technology, Progress and Challenge)

26 pages, 92114 KiB  
Article
Multi-Modal Remote Sensing Image Registration Method Combining Scale-Invariant Feature Transform with Co-Occurrence Filter and Histogram of Oriented Gradients Features
by Yi Yang, Shuo Liu, Haitao Zhang, Dacheng Li and Ling Ma
Remote Sens. 2025, 17(13), 2246; https://doi.org/10.3390/rs17132246 - 30 Jun 2025
Viewed by 399
Abstract
Multi-modal remote sensing images often exhibit complex and nonlinear radiation differences which significantly hinder the performance of traditional feature-based image registration methods such as Scale-Invariant Feature Transform (SIFT). In contrast, structural features—such as edges and contours—remain relatively consistent across modalities. To address this challenge, we propose a novel multi-modal image registration method, Cof-SIFT, which integrates a co-occurrence filter with SIFT. By replacing the traditional Gaussian filter with a co-occurrence filter, Cof-SIFT effectively suppresses texture variations while preserving structural information, thereby enhancing robustness to cross-modal differences. To further improve image registration accuracy, we introduce an extended approach, Cof-SIFT_HOG, which extracts Histogram of Oriented Gradients (HOG) features from the image gradient magnitude map of corresponding points and refines their positions based on HOG similarity. This refinement yields more precise alignment between the reference and image to be registered. We evaluated Cof-SIFT and Cof-SIFT_HOG on a diverse set of multi-modal remote sensing image pairs. The experimental results demonstrate that both methods outperform existing approaches, including SIFT, COFSM, SAR-SIFT, PSO-SIFT, and OS-SIFT, in terms of robustness and registration accuracy. Notably, Cof-SIFT_HOG achieves the highest overall performance, confirming the effectiveness of the proposed structural-preserving and corresponding point location refinement strategies in cross-modal registration tasks. Full article
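For orientation, the baseline SIFT matching skeleton on which Cof-SIFT builds can be sketched with OpenCV as below; the co-occurrence filtering and HOG-based point refinement described above are deliberately not reproduced.

```python
# Baseline SIFT + ratio test + RANSAC skeleton (OpenCV) on which Cof-SIFT builds;
# the co-occurrence filter and HOG-based refinement are omitted here.
import cv2
import numpy as np

def match_and_estimate(ref_gray: np.ndarray, mov_gray: np.ndarray):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref_gray, None)
    kp2, des2 = sift.detectAndCompute(mov_gray, None)
    pairs = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in pairs if m.distance < 0.75 * n.distance]   # Lowe ratio test
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(dst, src, cv2.RANSAC, 3.0)       # warps moving -> reference
    return H, inliers
```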

25 pages, 34678 KiB  
Article
Historical Coast Snaps: Using Centennial Imagery to Track Shoreline Change
by Fátima Valverde, Rui Taborda, Amy E. East and Cristina Ponte Lira
Remote Sens. 2025, 17(8), 1326; https://doi.org/10.3390/rs17081326 - 8 Apr 2025
Viewed by 901
Abstract
Understanding long-term coastal evolution requires historical data, yet accessing reliable information becomes increasingly challenging for extended periods. While vertical aerial imagery has been extensively used in coastal studies since the mid-20th century, and satellite-derived shoreline measurements are now revolutionizing shoreline change studies, ground-based images, such as historical photographs and picture postcards, provide an alternative source of shoreline data for earlier periods when other datasets are scarce. Despite their frequent use for documenting qualitative morphological changes, these valuable historical data sources have rarely supported quantitative assessments of coastal evolution. This study demonstrates the potential of historical ground-oblique images for quantitatively assessing shoreline position and long-term change. Using Conceição-Duquesa Beach (Cascais, Portugal) as a case study, we analyze shoreline evolution over 92 years by applying a novel methodology to historical photographs and postcards. The approach combines image registration, shoreline detection, coordinate transformation, and rectification while accounting for positional uncertainty. Results reveal a significant counterclockwise rotation of the shoreline between the 20th and 21st centuries, exceeding estimated uncertainty thresholds. This study highlights the feasibility of using historical ground-based imagery to reconstruct shoreline positions and quantify long-term coastal change. The methodology is straightforward, adaptable, and offers a promising avenue for extending the temporal range of shoreline datasets, advancing our understanding of coastal evolution. Full article
(This article belongs to the Special Issue Advances in Remote Sensing of the Inland and Coastal Water Zones II)
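One ingredient of such a workflow, mapping digitized shoreline pixels to map coordinates through a projective transform fitted to shared control points, can be sketched as follows; the control-point values are placeholders, and the paper's camera rectification and uncertainty propagation are not reproduced.

```python
# Minimal sketch: transform shoreline pixels digitized on a historical ground photo
# into map coordinates via a projective transform fitted to shared control points.
# Control-point coordinates are placeholders, not data from the study.
import numpy as np
from skimage import transform

photo_cp = np.array([[120, 400], [860, 390], [500, 610], [300, 520]], dtype=float)  # (col, row)
map_cp   = np.array([[0.0, 0.0], [95.0, 2.0], [48.0, -60.0], [22.0, -35.0]])        # (x, y) metres

tform = transform.ProjectiveTransform()
tform.estimate(photo_cp, map_cp)                     # fit photo -> map homography

shoreline_px = np.array([[150, 455], [400, 470], [700, 460]], dtype=float)  # digitized points
shoreline_map = tform(shoreline_px)                  # shoreline in map coordinates
```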

23 pages, 7777 KiB  
Article
UOrtos: Methodology for Co-Registration and Subpixel Georeferencing of Satellite Imagery for Coastal Monitoring
by Gonzalo Simarro, Daniel Calvete, Francesca Ribas, Yeray Castillo and Càrol Puig-Polo
Remote Sens. 2025, 17(7), 1160; https://doi.org/10.3390/rs17071160 - 25 Mar 2025
Viewed by 513
Abstract
This study introduces a novel methodology for the automated co-registration and georeferencing of satellite imagery to enhance the accuracy of shoreline detection and coastal monitoring. The approach utilizes feature-based methods, cross-correlation, and RANSAC (RANdom SAmple Consensus) algorithms to accurately align images while avoiding outliers. By collectively analyzing the entire set of images and clustering them based on their pixel-pair connections, the method ensures robust transformations across the dataset. The methodology is applied to Sentinel-2 and Landsat images across four coastal sites (Duck, Narrabeen, Torrey Pines, and Truc Vert) from January 2020 to December 2023. The results show that the proposed approach effectively reduces the errors from ∼1 px to at most 0.4 px (and likely to below 0.2 px). This approach can enhance the precision of existing algorithms for coastal feature tracking, such as shoreline detection, and aids in differentiating georeferencing errors from the actual impacts of storms or beach nourishment activities. The tool can also handle complex cases of significant image rotation due to varied projections. The findings emphasize the importance of co-registration for reliable shoreline monitoring, with potential applications in coastal management and climate change impact studies. Full article
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
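A single building block of such a pipeline, sub-pixel shift estimation between two roughly aligned tiles by upsampled cross-correlation, can be sketched as below; the feature matching, RANSAC filtering, and clustering of images by pixel-pair connections described above are not shown.

```python
# Sketch of sub-pixel co-registration of one tile pair by upsampled cross-correlation;
# this is only one ingredient of the UOrtos-style pipeline described above.
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def coregister_tile(reference: np.ndarray, moving: np.ndarray):
    # upsample_factor=20 resolves the shift to 1/20 of a pixel
    offset, error, _ = phase_cross_correlation(reference, moving, upsample_factor=20)
    aligned = nd_shift(moving, offset)   # resample the moving tile onto the reference grid
    return offset, aligned
```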

20 pages, 42010 KiB  
Article
Coastline and Riverbed Change Detection in the Broader Area of the City of Patras Using Very High-Resolution Multi-Temporal Imagery
by Spiros Papadopoulos, Vassilis Anastassopoulos and Georgia Koukiou
Electronics 2025, 14(6), 1096; https://doi.org/10.3390/electronics14061096 - 11 Mar 2025
Viewed by 700
Abstract
Accurate and robust information on land cover changes in urban and coastal areas is essential for effective urban land management, ecosystem monitoring, and urban planning. This paper details the methodology and results of a pixel-level classification and change detection analysis, leveraging 1945 Royal Air Force (RAF) aerial imagery and 2011 Very High-Resolution (VHR) multispectral WorldView-2 satellite imagery from the broader area of Patras, Greece. Our attention is mainly focused on changes in the coastline extending northeast from the city of Patras and on the two major rivers, Charadros and Selemnos. The methodology involves preprocessing steps such as registration, denoising, and resolution adjustments to ensure computational feasibility for both coastal and riverbed change detection procedures while maintaining critical spatial features. For coastal change detection over time, the Normalized Difference Water Index (NDWI) was applied to the 2011 imagery to mask out the sea along the coastline, while the archival 1945 imagery was masked manually. To determine the differences in the coastline between 1945 and 2011, we perform image differencing by subtracting the 1945 image from the 2011 image. This highlights the areas where changes have occurred over time. To conduct riverbed change detection, feature extraction using the Gray-Level Co-occurrence Matrix (GLCM) was applied to capture spatial characteristics. A Support Vector Machine (SVM) classification model was trained to distinguish river pixels from non-river pixels, enabling the identification of changes in riverbeds and achieving 92.6% and 92.5% accuracy for new and old imagery, respectively. Post-classification processing included classification maps to enhance the visualization of the detected changes. This approach highlights the potential of combining historical and modern imagery with supervised machine learning methods to effectively assess coastal erosion and riverbed alterations. Full article
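The NDWI masking and mask differencing steps read roughly as in the sketch below; band names are assumptions, and in the study the 1945 panchromatic imagery was masked manually rather than with NDWI.

```python
# Sketch of the NDWI water mask and mask differencing described above; band names
# are assumptions, and the 1945 imagery was masked manually rather than with NDWI.
import numpy as np

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    return (green - nir) / (green + nir + 1e-12)

def water_mask(green: np.ndarray, nir: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    return ndwi(green, nir) > threshold              # True where water

def coastline_change(mask_2011: np.ndarray, mask_1945: np.ndarray) -> np.ndarray:
    # +1 where land turned to water, -1 where water turned to land, 0 unchanged
    return mask_2011.astype(np.int8) - mask_1945.astype(np.int8)
```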

22 pages, 6757 KiB  
Article
Co-Registration of Multi-Modal UAS Pushbroom Imaging Spectroscopy and RGB Imagery Using Optical Flow
by Ryan S. Haynes, Arko Lucieer, Darren Turner and Emiliano Cimoli
Drones 2025, 9(2), 132; https://doi.org/10.3390/drones9020132 - 11 Feb 2025
Cited by 1 | Viewed by 1014
Abstract
Remote sensing from unoccupied aerial systems (UASs) has witnessed exponential growth. The increasing use of imaging spectroscopy sensors and RGB cameras on UAS platforms demands accurate, cross-comparable multi-sensor data. Inherent errors during image capture or processing can introduce spatial offsets, diminishing spatial accuracy and hindering cross-comparison and change detection analysis. To address this, we demonstrate the use of an optical flow algorithm, eFOLKI, for co-registering imagery from two pushbroom imaging spectroscopy sensors (VNIR and NIR/SWIR) to an RGB orthomosaic. Our study focuses on two ecologically diverse vegetative sites in Tasmania, Australia. Both sites are structurally complex, posing challenging datasets for co-registration algorithms with initial georectification spatial errors of up to 9 m planimetrically. The optical flow co-registration significantly improved the spatial accuracy of the imaging spectroscopy relative to the RGB orthomosaic. After co-registration, spatial alignment errors were greatly improved, with RMSE and MAE values of less than 13 cm for the higher-spatial-resolution dataset and less than 33 cm for the lower resolution dataset, corresponding to only 2–4 pixels in both cases. These results demonstrate the efficacy of optical flow co-registration in reducing spatial discrepancies between multi-sensor UAS datasets, enhancing accuracy and alignment to enable robust environmental monitoring. Full article
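The mechanics of flow-based co-registration can be sketched as below; OpenCV's Farnebäck flow is used as an assumed stand-in because eFOLKI itself is not part of OpenCV, and the parameters are illustrative only.

```python
# Sketch of optical-flow co-registration of a spectroscopy band to an RGB reference;
# Farnebäck flow is an assumed stand-in for eFOLKI, which OpenCV does not provide.
import cv2
import numpy as np

def coregister_with_flow(reference_gray: np.ndarray, band: np.ndarray) -> np.ndarray:
    ref = cv2.normalize(reference_gray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    mov = cv2.normalize(band, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    flow = cv2.calcOpticalFlowFarneback(ref, mov, None, 0.5, 3, 31, 3, 5, 1.2, 0)
    h, w = ref.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # pull each spectroscopy pixel back onto the reference (orthomosaic) grid
    return cv2.remap(band.astype(np.float32), map_x, map_y, cv2.INTER_LINEAR)
```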

18 pages, 7563 KiB  
Article
Quantitative Analysis Using PMOD and FreeSurfer for Three Types of Radiopharmaceuticals for Alzheimer’s Disease Diagnosis
by Hyun Jin Yoon, Daye Yoon, Sungmin Jun, Young Jin Jeong and Do-Young Kang
Algorithms 2025, 18(2), 57; https://doi.org/10.3390/a18020057 - 21 Jan 2025
Viewed by 1109
Abstract
In amyloid brain PET, after parcellation using the finite element method (FEM)-based algorithm FreeSurfer and the voxel-based algorithm PMOD, SUVr values can be extracted and compared. This study presents the classification SUVr threshold in PET images of F-18 florbetaben (FBB), F-18 flutemetamol (FMM), and F-18 florapronol (FPN) and compares and analyzes the classification performance according to computational algorithm in each brain region. PET images were co-registered after the generated MRI was registered with standard template information. Using a MATLAB script, SUVr was calculated using the built-in parcellation number labeled in the brain region. PMOD and FreeSurfer, which use different algorithms, were used to load the PET image; after registration to the MRI, it was normalized to the MRI template. The volume and SUVr of each individual gray matter region were calculated using an automated anatomical labeling atlas. The SUVr values of eight regions, namely the frontal cortex (FC), lateral temporal cortex (LTC), mesial temporal cortex (MTC), parietal cortex (PC), occipital cortex (OC), anterior and posterior cingulate cortex (GCA, GCP), and the composite region, were calculated. After calculating the correlation of SUVr using the FreeSurfer and PMOD algorithms and calculating the AUC for amyloid-positive/negative subjects, the classification ability was calculated, and the SUVr threshold was calculated using the Youden index. The correlation coefficients between the FreeSurfer and PMOD SUVr calculations across the eight cortical regions were FBB (0.95), FMM (0.94), and FPN (0.91). The SUVr threshold was SUVr(LTC,min) = 1.264 and SUVr(THA,max) = 1.725 when calculated using FPN-FreeSurfer, and SUVr(MTC,min) = 1.093 and SUVr(MCT,max) = 1.564 when calculated using FPN-PMOD. The AUC comparison showed that there was no statistically significant difference (p > 0.05) in the SUVr classification results using the three radiopharmaceuticals, specifically for the LTC and OC regions in the PMOD analysis, and the LTC and PC regions in the FreeSurfer analysis. The SUVr calculation using PMOD (voxel-based algorithm) has a strong correlation with the calculation using FreeSurfer (FEM-based algorithm); therefore, they complement each other. Quantitative classification analysis with high accuracy is possible using the suggested SUVr threshold. The SUVr classification performance was good in the order of FMM, FBB, and FPN, and showed a good classification performance in the LTC region regardless of the type of radiotracer and analysis algorithm. Full article
(This article belongs to the Special Issue Algorithms in Data Classification (2nd Edition))
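The region-wise SUVr computation described above boils down to ratios of mean uptake over atlas labels; a minimal sketch with placeholder file names and label numbers is given below, and it does not reproduce the PMOD or FreeSurfer implementations.

```python
# Minimal sketch of region-wise SUVr from a PET volume and a co-registered
# parcellation atlas; paths and label numbers are placeholders, and this does not
# reproduce the PMOD or FreeSurfer implementations.
import numpy as np
import nibabel as nib

def regional_suvr(pet_path: str, atlas_path: str, target_labels, reference_labels) -> dict:
    pet = nib.load(pet_path).get_fdata()
    atlas = nib.load(atlas_path).get_fdata().astype(int)     # same grid as PET after co-registration
    ref_mean = pet[np.isin(atlas, reference_labels)].mean()  # mean uptake in the reference region
    return {lab: float(pet[atlas == lab].mean() / ref_mean) for lab in target_labels}

# Hypothetical usage:
# suvr = regional_suvr("pet.nii.gz", "parcellation.nii.gz", target_labels=[10, 11], reference_labels=[99])
```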

12 pages, 693 KiB  
Article
Haralick Texture Analysis for Differentiating Suspicious Prostate Lesions from Normal Tissue in Low-Field MRI
by Dang Bich Thuy Le, Ram Narayanan, Meredith Sadinski, Aleksandar Nacev, Yuling Yan and Srirama S. Venkataraman
Bioengineering 2025, 12(1), 47; https://doi.org/10.3390/bioengineering12010047 - 9 Jan 2025
Viewed by 962
Abstract
This study evaluates the feasibility of using Haralick texture analysis on low-field, T2-weighted MRI images for detecting prostate cancer, extending current research from high-field MRI to the more accessible and cost-effective low-field MRI. A total of twenty-one patients with biopsy-proven prostate cancer (Gleason score 4+3 or higher) were included. Before transperineal biopsy guided by low-field (58–74mT) MRI, a radiologist annotated suspicious regions of interest (ROIs) on high-field (3T) MRI. Rigid image registration was performed to align corresponding regions on both high- and low-field images, ensuring an accurate propagation of annotations to the co-registered low-field images for texture feature calculations. For each cancerous ROI, a matching ROI of identical size was drawn in a non-suspicious region presumed to be normal tissue. Four Haralick texture features (Energy, Correlation, Contrast, and Homogeneity) were extracted and compared between cancerous and non-suspicious ROIs. Two extraction methods were used: the direct computation of texture measures within the ROIs and a sliding window technique generating texture maps across the prostate from which average values were derived. The results demonstrated statistically significant differences in texture features between cancerous and non-suspicious regions. Specifically, Energy and Homogeneity were elevated (p-values: <0.00001–0.004), while Contrast and Correlation were reduced (p-values: <0.00001–0.03) in cancerous ROIs. These findings suggest that Haralick texture features are both feasible and informative for differentiating abnormalities, offering promise in assisting prostate cancer detection on low-field MRI. Full article
(This article belongs to the Special Issue Advancements in Medical Imaging Technology)
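The four texture measures named above are standard GLCM properties; a minimal sketch with scikit-image follows, using illustrative quantisation and offsets rather than the study's settings.

```python
# Sketch of Energy, Correlation, Contrast and Homogeneity from a GLCM over one ROI
# (scikit-image); quantisation and offsets are illustrative, not the study's settings.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_features(roi: np.ndarray, levels: int = 32) -> dict:
    bins = np.linspace(roi.min(), roi.max(), levels)
    q = (np.digitize(roi, bins) - 1).clip(0, levels - 1).astype(np.uint8)  # quantise grey levels
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {name: float(graycoprops(glcm, name).mean())
            for name in ("energy", "correlation", "contrast", "homogeneity")}
```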

29 pages, 8502 KiB  
Article
Seed Protein Content Estimation with Bench-Top Hyperspectral Imaging and Attentive Convolutional Neural Network Models
by Imran Said, Vasit Sagan, Kyle T. Peterson, Haireti Alifu, Abuduwanli Maiwulanjiang, Abby Stylianou, Omar Al Akkad, Supria Sarkar and Noor Al Shakarji
Sensors 2025, 25(2), 303; https://doi.org/10.3390/s25020303 - 7 Jan 2025
Viewed by 1287
Abstract
Wheat is a globally cultivated cereal crop with substantial protein content present in its seeds. This research aimed to develop robust methods for predicting seed protein concentration in wheat seeds using bench-top hyperspectral imaging in the visible, near-infrared (VNIR), and shortwave infrared (SWIR) regions. To fully utilize the spectral and texture features of the full VNIR and SWIR spectral domains, a computer-vision-aided image co-registration methodology was implemented to seamlessly align the VNIR and SWIR bands. Sensitivity analyses were also conducted to identify the most sensitive bands for seed protein estimation. Convolutional neural networks (CNNs) with attention mechanisms were proposed along with traditional machine learning models based on feature engineering, including Random Forest (RF) and Support Vector Machine (SVM) regression, for comparative analysis. Additionally, the CNN classification approach was used to estimate low, medium, and high protein concentrations because this type of classification is more applicable for breeding efforts. Our results showed that the proposed CNN with attention mechanisms predicted wheat protein content with R2 values of 0.70 and 0.65 for ventral and dorsal seed orientations, respectively. Although the R2 of the CNN approach was lower than that of the best-performing feature-based method, RF (R2 of 0.77), the end-to-end prediction capability of CNNs holds great promise for the automation of wheat protein estimation for breeding. The CNN model achieved better classification of protein concentrations between low, medium, and high protein contents, with an R2 of 0.82. This study’s findings highlight the significant potential of hyperspectral imaging and machine learning techniques for advancing precision breeding practices, optimizing seed sorting processes, and enabling targeted agricultural input applications. Full article
(This article belongs to the Special Issue Spectral Detection Technology, Sensors and Instruments, 2nd Edition)
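One way to picture the VNIR/SWIR band alignment step is feature matching plus a RANSAC-fitted affine transform, as sketched below with OpenCV; this is an assumed illustration, not the paper's co-registration procedure or its attention CNN.

```python
# Illustrative VNIR/SWIR band alignment via ORB matching and a RANSAC-fitted affine
# transform; an assumption for illustration, not the paper's procedure.
import cv2
import numpy as np

def align_swir_to_vnir(vnir_band: np.ndarray, swir_band: np.ndarray) -> np.ndarray:
    to8 = lambda a: cv2.normalize(a, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    orb = cv2.ORB_create(2000)
    kp_v, des_v = orb.detectAndCompute(to8(vnir_band), None)
    kp_s, des_s = orb.detectAndCompute(to8(swir_band), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_v, des_s)
    src = np.float32([kp_s[m.trainIdx].pt for m in matches])        # points in the SWIR band
    dst = np.float32([kp_v[m.queryIdx].pt for m in matches])        # corresponding VNIR points
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    h, w = vnir_band.shape
    return cv2.warpAffine(swir_band.astype(np.float32), M, (w, h))  # SWIR resampled onto VNIR grid
```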

15 pages, 14788 KiB  
Article
The DEM Registration Method Without Ground Control Points for Landslide Deformation Monitoring
by Yunchuan Wang, Jia Li, Ping Duan, Rui Wang and Xinrui Yu
Remote Sens. 2024, 16(22), 4236; https://doi.org/10.3390/rs16224236 - 14 Nov 2024
Viewed by 1162
Abstract
Landslides are geological disasters that are harmful to both humans and society. Digital elevation model (DEM) time series data are usually used to monitor dynamic changes or surface damage. To solve the problem of landslide deformation monitoring without ground control points (GCPs), a multidimensional feature-based coregistration method (MFBR) was studied to achieve accurate registration of multitemporal DEMs without GCPs and obtain landslide deformation information. The method first converts the elevation information of the DEM into image pixel values, and feature points are extracted from the resulting image to perform an initial planimetric registration of the DEMs. The expectation-maximization algorithm is then applied to identify the stable regions that have not changed between the multitemporal DEMs and to perform accurate registration. Finally, the deformation values are calculated by constructing a DEM differencing model. The method was evaluated using simulated data and data from two real landslide cases, and the experimental results revealed that the registration accuracies of the three datasets were 0.963 m, 0.368 m, and 2.459 m, which are 92%, 50%, and 24% better than the 12.189 m, 0.745 m, and 3.258 m accuracies of the iterative closest-point algorithm, respectively. Compared with the GCP-based method, the MFBR method can achieve 70% of the deformation acquisition capability, which indicates that the MFBR method has better applicability in the field of landslide monitoring. This study provides an idea for landslide deformation monitoring without GCPs and is helpful for further understanding the state and behavior of landslides. Full article
(This article belongs to the Special Issue Advances in GIS and Remote Sensing Applications in Natural Hazards)
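The final differencing step lends itself to a short sketch: once the DEMs are co-registered, deformation is the cell-wise difference, and an (assumed) stable-area mask gives a check on the residual registration error.

```python
# Sketch of DEM differencing after co-registration; the stable-area mask is an
# assumed input used only to report the residual registration error.
import numpy as np

def dem_difference(dem_later: np.ndarray, dem_earlier: np.ndarray, stable_mask=None) -> dict:
    diff = dem_later - dem_earlier                       # elevation change per cell (metres)
    result = {"deformation_map": diff}
    if stable_mask is not None:                          # stable ground should show ~zero change
        residual = diff[stable_mask]
        result["registration_rmse_m"] = float(np.sqrt(np.mean(residual ** 2)))
    return result
```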

18 pages, 5084 KiB  
Article
Activation of Ms 6.9 Milin Earthquake on Sedongpu Disaster Chain, China with Multi-Temporal Optical Images
by Yubin Xin, Chaoying Zhao, Bin Li, Xiaojie Liu, Yang Gao and Jianqi Lou
Remote Sens. 2024, 16(21), 4003; https://doi.org/10.3390/rs16214003 - 28 Oct 2024
Cited by 1 | Viewed by 1106
Abstract
In recent years, disaster chains caused by glacier movements have occurred frequently in the lower Yarlung Tsangpo River in southwest China. However, it is still unclear whether earthquakes significantly contribute to glacier movements and disaster chains. In addition, it is difficult to measure high-frequency and large-gradient displacement time series with optical remote sensing images due to cloud coverage. To this end, we take the Sedongpu disaster chain as an example, where the Milin earthquake, with an epicenter 11 km away, occurred on 18 November 2017. Firstly, to deal with the cloud coverage problem of single-platform optical remote sensing analysis, we employed optical images from multiple platforms and applied a cross-platform correlation technique to invert the two-dimensional displacement rate and the cumulative displacement time series of the Sedongpu glacier. To reveal the correlation between earthquakes and disaster chains, we divided the optical images into three groups according to the timing of the Milin earthquake. Lastly, to increase accuracy and reliability, we propose two strategies for displacement monitoring, namely a four-quadrant block registration strategy and a multi-window fusion strategy. Results show that the RMSE reduction percentage of the proposed registration method reaches 80%, and the fusion method can retrieve large-magnitude displacements and a complete displacement field. Secondly, the Milin earthquake accelerated the Sedongpu glacier movement, where the pre-seismic velocities were less than 0.5 m/day, the co-seismic velocities increased to 1 to 6 m/day, and the post-seismic velocities decreased to 0.5 to 3 m/day. Lastly, the earthquake had a triggering effect, with a lag of around 33 days, on the Sedongpu disaster chain event of 21 December 2017. The failure pattern can be summarized as ice and rock collapse in the source area, large-magnitude glacier displacement in the moraine area, and a large volume of sediment in the deposition area, causing a river blockage. Full article
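The displacement retrieval itself rests on patch-wise image correlation between acquisitions; a minimal sketch follows, without the four-quadrant registration or multi-window fusion strategies proposed in the paper.

```python
# Sketch of patch-wise sub-pixel displacement estimation between two co-registered
# optical acquisitions; the four-quadrant registration and multi-window fusion
# strategies of the paper are not reproduced here.
import numpy as np
from skimage.registration import phase_cross_correlation

def displacement_field(img_t0: np.ndarray, img_t1: np.ndarray, block: int = 64) -> np.ndarray:
    rows = (img_t0.shape[0] // block) * block
    cols = (img_t0.shape[1] // block) * block
    field = np.zeros((rows // block, cols // block, 2))
    for i in range(0, rows, block):
        for j in range(0, cols, block):
            shift, _, _ = phase_cross_correlation(img_t0[i:i + block, j:j + block],
                                                  img_t1[i:i + block, j:j + block],
                                                  upsample_factor=10)
            field[i // block, j // block] = shift        # (dy, dx) in pixels for this block
    return field
```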

20 pages, 8709 KiB  
Article
Automatic Fine Co-Registration of Datasets from Extremely High Resolution Satellite Multispectral Scanners by Means of Injection of Residues of Multivariate Regression
by Luciano Alparone, Alberto Arienzo and Andrea Garzelli
Remote Sens. 2024, 16(19), 3576; https://doi.org/10.3390/rs16193576 - 25 Sep 2024
Cited by 3 | Viewed by 1220
Abstract
This work presents two pre-processing patches to automatically correct the residual local misalignment of datasets acquired by very/extremely high resolution (VHR/EHR) satellite multispectral (MS) scanners, one for, e.g., GeoEye-1 and Pléiades, featuring two separate instruments for MS and panchromatic (Pan) data, the other for WorldView-2/3 featuring three instruments, two of which are visible and near-infra-red (VNIR) MS scanners. The misalignment arises because the two/three instruments onboard GeoEye-1 / WorldView-2 (four onboard WorldView-3) share the same optics and, thus, cannot have parallel optical axes. Consequently, they image the same swath area from different positions along the orbit. Local height changes (hills, buildings, trees, etc.) originate local shifts among corresponding points in the datasets. The latter would be accurately aligned only if the digital elevation surface model were known with sufficient spatial resolution, which is hardly feasible everywhere because of the extremely high resolution, with Pan pixels of less than 0.5 m. The refined co-registration is achieved by injecting the residue of the multivariate linear regression of each scanner towards lowpass-filtered Pan. Experiments with two and three instruments show that an almost perfect alignment is achieved. MS pansharpening is also shown to greatly benefit from the improved alignment. The proposed alignment procedures are real-time, fully automated, and do not require any additional or ancillary information, but rely uniquely on the unimodality of the MS and Pan sensors. Full article
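The residue-injection idea can be read, under one interpretation, as fitting lowpass-filtered Pan as a linear combination of the MS bands and adding the unexplained part back; the sketch below illustrates that mechanism only and is not the authors' exact formulation.

```python
# Heavily hedged sketch of one reading of regression-residue injection: lowpass Pan
# is modelled as a multivariate linear combination of the MS bands, and the residue
# is injected back into each band. Illustration of the mechanism only, not the
# authors' exact formulation.
import numpy as np
from scipy.ndimage import gaussian_filter

def inject_regression_residue(ms: np.ndarray, pan: np.ndarray, ratio: float = 4.0) -> np.ndarray:
    """ms: (bands, H, W) already resampled to the Pan grid; pan: (H, W)."""
    pan_lp = gaussian_filter(pan, sigma=ratio / 2.0)                  # Pan lowpassed towards MS resolution
    bands, h, w = ms.shape
    X = np.column_stack([ms.reshape(bands, -1).T, np.ones(h * w)])    # pixels x (bands + intercept)
    coeffs, *_ = np.linalg.lstsq(X, pan_lp.ravel(), rcond=None)
    residue = pan_lp.ravel() - X @ coeffs                             # part of Pan the MS bands cannot explain
    return ms + residue.reshape(h, w)                                 # broadcast-inject into every band
```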

17 pages, 2260 KiB  
Article
From Phantoms to Patients: Improved Fusion and Voxel-Wise Analysis of Diffusion-Weighted Imaging and FDG-Positron Emission Tomography in Positron Emission Tomography/Magnetic Resonance Imaging for Combined Metabolic–Diffusivity Index (cDMI)
by Katharina Deininger, Patrick Korf, Leonard Lauber, Robert Grimm, Ralph Strecker, Jochen Steinacker, Catharina S. Lisson, Bernd M. Mühling, Gerlinde Schmidtke-Schrezenmeier, Volker Rasche, Tobias Speidel, Gerhard Glatting, Meinrad Beer, Ambros J. Beer and Wolfgang Thaiss
Diagnostics 2024, 14(16), 1787; https://doi.org/10.3390/diagnostics14161787 - 16 Aug 2024
Viewed by 1484
Abstract
Hybrid positron emission tomography/magnetic resonance imaging (PET/MR) opens new possibilities in multimodal multiparametric (m2p) image analyses. But even the simultaneous acquisition of positron emission tomography (PET) and magnetic resonance imaging (MRI) does not guarantee perfect voxel-by-voxel co-registration due to organ motion and distortions, especially in diffusion-weighted imaging (DWI), although such co-registration would be crucial to derive biologically meaningful information. Thus, our aim was to optimize fusion and voxel-wise analyses of DWI and standardized uptake values (SUVs) using a novel software for m2p analyses. Using research software, we evaluated the precision of image co-registration and voxel-wise analyses including the rigid and elastic 3D registration of DWI and [18F]-Fluorodeoxyglucose (FDG)-PET from an integrated PET/MR system. We analyzed DWI distortions with a volume-preserving constraint in three different 3D-printed phantom models. A total of 12 PET/MR-DWI clinical datasets (bronchial carcinoma patients) were referenced to the T1-weighted DIXON sequence. Back mapping of scatterplots and voxel-wise registration was performed and compared to the non-optimized datasets. Fusion was rated using a 5-point Likert scale. Using the 3D-elastic co-registration algorithm, geometric shapes were restored in phantom measurements; the measured ADC values did not change significantly (F = 1.12, p = 0.34). Reader assessment showed a significant improvement in fusion precision for DWI and morphological landmarks in the 3D-registered datasets (4.3 ± 0.2 vs. 4.6 ± 0.2, p = 0.009). The most pronounced differences were noted for the chest wall (p = 0.006), tumor (p = 0.007), and skin contour (p = 0.014). Co-registration increased the number of plausible ADC and SUV combinations by 25%. The volume-preserving elastic 3D registration of DWI significantly improved the precision of fusion with anatomical sequences in phantom and clinical datasets. The research software allowed for a voxel-wise analysis and visualization of [18F]FDG-PET/MR data as a “combined diffusivity–metabolic index” (cDMI). The clinical value of the optimized PET/MR biomarker can thus be tested in future PET/MR studies. Full article
(This article belongs to the Special Issue New Trends and Advances of MRI and PET Hybrid Imaging in Diagnostics)
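After co-registration, the voxel-wise analysis amounts to pairing ADC and SUV values inside a mask; the sketch below shows such a pairing with a placeholder combined index, since the abstract does not spell out the cDMI formula.

```python
# Sketch of voxel-wise ADC/SUV pairing after co-registration; the combined index
# shown (SUV divided by ADC) is a placeholder, as the cDMI formula is not given in
# this abstract.
import numpy as np

def voxelwise_pairs(adc: np.ndarray, suv: np.ndarray, mask: np.ndarray):
    adc_v, suv_v = adc[mask], suv[mask]                   # voxel-wise pairs inside the lesion mask
    plausible = (adc_v > 0) & (suv_v > 0)                 # keep physically plausible combinations
    combined_index = suv_v[plausible] / adc_v[plausible]  # placeholder "combined" index
    return adc_v[plausible], suv_v[plausible], combined_index
```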
