Search Results (112)

Search Parameters:
Keywords = unsupervised term mapping

17 pages, 954 KB  
Article
Exploring the Interplay Between Healthcare Quality and Economic Viability Through Massive Data Analysis-Driven Multi-Hospital Management in a Spanish Private Multi-Hospital Network
by David Baulenas-Parellada, Javier Villalón-Coca, Daniel Vicente-Gallegos, Sandra Paniagua-Sánchez, Xavier Corbella-Viros and Angel Ayuso-Sacido
Healthcare 2025, 13(23), 3034; https://doi.org/10.3390/healthcare13233034 - 24 Nov 2025
Viewed by 412
Abstract
Background: Hospital management increasingly requires integrating quality and economic performance metrics to ensure efficiency and sustainability. However, evidence on how hospital key performance indicators (KPIs) relate to financial outcomes remains scarce, particularly in private healthcare systems. Objective: To examine the relationships between hospital KPIs and two financial metrics—Sales and EBITDA (Earnings Before Interest, Taxes, Depreciation, and Amortization)—in a Spanish private multi-hospital network. Methods: This retrospective, observational, multi-center study analyzed a final dataset of 47 standardized KPIs from 14 hospitals in the Vithas network. KPIs were examined using Self-Organizing Maps (SOM), an unsupervised neural network technique, to identify patterns and temporal dependencies with financial outcomes at contemporaneous, 3-month, and 6-month horizons. Robustness was evaluated through sensitivity analyses of model stability, data completeness, and clustering consistency. Results: The SOM analysis revealed six distinct clusters of KPIs, reflecting logical and interconnected behaviors. Sales and EBITDA were strongly associated with scheduled activity and space occupancy in the immediate term, while quality-related KPIs such as patient satisfaction and accessibility influenced financial outcomes at 3 and 6 months. These patterns suggest that selected KPIs can serve as predictive tools for financial performance. Conclusions: SOM proved effective for uncovering complex, nonlinear relationships between KPIs and financial metrics in hospital management. The study provides an operational framework linking standardized KPIs to financial outcomes in private hospitals, with implications for forecasting and strategic planning. Future research should incorporate additional KPIs, updated datasets, and SOM variants to validate and extend these findings across diverse healthcare systems. Full article
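
A minimal sketch of the general SOM-clustering workflow this abstract describes, assuming the open-source minisom package and a synthetic KPI matrix (rows = hospital-months, columns = standardized KPIs). It illustrates the technique only; it is not the authors' implementation or data.

```python
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
kpis = rng.normal(size=(500, 47))                      # hypothetical: 500 observations x 47 KPIs
kpis = (kpis - kpis.mean(axis=0)) / kpis.std(axis=0)   # z-score standardization

# 6x6 map size chosen arbitrarily for this sketch.
som = MiniSom(x=6, y=6, input_len=kpis.shape[1], sigma=1.0,
              learning_rate=0.5, random_seed=0)
som.train_random(kpis, 5000)

# Assign each observation to its best-matching unit (BMU); BMU cells act as clusters.
bmus = np.array([som.winner(row) for row in kpis])
print("distinct BMU cells used:", len({tuple(b) for b in bmus}))
```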

48 pages, 4222 KB  
Review
Machine Learning Models of the Geospatial Distribution of Groundwater Quality: A Systematic Review
by Mohammad Mehrabi, David A. Polya and Yang Han
Water 2025, 17(19), 2861; https://doi.org/10.3390/w17192861 - 30 Sep 2025
Viewed by 2941
Abstract
Assessing the quality of groundwater, a primary source of water in many sectors, is of paramount importance. To this end, modeling the geospatial distribution of chemical contaminants in groundwater can be of great utility. Machine learning (ML) models are being increasingly used to overcome the shortcomings of conventional predictive techniques. We report here a systematic review of the nature and utility of various supervised and unsupervised ML models during the past two decades of machine learning groundwater hazard mapping (MLGHM). We identified and reviewed 284 relevant MLGHM journal articles that met our inclusion criteria. Firstly, trend analysis showed (i) an exponential increase in the number of MLGHM studies published between 2004 and 2025, with geographical distribution outlining Iran, India, the US, and China as the countries with the most extensively studied areas; (ii) nitrate as the most studied target, and groundwater chemicals as the most frequently considered category of predictive variables; (iii) that tree-based ML was the most popular model for feature selection; (iv) that supervised ML was far more favored than unsupervised ML (94% vs. 6% of models) with tree-based category—mostly random forest (RF)—as the most popular supervised ML. Secondly, compiling accuracy-based comparisons of ML models from the explored literature revealed that RF, deep learning, and ensembles (mostly meta-model ensembles and boosting ensembles) were frequently reported as the most accurate models. Thirdly, a critical evaluation of MLGHM models in terms of predictive accuracy, along with several other factors such as models’ computational efficiency and predictive power—which have often been overlooked in earlier review studies—resulted in considering the relative merits of commonly used MLGHM models. Accordingly, a flowchart was designed by integrating several MLGHM key criteria (i.e., accuracy, transparency, training speed, number of hyperparameters, intended scale of modeling, and required user’s expertise) to assist in informed model selection, recognising that the weighting of criteria for model selection may vary from problem to problem. Lastly, potential challenges that may arise during different stages of MLGHM efforts are discussed along with ideas for optimizing MLGHM models. Full article
(This article belongs to the Section Hydrogeology)
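
As an illustration of the kind of supervised MLGHM workflow the review surveys, the sketch below trains a random forest (the model family the review finds most popular) to map the probability that a contaminant exceeds a threshold. All feature names, labels, and data are invented for illustration and are not drawn from any study in the review.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
wells = pd.DataFrame({
    "depth_m": rng.uniform(5, 200, n),
    "ph": rng.normal(7.2, 0.6, n),
    "ec_uS_cm": rng.lognormal(6, 0.5, n),
    "land_use_agri": rng.integers(0, 2, n),
})
# Hypothetical label: shallow agricultural wells are more likely to exceed the limit.
p = 1 / (1 + np.exp(0.02 * wells["depth_m"] - 2.5 * wells["land_use_agri"]))
exceeds = rng.random(n) < p

X_tr, X_te, y_tr, y_te = train_test_split(wells, exceeds, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
```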

27 pages, 6263 KB  
Article
Revealing the Ecological Security Pattern in China’s Ecological Civilization Demonstration Area
by Xuelong Yang, Haisheng Cai, Xiaomin Zhao and Han Zhang
Land 2025, 14(8), 1560; https://doi.org/10.3390/land14081560 - 29 Jul 2025
Cited by 3 | Viewed by 1788
Abstract
The construction and maintenance of an ecological security pattern (ESP) are important for promoting the regional development of ecological civilizations, realizing sustainable and healthy development, and creating a harmonious and beautiful space for human beings and nature to thrive. Traditional construction methods have the limitations of a single dimension, a single method, and excessive human subjective intervention for source and corridor identification, without considering the multidimensional quality of the sources and the structural connectivity and resilience optimization of the corridors. Therefore, an ecological civilization demonstration area (Jiangxi Province) was used as the study area, a new research method for ESP was proposed, and an empirical study was conducted. To evaluate ecosystem service (ES) importance–disturbance–risk and extract sustainability sources through the deep embedded clustering–self-organizing map (DEC–SOM) deep unsupervised learning clustering algorithm, ecological networks (ENs) were constructed by applying the minimum cumulative resistance (MCR) gravity model and circuit theory. The ENs were then optimized to improve performance by combining the comparative advantages of the two approaches in terms of structural connectivity and resilience. A comparative analysis of EN performance was constructed among different functional control zones, and the ESP was constructed to include 42 ecological sources, 134 corridors, 210 restoration nodes, and 280 protection nodes. An ESP of ‘1 nucleus, 3 belts, 6 zones, and multiple corridors’ was constructed, and the key restoration components and protection functions were clarified. This study offers a valuable reference for ecological management, protection, and restoration and provides insights into the promotion of harmonious symbiosis between human beings and nature and sustainable regional development. Full article
(This article belongs to the Special Issue Urban Ecological Indicators: Land Use and Coverage)

18 pages, 7213 KB  
Article
DFCNet: Dual-Stage Frequency-Domain Calibration Network for Low-Light Image Enhancement
by Hui Zhou, Jun Li, Yaming Mao, Lu Liu and Yiyang Lu
J. Imaging 2025, 11(8), 253; https://doi.org/10.3390/jimaging11080253 - 28 Jul 2025
Viewed by 1083
Abstract
Imaging technologies are widely used in surveillance, medical diagnostics, and other critical applications. However, under low-light conditions, captured images often suffer from insufficient brightness, blurred details, and excessive noise, degrading quality and hindering downstream tasks. Conventional low-light image enhancement (LLIE) methods not only require annotated data but also often involve heavy models with high computational costs, making them unsuitable for real-time processing. To tackle these challenges, a lightweight and unsupervised LLIE method utilizing a dual-stage frequency-domain calibration network (DFCNet) is proposed. In the first stage, the input image undergoes the preliminary feature modulation (PFM) module to guide the illumination estimation (IE) module in generating a more accurate illumination map. The final enhanced image is obtained by dividing the input by the estimated illumination map. The second stage is used only during training. It applies a frequency-domain residual calibration (FRC) module to the first-stage output, generating a calibration term that is added to the original input to darken dark regions and brighten bright areas. This updated input is then fed back to the PFM and IE modules for parameter optimization. Extensive experiments on benchmark datasets demonstrate that DFCNet achieves superior performance across multiple image quality metrics while delivering visually clearer and more natural results. Full article
(This article belongs to the Section Image and Video Processing)
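
A rough, classical stand-in for the core "divide the input by an estimated illumination map" step described above, using a blurred max-RGB prior in place of DFCNet's learned illumination estimation module. This Retinex-style approximation is purely illustrative and is not the paper's method.

```python
import cv2
import numpy as np

def enhance_low_light(bgr: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    img = bgr.astype(np.float32) / 255.0
    # Crude illumination prior: per-pixel max over channels, then Gaussian smoothing.
    illum = img.max(axis=2)
    illum = cv2.GaussianBlur(illum, (0, 0), sigmaX=15)
    illum = np.clip(illum, eps, 1.0)[..., None]
    # Enhanced image = input / estimated illumination, clipped to the valid range.
    enhanced = np.clip(img / illum, 0.0, 1.0)
    return (enhanced * 255).astype(np.uint8)

# Usage: out = enhance_low_light(cv2.imread("dark.jpg"))
```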

23 pages, 9748 KB  
Article
Driving Pattern Analysis, Gear Shift Classification, and Fuel Efficiency in Light-Duty Vehicles: A Machine Learning Approach Using GPS and OBD II PID Signals
by Juan José Molina-Campoverde, Juan Zurita-Jara and Paúl Molina-Campoverde
Sensors 2025, 25(13), 4043; https://doi.org/10.3390/s25134043 - 28 Jun 2025
Cited by 1 | Viewed by 4647
Abstract
This study proposes an automatic gear shift classification algorithm in M1 category vehicles using data acquired through the onboard diagnostic system (OBD II) and GPS. The proposed approach is based on the analysis of identification parameters (PIDs), such as manifold absolute pressure (MAP), revolutions per minute (RPM), vehicle speed (VSS), torque, power, stall times, and longitudinal dynamics, to determine the efficiency and behavior of the vehicle in each of its gears. In addition, the unsupervised K-means algorithm was implemented to analyze vehicle gear changes, identify driving patterns, and segment the data into meaningful groups. Machine learning techniques, including K-Nearest Neighbors (KNN), decision trees, logistic regression, and Support Vector Machines (SVMs), were employed to classify gear shifts accurately. After a thorough evaluation, the KNN (Fine KNN) model proved to be the most effective, achieving an accuracy of 99.7%, an error rate of 0.3%, a precision of 99.8%, a recall of 99.7%, and an F1-score of 99.8%, outperforming other models in terms of accuracy, robustness, and balance between metrics. A multiple linear regression model was developed to estimate instantaneous fuel consumption (in L/100 km) using the gear predicted by the KNN algorithm and other relevant variables. The model, built on over 66,000 valid observations, achieved an R2 of 0.897 and a root mean square error (RMSE) of 2.06, indicating a strong fit. Results showed that higher gears (3, 4, and 5) are associated with lower fuel consumption. In contrast, a neutral gear presented the highest levels of consumption and variability, especially during prolonged idling periods in heavy traffic conditions. In future work, we propose integrating this algorithm into driver assistance systems (ADAS) and exploring its applicability in autonomous vehicles to enhance real-time decision making. Such integration could optimize gear shift timing based on dynamic factors like road conditions, traffic density, and driver behavior, ultimately contributing to improved fuel efficiency and overall vehicle performance. Full article
(This article belongs to the Section Vehicular Sensing)
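
A compact sketch of the two-stage pipeline the abstract outlines: K-means to segment driving patterns from OBD II/GPS-style signals, then a KNN classifier for gear prediction. Column names, the synthetic gear label, and the data are hypothetical placeholders, not the study's instrumented measurements.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "rpm": rng.uniform(800, 4500, n),
    "vss_kmh": rng.uniform(0, 120, n),
    "map_kpa": rng.uniform(20, 100, n),
})
# Hypothetical gear label derived from the speed/RPM ratio, standing in for ground truth.
ratio = df["vss_kmh"] / (df["rpm"] / 1000.0)
df["gear"] = np.clip(np.digitize(ratio, [8, 16, 24, 32]) + 1, 1, 5)

X = StandardScaler().fit_transform(df[["rpm", "vss_kmh", "map_kpa"]])

# Unsupervised step: segment the data into driving-pattern groups.
patterns = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print("driving-pattern sizes:", np.bincount(patterns))

# Supervised step: fine-KNN-style (1-nearest-neighbor) gear classification.
X_tr, X_te, y_tr, y_te = train_test_split(X, df["gear"], test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)
print("gear accuracy:", knn.score(X_te, y_te))
```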

19 pages, 3395 KB  
Article
End-to-End Online Video Stitching and Stabilization Method Based on Unsupervised Deep Learning
by Pengyuan Wang, Pinle Qin, Rui Chai, Jianchao Zeng, Pengcheng Zhao, Zuojun Chen and Bingjie Han
Appl. Sci. 2025, 15(11), 5987; https://doi.org/10.3390/app15115987 - 26 May 2025
Viewed by 2523
Abstract
The limited field of view, cumulative inter-frame jitter, and dynamic parallax interference in handheld video stitching often lead to misalignment and distortion. In this paper, we propose an end-to-end, unsupervised deep-learning framework that jointly performs real-time video stabilization and stitching. First, collaborative optimization architecture allows the stabilization and stitching modules to share parameters and propagate errors through a fully differentiable network, ensuring consistent image alignment. Second, a Markov trajectory smoothing strategy in relative coordinates models inter-frame motion as incremental relationships, effectively reducing cumulative errors. Third, a dynamic attention mask generates spatiotemporal weight maps based on foreground motion prediction, suppressing misalignment caused by dynamic objects. Experimental evaluation on diverse handheld sequences shows that our method achieves higher stitching quality, lower geometric distortion rates, and improved video stability compared to state-of-the-art baselines, while maintaining real-time processing capabilities. Ablation studies validate that relative trajectory modeling substantially mitigates long-term jitter and that the dynamic attention mask enhances stitching accuracy in dynamic scenes. These results demonstrate that the proposed framework provides a robust solution for high-quality, real-time handheld video stitching. Full article
(This article belongs to the Collection Trends and Prospects in Multimedia)

20 pages, 2343 KB  
Article
Robust Single-Cell RNA-Seq Analysis Using Hyperdimensional Computing: Enhanced Clustering and Classification Methods
by Hossein Mohammadi, Maziyar Baranpouyan, Krishnaprasad Thirunarayan and Lingwei Chen
AI 2025, 6(5), 94; https://doi.org/10.3390/ai6050094 - 1 May 2025
Viewed by 1999
Abstract
Background. Single-cell RNA sequencing (scRNA-seq) has transformed genomics by enabling the study of cellular heterogeneity. However, its high dimensionality, noise, and sparsity pose significant challenges for data analysis. Methods. We investigate the use of Hyperdimensional Computing (HDC), a brain-inspired computational framework recognized for its noise robustness and hardware efficiency, to tackle the challenges in scRNA-seq data analysis. We apply HDC to both supervised classification and unsupervised clustering tasks. Results. Our experiments demonstrate that HDC consistently outperforms established methods such as XGBoost, Seurat reference mapping, and scANVI in terms of noise tolerance and scalability. HDC achieves superior accuracy in classification tasks and maintains robust clustering performance across varying noise levels. Conclusions. These results highlight HDC as a promising framework for accurate and efficient single-cell data analysis. Its potential extends to other high-dimensional biological datasets including proteomics, epigenomics, and transcriptomics, with implications for advancing bioinformatics and personalized medicine. Full article
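
A toy illustration of the hyperdimensional-computing idea applied to expression-like vectors: encode each cell into a high-dimensional bipolar hypervector by random projection, bundle per class, and classify by cosine similarity. Dimensions and data are synthetic; this is a sketch of the general HDC recipe, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000                                       # hypervector dimensionality
n_genes = 200
proj = rng.choice([-1, 1], size=(n_genes, D))    # random bipolar hypervector per gene

def encode(expr: np.ndarray) -> np.ndarray:
    """Weighted bundling of gene hypervectors, then sign binarization."""
    return np.sign(expr @ proj)

# Synthetic two-class data (e.g., two cell types with shifted expression).
X0 = rng.normal(0.0, 1.0, size=(100, n_genes))
X1 = rng.normal(0.5, 1.0, size=(100, n_genes))
class_hv = [encode(X0).sum(axis=0), encode(X1).sum(axis=0)]   # class prototypes

def classify(expr: np.ndarray) -> int:
    hv = encode(expr)
    sims = [hv @ c / (np.linalg.norm(hv) * np.linalg.norm(c)) for c in class_hv]
    return int(np.argmax(sims))

test = rng.normal(0.5, 1.0, size=n_genes)        # should resemble class 1
print("predicted class:", classify(test))
```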

18 pages, 5114 KB  
Article
Mapping Rice Phenology Using MODIS Products in An Giang Province, Mekong River Delta, Vietnam
by Shou-Hao Chiang and Minh-Binh Ton
Remote Sens. 2025, 17(9), 1583; https://doi.org/10.3390/rs17091583 - 29 Apr 2025
Viewed by 2248
Abstract
The Moderate Resolution Imaging Spectroradiometer (MODIS) provides consistent long-term satellite observations that are valuable for rice mapping and production estimation through phenology extraction. This study evaluates the effectiveness of three MODIS products, MOD09GQ (1-day), MOD09Q1 (8-day), and MOD13Q1 (16-day), for mapping rice phenology in An Giang Province, a key rice-producing region in Vietnam’s climate-sensitive Mekong River Delta (MRD). The analysis focuses on rice cropping seasons from 2019 to 2021, using time series of the Normalized Difference Vegetation Index (NDVI) to capture temporal and spatial variations in rice growth dynamics. To address data gaps due to persistent cloud cover and sensor-related noises, smoothing techniques, including the Double Logistic Function (DLF) and Savitzky–Golay Filtering (SGF), were applied. Thirteen phenological parameters were extracted and used as inputs to an unsupervised K-Means clustering algorithm, enabling the classification of distinct rice growth patterns. The results show that DLF-processed MOD09GQ data most accurately reconstructed NDVI time series and captured short-term phenological transitions, outperforming coarser-resolution products. The resulting phenology maps could be used to correlate the influence of anthropogenic factors, such as the widespread adoption of short-duration rice varieties and shifts in water management practices. This study provides a robust framework for phenology-based rice mapping to support food security, sustainable agricultural planning, and climate resilience in the MRD. Full article
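
A small sketch of the smoothing-plus-clustering pattern described above: Savitzky–Golay filtering of noisy NDVI time series followed by K-means on a few phenological descriptors. The NDVI curves below are simulated, and only three descriptors are used, in place of the MODIS composites and thirteen phenological parameters the study extracts.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
t = np.arange(0, 365, 8)                       # 8-day composite time axis

def rice_ndvi(peak_doy):                       # idealized single-season NDVI curve
    return 0.2 + 0.6 * np.exp(-((t - peak_doy) / 40.0) ** 2)

pixels = np.vstack([rice_ndvi(rng.uniform(120, 240)) + rng.normal(0, 0.05, t.size)
                    for _ in range(300)])

smooth = savgol_filter(pixels, window_length=7, polyorder=2, axis=1)

# Reduced set of phenology descriptors: peak value, peak timing, season mean.
features = np.column_stack([smooth.max(axis=1),
                            t[smooth.argmax(axis=1)],
                            smooth.mean(axis=1)])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print("pixels per cluster:", np.bincount(labels))
```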

25 pages, 4826 KB  
Article
Enhancing Cross-Domain Remote Sensing Scene Classification by Multi-Source Subdomain Distribution Alignment Network
by Yong Wang, Zhehao Shu, Yinzhi Feng, Rui Liu, Qiusheng Cao, Danping Li and Lei Wang
Remote Sens. 2025, 17(7), 1302; https://doi.org/10.3390/rs17071302 - 5 Apr 2025
Cited by 4 | Viewed by 2145
Abstract
Multi-source domain adaptation (MSDA) in remote sensing (RS) scene classification has recently gained significant attention in the visual recognition community. It leverages multiple well-labeled source domains to train a model capable of achieving strong generalization on the target domain with little to no labeled data from the target domain. However, the distribution shifts among multiple source domains make it more challenging to align the distributions between the target domain and all source domains concurrently. Moreover, relying solely on global alignment risks losing fine-grained information for each class, especially in the task of RS scene classification. To alleviate these issues, we present a Multi-Source Subdomain Distribution Alignment Network (MSSDANet), which introduces novel network structures and loss functions for subdomain-oriented MSDA. By adopting a two-level feature extraction strategy, this model attains better global alignment between the target domain and multiple source domains, as well as alignment at the subdomain level. First, it includes a pre-trained convolutional neural network (CNN) as a common feature extractor to fully exploit the shared invariant features across one target and multiple source domains. Secondly, a dual-domain feature extractor is used after the common feature extractor, which maps the data from each pair of target and source domains to a specific dual-domain feature space and performs subdomain alignment. Finally, a dual-domain feature classifier is employed to make predictions by averaging the outputs from multiple classifiers. Accompanied by the above network, two novel loss functions are proposed to boost the classification performance. Discriminant Semantic Transfer (DST) loss is exploited to force the model to effectively extract semantic information among target and source domain samples, while Class Correlation (CC) loss is introduced to reduce the feature confusion from different classes within the target domain. It is noteworthy that our MSSDANet is developed in an unsupervised manner for domain adaptation, indicating that no label information from the target domain is required during training. Extensive experiments on four common RS image datasets demonstrate that the proposed method achieves state-of-the-art performance for cross-domain RS scene classification. Specifically, in the dual-source and three-source settings, MSSDANet outperforms the second-best algorithm in terms of overall accuracy (OA) by 2.2% and 1.6%, respectively. Full article

27 pages, 42566 KB  
Article
Unsupervised Rural Flood Mapping from Bi-Temporal Sentinel-1 Images Using an Improved Wavelet-Fusion Flood-Change Index (IWFCI) and an Uncertainty-Sensitive Markov Random Field (USMRF) Model
by Amin Mohsenifar, Ali Mohammadzadeh and Sadegh Jamali
Remote Sens. 2025, 17(6), 1024; https://doi.org/10.3390/rs17061024 - 14 Mar 2025
Cited by 4 | Viewed by 2112
Abstract
Synthetic aperture radar (SAR) remote sensing (RS) technology is an ideal tool to map flooded areas on account of its all-time, all-weather imaging capability. Existing SAR data-based change detection approaches lack well-discriminant change indices for reliable floodwater mapping. To resolve this issue, an unsupervised change detection approach, made up of two main steps, is proposed for detecting floodwaters from bi-temporal SAR data. In the first step, an improved wavelet-fusion flood-change index (IWFCI) is proposed. The IWFCI modifies the mean-ratio change index (CI) to fuse it with the log-ratio CI using the discrete wavelet transform (DWT). The IWFCI also employs a discriminant feature derived from the co-flood image to enhance the separability between the non-flood and flood areas. In the second step, an uncertainty-sensitive Markov random field (USMRF) model is proposed to diminish the over-smoothness issue in the areas with high uncertainty based on a new Gaussian uncertainty term. To appraise the efficacy of the floodwater detection approach proposed in this study, comparative experiments were conducted in two stages on four datasets, each including a normalized difference water index (NDWI) and pre-and co-flood Sentinel-1 data. In the first stage, the proposed IWFCI was compared to a number of state-of-the-art (SOTA) CIs, and the second stage compared USMRF to the SOTA change detection algorithms. From the experimental results in the first stage, the proposed IWFCI, yielding an average F-score of 86.20%, performed better than SOTA CIs. Likewise, according to the experimental results obtained in the second stage, the USMRF model with an average F-score of 89.27% outperformed the comparative methods in classifying non-flood and flood classes. Accordingly, the proposed floodwater detection approach, combining IWFCI and USMRF, can serve as a reliable tool for detecting flooded areas in SAR data. Full article
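
A generic sketch of DWT-based fusion of a log-ratio and a mean-ratio change image, in the spirit of the fused index described above but not the paper's exact IWFCI formulation. The pre-/co-flood patches are simulated, the mean-ratio term is a per-pixel simplification (it is normally computed on local means), and the PyWavelets package is assumed available.

```python
import numpy as np
import pywt

rng = np.random.default_rng(7)
pre = rng.gamma(4.0, 0.05, size=(128, 128))      # simulated SAR intensity, pre-flood
co = pre.copy()
co[32:96, 32:96] *= 0.2                          # darker "flooded" block in the co-flood image

eps = 1e-6
log_ratio = np.abs(np.log((co + eps) / (pre + eps)))
mean_ratio = 1.0 - np.minimum(co, pre) / (np.maximum(co, pre) + eps)

def fuse(a, b, wavelet="db2"):
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(a, wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(b, wavelet)
    # Average the approximations; keep the larger-magnitude detail coefficients.
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in [(cH_a, cH_b), (cV_a, cV_b), (cD_a, cD_b)]]
    return pywt.idwt2((0.5 * (cA_a + cA_b), tuple(details)), wavelet)

fused_ci = fuse(log_ratio, mean_ratio)
flood_mask = fused_ci > fused_ci.mean() + fused_ci.std()   # crude threshold stand-in
print("flagged pixels:", int(flood_mask.sum()))
```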

15 pages, 2047 KB  
Article
SNMatch: An Unsupervised Method for Column Semantic-Type Detection Based on Siamese Network
by Tiezheng Nie, Hanyu Mao, Aolin Liu, Xuliang Wang, Derong Shen and Yue Kou
Mathematics 2025, 13(4), 607; https://doi.org/10.3390/math13040607 - 13 Feb 2025
Cited by 1 | Viewed by 1426
Abstract
Column semantic-type detection is a crucial task for data integration and schema matching, particularly when dealing with large volumes of unlabeled tabular data. Existing methods often rely on supervised learning models, which require extensive labeled data. In this paper, we propose SNMatch, an unsupervised approach based on a Siamese network for detecting column semantic types without labeled training data. The novelty of SNMatch lies in its ability to generate the semantic embeddings of columns by considering both format and semantic features and clustering them into semantic types. Unlike traditional methods, which typically rely on keyword matching or supervised classification, SNMatch leverages unsupervised learning to tackle the challenges of column semantic detection in massive datasets with limited labeled examples. We demonstrate that SNMatch significantly outperforms current state-of-the-art techniques in terms of clustering accuracy, especially in handling complex and nested semantic types. Extensive experiments on the MACST and VizNet-Manyeyes datasets validate its effectiveness, achieving superior performance in column semantic-type detection compared to methods such as TF-IDF, FastText, and BERT. The proposed method shows great promise for practical applications in data integration, data cleaning, and automated schema mapping, particularly in scenarios where labeled data are scarce or unavailable. Furthermore, our work builds upon recent advances in neural network-based embeddings and unsupervised learning, contributing to the growing body of research in automatic schema matching and tabular data understanding. Full article

17 pages, 3046 KB  
Article
Building Footprint Identification Using Remotely Sensed Images: A Compressed Sensing-Based Approach to Support Map Updating
by Rizwan Ahmed Ansari, Rakesh Malhotra and Mohammed Zakariya Ansari
Geomatics 2025, 5(1), 7; https://doi.org/10.3390/geomatics5010007 - 31 Jan 2025
Cited by 1 | Viewed by 2703
Abstract
Semantic segmentation of remotely sensed images for building footprint recognition has been extensively researched, and several supervised and unsupervised approaches have been presented and adopted. The capacity to do real-time mapping and precise segmentation on a significant scale while considering the intrinsic diversity of the urban landscape in remotely sensed data has significant consequences. This study presents a novel approach for delineating building footprints by utilizing the compressed sensing and radial basis function technique. At the feature extraction stage, a small set of random features of the built-up areas is extracted from local image windows. The random features are used to train a radial basis neural network to perform building classification; thus, learning and classification are carried out in the compressed sensing domain. By virtue of its ability to represent characteristics in a reduced dimensional space, the scheme shows promise in being robust in the face of variability inherent in urban remotely sensed images. Through a comparison of the proposed method with numerous state-of-the-art approaches utilizing remotely sensed data of different spatial resolutions and building clutter, we establish its robustness and prove its viability. Accuracy assessment is performed for segmented footprints, and comparative analysis is carried out in terms of intersection over union, overall accuracy, precision, recall, and F1 score. The proposed method achieved scores of 93% in overall accuracy, 90.4% in intersection over union, and 91.1% in F1 score, even when dealing with drastically different image features. The results demonstrate that the proposed methodology yields substantial enhancements in classification accuracy and decreases in feature dimensionality. Full article

22 pages, 6345 KB  
Article
Fast Dynamic Time Warping and Hierarchical Clustering with Multispectral and Synthetic Aperture Radar Temporal Analysis for Unsupervised Winter Food Crop Mapping
by Hsuan-Yi Li, James A. Lawarence, Philippa J. Mason and Richard C. Ghail
Agriculture 2025, 15(1), 82; https://doi.org/10.3390/agriculture15010082 - 2 Jan 2025
Cited by 4 | Viewed by 2595
Abstract
Food sustainability has become a major global concern in recent years. Multiple complementary strategies to deal with this issue have been developed; one of these approaches is regenerative farming. The identification and analysis of crop type phenology are required to achieve sustainable regenerative farming. Earth Observation (EO) data have been widely applied to crop type identification using supervised Machine Learning (ML) and Deep Learning (DL) classifications, but these methods commonly rely on large amounts of ground truth data, which usually prevent historical analysis and may be impractical in very remote, very extensive or politically unstable regions. Thus, the development of a robust but intelligent unsupervised classification model is attractive for the long-term and sustainable prediction of agricultural yields. Here, we propose FastDTW-HC, a combination of Fast Dynamic Time Warping (DTW) and Hierarchical Clustering (HC), as a significantly improved method that requires no ground truth input for the classification of winter food crop varieties of barley, wheat and rapeseed in Norfolk, UK. A series of variables is first derived from the EO products, and these include spectral indices from Sentinel-2 multispectral data and backscattered amplitude values at dual polarisations from Sentinel-1 Synthetic Aperture Radar (SAR) data. Then, the phenological patterns of winter barley, winter wheat and winter rapeseed are analysed using the FastDTW-HC applied to the time-series created for each variable, between Nov 2019 and June 2020. Future research will extend this winter food crop mapping analysis using FastDTW-HC modelling to a regional scale. Full article
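
A minimal sketch of the FastDTW-plus-hierarchical-clustering idea described above, applied to simulated NDVI-like seasonal profiles for three crop groups. It assumes the fastdtw and SciPy packages and is not the authors' FastDTW-HC code.

```python
import numpy as np
from fastdtw import fastdtw
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(11)
t = np.linspace(0, 1, 30)

def crop_curve(peak):                       # idealized seasonal profile
    return 0.2 + 0.6 * np.exp(-((t - peak) / 0.15) ** 2)

series = np.vstack([crop_curve(p) + rng.normal(0, 0.03, t.size)
                    for p in rng.choice([0.35, 0.55, 0.75], size=30)])

# Pairwise FastDTW distances, stored in SciPy's condensed (upper-triangle) order.
n = len(series)
dists = []
for i in range(n):
    for j in range(i + 1, n):
        d, _ = fastdtw(series[i], series[j], radius=2)
        dists.append(d)

Z = linkage(np.array(dists), method="average")     # hierarchical clustering on DTW distances
labels = fcluster(Z, t=3, criterion="maxclust")
print("series per cluster:", np.bincount(labels))
```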

17 pages, 2635 KB  
Article
Applied Research on Face Image Beautification Based on a Generative Adversarial Network
by Junying Gan and Jianqiang Liu
Electronics 2024, 13(23), 4780; https://doi.org/10.3390/electronics13234780 - 3 Dec 2024
Cited by 2 | Viewed by 2437
Abstract
Generative adversarial networks (GANs) are widely used in image conversion tasks and have shown unique advantages in the context of face image beautification, as they can generate high-resolution face images. When used alongside potential spatial adjustments, it becomes possible to control the diversity of the generated images and learn from small amounts of labeled data or unsupervised data, thus reducing the costs associated with data acquisition and labeling. At present, there are some problems in terms of face image beautification processes, such as poor learning of the details of a beautification style, the use of only one beautification effect, and distortions being present in the generated face image. Therefore, this study proposes the facial image beautification generative adversarial network (FIBGAN) method, in which images with different beautification style intensities are generated with respect to an input face image. First, a feature pyramid network is used to construct a pre-encoder to generate multi-layer feature vectors containing the details of the face image, such that it can learn the beautification details of the face images during the beautification style transmission. Second, the pre-encoder combines the separate style vectors generated with respect to the original image and the style image to transfer the beautification style, such that the generated images have different beautification style intensities. Finally, the weight demodulation method is used as the beautification style transmission module in the generator, and the normalization operation on the feature map is replaced with the convolution weight to eliminate any artifacts from the feature map and reduce distortions in the generated images. The experimental results show that the FIBGAN model not only transmits the beautification style to face images in a detailed manner but also generates face images with different beautification intensities while reducing the distortion of the generated face images. Therefore, it can be widely used in the beauty and fashion industry, advertising, and media production. Full article

14 pages, 5194 KB  
Article
Machine Learning Insights into the Last 400 Years of Etna Lateral Eruptions from Historical Volcanological Data
by Arianna Beatrice Malaguti, Claudia Corradino, Alessandro La Spina, Stefano Branca and Ciro Del Negro
Geosciences 2024, 14(11), 295; https://doi.org/10.3390/geosciences14110295 - 3 Nov 2024
Cited by 3 | Viewed by 3203
Abstract
Volcanic hazard assessment is generally based on past eruptive behavior, assuming that previous activity is representative of future activity. Hazard assessment can be supported by Artificial Intelligence (AI) techniques, such as machine learning, which are used for data exploration to identify features of interest in the data. Here, we applied a machine learning technique to automate the analysis of these datasets, handling intricate patterns that are not easily captured by explicit commands. Using the k-means clustering algorithm, we classified effusive eruptions of Mount Etna over the past 400 years based on key parameters, including lava volume, Mean Output Rate (MOR), and eruption duration. Our analysis identified six distinct eruption clusters, each characterized by unique eruption dynamics. Furthermore, spatial analysis revealed significant sectoral variations in eruption activity across Etna’s flanks. These findings, derived through unsupervised learning, offer new insights into Etna’s eruptive behavior and contribute to the development of hazard maps that are essential for long-term spatial planning and risk mitigation. Full article
(This article belongs to the Section Natural Hazards)
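
A schematic example of the clustering step described above: k-means on eruption-level features (lava volume, Mean Output Rate, duration), log-scaled and standardized before clustering. The rows below are made-up values for illustration only, not Etna catalogue data, and the tiny sample forces a smaller cluster count than the six clusters the study identifies.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# [lava volume (10^6 m^3), MOR (m^3/s), duration (days)] -- hypothetical rows
eruptions = np.array([
    [10.0, 1.5, 90.0],
    [250.0, 8.0, 370.0],
    [30.0, 12.0, 25.0],
    [5.0, 0.8, 60.0],
    [120.0, 4.0, 300.0],
    [60.0, 20.0, 35.0],
])

# Log-scale and standardize the heavy-tailed features before clustering.
X = StandardScaler().fit_transform(np.log10(eruptions))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster labels:", labels)
```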
