Search Results (38)

Search Parameters:
Keywords = RF-based HAR

12 pages, 5132 KB  
Article
Leveraging Hybrid RF-VLP for High-Accuracy Indoor Localization with Sparse Anchors
by Bangyan Lu, Yongyun Li, Yimao Sun and Yanbing Yang
Sensors 2025, 25(10), 3074; https://doi.org/10.3390/s25103074 - 13 May 2025
Viewed by 509
Abstract
Indoor low-power positioning systems have received much attention, and visible light positioning (VLP) shows great potential owing to its high accuracy and low power consumption. However, VLP also has limitations, such as a small coverage area and the requirement for line of sight. Moreover, most VLP applications require the receiver to be within the coverage of at least three LEDs simultaneously, which severely limits the availability of VLP when LEDs are sparsely deployed. Conversely, radio frequency (RF)-based positioning systems provide a large coverage area but suffer from low positioning accuracy due to multipath interference. In this work, we harness the complementary strengths of both technologies to develop a hybrid RF-VLP indoor positioning system that improves localization accuracy under sparse anchors. The RF-network-assisted visible light positioning enables each receiver to determine its position autonomously, using signals from a single LED anchor and neighboring receivers, without needing RF anchors. To validate the effectiveness of the proposed method, we use a commercial off-the-shelf LED and ESP32 modules to build a prototype system. Comprehensive experiments are performed to evaluate the performance of the positioning system, and the results show that the proposed system achieves an overall root mean square error (RMSE) of 26.1 cm, representing a 28.5% improvement in positioning accuracy compared to traditional RF-based positioning methods, which makes it highly feasible for deployment. Full article
(This article belongs to the Section Navigation and Positioning)
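As a quick illustration of the evaluation metric reported above, the sketch below computes the overall 2D positioning RMSE from estimated versus ground-truth coordinates; the coordinates are invented for illustration, and the paper's RF-VLP fusion algorithm itself is not reproduced.

```python
import numpy as np

def positioning_rmse(est_xy, true_xy):
    """Overall RMSE of 2D position estimates, in the same unit as the inputs."""
    err = np.linalg.norm(np.asarray(est_xy) - np.asarray(true_xy), axis=1)  # per-point Euclidean error
    return float(np.sqrt(np.mean(err ** 2)))

# Toy coordinates in metres (invented, not measurements from the prototype).
true_xy = [[0.0, 0.0], [1.0, 2.0], [3.0, 1.5]]
est_xy = [[0.2, -0.1], [1.1, 2.3], [2.8, 1.4]]
print(f"RMSE = {positioning_rmse(est_xy, true_xy):.3f} m")
```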

24 pages, 34699 KB  
Article
The Study on Landslide Hazards Based on Multi-Source Data and GMLCM Approach
by Zhifang Zhao, Zhengyu Li, Penghui Lv, Fei Zhao and Lei Niu
Remote Sens. 2025, 17(9), 1634; https://doi.org/10.3390/rs17091634 - 5 May 2025
Viewed by 873
Abstract
The southwest region of China is characterized by numerous rugged mountains and valleys, which create favorable conditions for landslide disasters. Landslide-influencing factors show different regional sensitivities, inducing disasters to different degrees, especially in small-sample areas. This study constructs a framework for the identification, analysis, and evaluation of landslide hazards in complex mountainous regions within small-sample areas. It utilizes small baseline subset interferometric synthetic aperture radar (SBAS-InSAR) technology and high-resolution optical imagery for a comprehensive interpretation to identify landslide hazards. A geodetector is employed to analyze disaster-inducing factors, and machine-learning models such as random forest (RF), gradient boosting decision tree (GBDT), categorical boosting (CatBoost), logistic regression (LR), and stacking ensemble strategies (Stacking) are applied for landslide sensitivity evaluation; this coupling is referred to as the geodetector–machine-learning coupled modeling (GMLCM) approach. The results indicate the following: (1) 172 landslide hazards were identified, primarily concentrated along the banks of the Lancang River. (2) The geodetector analysis shows that the key disaster-inducing factors for landslides include elevation (DEM: 1321–1857 m), rainfall (1181–1290 mm/a), distance from roads (0–1285 m), and geological rock formation (soft rock formations). (3) Based on the application of the K-means clustering algorithm and the Bayesian optimization algorithm, the GD-CatBoost model shows excellent performance. High-sensitivity zones were predominantly concentrated along the Lancang River, accounting for 24.2% of the study area. The method for identifying landslide hazards and the small-sample sensitivity evaluation can provide guidance and insights for landslide monitoring and control in similar geological environments. Full article
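For readers unfamiliar with the geodetector used in the factor analysis above, the sketch below computes the standard factor-detector q-statistic (one minus the ratio of within-stratum to total variance); the landslide-density values and rainfall strata are invented for illustration and are not the study's data.

```python
import pandas as pd

def geodetector_q(y, strata):
    """Factor-detector q-statistic: 1 - (within-stratum variance) / (total variance)."""
    n, var_total = len(y), y.var(ddof=0)
    within = sum(len(g) * g.var(ddof=0) for _, g in y.groupby(strata))
    return 1.0 - within / (n * var_total)

# Toy example: landslide density explained by a discretised rainfall factor (illustrative values).
df = pd.DataFrame({
    "landslide_density": [0.10, 0.20, 0.15, 0.80, 0.90, 0.85, 0.40, 0.50],
    "rainfall_class": ["low", "low", "low", "high", "high", "high", "mid", "mid"],
})
print(round(geodetector_q(df["landslide_density"], df["rainfall_class"]), 3))
```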

23 pages, 4371 KB  
Article
Soil Moisture Inversion Using Multi-Sensor Remote Sensing Data Based on Feature Selection Method and Adaptive Stacking Algorithm
by Liguo Wang and Ya Gao
Remote Sens. 2025, 17(9), 1569; https://doi.org/10.3390/rs17091569 - 28 Apr 2025
Viewed by 862
Abstract
Soil moisture (SM) profoundly influences crop growth, yield, soil temperature regulation, and ecological balance maintenance and plays a pivotal role in water resources management and regulation. The focal objective of this investigation is to identify feature parameters closely associated with soil moisture through the implementation of feature selection methods on multi-source remote sensing data. Specifically, three feature selection methods, namely SHApley Additive exPlanations (SHAP), information gain (Info-gain), and Info_gain ∩ SHAP were validated in this study. The multi-source remote sensing data collected from Sentinel-1, Landsat-8, and Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTGTM DEM) enabled the derivation of 25 characteristic parameters through sound computational approaches. Subsequently, a stacking algorithm integrating multiple machine-learning (ML) algorithms based on adaptive learning was engineered to accomplish soil moisture prediction. The attained prediction outcomes were then juxtaposed against those of single models, including Random Forest (RF), Adaptive Boosting (AdaBoost), Gradient Boosting Decision Tree (GBDT), Light Gradient Boosting Machine (LightGBM), Extreme Gradient Boosting (XGBoost), and Categorical Boosting (CatBoost). Notably, the adoption of feature factors selected by the Info_gain algorithm in combination with the adaptive stacking (Ada-Stacking) algorithm yielded the most optimal soil moisture prediction results. Specifically, the Mean Absolute Error (MAE) was determined to be 1.86 Vol. %, the Root Mean Square Error (RMSE) amounted to 2.68 Vol. %, and the R-squared (R2) reached 0.95. The multifactor integrated model that harnessed optical remote sensing data, radar backscatter coefficients, and topographic data exhibited remarkable accuracy in soil surface moisture retrieval, thus providing valuable insights for soil moisture inversion studies in the designated study area. Furthermore, the Ada-Stacking algorithm demonstrated its potency in integrating multiple models, thereby elevating retrieval accuracy and overcoming the limitations inherent in a single ML model. Full article
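A minimal stand-in for the stacking step described above, using scikit-learn's StackingRegressor with fixed base learners and reporting MAE, RMSE, and R2; the synthetic features replace the paper's 25 remote-sensing parameters, and the adaptive learner selection of the Ada-Stacking algorithm is not reproduced.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor, RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Synthetic stand-in for the 25 remote-sensing feature parameters (not real soil-moisture data).
X, y = make_regression(n_samples=400, n_features=25, noise=2.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
                ("gbdt", GradientBoostingRegressor(random_state=0))],
    final_estimator=Ridge())
stack.fit(X_tr, y_tr)
pred = stack.predict(X_te)
print("MAE :", mean_absolute_error(y_te, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_te, pred)))
print("R2  :", r2_score(y_te, pred))
```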

23 pages, 5451 KB  
Article
New Framework for Human Activity Recognition for Wearable Gait Rehabilitation Systems
by A. Moawad, Mohamed A. El-Khoreby, Shereen I. Fawaz, Hanady H. Issa, Mohammed I. Awad and A. Abdellatif
Appl. Syst. Innov. 2025, 8(2), 53; https://doi.org/10.3390/asi8020053 - 15 Apr 2025
Viewed by 1378
Abstract
This paper presents a novel Human Activity Recognition (HAR) framework using wearable sensors, specifically targeting applications in gait rehabilitation and assistive robots. The new methodology includes the usage of an open-source dataset. This dataset includes surface electromyography (sEMG) and inertial measurement units (IMUs) signals for the lower limb of 22 healthy subjects. Several activities of daily living (ADLs) were included, such as walking, stairs up/down and ramp walking. A new framework for signal conditioning, denoising, filtering, feature extraction and activity classification is proposed. After testing several signal conditioning approaches, such as Wavelet transform (WT), Principal Component Analysis (PCA) and Empirical Mode Decomposition (EMD), an autocepstrum analysis (ACA)-based approach is chosen. Such a complex and effective approach enables the usage of supervised classifiers like K-nearest neighbor (KNN), neural networks (NN) and random forest (RF). The random forest classifier has shown the best results with an accuracy of 97.63% for EMG signals extracted from the soleus muscle. Additionally, RF has shown the best results for IMU signals with 98.52%. These results emphasize the potential of the new framework of wearable HAR systems in gait rehabilitation, paving the way for real-time implementation in lower limb assistive devices. Full article
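A minimal sketch of the final classification stage described above: a random forest evaluated with cross-validated accuracy on window-level features. The feature matrix is random noise standing in for ACA-derived sEMG/IMU features, not the open-source dataset used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder feature matrix: one row per signal window (e.g. ACA-derived sEMG/IMU features).
# Values are random noise, not the dataset used in the paper.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 24))          # 600 windows x 24 features
y = rng.integers(0, 4, size=600)        # 4 ADL classes (walking, stairs up/down, ramp walking)

rf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(rf, X, y, cv=5, scoring="accuracy")
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```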

17 pages, 3121 KB  
Article
Real-Time Radar Classification Based on Software-Defined Radio Platforms: Enhancing Processing Speed and Accuracy with Graphics Processing Unit Acceleration
by Seckin Oncu, Mehmet Karakaya, Yaser Dalveren, Ali Kara and Mohammad Derawi
Sensors 2024, 24(23), 7776; https://doi.org/10.3390/s24237776 - 4 Dec 2024
Viewed by 2055
Abstract
This paper presents a comprehensive evaluation of real-time radar classification using software-defined radio (SDR) platforms. The transition from analog to digital technologies, facilitated by SDR, has revolutionized radio systems, offering unprecedented flexibility and reconfigurability through software-based operations. This advancement complements the role of radar signal parameters, encapsulated in the pulse description words (PDWs), which play a pivotal role in electronic support measure (ESM) systems, enabling the detection and classification of threat radars. This study proposes an SDR-based radar classification system that achieves real-time operation with enhanced processing speed. Employing the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm as a robust classifier, the system harnesses Graphical Processing Unit (GPU) parallelization for efficient radio frequency (RF) parameter extraction. The experimental results highlight the efficiency of this approach, demonstrating a notable improvement in processing speed while operating at a sampling rate of up to 200 MSps and achieving an accuracy of 89.7% for real-time radar classification. Full article
(This article belongs to the Section Radar Sensors)
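A small sketch of the clustering step named above, applying scikit-learn's DBSCAN to standardized pulse description word (PDW) parameters; the PDW values are invented, and the GPU-accelerated parameter extraction is not shown.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Toy pulse description words (PDWs): [frequency (MHz), pulse width (us), PRI (us)].
# Values are invented for illustration, not measured radar data.
rng = np.random.default_rng(1)
radar_a = rng.normal([9400.0, 1.0, 1000.0], [5.0, 0.05, 10.0], size=(200, 3))
radar_b = rng.normal([5600.0, 10.0, 2500.0], [5.0, 0.50, 25.0], size=(200, 3))
pdws = np.vstack([radar_a, radar_b])

labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(StandardScaler().fit_transform(pdws))
print("clusters found:", sorted(set(labels) - {-1}), "| noise pulses:", int(np.sum(labels == -1)))
```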

19 pages, 1774 KB  
Article
Effective Machine Learning Techniques for Dealing with Poor Credit Data
by Dumisani Selby Nkambule, Bhekisipho Twala and Jan Harm Christiaan Pretorius
Risks 2024, 12(11), 172; https://doi.org/10.3390/risks12110172 - 30 Oct 2024
Cited by 2 | Viewed by 1956
Abstract
Credit risk is a crucial component of daily financial services operations; it measures the likelihood that a borrower will default on a loan, incurring an economic loss. By analysing historical data to assess the creditworthiness of a borrower, lenders can reduce credit risk. Data are vital at the core of credit decision-making processes. Decision-making depends heavily on accurate, complete data, and failure to harness high-quality data would impact credit lenders when assessing the loan applicants’ risk profiles. In this paper, an empirical comparison of the robustness of seven machine learning algorithms to credit risk, namely support vector machines (SVMs), naïve Bayes (NB), decision trees (DT), random forest (RF), gradient boosting (GB), K-nearest neighbour (K-NN), and logistic regression (LR), is carried out using the Lending Club credit data from Kaggle. Seven performance measures are used, including the F1 score (with recall, accuracy, and precision), ROC-AUC, and the HL and MCC metrics. Then, generative adversarial network (GAN) simulation is proposed to enhance the robustness of the single machine learning classifiers for predicting credit risk. The results show that when GAN imputation is incorporated, the decision tree is the best-performing classifier with an accuracy rate of 93.01%, followed by random forest (92.92%), gradient boosting (92.33%), support vector machine (90.83%), logistic regression (90.76%), and naïve Bayes (89.29%). The k-NN classifier is the worst-performing method, with an accuracy rate of 88.68%. Subsequently, when the GANs are optimised, the accuracy rate of the naïve Bayes classifier improves significantly to 90%. Additionally, the average error rate for these classifiers is over 9%, which implies that the estimates are not far from the actual values. In summary, most individual classifiers are more robust to missing data when GANs are used as an imputation technique. The differences in performance of all seven machine learning algorithms are significant at the 95% level. Full article
(This article belongs to the Special Issue Financial Analysis, Corporate Finance and Risk Management)
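A simplified benchmarking loop in the spirit of the comparison above, training the seven classifier families on a synthetic imbalanced dataset and reporting accuracy, F1, ROC-AUC, and MCC; the GAN-based imputation step and the Lending Club data are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score, matthews_corrcoef
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic, imbalanced credit-style data (not the Lending Club set).
X, y = make_classification(n_samples=3000, n_features=20, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "DT": DecisionTreeClassifier(random_state=0), "RF": RandomForestClassifier(random_state=0),
    "GB": GradientBoostingClassifier(random_state=0), "SVM": SVC(probability=True, random_state=0),
    "LR": LogisticRegression(max_iter=1000), "NB": GaussianNB(), "K-NN": KNeighborsClassifier(),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    p, proba = clf.predict(X_te), clf.predict_proba(X_te)[:, 1]
    print(f"{name}: acc={accuracy_score(y_te, p):.3f} f1={f1_score(y_te, p):.3f} "
          f"auc={roc_auc_score(y_te, proba):.3f} mcc={matthews_corrcoef(y_te, p):.3f}")
```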

16 pages, 6949 KB  
Article
Study on the Method of Vineyard Information Extraction Based on Spectral and Texture Features of GF-6 Satellite Imagery
by Xuemei Han, Huichun Ye, Yue Zhang, Chaojia Nie and Fu Wen
Agronomy 2024, 14(11), 2542; https://doi.org/10.3390/agronomy14112542 - 28 Oct 2024
Viewed by 1050
Abstract
Accurately identifying the distribution of vineyard cultivation is of great significance for the development of the grape industry and the optimization of planting structures. Traditional remote sensing techniques for vineyard identification primarily depend on machine learning algorithms based on spectral features. However, the spectral reflectance similarities between grapevines and other orchard vegetation lead to persistent misclassification and omission errors across various machine learning algorithms. As a perennial vine plant, grapes are cultivated using trellis systems, displaying regular row spacing and distinctive strip-like texture patterns in high-resolution satellite imagery. This study selected the main oasis area of Turpan City in Xinjiang, China, as the research area. First, this study extracted both spectral and texture features based on GF-6 satellite imagery, subsequently employing the Boruta algorithm to discern the relative significance of these remote sensing features. Then, this study constructed vineyard information extraction models by integrating spectral and texture features, using machine learning algorithms including Naive Bayes (NB), Support Vector Machines (SVMs), and Random Forests (RFs). The efficacy of various machine learning algorithms and remote sensing features in extracting vineyard information was subsequently evaluated and compared. The results indicate that three spectral features and five texture features under a 7 × 7 window have significant sensitivity to vineyard recognition. These spectral features include the Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), and Normalized Difference Water Index (NDWI), while texture features include contrast statistics in the near-infrared band (B4_CO) and the variance statistic, contrast statistic, heterogeneity statistic, and correlation statistic derived from NDVI images (NDVI_VA, NDVI_CO, NDVI_DI, and NDVI_COR). The RF algorithm significantly outperforms both the NB and SVM models in extracting vineyard information, boasting an impressive accuracy of 93.89% and a Kappa coefficient of 0.89. This marks a 12.25% increase in accuracy and a 0.11 increment in the Kappa coefficient over the NB model, as well as an 8.02% enhancement in accuracy and a 0.06 rise in the Kappa coefficient compared to the SVM model. Moreover, the RF model, which amalgamates spectral and texture features, exhibits a notable 13.59% increase in accuracy versus the spectral-only model and a 14.92% improvement over the texture-only model. This underscores the efficacy of the RF model in harnessing the spectral and textural attributes of GF-6 imagery for the precise extraction of vineyard data, offering valuable theoretical and methodological insights for future vineyard identification and information retrieval efforts. Full article
(This article belongs to the Special Issue Crop Production Parameter Estimation through Remote Sensing Data)
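A small sketch of how window-level texture statistics like those above can be derived with a gray-level co-occurrence matrix (GLCM) using scikit-image; the 7 × 7 window is random data standing in for an NDVI or near-infrared patch, and the Boruta selection and RF classification steps are omitted.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture(window, levels=32):
    """Contrast / dissimilarity / correlation statistics for one image window."""
    q = np.interp(window, (window.min(), window.max()), (0, levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {p: float(graycoprops(glcm, p).mean())
            for p in ("contrast", "dissimilarity", "correlation")}

# Toy 7x7 window standing in for an NDVI or near-infrared patch (synthetic values).
rng = np.random.default_rng(0)
print(glcm_texture(rng.random((7, 7))))
```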

22 pages, 1476 KB  
Article
An Optimal Feature Selection Method for Human Activity Recognition Using Multimodal Sensory Data
by Tazeem Haider, Muhammad Hassan Khan and Muhammad Shahid Farid
Information 2024, 15(10), 593; https://doi.org/10.3390/info15100593 - 29 Sep 2024
Cited by 3 | Viewed by 1980
Abstract
Recently, the research community has taken great interest in human activity recognition (HAR) due to its wide range of applications in different fields of life, including medicine, security, and gaming. Sensory data are most commonly used for HAR systems because they are collected from a person’s wearable device sensors, thus avoiding the privacy issues raised by data collection through video cameras. Numerous systems have been proposed to recognize some common activities of daily living (ADLs) using different machine learning, image processing, and deep learning techniques. However, the existing techniques are computationally expensive, limited to recognizing short-term activities, or require large datasets for training purposes. Since an ADL is made up of a sequence of smaller actions, recognizing them directly from raw sensory data is challenging. In this paper, we present a computationally efficient two-level hierarchical framework for recognizing long-term (composite) activities, which does not require a very large dataset for training purposes. First, the short-term (atomic) activities are recognized from raw sensory data, and the probabilistic atomic score of each atomic activity is calculated relative to the composite activities. In the second step, the optimal features are selected based on atomic scores for each composite activity and passed to two classification algorithms, random forest (RF) and support vector machine (SVM), chosen for their well-documented effectiveness in human activity recognition. The proposed method was evaluated on the publicly available CogAge dataset, which contains 890 instances of 7 composite and 9700 instances of 61 atomic activities. The data were collected from eight sensors of three wearable devices: a smartphone, a smartwatch, and smart glasses. The proposed method achieved accuracies of 96.61% and 94.1% with the random forest and SVM classifiers, respectively, a remarkable increase over the classification accuracy of existing HAR systems on this dataset. Full article
(This article belongs to the Section Artificial Intelligence)
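A minimal sketch of the second level of the hierarchy described above: composite activities are classified from atomic-score feature vectors with RF and SVM. The atomic scores are faked with a Dirichlet draw (the class counts follow the abstract), and the first-level recognizer and the feature-selection step are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Fake level-1 output: atomic-activity score vectors for 300 composite-activity instances
# (61 atomic and 7 composite classes, as in the abstract; values are not real CogAge data).
atomic_scores = rng.dirichlet(np.ones(61), size=300)
composite_y = rng.integers(0, 7, size=300)

# Level 2: classify composite activities from the atomic-score feature vectors.
for clf in (RandomForestClassifier(random_state=0), SVC()):
    clf.fit(atomic_scores[:200], composite_y[:200])
    print(type(clf).__name__, "accuracy =", round(clf.score(atomic_scores[200:], composite_y[200:]), 2))
```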

6 pages, 2507 KB  
Proceeding Paper
Satellite-Based Crop Typology Mapping with Google Earth Engine
by Alapati Renuka, Manne Suneetha and Prathipati Vasavi
Eng. Proc. 2024, 66(1), 49; https://doi.org/10.3390/engproc2024066049 - 24 Sep 2024
Viewed by 1044
Abstract
Crop classification plays a pivotal role in agricultural remote sensing, offering critical insights into planting areas, growth monitoring, and yield evaluation. Leveraging the power of Google Earth Engine, this paper centers on the agricultural landscape of Krishna District as its study region. It explores the efficacy of multiple machine learning approaches, specifically Random Forest (RF), Classification and Regression Tree (CART), Naive Bayes, and Support Vector Machine (SVM), applied to a composite of Sentinel-1 and Sentinel-2 satellite imagery for crop categorization. A careful assessment and comparison of these four classification methods highlights the efficacy of RF. The overall accuracy (OA) of the RF classification reaches 0.86, surpassing the results obtained by Naive Bayes (OA = 0.68), CART (OA = 0.63), and SVM (OA = 0.78). This scalable and straightforward classification methodology harnesses the advantages of cloud-based platforms for data handling and analysis. Timely and precise crop-type identification holds immense importance for monitoring alterations in harvest patterns, estimating yields, and issuing crop safety alerts in the Krishna District and beyond. This paper contributes to the agricultural geospatial sensing domain by providing an innovative approach for accurate crop classification, with broad applications in precision farming and crop management. Full article
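A hedged sketch of what such a workflow can look like in the Google Earth Engine Python API, training a random forest on a Sentinel-2 median composite; the region, asset path, band list, and dates are placeholders, and the Sentinel-1 fusion and the classifier comparison from the paper are not reproduced.

```python
import ee
ee.Initialize()

# Hypothetical region and labelled sample points (the asset path is a placeholder).
roi = ee.Geometry.Rectangle([80.5, 16.0, 81.5, 17.0])
samples = ee.FeatureCollection("users/your_account/krishna_crop_samples")  # needs a 'class' property

# Median Sentinel-2 composite over an assumed cropping season.
s2 = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
      .filterBounds(roi)
      .filterDate("2023-06-01", "2023-11-30")
      .median()
      .select(["B2", "B3", "B4", "B8", "B11", "B12"]))

training = s2.sampleRegions(collection=samples, properties=["class"], scale=10)
rf = ee.Classifier.smileRandomForest(100).train(training, "class", s2.bandNames())
crop_map = s2.clip(roi).classify(rf)  # classified image; export or visualise as needed
```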

15 pages, 1063 KB  
Article
Predicting Postoperative Length of Stay in Patients Undergoing Laparoscopic Right Hemicolectomy for Colon Cancer: A Machine Learning Approach Using SICE (Società Italiana di Chirurgia Endoscopica) CoDIG Data
by Gabriele Anania, Matteo Chiozza, Emma Pedarzani, Giuseppe Resta, Alberto Campagnaro, Sabrina Pedon, Giorgia Valpiani, Gianfranco Silecchia, Pietro Mascagni, Diego Cuccurullo, Rossella Reddavid, Danila Azzolina and On behalf of SICE CoDIG (ColonDx Italian Group)
Cancers 2024, 16(16), 2857; https://doi.org/10.3390/cancers16162857 - 16 Aug 2024
Cited by 2 | Viewed by 1611
Abstract
The evolution of laparoscopic right hemicolectomy, particularly with complete mesocolic excision (CME) and central vascular ligation (CVL), represents a significant advancement in colon cancer surgery. The CoDIG 1 and CoDIG 2 studies highlighted Italy’s progressive approach, providing useful findings for optimizing patient outcomes and procedural efficiency. Within this context, accurately predicting postoperative length of stay (LoS) is crucial for improving resource allocation and patient care, yet its determination through machine learning techniques (MLTs) remains underexplored. This study aimed to harness MLTs to forecast the LoS for patients undergoing right hemicolectomy for colon cancer, using data from the CoDIG 1 (1224 patients) and CoDIG 2 (788 patients) studies. Multiple MLT algorithms, including random forest (RF) and support vector machine (SVM), were trained to predict LoS, with CoDIG 1 data used for internal validation and CoDIG 2 data for external validation. The RF algorithm showed strong internal validation performance, achieving the best results with an area under the ROC curve (AUC) of 0.92 in predicting long-term stays (more than 5 days). External validation using the SVM model yielded an AUC of 0.75. Factors such as fast-track protocols, anastomosis, and drainage emerged as key predictors of LoS. Integrating MLTs into the prediction of postoperative LoS in colon cancer surgery offers a promising avenue for personalized patient care and improved surgical management. Using intraoperative features in the algorithm enables the profiling of a patient’s stay based on the planned intervention. This is important for tailoring postoperative care to individual patients and for hospitals to effectively plan and manage long-term stays for more critical procedures. Full article
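A schematic of the internal/external validation pattern described above, with cross-validated AUC on one cohort and a single held-out AUC on the other; the cohort sizes follow the abstract, but the features and labels are random stand-ins, so the printed AUCs are meaningful only as a workflow illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Random stand-ins for the two cohorts (sizes follow the abstract; features/labels are fake).
X_int, y_int = rng.normal(size=(1224, 12)), rng.integers(0, 2, 1224)  # internal, "CoDIG 1"-sized
X_ext, y_ext = rng.normal(size=(788, 12)), rng.integers(0, 2, 788)    # external, "CoDIG 2"-sized

rf = RandomForestClassifier(n_estimators=300, random_state=0)
internal_auc = cross_val_score(rf, X_int, y_int, cv=5, scoring="roc_auc").mean()
rf.fit(X_int, y_int)                                                  # refit on all internal data
external_auc = roc_auc_score(y_ext, rf.predict_proba(X_ext)[:, 1])
print(f"internal AUC = {internal_auc:.2f}, external AUC = {external_auc:.2f}")
```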

24 pages, 7830 KB  
Article
Novel Learning of Bathymetry from Landsat 9 Imagery Using Machine Learning, Feature Extraction and Meta-Heuristic Optimization in a Shallow Turbid Lagoon
by Hang Thi Thuy Tran, Quang Hao Nguyen, Ty Huu Pham, Giang Thi Huong Ngo, Nho Tran Dinh Pham, Tung Gia Pham, Chau Thi Minh Tran and Thang Nam Ha
Geosciences 2024, 14(5), 130; https://doi.org/10.3390/geosciences14050130 - 11 May 2024
Cited by 3 | Viewed by 2523
Abstract
Bathymetry data is indispensable for a variety of aquatic field studies and benthic resource inventories. Determining water depth can be accomplished through an echo sounding system or remote estimation utilizing space-borne and air-borne data across diverse environments, such as lakes, rivers, seas, or lagoons. Despite being a common option for bathymetry mapping, the use of satellite imagery faces challenges due to the complex inherent optical properties of water bodies (e.g., turbid water), satellite spatial resolution limitations, and constraints in the performance of retrieval models. This study focuses on advancing the remote sensing-based method by harnessing the non-linear learning capabilities of machine learning (ML) models, employing advanced feature selection through a meta-heuristic algorithm, and using image extraction techniques (i.e., band ratio, grayscale morphological operation, and morphological multi-scale decomposition). Herein, we validate the predictive capabilities of six ML models: Random Forest (RF), Support Vector Machine (SVM), CatBoost (CB), Extreme Gradient Boost (XGB), Light Gradient Boosting Machine (LGBM), and KTBoost (KTB), both with and without the application of meta-heuristic optimization (i.e., Dragonfly, Particle Swarm Optimization, and Grey Wolf Optimization), to accurately ascertain water depth. This is achieved using a diverse input dataset derived from multi-spectral Landsat 9 imagery captured on a cloud-free day (19 September 2023) in a shallow, turbid lagoon. Our findings indicate the superior performance of LGBM coupled with Particle Swarm Optimization (R2 = 0.908, RMSE = 0.31 m), affirming the consistency and reliability of the feature extraction and selection-based framework, while offering novel insights into the expansion of bathymetric mapping in complex aquatic environments. Full article
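A minimal sketch of the regression core: a LightGBM model fed a log band-ratio feature (the classic Stumpf ratio) to predict depth, scored with RMSE and R2. The reflectances and depths are synthetic, and the meta-heuristic feature selection and morphological features are not reproduced.

```python
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for Landsat 9 reflectances and sonar depths (not the lagoon data).
blue = rng.uniform(0.02, 0.2, 1000)
green = rng.uniform(0.02, 0.2, 1000)
ratio = np.log(1000 * blue) / np.log(1000 * green)           # Stumpf log band ratio
depth = 5.0 * ratio + rng.normal(0, 0.3, 1000)               # toy depth model (m)

X = np.column_stack([blue, green, ratio])
X_tr, X_te, y_tr, y_te = train_test_split(X, depth, test_size=0.3, random_state=0)

model = lgb.LGBMRegressor(n_estimators=400, learning_rate=0.05, random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("RMSE:", np.sqrt(mean_squared_error(y_te, pred)), " R2:", r2_score(y_te, pred))
```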

13 pages, 951 KB  
Article
Human Activity Recognition Algorithm with Physiological and Inertial Signals Fusion: Photoplethysmography, Electrodermal Activity, and Accelerometry
by Justin Gilmore and Mona Nasseri
Sensors 2024, 24(10), 3005; https://doi.org/10.3390/s24103005 - 9 May 2024
Cited by 7 | Viewed by 2407
Abstract
Inertial signals are the most widely used signals in human activity recognition (HAR) applications, and extensive research has been performed on developing HAR classifiers using accelerometer and gyroscope data. This study aimed to investigate the potential enhancement of HAR models through the fusion of biological signals with inertial signals. The classification of eight common low-, medium-, and high-intensity activities was assessed using machine learning (ML) algorithms, trained on accelerometer (ACC), blood volume pulse (BVP), and electrodermal activity (EDA) data obtained from a wrist-worn sensor. Two types of ML algorithms were employed: a random forest (RF) trained on features; and a pre-trained deep learning (DL) network (ResNet-18) trained on spectrogram images. Evaluation was conducted on both individual activities and more generalized activity groups, based on similar intensity. Results indicated that RF classifiers outperformed corresponding DL classifiers at both individual and grouped levels. However, the fusion of EDA and BVP signals with ACC data improved DL classifier performance compared to a baseline DL model with ACC-only data. The best performance was achieved by a classifier trained on a combination of ACC, EDA, and BVP images, yielding F1-scores of 69 and 87 for individual and grouped activity classifications, respectively. For DL models trained with additional biological signals, almost all individual activity classifications showed improvement (p-value < 0.05). In grouped activity classifications, DL model performance was enhanced for low- and medium-intensity activities. Exploring the classification of two specific activities, ascending/descending stairs and cycling, revealed significantly improved results using a DL model trained on combined ACC, BVP, and EDA spectrogram images (p-value < 0.05). Full article
(This article belongs to the Section Wearables)
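A minimal sketch of the feature-based branch described above: simple per-window statistics from fused ACC, BVP, and EDA segments feeding a random forest. The signals are random stand-ins, and the spectrogram/ResNet-18 branch is not shown.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(acc, bvp, eda):
    """Simple per-window statistics from the three fused signals."""
    feats = []
    for sig in (acc, bvp, eda):
        feats += [sig.mean(), sig.std(), sig.min(), sig.max()]
    return feats

rng = np.random.default_rng(0)
X = np.array([window_features(rng.normal(size=256), rng.normal(size=64), rng.normal(size=32))
              for _ in range(800)])               # 800 synthetic windows
y = rng.integers(0, 8, size=800)                  # 8 activity classes, as in the study

rf = RandomForestClassifier(n_estimators=300, random_state=0)
print("5-fold macro F1:", cross_val_score(rf, X, y, cv=5, scoring="f1_macro").mean())
```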

24 pages, 18909 KB  
Article
Innovative Decision Fusion for Accurate Crop/Vegetation Classification with Multiple Classifiers and Multisource Remote Sensing Data
by Shuang Shuai, Zhi Zhang, Tian Zhang, Wei Luo, Li Tan, Xiang Duan and Jie Wu
Remote Sens. 2024, 16(9), 1579; https://doi.org/10.3390/rs16091579 - 29 Apr 2024
Cited by 5 | Viewed by 2223
Abstract
Obtaining accurate and real-time spatial distribution information regarding crops is critical for enabling effective smart agricultural management. In this study, innovative decision fusion strategies, including Enhanced Overall Accuracy Index (E-OAI) voting and the Overall Accuracy Index-based Majority Voting (OAI-MV), were introduced to optimize the use of diverse remote sensing data and various classifiers, thereby improving the accuracy of crop/vegetation identification. These strategies were utilized to integrate crop/vegetation classification outcomes from distinct feature sets (including Gaofen-6 reflectance, Sentinel-2 time series of vegetation indices, Sentinel-2 time series of biophysical variables, Sentinel-1 time series of backscatter coefficients, and their combinations) using distinct classifiers (Random Forests (RFs), Support Vector Machines (SVMs), Maximum Likelihood (ML), and U-Net), taking two grain-producing areas (Site #1 and Site #2) in Haixi Prefecture, Qinghai Province, China, as the research area. The results indicate that employing U-Net on feature-combined sets yielded the highest overall accuracy (OA) of 81.23% and 91.49% for Site #1 and Site #2, respectively, in the single classifier experiments. The E-OAI strategy, compared to the original OAI strategy, boosted the OA by 0.17% to 6.28%. Furthermore, the OAI-MV strategy achieved the highest OA of 86.02% and 95.67% for the respective study sites. This study highlights the distinct strengths of various remote sensing features and classifiers in discerning different crop and vegetation types. Additionally, the proposed OAI-MV and E-OAI strategies effectively harness the benefits of diverse classifiers and multisource remote sensing features, significantly enhancing the accuracy of crop/vegetation classification. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
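A simplified reading of an accuracy-weighted majority vote in the spirit of OAI-MV, fusing per-pixel class maps with each classifier's overall accuracy as its vote weight; the maps, accuracies, and exact index definition are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def accuracy_weighted_vote(pred_maps, overall_acc, n_classes):
    """Fuse per-pixel class maps from several classifiers, weighting each classifier's
    vote by its overall accuracy (a simplified accuracy-weighted majority vote)."""
    votes = np.zeros(pred_maps[0].shape + (n_classes,))
    for pred, w in zip(pred_maps, overall_acc):
        for c in range(n_classes):
            votes[..., c] += w * (pred == c)
    return votes.argmax(axis=-1)

# Toy 4x4 classification maps from three classifiers (e.g. RF, SVM, U-Net) with made-up accuracies.
rng = np.random.default_rng(0)
maps = [rng.integers(0, 3, size=(4, 4)) for _ in range(3)]
print(accuracy_weighted_vote(maps, overall_acc=[0.81, 0.78, 0.91], n_classes=3))
```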

23 pages, 1916 KB  
Article
Strategic Machine Learning Optimization for Cardiovascular Disease Prediction and High-Risk Patient Identification
by Konstantina-Vasiliki Tompra, George Papageorgiou and Christos Tjortjis
Algorithms 2024, 17(5), 178; https://doi.org/10.3390/a17050178 - 26 Apr 2024
Cited by 12 | Viewed by 4461
Abstract
Despite medical advancements in recent years, cardiovascular diseases (CVDs) remain a major factor in rising mortality rates, challenging predictions despite extensive expertise. The healthcare sector is poised to benefit significantly from harnessing massive data and the insights we can derive from it, underscoring the importance of integrating machine learning (ML) to improve CVD prevention strategies. In this study, we addressed the major issue of class imbalance in the Behavioral Risk Factor Surveillance System (BRFSS) 2021 heart disease dataset, including personal lifestyle factors, by exploring several resampling techniques, such as the Synthetic Minority Oversampling Technique (SMOTE), Adaptive Synthetic Sampling (ADASYN), SMOTE-Tomek, and SMOTE-Edited Nearest Neighbor (SMOTE-ENN). Subsequently, we trained, tested, and evaluated multiple classifiers, including logistic regression (LR), decision trees (DTs), random forest (RF), gradient boosting (GB), XGBoost (XGB), CatBoost, and artificial neural networks (ANNs), comparing their performance with a primary focus on maximizing sensitivity for CVD risk prediction. Based on our findings, the hybrid resampling techniques outperformed the alternative sampling techniques, and our proposed implementation includes SMOTE-ENN coupled with CatBoost optimized through Optuna, achieving a remarkable 88% rate for recall and 82% for the area under the receiver operating characteristic (ROC) curve (AUC) metric. Full article
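A hedged sketch of the proposed pipeline's shape: SMOTE-ENN resampling of the training fold followed by Optuna tuning of a CatBoost classifier for recall. The data are synthetic, the search space is illustrative, and only two hyperparameters are tuned.

```python
import optuna
from catboost import CatBoostClassifier
from imblearn.combine import SMOTEENN
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

# Synthetic imbalanced stand-in for the BRFSS heart-disease data (not the real dataset).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.92, 0.08], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, test_size=0.25, random_state=0)
X_res, y_res = SMOTEENN(random_state=0).fit_resample(X_tr, y_tr)  # resample the training fold only

def objective(trial):
    clf = CatBoostClassifier(
        depth=trial.suggest_int("depth", 4, 8),
        learning_rate=trial.suggest_float("learning_rate", 0.01, 0.3, log=True),
        iterations=300, verbose=0, random_seed=0)
    clf.fit(X_res, y_res)
    return recall_score(y_val, clf.predict(X_val))  # maximise sensitivity, as the study prioritises

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print("best recall:", study.best_value, "| params:", study.best_params)
```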

11 pages, 1975 KB  
Article
Fully Reconfigurable Photonic Filter for Flexible Payloads
by Annarita di Toma, Giuseppe Brunetti, Nabarun Saha and Caterina Ciminelli
Appl. Sci. 2024, 14(2), 488; https://doi.org/10.3390/app14020488 - 5 Jan 2024
Cited by 4 | Viewed by 1606
Abstract
Reconfigurable photonic filters represent cutting-edge technology that enhances the capabilities of space payloads. These advanced devices harness the unique properties of light to deliver superior performance in signal processing, filtering, and frequency selection. They offer broad filtering capabilities, allowing for the selection of specific frequency ranges while significantly reducing Size, Weight, and Power (SWaP). In scenarios where satellite communication channels are crowded with various signals sharing the same bandwidth, reconfigurable photonic filters enable efficient spectrum management and interference mitigation, ensuring reliable signal transmission. Furthermore, reconfigurable photonic filters adapt to the dynamic space environment, withstanding extreme temperatures, radiation exposure, and mechanical stress while maintaining stable and reliable performance. Leveraging the inherent speed of light, these filters enable high-speed signal processing operations, paving the way for various space payload applications, such as agile frequency channelization, which allows for the simultaneous processing and analysis of different frequency bands. In this theoretical study, we introduce a fully reconfigurable filter comprising two decoupled ring resonators with the same radius. Each resonator can be independently thermally tuned, via the thermo-optic effect along the whole ring resonator path, to achieve precise reconfiguration of both the central frequency and the bandwidth. A stopband rejection of 45 dB, with a reconfigurable bandwidth and central frequency of 20 MHz and 180 MHz, respectively, has been numerically achieved, with a maximum electrical power of 11.50 mW and a reconfiguration time of 9.20 µs, using the scattering matrix approach, where the matrix elements were calculated through Finite Element Method-based and Beam Propagation Method-based simulations. This performance makes the proposed device suitable as a key building block of RF optical filters, useful in the next-generation telecommunication payload domain. Full article
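A highly simplified numerical sketch, assuming the textbook add-drop ring expression: the drop-port responses of two detuned rings are multiplied to mimic thermal tuning of one ring, whereas the paper uses a full scattering-matrix model with FEM/BPM-derived coefficients; all coupling and loss values here are illustrative.

```python
import numpy as np

def drop_response(phi, r1=0.995, r2=0.995, a=0.999):
    """Drop-port power transmission of one add-drop ring resonator
    (r1, r2: self-coupling coefficients, a: single-pass amplitude transmission)."""
    num = (1 - r1**2) * (1 - r2**2) * a
    den = 1 - 2 * r1 * r2 * a * np.cos(phi) + (r1 * r2 * a) ** 2
    return num / den

# Round-trip phase detuning; thermal tuning of the second ring is modelled as a phase offset.
phi = np.linspace(-0.05, 0.05, 2001)
combined = drop_response(phi) * drop_response(phi - 0.01)  # two decoupled rings in series

print("combined peak transmission:", combined.max())
print("phase detuning at the peak :", phi[combined.argmax()])
```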
