Search Results (14)

Search Parameters:
Keywords = metric tree embedding

24 pages, 3545 KiB  
Article
Leveraging Advanced Data-Driven Approaches to Forecast Daily Floods Based on Rainfall for Proactive Prevention Strategies in Saudi Arabia
by Anwar Ali Aldhafiri, Mumtaz Ali and Abdulhaleem H. Labban
Water 2025, 17(11), 1699; https://doi.org/10.3390/w17111699 - 3 Jun 2025
Viewed by 432
Abstract
Accurate flood forecasts are imperative for supervising and preparing for extreme events, assessing risks, and developing proactive prevention strategies. Flood time-series data exhibit both spatial and temporal structure, and their complex stochastic nature makes it challenging for models to fully capture the embedded features. This paper proposes, for the first time, a new approach that hybridizes variational mode decomposition (VMD) with Gaussian process regression (GPR) to design the VMD-GPR model for daily flood forecasting. First, the VMD model decomposed the (t − 1) lag series into several signals called intrinsic mode functions (IMFs); VMD improves noise robustness and mode separation while reducing mode aliasing and end effects. Then, the partial auto-correlation function (PACF) was applied to determine the significant lag (t − 1). Finally, the PACF-based decomposed IMFs were fed into the GPR to forecast the daily flood index at (t − 1) for Jeddah and Jazan stations in Saudi Arabia. The long short-term memory (LSTM), boosted regression tree (BRT), and cascaded forward neural network (CFNN) models were also combined with VMD for comparison, along with their standalone versions. The proposed VMD-GPR outperformed the comparison models in forecasting daily floods for both stations across a set of performance metrics, achieving R = 0.9825, RMSE = 0.0745, MAE = 0.0088, ENS = 0.9651, KGE = 0.9802, IA = 0.9911, and U95% = 0.2065 for Jeddah station, and R = 0.9891, RMSE = 0.0945, MAE = 0.0189, ENS = 0.9781, KGE = 0.9849, IA = 0.9945, and U95% = 0.2621 for Jazan station. The proposed VMD-GPR method efficiently forecasts flood events at these two stations, facilitating disaster mitigation and enabling the efficient use of water resources. The VMD-GPR model can help policymakers in strategic flood-management planning to undertake mandatory risk mitigation measures. Full article
(This article belongs to the Section Hydrology)
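
A minimal sketch of the kind of decomposition-plus-regression pipeline the abstract describes, assuming the third-party `vmdpy` package for VMD and using synthetic data; the paper's PACF-based lag selection, station data, and model settings are not reproduced.

```python
# Minimal VMD + GPR forecasting sketch. Assumes the third-party `vmdpy` package
# (pip install vmdpy); data, lags, and settings are illustrative, not the paper's.
import numpy as np
from vmdpy import VMD
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from statsmodels.tsa.stattools import pacf

rng = np.random.default_rng(0)
flood_index = np.cumsum(rng.normal(size=512))        # stand-in for a daily flood index

# The paper selects significant lags with the PACF; here we just inspect a few.
print("PACF, lags 1-5:", np.round(pacf(flood_index, nlags=5)[1:], 2))

x_lag1, y = flood_index[:-2], flood_index[1:-1]      # even-length (t-1) series and target

# Decompose the lagged series into K intrinsic mode functions (IMFs).
imfs, _, _ = VMD(x_lag1, 2000, 0.0, 4, 0, 1, 1e-7)   # alpha, tau, K, DC, init, tol

X = imfs.T                                           # one row of IMF values per time step
n = min(len(X), len(y))                              # guard against any trimming by VMD
X, y = X[:n], y[:n]

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X[:400], y[:400])
pred = gpr.predict(X[400:])
print("R on held-out part:", np.round(np.corrcoef(pred, y[400:])[0, 1], 3))
```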

19 pages, 8990 KiB  
Article
Optimizing Image Watermarking with Dual-Tree Complex Wavelet Transform and Particle Swarm Intelligence for Secure and High-Quality Protection
by Abed Al Raoof Bsoul and Alaa Bani Ismail
Appl. Sci. 2025, 15(3), 1315; https://doi.org/10.3390/app15031315 - 27 Jan 2025
Cited by 2 | Viewed by 1082
Abstract
Watermarking is a technique used to address issues related to the widespread use of the internet, such as copyright protection, tamper localization, and authentication. However, most watermarking approaches negatively affect the quality of the original image. In this research, we propose an optimized image watermarking approach that utilizes the dual-tree complex wavelet transform and particle swarm optimization algorithm. Our approach focuses on maintaining the highest possible quality of the watermarked image by minimizing any noticeable changes. During the embedding phase, we break down the original image using the dual-tree complex wavelet transform (DTCWT) and then use particle swarm optimization (PSO) to choose specific coefficients. We embed the bits of a binary logo into the least significant bits of these selected coefficients, creating the watermarked image. To extract the watermark, we reverse the embedding process by first decomposing both versions of the input image using DTCWT and extracting the same coefficients to retrieve the corresponding bits (the watermark). In our experiments, we used a common dataset from watermarking research to demonstrate the functionality on various watermarked copies, using peak signal-to-noise ratio (PSNR) and normalized cross-correlation (NCC) as metrics. The PSNR is a measure of how well the watermarked image maintains its original quality, and the NCC reflects how accurately the watermark can be extracted. Our method achieves mean PSNR and NCC values of 80.50% and 92.51%, respectively. Full article
(This article belongs to the Special Issue Digital Image Processing: Technologies and Applications)
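
The sketch below illustrates transform-domain watermark embedding in the spirit of the abstract, assuming the open-source `dtcwt` Python package. The paper's PSO-based coefficient selection and LSB embedding are replaced by a simple magnitude ranking and parity quantization step, purely as a stand-in.

```python
# DTCWT-domain watermark embedding sketch (assumes the `dtcwt` package).
# Magnitude ranking + parity quantization stand in for the paper's PSO selection
# and LSB embedding; image and logo are synthetic.
import numpy as np
import dtcwt

rng = np.random.default_rng(1)
host = rng.random((256, 256))                  # stand-in for the host image, values in [0, 1]
watermark_bits = rng.integers(0, 2, size=128)  # stand-in for a binary logo

transform = dtcwt.Transform2d()
pyramid = transform.forward(host, nlevels=3)

# Work in one level-3 orientation subband (PSO would optimize which coefficients to use).
band = pyramid.highpasses[2][:, :, 0]          # complex coefficients (view into the pyramid)
flat_idx = np.argsort(np.abs(band), axis=None)[::-1][:watermark_bits.size]
ii, jj = np.unravel_index(flat_idx, band.shape)

delta = 0.05                                   # quantization step: robustness vs. PSNR trade-off
phases = np.angle(band[ii, jj])
q = np.round(np.abs(band[ii, jj]) / delta).astype(int)
q += (q % 2) != watermark_bits                 # force the parity of each magnitude to encode its bit
band[ii, jj] = (q * delta) * np.exp(1j * phases)

watermarked = transform.inverse(pyramid)
psnr = 10 * np.log10(1.0 / np.mean((host - watermarked) ** 2))
print(f"PSNR of watermarked image: {psnr:.1f} dB")
```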

23 pages, 5243 KiB  
Article
Urban Mobility Pattern Detection: Development of a Classification Algorithm Based on Machine Learning and GPS
by Juan José Molina-Campoverde, Néstor Rivera-Campoverde, Paúl Andrés Molina Campoverde and Andrea Karina Bermeo Naula
Sensors 2024, 24(12), 3884; https://doi.org/10.3390/s24123884 - 15 Jun 2024
Cited by 11 | Viewed by 2810
Abstract
This study introduces an innovative algorithm for classifying transportation modes. It categorizes modes such as walking, biking, tram, bus, taxi, and private vehicles based on data collected through sensors embedded in smartphones. The data include date, time, latitude, longitude, altitude, and speed, gathered using a mobile application specifically designed for this project. These data were collected through the smartphone’s GPS to enhance the accuracy of the analysis. The stopping times of each transport mode, as well as the distance traveled and average speed, are analyzed to identify patterns and distinctive features. Conducted in Cuenca, Ecuador, the study aims to develop and validate an algorithm to enhance urban planning. It extracts significant features from mobility patterns, including speed, acceleration, and over-acceleration, and applies longitudinal dynamics to train the classification model. The classification algorithm relies on a decision tree model, achieving a high accuracy of 94.6% in validation and 94.9% in testing, demonstrating the effectiveness of the proposed approach. Additionally, the precision metric of 0.8938 signifies the model’s ability to make correct positive predictions, with nearly 90% of positive instances correctly identified. Furthermore, the recall metric of 0.83084 highlights the model’s capability to identify real positive instances within the dataset, capturing over 80% of positive instances. The calculated F1-score of 0.86117 indicates a harmonious balance between precision and recall, showcasing the model’s robust and well-rounded performance in classifying transport modes effectively. The study discusses the potential applications of this method in urban planning, transport management, public transport route optimization, and urban traffic monitoring. This research represents a preliminary stage in generating an origin–destination (OD) matrix to better understand how people move within the city. Full article
(This article belongs to the Section Vehicular Sensing)
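
A minimal sketch of a decision-tree transport-mode classifier on GPS-derived features, as described above. The feature names, thresholds, and labels are illustrative, not the paper's feature set or field data.

```python
# Decision-tree transport-mode classification sketch on synthetic GPS features.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "mean_speed_kmh":  rng.uniform(0, 90, n),    # derived from consecutive GPS fixes
    "max_accel_ms2":   rng.uniform(0, 4, n),
    "stop_time_ratio": rng.uniform(0, 1, n),     # share of time at ~0 km/h
    "distance_km":     rng.uniform(0.1, 30, n),
})
# Toy labels keyed to speed; the study uses labelled traces from its mobile app.
labels = pd.cut(df["mean_speed_kmh"], [0, 7, 25, 90],
                labels=["walk", "bike", "motorized"], include_lowest=True)

X_tr, X_te, y_tr, y_te = train_test_split(df, labels, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```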

14 pages, 2747 KiB  
Article
Pine Wilt Disease Segmentation with Deep Metric Learning Species Classification for Early-Stage Disease and Potential False Positive Identification
by Nikhil Thapa, Ridip Khanal, Bhuwan Bhattarai and Joonwhoan Lee
Electronics 2024, 13(10), 1951; https://doi.org/10.3390/electronics13101951 - 16 May 2024
Cited by 5 | Viewed by 1663
Abstract
Pine Wilt Disease poses a significant global threat to forests, necessitating swift detection methods. Conventional approaches are resource-intensive, but utilizing deep learning on ortho-mapped images obtained from Unmanned Aerial Vehicles offers cost-effective and scalable solutions. This study presents a novel method for Pine Wilt Disease detection and classification using YOLOv8 for segmenting diseased areas, followed by cropping the diseased regions from the original image and applying Deep Metric Learning for classification. We trained a ResNet50 model using semi-hard triplet loss to obtain embeddings, and subsequently trained a Random Forest classifier tasked with identifying tree species and distinguishing false positives. Segmentation was favored over object detection due to its ability to provide pixel-level information, enabling the flexible extension of subsequent bounding boxes. Deep Metric Learning-based classification after segmentation was chosen for its effectiveness in handling visually similar images. The results indicate a mean Intersection over Union of 83.12% for segmentation, with classification accuracies of 98.7% and 90.7% on the validation and test sets, respectively. Full article
(This article belongs to the Special Issue Revolutionizing Medical Image Analysis with Deep Learning)
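
A rough sketch of the metric-learning stage described above: a ResNet50 embedding head trained with a triplet loss, then a random forest fitted on the frozen embeddings. Data loading, the YOLOv8 segmentation stage, and true semi-hard mining are omitted; all shapes and settings are illustrative.

```python
# Triplet-loss embedding + random-forest classification sketch (toy tensors).
import torch
import torch.nn as nn
from torchvision.models import resnet50
from sklearn.ensemble import RandomForestClassifier

backbone = resnet50(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 128)   # 128-D embedding head
triplet = nn.TripletMarginLoss(margin=0.2)
opt = torch.optim.Adam(backbone.parameters(), lr=1e-4)

def train_step(anchor, positive, negative):
    """One optimization step on a pre-mined (anchor, positive, negative) batch."""
    opt.zero_grad()
    loss = triplet(backbone(anchor), backbone(positive), backbone(negative))
    loss.backward()
    opt.step()
    return loss.item()

# Toy batch of 224x224 RGB crops (in practice: crops of segmented diseased regions).
a, p, n = (torch.randn(8, 3, 224, 224) for _ in range(3))
print("triplet loss:", train_step(a, p, n))

# After training, fit a random forest on the frozen embeddings (species / false positive).
with torch.no_grad():
    emb = backbone(torch.randn(64, 3, 224, 224)).numpy()
rf = RandomForestClassifier(n_estimators=200).fit(emb, torch.randint(0, 3, (64,)).numpy())
```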

21 pages, 712 KiB  
Article
Comparison of the Effectiveness of Various Classifiers for Breast Cancer Detection Using Data Mining Methods
by Noor Kamal Al-Qazzaz, Iyden Kamil Mohammed, Halah Kamal Al-Qazzaz, Sawal Hamid Bin Mohd Ali and Siti Anom Ahmad
Appl. Sci. 2023, 13(21), 12012; https://doi.org/10.3390/app132112012 - 3 Nov 2023
Cited by 2 | Viewed by 1585
Abstract
Countless women and men worldwide have lost their lives to breast cancer (BC). Although researchers from around the world have proposed various diagnostic methods for detecting this disease, there is still room for improvement in the accuracy and efficiency with which they can be used. A novel approach has been proposed for the early detection of BC by applying data mining techniques to the levels of prolactin (P), testosterone (T), cortisol (C), and human chorionic gonadotropin (HCG) in the blood and saliva of 20 women with histologically confirmed BC, 20 benign subjects, and 20 age-matched control women. In the proposed method, blood and saliva were used to categorize the severity of the BC into normal, benign, and malignant cases. Ten statistical features were collected to identify the severity of the BC, and three different classification schemes were evaluated: a decision tree (DT), a support vector machine (SVM), and k-nearest neighbors (KNN). Moreover, dimensionality reduction techniques using factor analysis (FA) and t-distributed stochastic neighbor embedding (t-SNE) were computed to obtain the best hyperparameters. The model was validated using the k-fold cross-validation method. Metrics for gauging a model’s effectiveness were applied. Dimensionality reduction approaches for salivary biomarkers enhanced the results, particularly with the DT, increasing the classification accuracy from 66.67% to 93.3% and 90% by utilizing t-SNE and FA, respectively. Furthermore, dimensionality reduction strategies for blood biomarkers enhanced the results, particularly with the DT, increasing the classification accuracy from 60% to 80% and 93.3% by utilizing FA and t-SNE, respectively. These findings point to t-SNE as a potentially useful feature-selection technique for aiding in the identification of patients with BC, as it consistently improves the discrimination of benign, malignant, and healthy control subjects, thereby promising to aid in the early detection of breast tumours. Full article
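
The sketch below mirrors the workflow described above (FA or t-SNE reduction followed by a decision tree under k-fold cross-validation), using scikit-learn's bundled toy breast-cancer dataset in place of the paper's hormone-level measurements.

```python
# Dimensionality reduction + decision tree with k-fold CV (toy dataset, illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import FactorAnalysis
from sklearn.manifold import TSNE
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

for name, Xr in {
    "raw":   X,
    "FA":    FactorAnalysis(n_components=2, random_state=0).fit_transform(X),
    "t-SNE": TSNE(n_components=2, random_state=0).fit_transform(X),
}.items():
    acc = cross_val_score(DecisionTreeClassifier(random_state=0), Xr, y, cv=5)
    print(f"{name:>5}: mean CV accuracy = {acc.mean():.3f}")
```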

19 pages, 1581 KiB  
Article
Representing Hierarchical Structured Data Using Cone Embedding
by Daisuke Takehara and Kei Kobayashi
Mathematics 2023, 11(10), 2294; https://doi.org/10.3390/math11102294 - 15 May 2023
Viewed by 1880
Abstract
Extracting hierarchical structure in graph data is becoming an important problem in fields such as natural language processing and developmental biology. Hierarchical structures can be extracted by embedding methods in non-Euclidean spaces, such as Poincaré embedding and Lorentz embedding, and it is now possible to learn efficient embeddings by taking advantage of the structure of these spaces. In this study, we propose embedding into another type of metric space, called a metric cone, by learning only a one-dimensional coordinate variable added to the original vector space or a pre-trained embedding space. This allows for the extraction of hierarchical information while maintaining the properties of the pre-trained embedding. The metric cone is a one-dimensional extension of the original metric space and has the advantage that the curvature of the space can be easily adjusted by a parameter even when the coordinates of the original space are fixed. Through an extensive empirical evaluation, we have corroborated the effectiveness of the proposed cone embedding model. In the case of randomly generated trees, cone embedding demonstrated superior performance in extracting hierarchical structures compared to existing techniques, particularly in high-dimensional settings. For WordNet embeddings, cone embedding exhibited a noteworthy correlation between the extracted hierarchical structures and human evaluation outcomes. Full article
(This article belongs to the Special Issue Advances of Intelligent Systems and Computing)
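
To make the "one extra coordinate" idea concrete, the sketch below computes the standard Euclidean-cone distance over a base metric space; the paper's exact curvature parameterization may differ, so treat the formula as the textbook cone metric rather than the authors' definition.

```python
# Standard metric-cone distance: each point keeps its base coordinates x and gains
# one radial coordinate r >= 0. (The paper's curvature parameterization may differ.)
import numpy as np

def cone_distance(x, r_x, y, r_y, base_dist=None):
    """d((x, r_x), (y, r_y))^2 = r_x^2 + r_y^2 - 2*r_x*r_y*cos(min(d_base(x, y), pi))."""
    if base_dist is None:
        base_dist = np.linalg.norm(np.asarray(x) - np.asarray(y))   # Euclidean base space
    theta = min(base_dist, np.pi)
    return np.sqrt(r_x**2 + r_y**2 - 2 * r_x * r_y * np.cos(theta))

# Points with small r sit near the cone apex (natural "roots"); larger r spreads points
# out, which is how a single learned coordinate can encode hierarchy over a frozen embedding.
print(cone_distance([0.1, 0.2], 0.3, [0.4, 0.1], 1.2))
```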

17 pages, 4796 KiB  
Article
Terahertz Time-Domain Spectroscopy of Glioma Patient Blood Plasma: Diagnosis and Treatment
by Olga Cherkasova, Denis Vrazhnov, Anastasia Knyazkova, Maria Konnikova, Evgeny Stupak, Vadim Glotov, Vyacheslav Stupak, Nazar Nikolaev, Andrey Paulish, Yan Peng, Yury Kistenev and Alexander Shkurinov
Appl. Sci. 2023, 13(9), 5434; https://doi.org/10.3390/app13095434 - 27 Apr 2023
Cited by 10 | Viewed by 2905
Abstract
Gliomas, one of the most severe malignant tumors of the central nervous system, have a high mortality rate and an increased risk of recurrence. Therefore, early glioma diagnosis and the control of treatment have great significance. The blood plasma samples of glioma patients, patients with skull craniectomy defects, and healthy donors were studied using terahertz time-domain spectroscopy (THz-TDS). An analysis of the experimental THz data was performed by machine learning (ML). The ML pipeline included (i) THz spectra smoothing using the Savitzky–Golay filter; (ii) dimension reduction with principal component analysis and t-distributed stochastic neighbor embedding; and (iii) data separability analysis using Support Vector Machine (SVM), Random Forest (RF), and Extreme Gradient Boosting (XGBoost). The ML models’ performance was evaluated by a k-fold cross-validation technique using ROC-AUC, sensitivity, and specificity metrics. It was shown that tree-based ensemble methods work more accurately than SVM. RF and XGBoost provided a better differentiation of the group of patients with glioma from healthy donors and patients with skull craniectomy defects. THz-TDS combined with ML was shown to make it possible to separate the blood plasma of patients before and after tumor removal surgery (AUC = 0.92). Thus, the applicability of THz-TDS and ML for the diagnosis of glioma and treatment monitoring has been shown. Full article
(This article belongs to the Special Issue Terahertz Spectroscopy: Instruments, Methods, and Application)
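
A compact sketch of the pipeline steps named above: Savitzky-Golay smoothing, dimension reduction, and cross-validated classifiers scored by ROC-AUC. Synthetic spectra stand in for the THz-TDS measurements, PCA stands in for the PCA/t-SNE step, and XGBoost is omitted to keep the example within SciPy and scikit-learn.

```python
# Smoothing -> PCA -> cross-validated SVM / RF scored by ROC-AUC (synthetic spectra).
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
spectra = rng.normal(size=(80, 300))            # 80 samples x 300 spectral points
labels = rng.integers(0, 2, size=80)            # glioma vs. control (toy labels)

smoothed = savgol_filter(spectra, window_length=11, polyorder=3, axis=1)
features = PCA(n_components=10, random_state=0).fit_transform(smoothed)

for name, clf in [("SVM", SVC()),
                  ("RF", RandomForestClassifier(n_estimators=200, random_state=0))]:
    auc = cross_val_score(clf, features, labels, cv=5, scoring="roc_auc")
    print(f"{name}: mean ROC-AUC = {auc.mean():.2f}")
```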

24 pages, 6083 KiB  
Article
Learning Hyperbolic Embedding for Phylogenetic Tree Placement and Updates
by Yueyu Jiang, Puoya Tabaghi and Siavash Mirarab
Biology 2022, 11(9), 1256; https://doi.org/10.3390/biology11091256 - 24 Aug 2022
Cited by 11 | Viewed by 4011
Abstract
Phylogenetic placement, used widely in ecological analyses, seeks to add a new species to an existing tree. A deep learning approach was previously proposed to estimate the distance between query and backbone species by building a map from gene sequences to a high-dimensional space that preserves species tree distances; a distance-based placement method then places the queries on that species tree. In this paper, we examine the appropriate geometry for faithfully representing tree distances while embedding gene sequences. Theory predicts that hyperbolic spaces should provide a drastic reduction in distance distortion compared to the conventional Euclidean space. Nevertheless, hyperbolic embedding imposes its own unique challenges related to arithmetic operations, exponentially growing functions, and limited bit precision, and we address these challenges. Our results confirm that hyperbolic embeddings have substantially lower distance errors than Euclidean space. However, these better-estimated distances do not always lead to better phylogenetic placement. We then show that the deep learning framework can be used not just to place queries on a backbone tree but to update it to obtain a fully resolved tree. With our hyperbolic embedding framework, species trees can be updated remarkably accurately with only a handful of genes. Full article
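
For readers unfamiliar with hyperbolic geometry, the sketch below contrasts the Euclidean distance with the standard Poincaré-ball distance between two points; it illustrates why tree-like distances distort less in hyperbolic space, but does not reproduce the paper's learned embeddings, placement step, or precision handling.

```python
# Euclidean vs. Poincare-ball (hyperbolic) distance between two points in the unit ball.
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between points strictly inside the unit ball."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u**2)) * (1 - np.sum(v**2))
    return np.arccosh(1 + 2 * sq / max(denom, eps))

u, v = np.array([0.1, 0.2, 0.0]), np.array([0.85, 0.1, 0.3])
print("Euclidean :", np.linalg.norm(u - v))
print("Hyperbolic:", poincare_distance(u, v))   # grows rapidly as points approach the boundary
```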

22 pages, 5949 KiB  
Article
Choosing Feature Selection Methods for Spatial Modeling of Soil Fertility Properties at the Field Scale
by Caner Ferhatoglu and Bradley A. Miller
Agronomy 2022, 12(8), 1786; https://doi.org/10.3390/agronomy12081786 - 29 Jul 2022
Cited by 10 | Viewed by 2971
Abstract
With the growing availability of environmental covariates, feature selection (FS) is becoming an essential task for applying machine learning (ML) in digital soil mapping (DSM). In this study, the effectiveness of six types of FS methods from four categories (filter, wrapper, embedded, and hybrid) was compared. These FS algorithms chose relevant covariates from an exhaustive set of 1049 environmental covariates for predicting five soil fertility properties in ten fields, in combination with ten different ML algorithms. The resulting model performance was compared using three different metrics: R² of 10-fold cross-validation (CV), a robustness ratio (RR; developed in this study), and independent validation with Lin’s concordance correlation coefficient (IV-CCC). FS improved CV, RR, and IV-CCC compared to the models built without FS for most fields and soil properties. Wrapper (BorutaShap) and embedded (Lasso-FS, random forest-FS) methods usually led to the optimal models. The filter-based ANOVA-FS method mostly led to overfit models, especially for fields with smaller sample quantities. Decision-tree-based models were usually part of the optimal combination of FS and ML. Considering RR helped identify optimal combinations of FS and ML that can improve the performance of DSM compared to models produced from full covariate stacks. Full article
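
A minimal sketch comparing a filter, an embedded, and a wrapper-style feature selector on a regression task scored by 10-fold CV R², in the spirit of the comparison above. The paper's covariate stack, BorutaShap, and the RR metric are not reproduced; the selectors shown are generic scikit-learn stand-ins.

```python
# Filter vs. embedded vs. wrapper feature selection, each followed by a random forest.
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression, SelectFromModel, RFE
from sklearn.linear_model import Lasso
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_regression(n_samples=200, n_features=100, n_informative=10, noise=5.0,
                       random_state=0)

selectors = {
    "ANOVA (filter)":   SelectKBest(f_regression, k=20),
    "Lasso (embedded)": SelectFromModel(Lasso(alpha=1.0)),
    "RFE (wrapper)":    RFE(RandomForestRegressor(n_estimators=50, random_state=0),
                            n_features_to_select=20, step=10),
}
for name, sel in selectors.items():
    pipe = make_pipeline(sel, RandomForestRegressor(n_estimators=100, random_state=0))
    r2 = cross_val_score(pipe, X, y, cv=10, scoring="r2")
    print(f"{name:>16}: mean CV R^2 = {r2.mean():.2f}")
```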

22 pages, 66210 KiB  
Article
Evaluation of Machine Learning Algorithms for Classification of EEG Signals
by Francisco Javier Ramírez-Arias, Enrique Efren García-Guerrero, Esteban Tlelo-Cuautle, Juan Miguel Colores-Vargas, Eloisa García-Canseco, Oscar Roberto López-Bonilla, Gilberto Manuel Galindo-Aldana and Everardo Inzunza-González
Technologies 2022, 10(4), 79; https://doi.org/10.3390/technologies10040079 - 30 Jun 2022
Cited by 26 | Viewed by 11308
Abstract
In brain–computer interfaces (BCIs), it is crucial to process brain signals to improve the accuracy of the classification of motor movements. Machine learning (ML) algorithms such as artificial neural networks (ANNs), linear discriminant analysis (LDA), decision tree (D.T.), K-nearest neighbor (KNN), naive Bayes (N.B.), and support vector machine (SVM) have made significant progress in classification issues. This paper aims to present a signal processing analysis of electroencephalographic (EEG) signals among different feature extraction techniques to train selected classification algorithms to classify signals related to motor movements. The motor movements considered are related to the left hand, right hand, both fists, feet, and relaxation, making this a multiclass problem. In this study, nine ML algorithms were trained with a dataset created by the feature extraction of EEG signals. The EEG signals of 30 Physionet subjects were used to create a dataset related to movement. We used electrodes C3, C1, CZ, C2, and C4 according to the standard 10-10 placement. Then, we extracted the epochs of the EEG signals and applied tone, amplitude levels, and statistical techniques to obtain the set of features. Custom LabVIEW™ 2015 applications were used for reading the EEG signals; for channel selection, noise filtering, band selection, and feature extraction operations; and for creating the dataset. MATLAB 2021a was used for training, testing, and evaluating the performance metrics of the ML algorithms. In this study, the Medium-ANN model achieved the best performance, with an average AUC of 0.9998, a Cohen’s kappa coefficient of 0.9552, a Matthews correlation coefficient of 0.9819, and a loss of 0.0147. These findings suggest the applicability of our approach to different scenarios, such as implementing robotic prostheses, where the use of superficial features is an acceptable option when resources are limited, as in embedded systems or edge computing devices. Full article
(This article belongs to the Special Issue Image and Signal Processing)
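
A small sketch of the evaluation pattern described above: train a few classifier families on an EEG-style feature matrix and report Cohen's kappa and Matthews correlation. Random features stand in for the statistical and amplitude features extracted in the paper, and the LabVIEW/MATLAB tooling is not reproduced.

```python
# Multi-classifier comparison on synthetic EEG-style features with kappa and MCC.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import cohen_kappa_score, matthews_corrcoef

rng = np.random.default_rng(0)
X = rng.normal(size=(900, 40))                  # per-epoch features from C3, C1, CZ, C2, C4
y = rng.integers(0, 5, size=900)                # 5 motor-movement classes (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "ANN": MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, m in models.items():
    pred = m.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: kappa={cohen_kappa_score(y_te, pred):.3f}, "
          f"MCC={matthews_corrcoef(y_te, pred):.3f}")
```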

32 pages, 5896 KiB  
Article
The Methodology for Identifying Secondary Succession in Non-Forest Natura 2000 Habitats Using Multi-Source Airborne Remote Sensing Data
by Katarzyna Osińska-Skotak, Aleksandra Radecka, Wojciech Ostrowski, Dorota Michalska-Hejduk, Jakub Charyton, Krzysztof Bakuła and Hubert Piórkowski
Remote Sens. 2021, 13(14), 2803; https://doi.org/10.3390/rs13142803 - 16 Jul 2021
Cited by 3 | Viewed by 3314
Abstract
The succession process of trees and shrubs is considered one of the threats to non-forest Natura 2000 habitats. Poland, as a member of the European Union, is obliged to monitor these habitats and preserve them in the best possible condition. If threats are identified, it is necessary to take action—as part of the so-called active protection—that will ensure the preservation of habitats in a non-deteriorated condition. At present, monitoring of Natura 2000 habitats is carried out by expert assessment, i.e., the habitat conservation status is determined during field visits. This process is time- and cost-intensive, and it is subject to the subjectivity of the person performing the assessment. As a result of the research, a methodology for the identification and monitoring of the succession process in non-forest Natura 2000 habitats was developed, in which multi-sensor remote sensing data are used—airborne laser scanner (ALS) and hyperspectral (HS) data. The methodology also includes steps required to analyse the dynamics of the succession process in the past, which is done using archival photogrammetric data (aerial photographs and ALS data). The algorithms implemented within the methodology include structure from motion and dense image matching for processing the archival images, segmentation and Voronoi tessellation for delineating the spatial extent of succession, a machine learning random forest classifier, recursive feature elimination and t-distributed stochastic neighbour embedding algorithms for succession species differentiation, as well as landscape metrics used for threat-level analysis. The proposed methodology has been automated and enables a rapid assessment of the level of threat for a whole given area, as well as in relation to individual Natura 2000 habitats. The prepared methodology was successfully tested on seven research areas located in Poland. Full article
(This article belongs to the Special Issue Remote Sensing for Habitat Mapping)
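
The sketch below illustrates only the species-differentiation step named above: recursive feature elimination around a random-forest classifier, plus a t-SNE projection for visual inspection. Synthetic per-segment features replace the ALS and hyperspectral data used in the methodology.

```python
# RFE-wrapped random forest plus a t-SNE projection on synthetic segment features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.manifold import TSNE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 60))                 # per-segment spectral/structural features
y = rng.integers(0, 4, size=300)               # succession species classes (toy labels)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
selector = RFECV(rf, step=5, cv=3).fit(X, y)   # recursive feature elimination with CV
X_sel = selector.transform(X)
print("features kept:", selector.n_features_)
print("CV accuracy  :", cross_val_score(rf, X_sel, y, cv=3).mean())

X_2d = TSNE(n_components=2, random_state=0).fit_transform(X_sel)  # for visual checks
```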

16 pages, 1611 KiB  
Article
Comparison of Machine Learning and Deep Learning Models for Network Intrusion Detection Systems
by Niraj Thapa, Zhipeng Liu, Dukka B. KC, Balakrishna Gokaraju and Kaushik Roy
Future Internet 2020, 12(10), 167; https://doi.org/10.3390/fi12100167 - 30 Sep 2020
Cited by 79 | Viewed by 10971
Abstract
The development of robust anomaly-based network detection systems, which are preferred over static signal-based network intrusion detection, is vital for cybersecurity. The development of a flexible and dynamic security system is required to tackle new attacks. Current intrusion detection systems (IDSs) struggle to attain both a high detection rate and a low false alarm rate. To address this issue, in this paper, we propose an IDS using different machine learning (ML) and deep learning (DL) models. This paper presents a comparative analysis of different ML and DL models on the Coburg intrusion detection datasets (CIDDS). First, we compare different ML- and DL-based models on the CIDDS dataset. Second, we propose an ensemble model that combines the best ML and DL models to achieve high performance metrics. Finally, we benchmarked our best models on the CIC-IDS2017 dataset and compared them with state-of-the-art models. While popular IDS datasets like KDD99 and NSL-KDD fail to represent recent attacks and suffer from network biases, CIDDS, used in this research, encompasses labeled flow-based data in a simulated office environment with both updated attacks and normal usage. Furthermore, both accuracy and interpretability must be considered while implementing AI models. Both ML and DL models achieved an accuracy of 99% on the CIDDS dataset with a high detection rate, low false alarm rate, and relatively low training costs. Feature importance was also studied using the classification and regression tree (CART) model. Our models performed well in 10-fold cross-validation and independent testing. CART and a convolutional neural network (CNN) with embedding achieved slightly better performance on the CIC-IDS2017 dataset compared to previous models. Together, these results suggest that both ML and DL methods are robust and complementary techniques for an effective network intrusion detection system. Full article
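
A small sketch in the spirit of the comparison above: a soft-voting ensemble over a few flow-feature classifiers and CART-based feature importance. Random flow features stand in for the CIDDS records, and the paper's DL models are not reproduced.

```python
# Voting ensemble + CART feature importance on synthetic flow features.
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))               # flow features (duration, bytes, packets, ...)
y = rng.integers(0, 2, size=1000)             # normal vs. attack (toy labels)

cart = DecisionTreeClassifier(max_depth=8, random_state=0)
ensemble = VotingClassifier(
    estimators=[("cart", cart),
                ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft",
)
print("ensemble CV accuracy:", cross_val_score(ensemble, X, y, cv=10).mean())

# CART feature importances, echoing the paper's use of CART to study which features matter.
importances = cart.fit(X, y).feature_importances_
print("top features:", np.argsort(importances)[::-1][:3])
```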

23 pages, 2558 KiB  
Article
Individualized Interaural Feature Learning and Personalized Binaural Localization Model
by Xiang Wu, Dumidu S. Talagala, Wen Zhang and Thushara D. Abhayapala
Appl. Sci. 2019, 9(13), 2682; https://doi.org/10.3390/app9132682 - 30 Jun 2019
Cited by 6 | Viewed by 3314
Abstract
The increasing importance of spatial audio technologies has demonstrated the importance of correctly adapting to the individual characteristics of the human auditory system, and illustrates the crucial need for humanoid localization systems for testing these technologies. To this end, this paper introduces a novel feature analysis and selection approach for binaural localization and builds a probabilistic localization mapping model, which is especially useful for localization in the vertical dimension. The approach uses mutual information as a metric to evaluate the most significant frequencies of the interaural phase difference and interaural level difference. Then, by using the random forest algorithm and embedding the mutual information as a feature selection criterion, the feature selection procedure is encoded within the training of the localization mapping. The trained mapping model is capable of using interaural features more efficiently, and, because of the multiple-tree-based model structure, the localization model shows robust performance against noise and interference. By integrating direct-path relative transfer function estimation, we devise a novel localization approach that has improved performance in the presence of noise and reverberation. The proposed mapping model is compared with a state-of-the-art manifold learning procedure in different acoustical configurations, and more accurate and robust output can be observed. Full article
(This article belongs to the Special Issue Mobile Spatial Audio)
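
A rough sketch of the feature-selection idea described above: rank interaural features (IPD and ILD per frequency bin) by mutual information with the direction label, then train a random forest on the top-ranked bins. Synthetic features replace measured HRTF data, and the probabilistic mapping and transfer-function estimation are not reproduced.

```python
# Mutual-information ranking of interaural features + random-forest localization (toy data).
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_dirs, n_bins = 24, 64
X = rng.normal(size=(n_dirs * 40, 2 * n_bins))     # [IPD bins | ILD bins] per sample
y = np.repeat(np.arange(n_dirs), 40)               # direction classes

mi = mutual_info_classif(X, y, random_state=0)     # MI of each frequency bin with direction
top = np.argsort(mi)[::-1][:32]                    # keep the most informative bins
rf = RandomForestClassifier(n_estimators=300, random_state=0)
print("CV accuracy on selected bins:",
      cross_val_score(rf, X[:, top], y, cv=5).mean())
```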

15 pages, 1666 KiB  
Article
Estimation of Mental Distress from Photoplethysmography
by Roberto Zangróniz, Arturo Martínez-Rodrigo, María T. López, José Manuel Pastor and Antonio Fernández-Caballero
Appl. Sci. 2018, 8(1), 69; https://doi.org/10.3390/app8010069 - 5 Jan 2018
Cited by 48 | Viewed by 7219
Abstract
This paper introduces the design of a new wearable photoplethysmography (PPG) sensor and its assessment for mental distress estimation. In our design, a PPG sensor obtains blood volume information by means of an optical plethysmogram technique. A number of temporal, morphological, and frequency markers are computed using time intervals between adjacent normal cardiac cycles to characterize pulse rate variability (PRV). In order to test the efficiency of the developed wearable for classifying distress versus calmness, the well-known International Affective Picture System has been used to induce different levels of arousal in forty-five healthy participants. The obtained results show that individual temporal features have a discriminant power between the emotional states of calm and stress ranging from 67% to 72%. Moreover, a discriminant tree-based model is used to assess the possible underlying relationships among parameters. In this case, the combination of temporal parameters reaches 82.35% accuracy. Considering the low complexity of the metrics and methods used in this work, the algorithms are ready to be embedded into a microcontroller device to work in real time and in a long-term fashion. Full article
(This article belongs to the Special Issue Socio-Cognitive and Affective Computing)
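
The sketch below mirrors the classification step described above: a small decision tree over pulse-rate-variability temporal features for calm-versus-distress classification. The feature names are typical HRV/PRV markers and the values are synthetic, not the study's IAPS-elicited recordings.

```python
# Decision tree on synthetic PRV temporal features for calm vs. distress.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 90                                            # stand-in for 45 participants x 2 states
prv = pd.DataFrame({
    "mean_nn_ms": rng.normal(820, 90, n),         # mean interval between normal pulses
    "sdnn_ms":    rng.normal(50, 15, n),          # overall variability of those intervals
    "rmssd_ms":   rng.normal(35, 12, n),          # short-term variability
})
state = rng.integers(0, 2, size=n)                # 0 = calm, 1 = distress (toy labels)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
print("CV accuracy:", cross_val_score(tree, prv, state, cv=5).mean())
```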