Search Results (90)

Search Parameters:
Keywords = ternary classification

26 pages, 9745 KB  
Article
Adulteration Detection of Multi-Species Vegetable Oils in Camellia Oil Using SICRIT-HRMS and Machine Learning Methods
by Mei Wang, Ting Liu, Han Liao, Xian-Biao Liu, Qi Zou, Hao-Cheng Liu and Xiao-Yin Wang
Foods 2026, 15(3), 434; https://doi.org/10.3390/foods15030434 - 24 Jan 2026
Viewed by 59
Abstract
We aimed to establish a rapid and precise method for identifying and quantifying multi-species vegetable oil (corn oil, olive oil (OLO), soybean oil, and sunflower oil (SUO)) adulterations in camellia oil (CAO), using soft ionization by chemical reaction in transfer–high-resolution mass spectrometry (SICRIT-HRMS) and machine learning methods. The results showed that SICRIT-HRMS could effectively characterize the volatile profiles of pure and adulterated CAO samples, including binary, ternary, quaternary, and quinary adulteration systems. The low m/z region (especially 100–300) was consistently important for oil classification across multiple feature-selection methods. For qualitative detection, binary classification models based on convolutional neural network (CNN), Random Forest (RF), and gradient boosting tree (GBT) algorithms showed high accuracies (98.70–100.00%) for identifying CAO adulteration under no dimensionality reduction (NON), principal component analysis (PCA), and uniform manifold approximation and projection (UMAP) strategies. The RF algorithm exhibited relatively high accuracy (96.25–99.45%) in multiclass classification. Moreover, five models, namely CNN, RF, support vector machine (SVM), logistic regression (LR), and GBT, performed differently in distinguishing pure from adulterated CAO. Among 1093 blind oil samples, the numbers misclassified under NON, PCA, and UMAP were 10, 5, and 67 for the CNN model; 6, 7, and 41 for the RF model; 8, 9, and 82 for the SVM model; 17, 18, and 78 for the LR model; and 7, 9, and 43 for the GBT model. For quantitative prediction, the PCA-CNN model performed optimally in predicting adulteration levels in CAO, especially with respect to OLO and SUO, exhibiting high coefficients of determination for calibration (RC², 0.9664–0.9974) and prediction (Rp², 0.9599–0.9963), low root mean square errors of calibration (RMSEC, 0.9–5.3%) and prediction (RMSEP, 1.1–5.8%), and RPD values (5.0–16.3) greater than 3.0. These results indicate that SICRIT-HRMS combined with machine learning can rapidly and accurately identify and quantify multi-species vegetable oil adulterations in CAO, providing a reference for developing non-targeted, high-throughput detection methods for edible oil authentication.
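The quantitative figures of merit quoted above (RMSEC, RMSEP, RPD) are standard chemometric quantities. Below is a minimal sketch of how they are typically computed, assuming RPD is the ratio of the standard deviation of the reference values to the RMSEP (a common definition under which values above 3.0 indicate a model suitable for quantitative prediction); the adulteration levels are hypothetical, not the paper's data.

```python
import numpy as np

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root mean square error: RMSEC on calibration data, RMSEP on prediction data."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def rpd(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """RPD: standard deviation of the reference values divided by the RMSEP."""
    return float(np.std(y_true, ddof=1) / rmse(y_true, y_pred))

# Hypothetical adulteration levels (%) for a small prediction set
y_true = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 80.0])
y_pred = np.array([5.8, 9.1, 21.4, 38.7, 61.2, 78.9])
print(f"RMSEP = {rmse(y_true, y_pred):.2f}%, RPD = {rpd(y_true, y_pred):.1f}")
```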

22 pages, 4982 KB  
Article
Real-Time Analysis of Concrete Placement Progress Using Semantic Segmentation
by Zifan Ye, Linpeng Zhang, Yu Hu, Fengxu Hou, Rui Ma, Danni Luo and Wenqian Geng
Buildings 2026, 16(2), 434; https://doi.org/10.3390/buildings16020434 - 20 Jan 2026
Viewed by 95
Abstract
Concrete arch dams represent a predominant dam type in water conservancy and hydropower projects in China. The control of concrete placement progress during construction directly impacts project quality and construction efficiency. Traditional manual monitoring methods, characterized by delayed response and strong subjectivity, struggle to meet the demands of modern intelligent construction management. This study introduces machine vision technology to monitor the concrete placement process and establishes an intelligent analysis system for construction scenes based on deep learning. By comparing the performance of U-Net and DeepLabV3+ semantic segmentation models in complex construction environments, the U-Net model, achieving an IoU of 89%, was selected to identify vibrated and non-vibrated concrete areas, thereby optimizing the concrete image segmentation algorithm. A comprehensive real-time analysis method for placement progress was developed, enabling automatic ternary classification and progress calculation for key construction stages, including concrete unloading, spreading, and vibration. In a continuous placement case study of Monolith No. 3 at a project site, the model’s segmentation results showed only an 8.2% error compared with manual annotations, confirming the method’s real-time capability and reliability. The research outcomes provide robust data support for intelligent construction management and hold significant practical value for enhancing the quality and efficiency of hydraulic engineering construction.
(This article belongs to the Section Building Structures)
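Model selection above hinges on the IoU metric. A minimal sketch of IoU for binary segmentation masks follows, with synthetic rectangles standing in for the study's vibrated/non-vibrated concrete masks.

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union of two boolean segmentation masks."""
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(intersection / union) if union > 0 else 1.0

pred = np.zeros((64, 64), dtype=bool); pred[8:40, 8:40] = True        # predicted region
target = np.zeros((64, 64), dtype=bool); target[12:44, 12:44] = True  # annotated region
print(f"IoU = {iou(pred, target):.2f}")
```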

18 pages, 1145 KB  
Article
A Systematic Approach for Selection of Fit-for-Purpose Low-Carbon Concrete for Various Bridge Elements to Reduce the Net Embodied Carbon of a Bridge Project
by Harish Kumar Srivastava, Vanissorn Vimonsatit and Simon Martin Clark
Infrastructures 2025, 10(10), 274; https://doi.org/10.3390/infrastructures10100274 - 13 Oct 2025
Viewed by 1067
Abstract
Australia consumes approximately 29 million m³ of concrete each year, with an estimated embodied carbon (EC) of 12 Mt CO₂e. This high consumption makes concrete critical to successful decarbonization in support of the ‘Net Zero 2050’ objectives of the Australian construction industry. Portland cement (PC) constitutes only 12–15% of the concrete mix but is responsible for approximately 90% of concrete’s EC. This necessitates reducing the PC in concrete with supplementary cementitious materials (SCMs) or using alternative binders such as geopolymer concrete. Concrete mixes that include a combination of PC and SCMs as a binder have lower EC than those with only PC and are termed low-carbon concrete (LCC). SCM addition to a concrete mix not only reduces EC but also enhances its mechanical and durability properties. Fly ash (FA) and ground granulated blast furnace slag (GGBFS) are the most used SCMs in Australia; other SCMs, such as limestone, metakaolin or calcined clay, and Delithiated Beta Spodumene (DBS, or lithium slag), are being trialed. This technical paper presents a methodology for selecting LCCs with various proportions of SCMs for the various elements of a bridge structure without compromising their functional performance. The proposed methodology includes controls to be applied during the design/selection process of LCC, from material quality control to concrete mix design to EC evaluation for every element of a bridge, to minimize the bridge’s overall carbon footprint. Typical properties of LCC with FA and GGBFS as binary and ternary blends are also included for the preliminary design of a fit-for-purpose LCC. An example of a bridge located in the B2 exposure classification zone (exposed to both carbonation and chloride ingress deterioration mechanisms) is included to test the methodology, demonstrating that the EC of the bridge may be reduced by up to 53%.
(This article belongs to the Special Issue Sustainable Bridge Engineering)
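The methodology rests on element-by-element EC accounting for candidate mixes. A minimal sketch of that accounting step follows; the EC factors are illustrative placeholders only (real assessments take per-constituent values from environmental product declarations), and the binder quantities are hypothetical.

```python
EC_FACTORS = {  # kg CO2e per kg of constituent (hypothetical placeholder values)
    "portland_cement": 0.90,
    "fly_ash": 0.01,
    "ggbfs": 0.08,
}

def binder_ec(binder_kg_per_m3: dict[str, float]) -> float:
    """Embodied carbon of the binder fraction of 1 m^3 of concrete."""
    return sum(EC_FACTORS[name] * kg for name, kg in binder_kg_per_m3.items())

pc_only = {"portland_cement": 400.0}                                    # reference mix
ternary = {"portland_cement": 200.0, "fly_ash": 100.0, "ggbfs": 100.0}  # ternary LCC blend
print(f"PC-only binder: {binder_ec(pc_only):.0f} kg CO2e/m^3")
print(f"Ternary blend:  {binder_ec(ternary):.0f} kg CO2e/m^3")
```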

33 pages, 3983 KB  
Article
Real-Time EEG Decoding of Motor Imagery via Nonlinear Dimensionality Reduction (Manifold Learning) and Shallow Classifiers
by Hezzal Kucukselbes and Ebru Sayilgan
Biosensors 2025, 15(10), 692; https://doi.org/10.3390/bios15100692 - 13 Oct 2025
Cited by 1 | Viewed by 1175
Abstract
This study introduces a real-time processing framework for decoding motor imagery EEG signals by integrating manifold learning techniques with shallow classifiers. EEG recordings were obtained from six healthy participants performing five distinct wrist and hand motor imagery tasks. To address the challenges of high dimensionality and inherent nonlinearity in EEG data, five nonlinear dimensionality reduction methods (t-SNE, ISOMAP, LLE, Spectral Embedding, and MDS) were comparatively evaluated. Each method was combined with three shallow classifiers (k-NN, Naive Bayes, and SVM) to investigate performance across binary, ternary, and five-class classification settings. Among all tested configurations, the t-SNE + k-NN pairing achieved the highest accuracies, reaching 99.7% (two-class), 99.3% (three-class), and 89.0% (five-class). ISOMAP and MDS also delivered competitive results, particularly in multi-class scenarios. The presented approach builds upon our previous work involving EEG datasets from individuals with spinal cord injury (SCI), where the same manifold techniques were examined extensively. Comparative findings between healthy and SCI groups reveal consistent advantages of t-SNE and ISOMAP in preserving class separability, despite higher overall accuracies in healthy subjects due to improved signal quality. The proposed pipeline demonstrates low-latency performance, completing signal processing and classification in approximately 150 ms per trial, thereby meeting real-time requirements for responsive BCI applications. These results highlight the potential of nonlinear dimensionality reduction to enhance real-time EEG decoding, offering a low-complexity yet high-accuracy solution applicable to both healthy users and neurologically impaired individuals in neurorehabilitation and assistive technology contexts.
(This article belongs to the Section Wearable Biosensors)
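A minimal sketch of the manifold-learning-plus-shallow-classifier pipeline follows. Note one substitution: scikit-learn's t-SNE has no out-of-sample transform, which real-time decoding requires, so the sketch uses ISOMAP (also evaluated in the study); the features and labels are synthetic stand-ins for EEG trials.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))        # 300 trials x 64 EEG features (synthetic)
y = rng.integers(0, 3, size=300)      # ternary motor-imagery labels (synthetic)

# Nonlinear dimensionality reduction followed by a shallow k-NN classifier
clf = make_pipeline(Isomap(n_components=5), KNeighborsClassifier(n_neighbors=5))
print(cross_val_score(clf, X, y, cv=5).mean())  # ~chance on random data; shown for the pipeline only
```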

27 pages, 15617 KB  
Article
Integrated Lithofacies, Diagenesis, and Fracture Control on Reservoir Quality in Ultra-Deep Tight Sandstones: A Case from the Bashijiqike Formation, Kuqa Depression
by Wendan Song, Zhaohui Xu, Huaimin Xu, Lidong Wang and Yanli Wang
Energies 2025, 18(19), 5067; https://doi.org/10.3390/en18195067 - 23 Sep 2025
Viewed by 651
Abstract
Fractured tight sandstone reservoirs pose challenges for gas development due to low matrix porosity and permeability, complex pore structures, and pervasive fractures. This study focuses on the Bashijiqike Formation in the Keshen Gas Field, Kuqa Depression, aiming to clarify the geological controls on reservoir quality. Lithofacies, diagenetic facies, and fracture facies were systematically classified by core analyses, thin sections, scanning electron microscopy (SEM), cathodoluminescence (CL), X-ray diffraction (XRD), grain size analyses, mercury intrusion capillary pressure (MICP), well logs and resistivity imaging logging (FMI). Their impacts on porosity, permeability and gas productivity were quantitatively assessed. A ternary reservoir quality assessment model was established by coupling these three factors. Results show that five lithofacies, four diagenetic facies, and four fracture facies jointly control reservoir performance. The high-energy gravelly sandstone facies exhibit an average porosity of 6.0% and average permeability of 0.066 mD, while the fine-grained sandstone shows poor properties due to compaction and clay content. Unstable component dissolution facies enhance secondary porosity to 6.0% and permeability to 0.093 mD. Reticulate and conjugate fracture patterns correspond to gas production rates two to five times higher than those with single fractures. These findings support targeted reservoir classification and improved development strategies for ultra-deep tight gas reservoirs.

12 pages, 965 KB  
Article
SeismicNoiseAnalyzer: A Deep-Learning Tool for Automatic Quality Control of Seismic Stations
by Alessandro Pignatelli, Paolo Casale, Veronica Vignoli and Flavia Tavani
Computers 2025, 14(9), 392; https://doi.org/10.3390/computers14090392 - 16 Sep 2025
Viewed by 784
Abstract
SeismicNoiseAnalyzer 1.0 is a software tool designed to automatically assess the quality of seismic stations through the classification of spectral diagrams. By leveraging convolutional neural networks trained on expert-labeled data, the software emulates human visual inspection of probability density function (PDF) plots. It supports both individual image analysis and batch processing from compressed archives, providing detailed reports that summarize station health. Two classification networks are available: a binary model that distinguishes between working and malfunctioning stations and a ternary model that introduces an intermediate “doubtful” category to capture ambiguous cases. The system demonstrates high agreement with expert evaluations and enables efficient instrumentation control across large seismic networks. Its intuitive graphical interface and automated workflow make it a valuable tool for routine monitoring and data validation.

24 pages, 3057 KB  
Article
Spatiotemporal Extraction of Aquaculture Ponds Under Complex Surface Conditions Based on Deep Learning and Remote Sensing Indices
by Weirong Qin, Mohd Hasmadi Ismail, Mohammad Firuz Ramli, Junlin Deng and Ning Wu
Sustainability 2025, 17(16), 7201; https://doi.org/10.3390/su17167201 - 8 Aug 2025
Viewed by 1073
Abstract
The extraction of water surfaces and aquaculture targets from remote sensing imagery has been challenging across different regions and conditions, especially since model parameters must be optimized manually. This study addresses the requirement for large-scale monitoring of global aquaculture using the Google Earth Engine (GEE) platform to extract high-accuracy, long-term data series of water surfaces such as aquaculture ponds. A Composite Water Index (CWI) method is proposed to distinguish water surfaces from non-water surfaces in remote sensing data recorded by the Sentinel-2 satellite, thereby minimizing manual intervention in aquaculture management. The CWI approach builds on three remote sensing index algorithms: the Water Index (WI), the Modified Normalized Difference Water Index (MNDWI), and the Automated Water Extraction Index with Shadow (AWEIsh). The values of the three indices are obtained from 1000 grid points extracted from an overlaid map with three layers. A ternary regression is then introduced to generate the coefficients of the CWI. Experimental results show that the classification accuracy of the WI is higher than that of the MNDWI and the AWEIsh, leading to a larger coefficient weight in the ternary regression. When different numbers of evenly distributed points are used to calculate the indices, the highest R² value is achieved with the coefficients corresponding to 600 points, and an accuracy of 94% is achieved by the CWI method for water surface classification. The CWI algorithm was also used to monitor changes in aquaculture ponds in Johor, Malaysia; the total aquaculture area expanded by 23.27 km² from 2016 to 2023. This study provides a potential means for long-term observation and tracking of changes in aquaculture ponds and water surfaces, as well as for water management and protection. Specifically, the proposed CWI model achieved a mean intersection over union (mIoU) of 0.84 and an overall pixel accuracy (oPA) of 0.94, significantly outperforming the WI (mIoU = 0.79), MNDWI (mIoU = 0.75), and AWEIsh (mIoU = 0.77), with p-values < 0.01. These improvements demonstrate the robustness and statistical superiority of the proposed approach in aquaculture pond extraction.
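A minimal sketch of the ternary regression step, fitting coefficients that combine the three indices into one composite score, assuming random values stand in for the sampled grid points and labels of 1 = water, 0 = non-water.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
wi, mndwi, aweish = rng.normal(size=(3, 600))   # per-point index values (synthetic)
# Synthetic ground truth in which WI carries the most signal, mirroring the paper's finding
water = (wi + 0.5 * mndwi + 0.3 * aweish + rng.normal(scale=0.5, size=600) > 0).astype(float)

X = np.column_stack([wi, mndwi, aweish])
reg = LinearRegression().fit(X, water)
print("CWI coefficients (WI, MNDWI, AWEIsh):", reg.coef_, "intercept:", reg.intercept_)
# CWI for a new pixel is reg.intercept_ + reg.coef_ @ [wi, mndwi, aweish],
# thresholded to separate water from non-water.
```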

23 pages, 3055 KB  
Article
RDPNet: A Multi-Scale Residual Dilated Pyramid Network with Entropy-Based Feature Fusion for Epileptic EEG Classification
by Tongle Xie, Wei Zhao, Yanyouyou Liu and Shixiao Xiao
Entropy 2025, 27(8), 830; https://doi.org/10.3390/e27080830 - 5 Aug 2025
Cited by 2 | Viewed by 1199
Abstract
Epilepsy is a prevalent neurological disorder affecting approximately 50 million individuals worldwide. Electroencephalogram (EEG) signals play a vital role in the diagnosis and analysis of epileptic seizures. However, traditional machine learning techniques often rely on handcrafted features, limiting their robustness and generalizability across diverse EEG acquisition settings, seizure types, and patients. To address these limitations, we propose RDPNet, a multi-scale residual dilated pyramid network with entropy-guided feature fusion for automated epileptic EEG classification. RDPNet combines residual convolution modules to extract local features and a dilated convolutional pyramid to capture long-range temporal dependencies. A dual-pathway fusion strategy integrates pooled and entropy-based features from both shallow and deep branches, enabling robust representation of spatial saliency and statistical complexity. We evaluate RDPNet on two benchmark datasets: the University of Bonn and TUSZ. On the Bonn dataset, RDPNet achieves 99.56–100% accuracy in binary classification, 99.29–99.79% in ternary tasks, and 95.10% in five-class classification. On the clinically realistic TUSZ dataset, it reaches a weighted F1-score of 95.72% across seven seizure types. Compared with several baselines, RDPNet consistently outperforms existing approaches, demonstrating superior robustness, generalizability, and clinical potential for epileptic EEG analysis.
(This article belongs to the Special Issue Complexity, Entropy and the Physics of Information, 2nd Edition)
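For readers unfamiliar with entropy-based features of the kind RDPNet fuses with pooled features, here is a minimal sketch, assuming a Shannon entropy over a histogram of activation values; the paper's actual fusion modules are not reproduced.

```python
import numpy as np

def shannon_entropy(x: np.ndarray, bins: int = 32) -> float:
    """Shannon entropy (bits) of the empirical distribution of activation values."""
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                        # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
feature_map = rng.normal(size=4096)     # stand-in for one branch's feature map
print(f"entropy feature = {shannon_entropy(feature_map):.2f} bits")
```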

18 pages, 2000 KB  
Article
Enhancing the Accuracy of Image Classification for Degenerative Brain Diseases with CNN Ensemble Models Using Mel-Spectrograms
by Sang-Ha Sung, Michael Pokojovy, Do-Young Kang, Woo-Yong Bae, Yeon-Jae Hong and Sangjin Kim
Mathematics 2025, 13(13), 2100; https://doi.org/10.3390/math13132100 - 26 Jun 2025
Cited by 1 | Viewed by 842
Abstract
Alzheimer’s disease (AD) and Parkinson’s disease (PD) are prevalent neurodegenerative disorders among the elderly, leading to cognitive decline and motor impairments. As the population ages, the prevalence of these neurodegenerative disorders is increasing, providing motivation for active research in this area. However, most studies are conducted using brain imaging, with relatively few studies utilizing voice data. Using voice data offers advantages in accessibility compared to brain imaging analysis. This study introduces a novel ensemble-based classification model that utilizes Mel spectrograms and Convolutional Neural Networks (CNNs) to distinguish between healthy individuals (NM), AD, and PD patients. A total of 700 voice samples were collected under standardized conditions, ensuring data reliability and diversity. The proposed ternary classification algorithm integrates the predictions of binary CNN classifiers through a majority voting ensemble strategy. ResNet, DenseNet, and EfficientNet architectures were employed for model development. The experimental results show that the ensemble model based on ResNet achieves a weighted F1 score of 91.31%, demonstrating superior performance compared to existing approaches. To the best of our knowledge, this is the first large-scale study to perform three-class classification of neurodegenerative diseases using voice data.
(This article belongs to the Special Issue Statistics and Data Science)
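A minimal sketch of the described integration scheme, one-vs-one binary classifiers combined by majority voting, with logistic regressions standing in for the binary CNNs and synthetic features standing in for Mel-spectrogram inputs.

```python
import numpy as np
from collections import Counter
from itertools import combinations
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 20)) + 0.5 * np.repeat(np.arange(3), 100)[:, None]
y = np.repeat(np.array(["NM", "AD", "PD"]), 100)   # synthetic three-class labels

# One binary classifier per class pair: NM/AD, NM/PD, AD/PD
pair_models = {}
for a, b in combinations(["NM", "AD", "PD"], 2):
    mask = np.isin(y, [a, b])
    pair_models[(a, b)] = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])

def predict(x: np.ndarray) -> str:
    """Each binary model casts one vote; the majority label wins (ties broken arbitrarily)."""
    votes = [m.predict(x[None, :])[0] for m in pair_models.values()]
    return Counter(votes).most_common(1)[0][0]

print(predict(X[0]), predict(X[150]), predict(X[250]))  # expect NM AD PD
```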

27 pages, 8770 KB  
Article
Evaluation of Rural Visual Landscape Quality Based on Multi-Source Affective Computing
by Xinyu Zhao, Lin Lin, Xiao Guo, Zhisheng Wang and Ruixuan Li
Appl. Sci. 2025, 15(9), 4905; https://doi.org/10.3390/app15094905 - 28 Apr 2025
Cited by 2 | Viewed by 1318
Abstract
Assessing the visual quality of rural landscapes is pivotal for quantifying ecological services and preserving cultural heritage; however, conventional ecological indicators neglect emotional and cognitive dimensions. To address this gap, the present study proposes a novel visual quality assessment method for rural landscapes that integrates multimodal sentiment classification models to strengthen sustainability metrics. Four landscape types were selected from three representative villages in Dalian City, China, and the physiological signals (EEG, EOG) and subjective evaluations (Beauty Assessment and SAM Scales) of students and teachers were recorded. Binary, ternary, and five-category emotion classification models were then developed. Results indicate that the binary and ternary models achieve superior accuracy in emotional valence and arousal, whereas the five-category model performs least effectively. Furthermore, an ensemble learning approach outperforms individual classifiers in both binary and ternary tasks, yielding a 16.54% increase in mean accuracy. Integrating subjective and objective data further enhances ternary classification accuracy by 7.7% compared to existing studies, confirming the value of multi-source features. These findings demonstrate that a multi-source sentiment computing framework can serve as a robust quantitative tool for evaluating emotional quality in rural landscapes and promoting their sustainable development.

6 pages, 1923 KB  
Proceeding Paper
Real-Time Hardness Prediction Using COTS Tactile Sensors in Robotic Grippers
by Yash Sharma, Sina Akhbari, Claire Guo, Pedro Ferreria and Laura Justham
Eng. Proc. 2024, 82(1), 111; https://doi.org/10.3390/ecsa-11-22208 - 23 Apr 2025
Cited by 1 | Viewed by 686
Abstract
In material classification tasks, custom sensors have traditionally been employed to achieve high accuracy scores. While numerous studies have reported high accuracy rates, there has been limited discussion of real-time prediction or real-world application in most research papers. The real-time prediction of object material properties is crucial for enhancing the tactile sensing capabilities of robots in industrial settings. This study proposes the use of Commercial Off-The-Shelf (COTS) tactile sensors for hardness classification, utilizing small datasets for model training and real-time prediction. Testing involves evaluating the ability of robotic grippers to accurately predict the hardness of new, unknown objects, categorizing them into two classes (soft, hard) or three classes (hard, soft, flexible). Results obtained from a multiple-algorithm approach reveal an 80% accuracy rate for binary classification, with real-time tests demonstrating two out of three correct predictions for most sensors. For ternary classification, the accuracy rate is around 70%, with two out of three correct predictions from at least one sensor. These findings highlight the capability of COTS sensors to perform real-time hardness classification effectively. They also show that COTS sensors offer the capability and flexibility, owing to their dimensional architecture, to serve many different robotics applications without the time investment of developing a use-case-specific sensor for classification tasks in robotic tactile sensing.

19 pages, 7206 KB  
Article
Optimizing Model Performance and Interpretability: Application to Biological Data Classification
by Zhenyu Huang, Xuechen Mu, Yangkun Cao, Qiufen Chen, Siyu Qiao, Bocheng Shi, Gangyi Xiao, Yan Wang and Ying Xu
Genes 2025, 16(3), 297; https://doi.org/10.3390/genes16030297 - 28 Feb 2025
Cited by 1 | Viewed by 1565
Abstract
This study introduces a novel framework that simultaneously addresses the challenges of performance accuracy and result interpretability in transcriptomic-data-based classification. Background/objectives: In biological data classification, it is challenging to achieve both high performance accuracy and interpretability at the same time. The goal is to select features, models, and a meta-voting classifier that together optimize both classification performance and interpretability. Methods: The framework consists of a four-step feature selection process: (1) the identification of metabolic pathways whose enzyme-gene expressions discriminate samples with different labels, aiding interpretability; (2) the selection of pathways whose expression variance is largely captured by the first principal component of the gene expression matrix; (3) the selection of minimal sets of genes whose collective discerning power covers 95% of the pathway-based discerning power; and (4) the introduction of adversarial samples to identify and filter genes sensitive to such samples. Additionally, adversarial samples are used to select the optimal classification model, and a meta-voting classifier is constructed from the optimized model results. Results: Applied to two cancer classification problems, the framework showed that in the binary classification, prediction performance was comparable to the full-gene model, with F1-score differences of between −5% and 5%. In the ternary classification, performance was significantly better, with F1-score differences ranging from −2% to 12%, while excellent interpretability of the selected feature genes was maintained. Conclusions: This framework effectively integrates feature selection, adversarial sample handling, and model optimization, offering a valuable tool for a wide range of biological data classification problems. Its ability to balance performance accuracy and high interpretability makes it highly applicable in the field of computational biology.
(This article belongs to the Section Bioinformatics)
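Step (2) of the feature-selection process is concrete enough to sketch: keep only pathways whose gene expression matrix has most of its variance on the first principal component. The matrices below are synthetic and the 0.5 cutoff is illustrative, not the paper's value.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
pathways = {f"pathway_{i}": rng.normal(size=(100, 15)) for i in range(4)}
trend = rng.normal(size=(100, 1))   # a shared expression trend across one pathway's genes
pathways["pathway_coherent"] = trend @ rng.normal(size=(1, 15)) + 0.2 * rng.normal(size=(100, 15))

kept = [name for name, expr in pathways.items()
        if PCA(n_components=1).fit(expr).explained_variance_ratio_[0] >= 0.5]
print("pathways kept:", kept)       # expect only pathway_coherent to pass
```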

22 pages, 9743 KB  
Article
Machine Learning-Based Tectonic Discrimination Using Basalt Element Geochemical Data: Insights into the Carboniferous–Permian Tectonic Regime of Western Tianshan Orogen
by Hengxu Li, Mengqi Gao, Xiaohui Ji, Zhaochong Zhang, Zhiguo Cheng and M. Santosh
Minerals 2025, 15(2), 122; https://doi.org/10.3390/min15020122 - 26 Jan 2025
Cited by 2 | Viewed by 2306
Abstract
Identifying the tectonic setting of rocks is essential for gaining insights into the geological contexts in which the rocks formed, aiding tectonic plate reconstruction and enhancing our understanding of the Earth’s history. Machine learning algorithms help identify complex patterns and relationships in large datasets that may be overlooked by binary or ternary tectonomagmatic discrimination diagrams based on basalt compositions. In this study, three machine learning algorithms, i.e., Support Vector Machine (SVM), Random Forest (RF), and eXtreme Gradient Boosting (XGBoost), were employed to classify basalts from seven diverse settings: intraplate basalts, island arc basalts, ocean island basalts, mid-ocean ridge basalts, back-arc basin basalts, oceanic flood basalts, and continental flood basalts. Specifically, for altered and fresh basalt samples, we utilized 22 immobile elements and 35 major and trace elements, respectively, to construct discrimination models. The results indicate that XGBoost performs best in discriminating basalts among the seven tectonic settings, achieving accuracies of 85% and 89% for the altered and fresh basalt samples, respectively. A key innovation of the newly developed tectonic discrimination model is the establishment of tailored models for altered and fresh basalts. Moreover, by omitting isotopic features during model construction, the new models offer broader applicability for predicting a wider range of basalt samples in practical scenarios. The classification models were applied to investigate the Carboniferous to Permian evolution of the Western Tianshan Orogen (WTO), revealing that subduction of the Tianshan Ocean ceased at the end of the Carboniferous and the WTO evolved into post-collisional orogenesis during the Permian.
(This article belongs to the Section Mineral Geochemistry and Geochronology)
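A minimal sketch of the seven-way discrimination setup described above, with random values standing in for the 22 immobile-element features of altered basalts and default XGBoost hyperparameters rather than the paper's tuned model; on synthetic labels the score is meaningless and merely shows the pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

SETTINGS = ["IPB", "IAB", "OIB", "MORB", "BABB", "OFB", "CFB"]  # seven tectonic settings
rng = np.random.default_rng(0)
X = rng.normal(size=(700, 22))                 # immobile-element features (synthetic)
y = rng.integers(0, len(SETTINGS), size=700)   # integer-encoded setting labels (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = XGBClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", (model.predict(X_te) == y_te).mean())
```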

24 pages, 7437 KB  
Article
Investigation of the Ternary System, Water/Hydrochloric Acid/Polyamide 66, for the Production of Polymeric Membranes by Phase Inversion
by Jocelei Duarte, Camila Suliani Raota, Camila Baldasso, Venina dos Santos and Mara Zeni
Membranes 2025, 15(1), 7; https://doi.org/10.3390/membranes15010007 - 1 Jan 2025
Cited by 2 | Viewed by 3586
Abstract
The starting point for preparing polymeric membranes by phase inversion is a thermodynamically stable solution. Ternary diagrams for the polymer, solvent, and non-solvent can predict this stability by identifying the phase separation and describing the thermodynamic behavior of the membrane formation process. Given the lack of data for the ternary system water (H₂O)/hydrochloric acid (HCℓ)/polyamide 66 (PA66), this work employed the Flory–Huggins theory to construct the ternary diagrams (H₂O/HCℓ/PA66 and H₂O/formic acid (FA)/PA66), comparing experimental data with theoretical predictions. The pure polymer and the membranes produced by phase inversion were characterized to provide the information required to create the ternary diagrams. PA66/FA and PA66/HCℓ solutions were also evaluated regarding their classification as true solutions, and the universal quasi-chemical functional group activity coefficient (UNIFAC) method was used to determine the non-solvent/solvent interaction parameter (g₁₂). Swelling measurements determined the polymer/non-solvent interaction parameter (χ₁₃) for H₂O/PA66 and the solvent/polymer interaction parameter (χ₂₃) for PA66/FA and PA66/HCℓ. The theoretical cloud point curve was calculated based on “Boom’s LCP Correlation” and compared to the experimental cloud point curve. The ternary Gibbs free energy of mixing and χ₂₃ indicated FA as the best solvent for PA66. However, the lower concentration of HCℓ (37–38%), its volatility, and the volume fraction of dissolved PA66 (ϕ₃) indicated that HCℓ is also adequate for PA66 solubilization, based on the similar membrane morphology observed compared to the PA66/FA membrane.
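For context, the Flory–Huggins free energy of mixing for a ternary non-solvent (1)/solvent (2)/polymer (3) system is commonly written as below, with the paper's g₁₂, χ₁₃, and χ₂₃ appearing as the interaction parameters; this is the textbook form, and the exact expression used in the paper (e.g., the concentration dependence of g₁₂) may differ.

```latex
\frac{\Delta G_M}{RT}
  = n_1 \ln\phi_1 + n_2 \ln\phi_2 + n_3 \ln\phi_3
  + g_{12}\, n_1 \phi_2 + \chi_{13}\, n_1 \phi_3 + \chi_{23}\, n_2 \phi_3
```

Here the n_i are mole numbers and the ϕ_i volume fractions; the binodal (cloud point) curve follows from equating the chemical potentials derived from this expression in the two coexisting phases.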

18 pages, 4846 KB  
Article
Epilepsy EEG Seizure Prediction Based on the Combination of Graph Convolutional Neural Network Combined with Long- and Short-Term Memory Cell Network
by Zhejun Kuang, Simin Liu, Jian Zhao, Liu Wang and Yunkai Li
Appl. Sci. 2024, 14(24), 11569; https://doi.org/10.3390/app142411569 - 11 Dec 2024
Cited by 7 | Viewed by 5329
Abstract
With increasing deep learning research in the EEG field, fully extracting the characteristics of EEG signals becomes more and more important. Traditional EEG classification and prediction methods consider neither the topological structure among the electrodes of the acquisition device nor the non-Euclidean structure of the data, and therefore cannot accurately reflect the interactions between signals. Graph neural networks can effectively extract features from non-Euclidean spatial data. This paper therefore proposes a feature selection method for epilepsy EEG classification based on graph convolutional networks (GCNs) and long short-term memory (LSTM) cells, which enriches the input of the LSTM while making full use of the information hidden in the EEG signals. In neural-network-based automatic detection of epileptic seizures, the strong non-stationarity and heavy background noise of the EEG signal have always made its analysis and processing challenging. Experiments were therefore conducted on the preprocessed Boston Children’s Hospital (CHB-MIT) epilepsy EEG dataset, which was input into the GCN-LSTM model for deep feature extraction. The GCN, built from graph convolution layers, learns spatial features; the LSTM then extracts sequence information; and the final prediction is performed by fully connected and softmax layers. The introduced method is experimentally shown to improve the accuracy of epileptic EEG seizure detection. Experimental results show that the average accuracy of binary classification on the CHB-MIT dataset is 99.39%, and the average accuracy of ternary classification is 98.69%.
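A minimal sketch of the GCN-LSTM idea, learning spatial features per time step with a graph convolution over a fixed normalized electrode adjacency and then extracting sequence information with an LSTM; the dimensions, identity adjacency, and tiny random batch are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GCNLSTM(nn.Module):
    def __init__(self, n_nodes: int, in_dim: int, gcn_dim: int, lstm_dim: int, n_classes: int):
        super().__init__()
        self.w = nn.Linear(in_dim, gcn_dim)         # graph-convolution weight matrix
        self.lstm = nn.LSTM(n_nodes * gcn_dim, lstm_dim, batch_first=True)
        self.head = nn.Linear(lstm_dim, n_classes)  # fully connected layer (softmax in the loss)

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, nodes, features); a_hat: normalized adjacency (nodes, nodes)
        h = torch.relu(a_hat @ self.w(x))           # spatial features at each time step
        h = h.flatten(start_dim=2)                  # (batch, time, nodes * gcn_dim)
        out, _ = self.lstm(h)                       # sequence information over time
        return self.head(out[:, -1])                # class logits from the final step

n_nodes = 18                                        # illustrative electrode count
model = GCNLSTM(n_nodes, in_dim=8, gcn_dim=16, lstm_dim=32, n_classes=2)
x = torch.randn(4, 100, n_nodes, 8)                 # (batch, time, nodes, features)
a_hat = torch.eye(n_nodes)                          # stand-in normalized adjacency
print(model(x, a_hat).shape)                        # torch.Size([4, 2])
```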
