Search Results (7,090)

Search Parameters:
Keywords = three vectors

34 pages, 5251 KB  
Article
AI-Based Sentiment Analysis of E-Commerce Customer Feedback: A Bilingual Parallel Study on the Fast Food Industry in Turkish and English
by Esra Kahya Özyirmidokuz, Bengisu Molu Elmas and Eduard Alexandru Stoica
J. Theor. Appl. Electron. Commer. Res. 2025, 20(4), 294; https://doi.org/10.3390/jtaer20040294 - 1 Nov 2025
Abstract
Across digital platforms, large-scale assessment of customer sentiment has become integral to brand management, service recovery, and data-driven marketing in e-commerce. Still, most studies center on single-language settings, with bilingual and culturally diverse environments receiving comparatively limited attention. In this study, a bilingual sentiment analysis of consumer feedback on X (formerly Twitter) was conducted for three global quick-service restaurant (QSR) brands—McDonald’s, Burger King, and KFC—using 145,550 English tweets and 15,537 Turkish tweets. After pre-processing and leakage-safe augmentation for low-resource Turkish data, both traditional machine learning models (Naïve Bayes, Support Vector Machines, Logistic Regression, Random Forest) and a transformer-based deep learning model, BERT (Bidirectional Encoder Representations from Transformers), were evaluated. BERT achieved the highest performance (macro-F1 ≈ 0.88 in Turkish; ≈0.39 in temporally split English), while Random Forest emerged as the strongest ML baseline. An apparent discrepancy was observed between pseudo-label agreement (Accuracy > 0.95) and human-label accuracy (EN: 0.75; TR: 0.49), indicating the limitations of lexicon-derived labels and the necessity of human validation. Beyond methodological benchmarking, linguistic contrasts were identified: English tweets were more polarized (positive/negative), whereas Turkish tweets were overwhelmingly neutral. These differences reflect cultural patterns of online expression and suggest direct managerial implications. The findings indicate that bilingual sentiment analysis yields brand-level insights that can inform strategic and operational decisions. Full article
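The macro-F1 figures quoted above average per-class F1 scores without weighting by class frequency, which matters for imbalanced sentiment data. A minimal Python sketch of the metric, using hypothetical labels rather than the study's tweet data:

```python
import numpy as np

def macro_f1(y_true, y_pred, classes):
    """Unweighted mean of per-class F1 scores (macro-F1)."""
    f1s = []
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return float(np.mean(f1s))

# Toy 3-class example: 0 = negative, 1 = neutral, 2 = positive.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
score = macro_f1(y_true, y_pred, classes=[0, 1, 2])
```

Because each class contributes equally, a model that ignores a rare class is penalized even when overall accuracy stays high, which is why the paper reports macro-F1 alongside accuracy.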

10 pages, 547 KB  
Article
β-Actin as an Endogenous Control Gene in Real-Time PCR for Detection of West Nile and Usutu Virus in Mosquitoes
by Jeanne Lai, Carlotta Tessarolo, Elisabetta Ercole, Marina Gallo, Monica Lo Faro, Claudia Palmitessa, Valerio Carta, Alessio Ferrari, Alessandra Favole, Mattia Begovoeva, Francesco Ingravalle, Simone Peletto, Nicolò Francesco Fiscella, Roberta Irelli, Eugenia Ciarrocchi, Walter Martelli, Andrea Mosca, Giulia Cagnotti, Cristina Casalone and Cristiano Corona
Microorganisms 2025, 13(11), 2518; https://doi.org/10.3390/microorganisms13112518 - 31 Oct 2025
Abstract
Mosquito-borne viruses like West Nile virus (WNV) and Usutu virus (USUV) present growing public health concerns, especially with climate change and expanding vector ranges. This study describes the development and validation of a duplex Real-Time RT-PCR assay targeting β-actin (ACTB) mRNA as an endogenous control and a conserved 92 bp region shared by the WNV and USUV genomes. Degenerate primers for ACTB ensure RNA extraction quality and PCR performance while enabling simultaneous detection of both viruses. A total of 1002 mosquito pools collected in Piedmont, Italy, during the 2024 vector season under the National Surveillance Plan for Arboviruses (PNA) were tested. The assay showed 100% accuracy—ACTB mRNA was detected in all pools, and six pools tested positive for WNV or USUV (three each). Diagnostic specificity was confirmed on 40 horse and bovine serum samples. Sanger sequencing confirmed ACTB identity across multiple mosquito species. The assay also demonstrated reproducibility across different operators and thermocyclers. The limit of detection (LOD) evaluation showed that the assay detects viral RNA at very low concentrations, confirming its high analytical sensitivity. The duplex RT-PCR developed here is a reliable, sensitive, and specific tool for arbovirus surveillance, combining pathogen detection with internal quality control of RNA extraction and amplification, thus improving early warning and rapid response to mosquito-borne disease threats. Full article
(This article belongs to the Special Issue Interactions between Parasites/Pathogens and Vectors)
27 pages, 24393 KB  
Article
FireRisk-Multi: A Dynamic Multimodal Fusion Framework for High-Precision Wildfire Risk Assessment
by Ke Yuan, Zhiruo Zhu, Yutong Pang, Jing Pang, Chunhui Hou and Qian Tang
ISPRS Int. J. Geo-Inf. 2025, 14(11), 426; https://doi.org/10.3390/ijgi14110426 - 31 Oct 2025
Abstract
Wildfire risk assessment requires integrating heterogeneous geospatial data to capture complex environmental dynamics. This study develops a hierarchical multimodal fusion framework combining high-resolution aerial imagery, historical fire data, topography, meteorology, and vegetation indices within Google Earth Engine. We introduce three progressive fusion levels: a single-modality baseline (NAIP-WHP), fixed-weight fusion (FIXED), and a novel geographically adaptive dynamic-weight approach (FUSED) that adjusts feature contributions based on regional characteristics like human activity intensity or aridity. Machine learning benchmarking across 49 U.S. regions reveals that Support Vector Machines (SVM) applied to the FUSED dataset achieve optimal performance, with an AUC-ROC of 92.1%, accuracy of 83.3%, and inference speed of 1.238 milliseconds per sample. This significantly outperforms the fixed-weight fusion approach, which achieved an AUC-ROC of 78.2%, and the single-modality baseline, which achieved 73.8%, representing relative improvements of 17.8% and 24.8%, respectively. The 10 m resolution risk heatmaps demonstrate operational viability, achieving an 86.27% hit rate in Carlsbad Caverns, NM. SHAP-based interpretability analysis reveals terrain dominance and context-dependent vegetation effects, aligning with wildfire ecology principles. Full article
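The geographically adaptive fusion described above can be pictured as region-dependent modality weights. The sketch below is a loose illustration with invented weights and an invented aridity covariate, not the paper's actual FUSED scheme:

```python
import numpy as np

# Hypothetical per-region modality risk scores (rows: regions;
# columns: imagery, topography, meteorology, vegetation).
scores = np.array([[0.70, 0.55, 0.40, 0.60],
                   [0.30, 0.65, 0.80, 0.20]])
aridity = np.array([0.2, 0.9])  # 0 = humid, 1 = arid (invented covariate)

def dynamic_weights(aridity_value):
    """Shift weight toward meteorology/vegetation as aridity rises."""
    base = np.array([0.4, 0.3, 0.2, 0.1])            # fixed-weight baseline
    shift = aridity_value * np.array([-0.15, -0.05, 0.15, 0.05])
    w = base + shift
    return w / w.sum()                               # renormalize to 1

# Fused risk score per region: adaptive weights dotted with scores.
fused = np.array([dynamic_weights(a) @ s for a, s in zip(aridity, scores)])
```

The contrast with the FIXED baseline is that `base` would be used for every region; here the weight vector moves with the regional covariate before the dot product.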

25 pages, 16046 KB  
Article
UAV-Based Multimodal Monitoring of Tea Anthracnose with Temporal Standardization
by Qimeng Yu, Jingcheng Zhang, Lin Yuan, Xin Li, Fanguo Zeng, Ke Xu, Wenjiang Huang and Zhongting Shen
Agriculture 2025, 15(21), 2270; https://doi.org/10.3390/agriculture15212270 - 31 Oct 2025
Abstract
Tea Anthracnose (TA), caused by fungi of the genus Colletotrichum, is one of the major threats to global tea production. UAV remote sensing has been explored for non-destructive and high-efficiency monitoring of diseases in tea plantations. However, variations in illumination, background, and meteorological factors undermine the stability of cross-temporal data. Data processing and modeling complexity further limits model generalizability and practical application. This study introduced a cross-temporal, generalizable disease monitoring approach based on UAV multimodal data coupled with relative-difference standardization. In an experimental tea garden, we collected multispectral, thermal infrared, and RGB images and extracted four classes of features: spectral (Sp), thermal (Th), texture (Te), and color (Co). The Normalized Difference Vegetation Index (NDVI) was used to identify reference areas and standardize features, which significantly reduced the relative differences in cross-temporal features. Additionally, we developed a vegetation–soil relative temperature (VSRT) index, which exhibits higher temporal-phase consistency than the conventional normalized relative canopy temperature (NRCT). A multimodal optimal feature set was constructed through sensitivity analysis based on the four feature categories. For different modality combinations (single and fused), three machine learning algorithms, K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Multi-layer Perceptron (MLP), were selected to evaluate disease classification performance due to their low computational burden and ease of deployment. Results indicate that the “Sp + Th” combination achieved the highest accuracy (95.51%), with KNN (95.51%) outperforming SVM (94.23%) and MLP (92.95%). Moreover, under the optimal feature combination and KNN algorithm, the model achieved high generalizability (86.41%) on independent temporal data. This study demonstrates that fusing spectral and thermal features with temporal standardization, combined with the simple and effective KNN algorithm, achieves accurate and robust tea anthracnose monitoring, providing a practical solution for efficient and generalizable disease management in tea plantations. Full article
(This article belongs to the Section Crop Protection, Diseases, Pests and Weeds)
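The NDVI-based relative-difference standardization described above can be sketched as: compute NDVI, take high-NDVI pixels as the healthy reference area, and re-express each feature relative to the reference mean. All numbers below are hypothetical:

```python
import numpy as np

# Hypothetical canopy reflectance for one flight date.
nir = np.array([0.52, 0.48, 0.20, 0.55])
red = np.array([0.06, 0.07, 0.15, 0.05])
ndvi = (nir - red) / (nir + red)

# Pixels whose NDVI exceeds a threshold serve as the healthy reference;
# features are re-expressed as relative differences from that reference.
reference = ndvi > 0.75
feature = np.array([21.3, 21.8, 26.1, 21.1])   # e.g. canopy temperature, °C
ref_mean = feature[reference].mean()
standardized = (feature - ref_mean) / ref_mean
```

Because the reference is recomputed for every acquisition date, level shifts caused by illumination or weather cancel out, which is what makes the features comparable across flights.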

35 pages, 5223 KB  
Article
Physics-Based Machine Learning for Vibration Mitigation by Open Buried Trenches
by Luís Pereira, Luís Godinho, Fernando G. Branco, Paulo da Venda Oliveira, Pedro Alves Costa and Aires Colaço
Appl. Sci. 2025, 15(21), 11609; https://doi.org/10.3390/app152111609 - 30 Oct 2025
Abstract
Mitigating ground vibrations from sources like vehicles and construction operations poses significant challenges, often relying on computationally intensive numerical methods such as Finite Element Methods (FEM) or Boundary Element Methods (BEM) for analysis. This study addresses these limitations by developing and evaluating Machine Learning (ML) methodologies for the rapid and accurate prediction of Insertion Loss (IL), a critical parameter for assessing the effectiveness of open trenches as vibration barriers. A comprehensive database was systematically generated through high-fidelity numerical simulations, capturing a wide range of geometric, elastic, and physical configurations of a stratified geotechnical system. Three distinct ML strategies—Artificial Neural Networks (ANN), Support Vector Machines (SVM), and Random Forests (RF)—were initially assessed for their predictive capabilities. Subsequently, a Meta-RF stacking ensemble model was developed, integrating the predictions of these base methods. Model performance was rigorously evaluated using complementary statistical metrics (RMSE, MAE, NMAE, R), substantiated by in-depth statistical analyses (normality tests, Bootstrap confidence intervals, Wilcoxon tests) and an analysis of input parameter sensitivity. The results clearly demonstrate the high efficacy of Machine Learning (ML) in accurately predicting IL across diverse, realistic scenarios. While all models performed strongly, the RF and the Meta-RF stacking ensemble models consistently emerged as the most robust and accurate predictors. They exhibited superior generalization capabilities and effectively mitigated the inherent biases found in the ANN and SVM models. This work is intended to function as a proof-of-concept and offers promising avenues for overcoming the significant computational costs associated with traditional simulation methods, thereby enabling rapid design optimization and real-time assessment of vibration mitigation measures in geotechnical engineering. Full article
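The stacking idea behind the Meta-RF ensemble (base learners whose predictions become the inputs of a meta learner) can be illustrated with plain least-squares models standing in for the paper's ANN/SVM/RF bases; everything below is a toy, including the synthetic target:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(0, 0.05, 200)

train, test = slice(0, 150), slice(150, 200)

# Base learner 1: ordinary least squares on the raw features.
A = np.c_[X, np.ones(len(X))]
w1, *_ = np.linalg.lstsq(A[train], y[train], rcond=None)
p1 = A @ w1

# Base learner 2: least squares on squared features (captures curvature).
B = np.c_[X ** 2, np.ones(len(X))]
w2, *_ = np.linalg.lstsq(B[train], y[train], rcond=None)
p2 = B @ w2

# Meta learner: least squares over the base predictions (stacking).
M = np.c_[p1, p2, np.ones(len(X))]
wm, *_ = np.linalg.lstsq(M[train], y[train], rcond=None)
p_meta = M @ wm

rmse = lambda p: np.sqrt(np.mean((p[test] - y[test]) ** 2))
```

Each base model captures only part of the signal (linear term vs. quadratic term); the meta model learns how to combine them, which is the mechanism the abstract credits for mitigating the biases of individual learners.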

25 pages, 3502 KB  
Article
Developing a Groundwater Quality Assessment in Mexico: A GWQI-Machine Learning Model
by Hector Ivan Bedolla-Rivera and Mónica del Carmen González-Rosillo
Hydrology 2025, 12(11), 285; https://doi.org/10.3390/hydrology12110285 - 30 Oct 2025
Abstract
Groundwater represents a critical global resource, increasingly threatened by overexploitation and pollution from contaminants such as arsenic (As), fluoride (F), nitrates (NO3), and heavy metals in arid to semi-arid regions like Mexico. Traditional Water Quality Indices (WQIs), while useful, suffer from subjectivity in assigning weights, which can lead to misinterpretations. This study addresses these limitations by developing a novel, objective Groundwater Quality Index (GWQI) through the seamless integration of Machine Learning (ML) models. Utilizing a database of 775 wells from the Mexican National Water Commission (CONAGUA), Principal Component Analysis (PCA) was applied to achieve significant dimensionality reduction. We successfully reduced the required monitoring parameters from 13 to only three key indicators: total dissolved solids (TDSs), chromium (Cr), and manganese (Mn). This reduction allows for an 87% decrease in the number of indicators, maximizing efficiency and generating potential savings in monitoring resources without compromising water quality prediction accuracy. Six WQI methods and six ML models were evaluated for quality prediction. The Unified Water Quality Index (WQIu) demonstrated the best performance among the WQIs evaluated and exhibited the highest correlation (R2 = 0.85) with the traditional WQI based on WHO criteria. Furthermore, the ML Support Vector Machine with polynomial kernel (svmPoly) model achieved the maximum predictive accuracy for WQIu (R2 = 0.822). This robust GWQI-ML approach establishes an accurate, objective, and efficient tool for large-scale groundwater quality monitoring across Mexico, facilitating informed decision-making for sustainable water management and enhanced public health protection. Full article
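The PCA-based reduction from 13 parameters to three components rests on a variance-share criterion of the kind sketched below; the well matrix here is synthetic (rank-3 plus noise), standing in for the CONAGUA data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-in for the 775-well × 13-parameter matrix:
# three latent water-quality drivers mixed into 13 observed indicators.
latent = rng.normal(size=(775, 3))
mixing = rng.normal(size=(3, 13))
X = latent @ mixing + rng.normal(0, 0.05, size=(775, 13))

Xc = X - X.mean(axis=0)               # center each parameter
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)   # variance share per component

# Smallest number of components reaching 95% cumulative variance.
k = np.searchsorted(np.cumsum(explained), 0.95) + 1
scores = Xc @ Vt[:k].T                # data projected onto k components
```

When the data truly have low-rank structure, as assumed here, the cumulative variance curve saturates quickly and `k` lands at or below the latent dimension, mirroring the paper's 13-to-3 reduction.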

18 pages, 4012 KB  
Article
A Sequential Adaptive Linear Kalman Filter Based on the Geophysical Field for Robust MARG Attitude Estimation
by Taoran Zhao, Ziwei Deng, Zhijian Jiang, Menglei Wang, Junfeng Zhou, Yiyang Xu and Xinhua Lin
Appl. Sci. 2025, 15(21), 11593; https://doi.org/10.3390/app152111593 - 30 Oct 2025
Abstract
In magnetometer, accelerometer, and rate gyroscope (MARG) attitude and heading reference systems, accelerometers and magnetometers are susceptible to external acceleration and soft/hard magnetic anomalies, which reduce attitude estimation accuracy. To address this problem, a sequential adaptive Kalman filter algorithm based on the geophysical field is proposed for anti-interference MARG attitude estimation. By establishing a linear system model based on the gravitational and geomagnetic fields, the singularity and coupling present in other system models are avoided. Additionally, the sequential Sage–Husa adaptive strategy is employed to estimate the measurement noise parameters in real time from the specific force and magnetic vector, which suppresses the impact of external acceleration and soft/hard magnetic anomalies. To verify the effectiveness and advancement of the proposed algorithm, a series of anti-interference experiments were designed. Experimental results show that, compared with a geophysical-field-based Kalman filter without an adaptive strategy, the proposed algorithm reduces the maximum yaw error by over 94% and the maximum inclination error by over 21%, improving the robustness of MARG attitude estimation and outperforming three existing adaptive strategies and two existing algorithms. Full article
(This article belongs to the Special Issue Navigation and Positioning Based on Multi-Sensor Fusion Technology)
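The Sage–Husa strategy estimates the measurement noise online from the innovation sequence, with a forgetting factor that discounts old innovations. A scalar toy version (constant truth, identity dynamics, and a deliberately wrong initial R — all assumptions of this sketch, not the paper's MARG model):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
meas = 1.0 + rng.normal(0, 0.3, n)   # constant truth, true R = 0.09

x, P = 0.0, 1.0        # state estimate and its covariance
Q, R = 1e-4, 1.0       # process noise; deliberately wrong initial R
b = 0.98               # Sage–Husa forgetting factor

for k, z in enumerate(meas):
    P = P + Q                                   # predict (identity model)
    e = z - x                                   # innovation
    d = (1 - b) / (1 - b ** (k + 1))            # fading weight
    R = (1 - d) * R + d * max(e * e - P, 1e-6)  # adapt measurement noise
    K = P / (P + R)                             # Kalman gain
    x = x + K * e                               # correct state
    P = (1 - K) * P
```

The key line is the R update: the expected squared innovation is R plus the predicted covariance P, so averaging `e*e - P` over a fading window recovers R without any offline tuning, which is what lets the filter down-weight accelerometer or magnetometer readings when disturbances inflate the innovations.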

20 pages, 821 KB  
Article
Tracking Pillar 2 Adjustments Through Macroeconomic Factors: Insights from PCA and BVAR
by Bojan Baškot, Milan Lazarević, Ognjen Erić and Dalibor Tomaš
Risks 2025, 13(11), 207; https://doi.org/10.3390/risks13110207 - 29 Oct 2025
Abstract
This paper investigates the systemic macroeconomic determinants of Pillar 2 Requirements (P2R) imposed by the European Central Bank (ECB) under the Single Supervisory Mechanism (SSM). While P2R is formally calibrated at the individual bank level through the Supervisory Review and Evaluation Process (SREP), we explore the extent to which common macro-financial shocks influence supervisory capital expectations across banks. Using a panel dataset covering euro area banks between 2021 and 2025, we match bank-level P2R data with country-level macroeconomic indicators. These variables include real GDP growth, HICP inflation and index levels, government fiscal balance, euro yield curve spreads, net turnover, FDI inflows, construction and industrial production indices, the price-to-income ratio in real estate, and trade balance measures. We apply Principal Component Analysis (PCA) to extract latent macroeconomic factors from this broad set of variables, which are then introduced into a Bayesian Vector Autoregression (BVAR) model to assess their dynamic impact on P2R. Our results identify three principal components that capture general macroeconomic cycles, sector-specific real activity, and financial/external imbalances. The impulse response analysis shows that sectoral and external shocks have a more immediate and statistically significant influence on P2R adjustments than broader macroeconomic trends. These findings support conditioning supervisory decision-making on systemic macro-financial conditions and integrating anticipatory macro-prudential analysis into capital requirement frameworks. Full article

24 pages, 4064 KB  
Article
Hardness and Surface Roughness of 3D-Printed ASA Components Subjected to Acetone Vapor Treatment and Different Production Variables: A Multi-Estimation Work via Machine Learning and Deep Learning
by Çağın Bolat, Furkancan Demircan, İlker Gür, Bekir Yalçın, Ramazan Şener and Ali Ercetin
Polymers 2025, 17(21), 2881; https://doi.org/10.3390/polym17212881 - 29 Oct 2025
Abstract
This paper analyzes the combined effects of acetone vapor treatment and 3D printing process parameters (layer thickness and infill rate) on the hardness and surface roughness of acrylonitrile styrene acrylate (ASA) components by using different machine learning and deep learning strategies for the first time in the technical literature. Considering the high-performance materials and aesthetic requirements of manufacturers, post-processing operations are highly critical for 3D-printed samples. ASA is a promising alternative, especially for the structural parts utilized in outdoor conditions like car outer components, electronic part housing, extreme sports equipment, and construction materials. However, it has to sustain hardness features against outer scratching, peeling, and indentations without losing its gloss. Together with the rising competitiveness in the search for a high-performance design with a perfect outer view, the combination of additive manufacturing and machine learning methods was implemented to enhance the hardness and surface quality properties for the first time in the literature. Concordantly, in this study, four different vaporizing durations (15, 45, 90, and 120 min.), three different layer thicknesses (0.1, 0.2, and 0.4 mm), and three different infill rates (25, 50, and 100%) were determined. According to both experimental and multi-way learning approaches, the results show that the support vector regressor (SVR) combined with one-dimensional convolutional neural networks (1D-CNNs) was the best approach for predictions. Gradient boosting (GB) and recurrent neural networks (RNNs) may also be preferable for low-error forecasting. Moreover, although there was a positive relationship between the layer thickness/infill rate and Shore D hardness outcomes, the highest levels were obtained at 45 min of vaporizing. Full article
(This article belongs to the Special Issue Polymer Composites: Mechanical Characterization)

26 pages, 1854 KB  
Review
Machine Learning Techniques for Battery State of Health Prediction: A Comparative Review
by Leila Mbagaya, Kumeshan Reddy and Annelize Botes
World Electr. Veh. J. 2025, 16(11), 594; https://doi.org/10.3390/wevj16110594 - 28 Oct 2025
Abstract
Accurate estimation of the state of health (SOH) of lithium-ion batteries is essential for the safe and efficient operation of electric vehicles (EVs). Conventional approaches, including Coulomb counting, electrochemical impedance spectroscopy, and equivalent circuit models, provide useful insights but face practical limitations such as error accumulation, high equipment requirements, and limited applicability across different conditions. These challenges have encouraged the use of machine learning (ML) methods, which can model nonlinear relationships and temporal degradation patterns directly from cycling data. This paper reviews four machine learning algorithms that are widely applied in SOH estimation: support vector regression (SVR), random forest (RF), convolutional neural networks (CNNs), and long short-term memory networks (LSTMs). Their methodologies, advantages, limitations, and recent extensions are discussed with reference to the existing literature. To complement the review, MATLAB-based simulations were carried out using the NASA Prognostics Center of Excellence (PCoE) dataset. Training was performed on three cells (B0006, B0007, B0018), and testing was conducted on an unseen cell (B0005) to evaluate cross-battery generalisation. The results show that the LSTM model achieved the highest accuracy (RMSE = 0.0146, MAE = 0.0118, R2 = 0.980), followed by CNN and RF, both of which provided acceptable accuracy with errors below 2% SOH. SVR performed less effectively (RMSE = 0.0457, MAPE = 4.80%), reflecting its difficulty in capturing sequential dependencies. These outcomes are consistent with findings in the literature, indicating that deep learning models are better suited for modelling long-term battery degradation, while ensemble approaches such as RF remain competitive when supported by carefully engineered features. This review also identifies ongoing and future research directions, including the use of optimisation algorithms for hyperparameter tuning, transfer learning for adaptation across battery chemistries, and explainable AI to improve interpretability. Overall, LSTM and hybrid models that combine complementary methods (e.g., CNN-LSTM) show strong potential for deployment in battery management systems, where reliable SOH prediction is important for safety, cost reduction, and extending battery lifetime. Full article
(This article belongs to the Section Storage Systems)
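The RMSE, MAE, and R² figures reported above follow the standard definitions; a short sketch with a hypothetical SOH trajectory (not the NASA PCoE data):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Root-mean-square error, mean absolute error, coefficient of determination."""
    err = y_pred - y_true
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    return rmse, mae, r2

# Hypothetical SOH trajectory (fractions) and a model's predictions.
soh_true = np.array([1.00, 0.97, 0.94, 0.90, 0.86, 0.81])
soh_pred = np.array([0.99, 0.97, 0.93, 0.91, 0.85, 0.82])

rmse, mae, r2 = regression_metrics(soh_true, soh_pred)
```

RMSE is never smaller than MAE and punishes occasional large misses harder, which is why papers in this area typically quote both.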

34 pages, 8515 KB  
Article
Hybrid Approach Using Dynamic Mode Decomposition and Wavelet Scattering Transform for EEG-Based Seizure Classification
by Sreevidya C, Neethu Mohan, Sachin Kumar S and Aravind Harikumar
Informatics 2025, 12(4), 117; https://doi.org/10.3390/informatics12040117 - 28 Oct 2025
Abstract
Epilepsy is a brain disorder that affects individuals worldwide; hence, preemptive diagnosis is required. Accurate classification of seizures is critical to optimizing the treatment of epilepsy. Patients with epilepsy are unable to lead normal lives due to the unpredictable nature of seizures; thus, developing new methods to help these patients can significantly improve their quality of life and yield substantial financial savings for the healthcare industry. This paper presents a hybrid method integrating dynamic mode decomposition (DMD) and the wavelet scattering transform (WST) for EEG-based seizure analysis. DMD breaks EEG signals down into modes that capture the dynamical structures present in the EEG. WST is then applied, as it is invariant to time-warping and computes robust hierarchical features at different timescales. The DMD-WST combination provides an in-depth multi-scale analysis of the temporal structures present within the EEG data, improving the representation quality for feature extraction by conveying dynamic modes and multi-scale frequency information for better classification performance. The proposed hybrid approach is validated on three datasets, namely the CHB-MIT PhysioNet dataset, the Bern Barcelona dataset, and the Khas dataset, and accurately distinguishes seizure from non-seizure states. Classification was performed using different machine learning and deep learning methods, including support vector machines, random forests, k-nearest neighbours, boosting, and bagging. These models were compared in terms of accuracy, precision, sensitivity, Cohen’s kappa, and the Matthews correlation coefficient. The DMD-WST approach achieved a maximum accuracy of 99% and an F1 score of 0.99 on the CHB-MIT dataset, and 100% accuracy with an F1 score of 1.00 on both the Bern Barcelona and Khas datasets, outperforming existing methods. Full article
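Exact DMD extracts a linear evolution operator from snapshot pairs via a truncated SVD; its eigenvalues encode the frequencies and damping of the underlying modes. The sketch below applies it to a synthetic two-rhythm signal with a delay embedding (an extra step this toy needs because it uses a single channel, not something the abstract claims) and recovers the 3 Hz and 8 Hz rhythms:

```python
import numpy as np

t = np.arange(0, 2, 0.01)                   # 2 s sampled at 100 Hz
# Synthetic single-channel signal with two damped rhythms, a stand-in
# for one EEG trace.
x = np.exp(-0.5 * t) * np.cos(2 * np.pi * 8 * t) \
    + 0.7 * np.exp(-0.2 * t) * np.cos(2 * np.pi * 3 * t)

d = 8                                       # Hankel (delay) embedding depth
H = np.array([x[i:len(x) - d + i] for i in range(d)])
X, Y = H[:, :-1], H[:, 1:]                  # snapshot pairs x_k -> x_{k+1}

U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 4                                       # two oscillations = 4 eigenvalues
A_tilde = (U[:, :r].T @ Y @ Vt[:r].T) / s[:r]   # reduced linear operator
eigvals = np.linalg.eigvals(A_tilde)
freqs_hz = np.abs(np.angle(eigvals)) / (2 * np.pi * 0.01)
```

Each eigenvalue λ ≈ exp((σ + iω)Δt): its angle gives the mode frequency and its magnitude below one confirms the damping, which is the "dynamical structure" that the paper then feeds into the WST stage.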

24 pages, 3673 KB  
Article
Massively Parallel Lagrangian Relaxation Algorithm for Solving Large-Scale Spatial Optimization Problems Using GPGPU
by Ting L. Lei, Rongrong Wang and Zhen Lei
ISPRS Int. J. Geo-Inf. 2025, 14(11), 419; https://doi.org/10.3390/ijgi14110419 - 26 Oct 2025
Abstract
Lagrangian Relaxation (LR) is an effective method for solving spatial optimization problems in geospatial analysis and GIS. Among other applications, it has been used to solve the classic p-median problem, which has served as a unified location model in GIS since the 1990s. Despite its efficiency, the LR algorithm has seen limited use in practice and is not as widely adopted as off-the-shelf solvers such as OPL/CPLEX or GLPK. This is primarily because of the high cost of development, which includes (i) developing a full gradient descent algorithm for each optimization model, with various tricks and modifications to improve speed; (ii) the high computational cost for large problem instances; (iii) the need to test and choose among different relaxation schemes; and (iv) the need to derive the gradients and implement them in a programming language. In this study, we aim to address the first three issues by utilizing the computational power of GPGPU and the existing facilities of modern deep learning (DL) frameworks such as PyTorch. Based on an analysis of the commonalities and differences between DL and general optimization, we adapt DL libraries to solving LR problems. As a result, we can choose from the many gradient descent strategies (known as "optimizers") in DL libraries rather than reinventing them from scratch. Experiments show that implementing LR in DL libraries is not only feasible but also convenient: gradient vectors are automatically tracked and computed, and the computational power of GPGPU is automatically used to parallelize the optimization algorithm (a long-standing difficulty in operations research). Experiments with the classic p-median problem show that the GPU-based LR algorithm can solve much larger problem instances (more than 15,000 nodes) optimally or nearly optimally, allowing for more fine-grained analysis in GIS. Comparisons with the OPL solver and the CPU version of the algorithm show that the GPU version achieves speedups of 104× and 12.5×, respectively, with GPU utilization reaching 90% on an RTX 4090. We conclude with a summary of the findings and remarks on future work.
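The relaxation scheme the abstract refers to can be sketched on a toy instance. The following is a minimal, CPU-only illustration (not the paper's GPU/PyTorch implementation), assuming the standard scheme of relaxing the customer-assignment constraints with multipliers and maximizing the dual bound by subgradient steps:

```python
# Minimal Lagrangian-relaxation sketch for the p-median problem.
# The assignment constraints (each customer served by exactly one
# facility) are relaxed with multipliers lam[i]; the resulting dual
# is maximized by plain subgradient ascent. Toy CPU code, not the
# paper's GPU/PyTorch implementation.

def solve_pmedian_lr(d, p, iters=50, t0=1.0):
    n = len(d)                     # customers; candidate sites = same nodes
    lam = [0.0] * n
    best_dual, best_primal = float("-inf"), float("inf")
    for k in range(iters):
        # Facility "profit" rho[j] = sum_i min(0, d[i][j] - lam[i]).
        rho = [sum(min(0.0, d[i][j] - lam[i]) for i in range(n))
               for j in range(n)]
        open_js = sorted(range(n), key=lambda j: rho[j])[:p]
        # Dual value is a valid lower bound on the optimum.
        dual = sum(lam) + sum(rho[j] for j in open_js)
        best_dual = max(best_dual, dual)
        # Primal heuristic: assign every customer to the nearest open site.
        primal = sum(min(d[i][j] for j in open_js) for i in range(n))
        best_primal = min(best_primal, primal)
        # Subgradient g[i] = 1 - (number of facilities serving customer i).
        g = [1 - sum(1 for j in open_js if d[i][j] - lam[i] < 0.0)
             for i in range(n)]
        t = t0 / (k + 1)           # diminishing step size
        lam = [lam[i] + t * g[i] for i in range(n)]
    return best_dual, best_primal

# Five points on a line, p = 2: the optimum places medians at nodes
# 1 and 3 for a total cost of 3.
pts = [0, 1, 2, 3, 4]
d = [[abs(a - b) for b in pts] for a in pts]
dual, primal = solve_pmedian_lr(d, p=2)
```

In a DL framework, the hand-written subgradient step above is what gets replaced by an off-the-shelf optimizer and autograd, which is the convenience the abstract describes.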
18 pages, 4411 KB  
Article
Spectral Index Optimization and Machine Learning for Hyperspectral Inversion of Maize Nitrogen Content
by Yuze Zhang, Caixia Huang, Hongyan Li, Shuai Li and Junsheng Lu
Agronomy 2025, 15(11), 2485; https://doi.org/10.3390/agronomy15112485 - 26 Oct 2025
Viewed by 233
Abstract
Hyperspectral remote sensing provides a powerful tool for crop nutrient monitoring and precision fertilization, yet its application is hindered by high-dimensional redundancy and inter-band collinearity. This study aimed to improve maize nitrogen estimation by constructing three types of two-dimensional full-band spectral indices—the Difference Index (DI), Simple Ratio Index (SRI), and Normalized Difference Index (NDI)—combined with three spectral preprocessing methods: raw spectra (RAW), first-order derivative (FD), and second-order derivative (SD). To optimize feature selection, three strategies were evaluated: Grey Relational Analysis (GRA), Pearson Correlation Coefficient (PCC), and Variable Importance in Projection (VIP). The selected indices were then fed into machine learning models, including a Backpropagation Neural Network (BP), Random Forest (RF), and Support Vector Regression (SVR). Results revealed that spectral index optimization substantially enhanced model performance. NDI consistently demonstrated robustness, achieving the highest grey relational degree (0.9077) under second-derivative preprocessing and improving BP model predictions. PCC-selected features showed superior adaptability in the RF model, yielding the highest test accuracy under raw spectral input (R2 = 0.769, RMSE = 0.0018). VIP proved most effective for SVR, with the optimal SD–VIP–SVR combination attaining the best predictive performance (test R2 = 0.7593, RMSE = 0.0024). Compared with full-spectrum input, spectral index optimization effectively reduced collinearity and overfitting, improving both reliability and generalization. Among the tested pipelines, RAW–PCC–RF demonstrated robust stability across datasets, while SD–VIP–SVR achieved the highest overall validation accuracy. These results highlight the complementary roles of stability and accuracy in defining the optimal pipeline for hyperspectral inversion of maize nitrogen content. The proposed framework provides a reliable methodological basis for non-destructive nitrogen monitoring, with broad implications for precision agriculture and sustainable nutrient management.
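The two-dimensional full-band index construction described above amounts to an exhaustive search over band pairs. Here is a minimal sketch (not the study's code; the synthetic reflectance values and band count are invented for illustration) that brute-forces the NDI band pair whose values correlate most strongly with a nitrogen target:

```python
import math

def ndi(r1, r2):
    """Normalized Difference Index for two reflectance values."""
    return (r1 - r2) / (r1 + r2)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    if sxx == 0.0 or syy == 0.0:
        return 0.0
    return sxy / math.sqrt(sxx * syy)

def best_ndi_pair(spectra, y):
    """2-D full-band search: the band pair whose NDI correlates most
    strongly (in absolute value) with the target variable."""
    n_bands = len(spectra[0])
    best, best_r = None, -1.0
    for b1 in range(n_bands):
        for b2 in range(b1 + 1, n_bands):
            idx = [ndi(s[b1], s[b2]) for s in spectra]
            r = abs(pearson(idx, y))
            if r > best_r:
                best, best_r = (b1, b2), r
    return best, best_r

# Toy data: band 0 tracks nitrogen, band 1 is flat, band 2 is noise.
spectra = [[0.20, 0.50, 0.30],
           [0.40, 0.50, 0.31],
           [0.60, 0.50, 0.29]]
nitrogen = [1.0, 2.0, 3.0]
pair, r = best_ndi_pair(spectra, nitrogen)
```

DI and SRI searches follow the same pattern with `r1 - r2` and `r1 / r2` in place of the NDI formula.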
18 pages, 2276 KB  
Article
ACGAN-Based Multi-Target Elevation Estimation with Vector Sensor Arrays in Low-SNR Environments
by Biao Wang, Ning Shi and Yangyang Xie
Sensors 2025, 25(21), 6581; https://doi.org/10.3390/s25216581 - 25 Oct 2025
Viewed by 397
Abstract
To mitigate the reduced accuracy of direction-of-arrival (DOA) estimation in scenarios with low signal-to-noise ratios (SNR) and multiple interfering sources, this paper proposes an Auxiliary Classifier Generative Adversarial Network (ACGAN) architecture that integrates a Squeeze-and-Excitation (SE) attention mechanism and a Multi-scale Dilated Feature Aggregation (MDFA) module. A vector hydrophone array is employed as the receiving unit, capable of simultaneously sensing the particle velocity signals in three directions (vx, vy, vz) and the acoustic pressure p, thereby providing high directional sensitivity and robust classification performance under low-SNR conditions. The MDFA module extracts features across multiple receptive fields, effectively capturing cross-scale patterns and enhancing the representation of weak targets in beamforming maps; this mitigates the estimation bias caused by mutual interference among multiple targets in low-SNR environments. Furthermore, an auxiliary classification branch is incorporated into the discriminator to jointly optimize the generation and classification tasks, enabling the model to more effectively identify and separate multiple types of labeled sources. Experimental results indicate that the proposed network is effective and shows improved performance across diverse scenarios.
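The SE channel-recalibration step can be illustrated in isolation. The sketch below is not the authors' network; the weights, channel layout, and sizes are made-up toy values, and it shows only the squeeze (global average), excitation (two fully connected layers), and scale operations that define an SE block:

```python
import math

def se_block(x, w1, b1, w2, b2):
    """Squeeze-and-Excitation over a list of feature channels.

    x:  list of C channels, each a list of spatial values
    w1: (C/r x C) reduction weights, b1: matching biases
    w2: (C x C/r) expansion weights, b2: matching biases
    Returns the channels rescaled by their learned gates.
    """
    # Squeeze: global average pool per channel.
    s = [sum(ch) / len(ch) for ch in x]
    # Excitation: FC -> ReLU -> FC -> sigmoid.
    z = [max(0.0, sum(w * v for w, v in zip(row, s)) + b)
         for row, b in zip(w1, b1)]
    g = [1.0 / (1.0 + math.exp(-(sum(w * v for w, v in zip(row, z)) + b)))
         for row, b in zip(w2, b2)]
    # Scale: reweight every spatial value by its channel gate.
    return [[v * gc for v in ch] for ch, gc in zip(x, g)]

# Toy call: two channels (think pressure p and one velocity component),
# identity weights and zero biases so the gates are easy to verify.
x = [[1.0, 3.0], [2.0, 2.0]]
eye = [[1.0, 0.0], [0.0, 1.0]]
out = se_block(x, eye, [0.0, 0.0], eye, [0.0, 0.0])
```

In the paper's setting the gates would learn to emphasize the pressure/velocity channels that carry the most directional information at a given SNR.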
(This article belongs to the Section Sensor Networks)
11 pages, 886 KB  
Article
Quadratic Spline Fitting for Robust Measurement of Thoracic Kyphosis Using Key Vertebral Landmarks
by Nikola Kirilov and Elena Bischoff
Diagnostics 2025, 15(21), 2703; https://doi.org/10.3390/diagnostics15212703 - 25 Oct 2025
Viewed by 298
Abstract
Objective: The purpose of this study is to present a kyphosis measurement method based on quadratic spline fitting through three key vertebral landmarks: T12, T8 and T4. This approach aims to capture thoracic spine curvature more continuously and accurately than traditional methods such as the Cobb angle and circle fitting. Methods: A dataset of 560 lateral thoracic spine radiographs was retrospectively analyzed, including cases of postural kyphosis, Scheuermann's disease, osteoporosis-induced kyphosis and ankylosing spondylitis. Two trained raters independently performed three repeated landmark annotations per image. The kyphosis angle was computed using two methods: (1) a quadratic spline fitted through the three landmarks, with the angle derived from the tangent vectors at T12 and T4; and (2) a least-squares circle fit, with the angle subtended by the arc between T12 and T4. Agreement with reference Cobb angles was evaluated using Pearson correlation, MAE, RMSE, ROC analysis and Bland–Altman plots. Reliability was assessed using intraclass correlation coefficients (ICCs). Results: Both methods showed excellent intra- and inter-rater reliability (ICC ≥ 0.967). The spline method achieved a lower MAE (5.81°), lower RMSE (8.94°) and smaller bias than the circle method. Both methods correlated strongly with Cobb angles (r ≥ 0.851) and showed excellent classification performance (AUC > 0.950). Conclusions: Spline-based kyphosis measurement is accurate, reliable and particularly robust in cases with severe spinal deformity. Significance: This method supports automated, reproducible kyphosis assessment and may enhance clinical evaluation of spinal curvature using artificial intelligence-driven image analysis.
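The tangent-angle computation described in the Methods can be reproduced on synthetic landmarks. A minimal sketch (not the study's implementation; the coordinate convention is an assumption for illustration), fitting a quadratic through the three landmarks and taking the angle between the endpoint tangents:

```python
import math

def kyphosis_angle(p_t12, p_t8, p_t4):
    """Quadratic through three vertebral landmarks; the kyphosis angle
    is the angle between the curve's tangents at the outer landmarks.
    Points are (x, y) pairs with x strictly increasing along the spine."""
    (x0, y0), (x1, y1), (x2, y2) = p_t12, p_t8, p_t4
    # Newton divided differences give the quadratic's derivative directly:
    #   y'(x) = f[x0,x1] + f[x0,x1,x2] * (2x - x0 - x1)
    f01 = (y1 - y0) / (x1 - x0)
    f12 = (y2 - y1) / (x2 - x1)
    f012 = (f12 - f01) / (x2 - x0)
    slope_t12 = f01 + f012 * (2 * x0 - x0 - x1)
    slope_t4 = f01 + f012 * (2 * x2 - x0 - x1)
    # Angle between the two tangent directions, in degrees.
    return abs(math.degrees(math.atan(slope_t4) - math.atan(slope_t12)))

# Landmarks placed on y = x^2: tangent slopes are 0 at x=0 and 4 at x=2,
# so the angle is atan(4) in degrees (about 76 degrees).
angle = kyphosis_angle((0.0, 0.0), (1.0, 1.0), (2.0, 4.0))
```

Because the quadratic is determined by exactly three points, the fit is closed-form and needs no iterative optimization, which is one reason the method is cheap enough for automated batch annotation.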
(This article belongs to the Section Medical Imaging and Theranostics)