Search Results (953)

Search Parameters:
Keywords = trend feature extraction

21 pages, 1488 KB  
Review
Explainable Agentic Artificial Intelligence in Healthcare: A Scoping Review
by Bernardo G. Collaco, Srinivasagam Prabha, Cesar A. Gomez-Cabello, Syed Ali Haider, Ariana Genovese, Nadia G. Wood, Narayanan Gopala, Raghunath Raman, Erik O. Hester and Antonio Jorge Forte
Bioengineering 2026, 13(5), 513; https://doi.org/10.3390/bioengineering13050513 (registering DOI) - 28 Apr 2026
Abstract
Background: Agentic artificial intelligence (AI) systems, characterized by autonomous goal-directed behavior, multi-step reasoning, task decomposition, and tool use, are increasingly proposed for healthcare applications. However, their autonomy raises concerns regarding transparency, accountability, and human oversight. While explainable AI (XAI) has been widely studied in traditional predictive models, less is known about how explainability is implemented within agentic architectures. Objective: To map the emerging literature on explainable agentic AI (XAAI) in healthcare and characterize the types, scope, and forms of explainability used in these systems. Methods: A scoping review was conducted following PRISMA-ScR guidelines. PubMed, Embase, IEEE Xplore, and ACM Digital Library were searched through November 2025. Eligible studies described healthcare-related agentic AI systems incorporating explicit explainability mechanisms. Data were extracted on system architecture, explainability type (intrinsic, post hoc, hybrid), explanation scope (local, global), explanation form, and reported clinical outcomes. Results: Nine studies met the inclusion criteria. All systems demonstrated core agentic features, including autonomy, task decomposition, and tool integration, often within multi-agent frameworks. Explainability was predominantly intrinsic and workflow-native, typically delivered through textual reasoning traces and example-based grounding in retrieved clinical evidence. Feature-based and global explanations were comparatively rare and largely confined to hybrid architectures. Across domains including radiology, neurology, psychiatry, and biomedical research, XAAI systems were reported to improve performance and interpretability relative to baseline models in the included studies. However, these findings were derived from heterogeneous, predominantly experimental or retrospective studies, and structured human-in-the-loop oversight was infrequently described. 
Conclusions: Current XAAI systems appear to emphasize process transparency and evidence grounding rather than mechanistic model-level attribution. The available evidence remains limited and heterogeneous, and findings should be interpreted as early trends rather than established characteristics. Further progress will require standardized evaluation frameworks, clearer reporting of oversight mechanisms, and validation in real-world clinical settings to support safe and trustworthy integration of agentic AI into healthcare practice. Full article
20 pages, 6425 KB  
Article
Integrating Thermodynamic Priors and Spatiotemporal Features into a Physics-Guided Deep Learning Framework for Cloud Radar Clear-Air Echo Identification
by Jiapeng Wang, Shuzhen Hu, Jie Huang, Jiakun Yuan, Ruotong Yan, Qinglei Zhang and Aoli Yang
Remote Sens. 2026, 18(9), 1348; https://doi.org/10.3390/rs18091348 - 28 Apr 2026
Abstract
Accurate echo classification is crucial for Millimeter-wave Cloud Radar (MMCR) data quality control. Existing approaches, however, often struggle to generalize across complex scenes or lack physical interpretability. Here we propose PhySNet, a physics-guided network that combines thermodynamic priors with spatiotemporal radar features, embedding physical information across the full pipeline from feature extraction to final outputs. Based on the coupling between the lifting condensation level (LCL) and daytime clear-air echo heights, and the lagged correlation between nocturnal clear-air echo heights and their daytime counterparts, we design a physics-constrained gating block (PCGB). The PCGB extracts thermodynamic states and evolution trends from collocated surface observations, generating a clear-air echo probability map that weights the initial radar features. Building on this, we add a parallel regression branch of effective-clutter-height (ECH). This branch fuses thermodynamic features with radar spatiotemporal features, enabling the model to learn to predict the clear-air echo boundary. Finally, we apply an adaptive height filter using the predicted ECH sequence to refine the classification results. Evaluated on a multi-region, multi-season dataset from China, PhySNet achieves a probability of detection (POD) of 98.28% for meteorological echoes and 95.87% for clear-air echoes, outperforming conventional methods. By coupling data-driven learning with physical rules, our approach provides a high-accuracy, interpretable solution for cloud radar clear-air echo identification. Full article
(This article belongs to the Special Issue Radar Technologies for Meteorological and Atmospheric Observations)
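The physics-constrained gating block described above reduces, at its core, to weighting radar feature maps by a clear-air echo probability field derived from thermodynamic priors. A minimal NumPy sketch of that gating step, assuming a (C, H, W) feature stack and an (H, W) probability map (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def physics_gate(radar_features, clear_air_prob):
    """Weight initial radar features by a clear-air echo probability map.

    Simplified sketch of the PCGB gating idea: thermodynamic priors yield
    a per-pixel probability of clear-air echo, which modulates the radar
    feature maps before further processing.
    """
    # Broadcast the (H, W) probability map over the (C, H, W) feature stack.
    return radar_features * clear_air_prob[None, :, :]

features = np.ones((4, 2, 3))           # C=4 channels on a 2x3 grid
prob = np.array([[0.0, 0.5, 1.0],
                 [1.0, 0.5, 0.0]])
gated = physics_gate(features, prob)
```

Pixels with zero clear-air probability are suppressed entirely, while high-probability pixels pass through unchanged.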
14 pages, 1449 KB  
Article
MicroRNA Expression and Carotid Plaque Vulnerability: An Exploratory Tissue-Based Study
by Lucia Scurto, Ottavia Borghese, Giovanni Tinelli, Guido Rindi, Roberto Pola and Yamume Tshomba
J. Pers. Med. 2026, 16(5), 236; https://doi.org/10.3390/jpm16050236 - 28 Apr 2026
Abstract
Background: Reliable preoperative identification of carotid plaque instability remains challenging. Although duplex ultrasound allows early detection of carotid stenosis, it does not consistently predict plaque biological behavior. MicroRNAs (miRNAs) are small non-coding RNAs that regulate gene expression and have been implicated in atherosclerotic progression and plaque destabilization. The tissue-level expression of miRNAs in carotid plaques and their relationship with histological vulnerability remain incompletely defined. Methods: This exploratory, pilot, hypothesis-generating study included patients undergoing carotid endarterectomy for asymptomatic high-grade carotid stenosis (>75% NASCET). Plaque vulnerability was assessed using a multiparametric approach combining preoperative duplex ultrasound features (including Gray Scale Median, GSM), intraoperative macroscopic evaluation, and a validated histological scoring system; only plaques with concordant classification across all three modalities were retained for molecular analysis. Total RNA including small RNA was extracted from plaque tissue and miRNA expression was measured by qRT-PCR on a panel of 47 candidate miRNAs. Data were analyzed descriptively. Results: Twenty-eight patients were initially enrolled; after application of strict vulnerability criteria, five plaques (three unstable, two stable) were selected for miRNA profiling. Among the 47 miRNAs assayed, miR-122 and miR-197 showed a consistent descriptive trend toward higher expression in plaques classified as unstable; these plaques also displayed histological features of vulnerability (lipid-rich necrotic cores and inflammatory infiltrates). Given the extremely limited sample size, no inferential statistical comparisons or multiple-testing corrections were performed. Conclusions: In this small, tissue-based exploratory analysis, miR-122 and miR-197 were more highly expressed in plaques with histological features of instability. 
Due to the small sample size, the effect estimates are unstable, and the findings should be used solely to inform the design and power calculations of future studies. We outline the need for a clear, pragmatic validation pathway based on replication in independent, larger cohorts with standardized tissue handling and blinded assessment, and parallel evaluation of circulating miRNA levels to assess noninvasive biomarker potential. These findings are preliminary and strictly hypothesis-generating; validation in larger, prospectively collected cohorts and integration with circulating biomarkers and imaging data are required before clinical application. Full article
(This article belongs to the Section Disease Biomarkers)
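Relative qRT-PCR expression comparisons of the kind reported here are conventionally normalized with the 2^(−ΔΔCt) method. The sketch below shows only that standard arithmetic with made-up cycle-threshold values; it is not the authors' pipeline or their data:

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the standard 2^(-ddCt) method.

    ct_*: raw qRT-PCR cycle-threshold values for the target miRNA and a
    reference gene, in the sample of interest and in a control sample.
    """
    d_ct_sample = ct_target - ct_ref            # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Reaching threshold one cycle earlier than control (after normalization)
# corresponds to roughly 2-fold higher expression.
fc = fold_change(ct_target=24.0, ct_ref=18.0, ct_target_ctrl=25.0, ct_ref_ctrl=18.0)
```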
18 pages, 1372 KB  
Article
Research on Multi-Timescale Configuration Strategy of Hybrid Energy Storage Based on STL-PDM-VMD Model
by Min Wang, Zimo Liu, Leicheng Pan, Yongzhe Wang, Chunliang Wang, Nan Zhao and Weijie He
Energies 2026, 19(9), 2074; https://doi.org/10.3390/en19092074 (registering DOI) - 24 Apr 2026
Abstract
Power systems with high renewable penetration impose multi-dimensional demands on energy storage (ES) regulation. Short-duration ES is required for power balance and frequency support, while medium- and long-duration ES is essential for daily, weekly, and seasonal peak shaving and energy time-shifting. Aiming at the challenge of multi-timescale configuration of hybrid energy storage (HES) in the initial planning stage of carbon-neutral transition, this paper proposes an optimal configuration strategy combining STL-PDM-VMD. First, Seasonal-Trend decomposition using Loess (STL) is used to extract quarterly trends of annual net power for seasonal ES configuration. Then, the Past Decomposable Mixing (PDM) module in the TimeMixer model is applied to decouple and mix multi-scale features of the detrended power curve for monthly and weekly configurations. Finally, an improved Variational Mode Decomposition (VMD) is adopted to decompose daily net power fluctuations and optimize intra-day energy storage schemes. Based on actual data from a carbon-neutral transition region, simulations are carried out and compared with the VMD method with decomposition layers optimized by Gurobi. The results show that the proposed STL-PDM-VMD multi-timescale hybrid energy storage configuration strategy can effectively capture the multi-timescale fluctuation characteristics of net load, significantly improve the Renewable Energy (RE) penetration rate, and ensure the power and energy balance of the new power system at multiple timescales. Full article
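The STL step extracts a slow trend from the net-power series before the finer-scale decompositions. A simplified stand-in using a centered moving average (Loess smoothing and robustness weights omitted; `period` would be the seasonal cycle length in the paper's setting):

```python
import numpy as np

def extract_trend(x, period):
    """Centered moving-average trend, the first step of an STL-style
    seasonal-trend split.  Uses edge padding at the boundaries; an odd
    `period` keeps the window symmetric."""
    kernel = np.ones(period) / period
    pad = period // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, kernel, mode="same")[pad:pad + len(x)]

x = np.arange(30.0)                 # a purely linear "net power" series
trend = extract_trend(x, period=7)  # weekly window, for illustration
```

For linear data the interior trend reproduces the series exactly, which is a quick sanity check on the windowing.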
24 pages, 8148 KB  
Article
A Quantitative Estimation Method for Cable Deterioration Degree Based on SDP Transform and Reflection Coefficient Spectrum
by Xinyu Song, Zelin Liao, Xiaolong Li, Shuguang Zeng, Junjie Lv, Zhien Zhu and Fanyi Cai
Electronics 2026, 15(8), 1743; https://doi.org/10.3390/electronics15081743 - 20 Apr 2026
Abstract
To address the challenges in intuitive feature discrimination and precise quantitative evaluation of cable defects, this paper proposes a diagnostic methodology utilizing the Symmetrized Dot Pattern (SDP) transform and reflection coefficient spectra. The Dung Beetle Optimizer (DBO) is introduced to adaptively optimize the SDP transform parameters, employing the Structural Similarity Index Measure (SSIM) as a fitness function to maximize discriminability between deterioration states. Three quantitative features, including the number of effective pixels, the degree of red–blue aliasing, and radial dispersion, are extracted to characterize the physical degradation processes of signal energy accumulation, angular evolution, and path divergence. By incorporating a self-reference calibration mechanism for structural differences, features are fused into a Comprehensive Deterioration Index (CDI). Experimental results on coaxial cables simulating shielding damage and thermal aging demonstrate that SDP images reveal continuous evolution patterns corresponding to defect severity. A regression model based on these patterns effectively characterizes deterioration trends. Compared to complex models, this study achieves intuitive fault identification and preliminary quantitative description of degradation trends through image feature fusion. Although the current sample size is limited, the results validate the feasibility of this method in evaluating cable deterioration severity, offering an efficient new data-processing perspective for cable condition monitoring. Full article
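The Symmetrized Dot Pattern transform maps a 1-D reflection-coefficient signal to mirrored polar point clouds; the lag, rotation angle, and amplification angle are exactly the parameters the paper tunes with the DBO. A minimal sketch with fixed illustrative defaults (the paper's optimized values will differ):

```python
import numpy as np

def sdp_transform(x, lag=1, phi_deg=60.0, zeta_deg=30.0):
    """Symmetrized Dot Pattern: map a 1-D signal to polar coordinates.

    r(n)     = normalized amplitude of x(n)
    theta(n) = phi +/- zeta * r(n + lag)   (mirrored petal pair)
    """
    r = (x - x.min()) / (x.max() - x.min())
    r0, r_lag = r[:-lag], r[lag:]
    theta_pos = np.deg2rad(phi_deg + zeta_deg * r_lag)
    theta_neg = np.deg2rad(phi_deg - zeta_deg * r_lag)
    return r0, theta_pos, theta_neg

x = np.sin(np.linspace(0, 4 * np.pi, 64))
r, tp, tn = sdp_transform(x)
```

Image features such as effective pixel count or radial dispersion would then be computed from the rendered (r, theta) point pattern.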
29 pages, 8671 KB  
Article
Data-Driven Multi-Mode Time–Cost Trade-Off Optimization for Construction Project Scheduling Using LightGBM
by Shike Jia, Cuinan Luo, Ruchen Wang, Qiangwen Zong, Yunfeng Wang, Fei Chen, Weiquan Guan and Yong Liao
Processes 2026, 14(8), 1311; https://doi.org/10.3390/pr14081311 - 20 Apr 2026
Abstract
Large infrastructure projects frequently experience schedule slippage and cost escalation; however, time–cost planning still relies on expert-assigned activity parameters that fail to reflect the variability induced by construction modes, resource supply, and on-site conditions. This study focuses on activity-level multi-mode time–cost trade-off planning and its dynamic correction during project execution. The proposed methodology is intended for project-level short-term operational scheduling and rolling re-scheduling within a finite project execution horizon, rather than long-term strategic or portfolio-level scheduling. A predict–optimize–update framework is proposed, where a light gradient boosting machine (LightGBM) is employed to predict the duration and direct cost of activity–mode pairs using unified features extracted from BIM/IFC records, schedule-resource ledgers, and cost-settlement data, covering engineering quantities, mode and resource decisions, and contextual factors. These predicted parameters are then fed into a time-indexed bi-objective mixed-integer linear program (MILP), which minimizes both project makespan and total cost (including indirect cost) to generate an interpretable Pareto frontier via a weighted-sum approach. Meanwhile, real-time monitoring updates refresh the predictors and re-solve the remaining project network to ensure dynamic adaptability. Validated on a desensitized proprietary enterprise multi-source dataset comprising 25 completed infrastructure projects and 5258 activity–mode samples, the proposed method achieves a mean absolute error (MAE) of 2.7 days and a coefficient of determination (R²) of 0.89 for duration prediction, as well as an MAE of 7.4 × 10⁴ CNY and an R² of 0.91 for direct-cost prediction. The generated Pareto set exhibits a diminishing return trend: as the project duration is relaxed from 101 to 146 days, the total cost decreases from 45.10 to 40.27 million CNY. 
A weather-triggered update case demonstrates that the completion forecast is revised from 133 to 128 days, with the total cost reduced from 53.05 to 52.75 million CNY. This framework enables explainable schedule–cost co-control, thereby effectively aiding decision-making for the planning and control of large infrastructure projects. Full article
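The weighted-sum generation of a makespan–cost Pareto frontier can be sketched on a toy serial project with two activities, each offering a normal and a crashed mode. All mode durations, costs, and the indirect-cost rate below are invented for illustration; the paper solves a time-indexed MILP rather than this brute-force enumeration:

```python
from itertools import product

# Hypothetical activity modes: (duration in days, direct cost); serial network.
modes = [
    [(10, 100.0), (7, 150.0)],   # activity A: normal vs crashed
    [(8, 80.0), (5, 140.0)],     # activity B: normal vs crashed
]
INDIRECT_PER_DAY = 6.0           # assumed indirect cost rate

def evaluate(choice):
    """Makespan and total cost (direct + indirect) for one mode assignment."""
    dur = sum(modes[i][m][0] for i, m in enumerate(choice))
    cost = sum(modes[i][m][1] for i, m in enumerate(choice)) + INDIRECT_PER_DAY * dur
    return dur, cost

def weighted_sum_frontier(weights):
    """Sweep weighted sums of (makespan, total cost) over all mode combos,
    collecting the distinct optima as an approximate Pareto set."""
    frontier = set()
    for w in weights:
        best = min(product(*[range(len(m)) for m in modes]),
                   key=lambda c: w * evaluate(c)[0] + (1 - w) * evaluate(c)[1])
        frontier.add(evaluate(best))
    return sorted(frontier)

pts = weighted_sum_frontier([0.0, 0.25, 0.5, 0.75, 1.0])
```

The resulting points show the expected trade-off: shorter makespans come only at higher total cost.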
20 pages, 4214 KB  
Article
Aligning Textual Information with Time Series via Cross-Modal Attention for Time Series Forecasting
by Le Hoang Anh, Dang Thanh Vu, Gwang Hyun Yu, Seungmin Oh, Jo Jung An and Jin Young Kim
Electronics 2026, 15(8), 1734; https://doi.org/10.3390/electronics15081734 - 20 Apr 2026
Abstract
News and reports frequently drive future trends, yet traditional Time Series Forecasting often fails to capture these external influences. To integrate textual insights, we introduce Text-Time Cross-Modal Attention (TTCA), a multimodal framework that fuses numerical embeddings with text embeddings extracted from a pre-trained language model. TTCA employs a cross-attention mechanism that treats time series features as queries and textual features as keys and values. This architecture ensures that semantic context enhances, rather than overshadows, underlying temporal dynamics. Extensive evaluations on the Time-MMD dataset across nine real-world domains demonstrate that TTCA consistently outperforms state-of-the-art unimodal baselines, achieving average improvements of 3.29% in MSE and 9.66% in MAE. Furthermore, TTCA shows moderate performance gains over recent multimodal approaches, particularly in event-driven scenarios. Full article
(This article belongs to the Collection Image and Video Analysis and Understanding)
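The core fusion step, time-series features as queries attending over text-token keys and values, is standard scaled dot-product cross-attention. A single-head NumPy sketch (shapes and naming are illustrative; TTCA's full architecture adds projections and residual paths):

```python
import numpy as np

def cross_attention(ts_queries, txt_keys, txt_values):
    """Scaled dot-product cross-attention: time-series features query
    the text features, so each time step gathers a convex combination
    of text-token values."""
    d = ts_queries.shape[-1]
    scores = ts_queries @ txt_keys.T / np.sqrt(d)        # (T, N)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over tokens
    return weights @ txt_values                          # (T, d_v)

rng = np.random.default_rng(0)
q = rng.normal(size=(12, 8))   # 12 time steps, model dim 8
k = rng.normal(size=(5, 8))    # 5 text tokens
v = rng.normal(size=(5, 8))
fused = cross_attention(q, k, v)
```

Because each output row is a convex combination of the value rows, the textual signal modulates rather than replaces the temporal representation it is added to.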
28 pages, 6001 KB  
Article
Three-Dimensional Analysis of Facial Skeleton Textures in CBCT as an Early Warning Sign of Osteoporosis—A Pilot Study
by Tomasz Wach, Marcin Kozakiewicz, Adam Michcik, Marcin Kociołek, Piotr Hadrowicz, Piotr Szymor, Krzysztof Dowgierd, Michał Podgórski and Raphael Olszewski
Diagnostics 2026, 16(8), 1217; https://doi.org/10.3390/diagnostics16081217 - 19 Apr 2026
Abstract
Background: Osteoporosis is a prevalent condition characterized by low bone mass and altered microarchitecture, increasing fracture risk. Early detection remains challenging, as conventional methods such as DXA are limited to specialized settings and often detect disease only after a fracture. Radiomics and three-dimensional (3D) imaging techniques, such as CBCT, may provide novel approaches for assessing bone quality. Methods: This pilot study analyzed 68 CBCT scans from adult patients (41 females, 27 males; mean age 57 years). Three-dimensional regions of interest (ROIs) were delineated in seven maxillofacial and mandibular sites (total 309 ROIs). Radiomic texture features were extracted and compared with corresponding T-scores from DXA measurements. Additionally, synthetic 3D reference phantoms with controlled variations in density, trabecular connectivity, and structural anisotropy were generated to evaluate the sensitivity of texture features to microarchitectural changes. Results: Several radiomic features, including GLCM-, ARM-, and Gradient-derived parameters, demonstrated consistent monotonic trends correlating with bone density and microstructural deterioration. Differences in feature values were observed across healthy, osteopenic, osteoporotic, and advanced osteoporotic states. Reference phantoms confirmed that the observed trends were attributable to structural differences rather than imaging variability. Features such as Sum Variance and Correlation exhibited potential as early indicators of microarchitectural degradation. Conclusions: Three-dimensional CBCT texture analysis may provide a non-invasive, supplementary tool for assessing bone quality and detecting early osteopenic changes. Further studies with larger cohorts are warranted to validate radiomic markers and develop predictive indices for osteoporosis screening. Full article
(This article belongs to the Section Medical Imaging and Theranostics)
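GLCM-derived parameters of the kind correlated with bone microarchitecture above are computed from a gray-level co-occurrence matrix. A minimal sketch for one offset (horizontal neighbor) and one feature (contrast); the study's radiomics pipeline covers many more offsets and features:

```python
import numpy as np

def glcm_contrast(img, levels):
    """Build a gray-level co-occurrence matrix for horizontal neighbor
    pairs and return its contrast feature sum((i-j)^2 * p(i,j))."""
    glcm = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()                 # normalize to joint probabilities
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum()

flat = np.zeros((8, 8), dtype=int)        # homogeneous patch -> zero contrast
noisy = np.arange(64).reshape(8, 8) % 4   # strongly varying texture
```

Homogeneous regions concentrate mass on the GLCM diagonal (contrast 0), while deteriorated, heterogeneous trabecular texture spreads it off-diagonal, which is the intuition behind using such features as degradation markers.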
31 pages, 1786 KB  
Article
Optimized CNN–LSTM Modeling for Crisis Event Detection in Noisy Social Media Streams
by Mudasir Ahmad Wani
Mathematics 2026, 14(8), 1369; https://doi.org/10.3390/math14081369 - 19 Apr 2026
Abstract
Event detection is crucial for disaster response, public safety, and trend analysis, enabling real-time identification of critical events. Social media platforms provide a vast content source, offering timely and diverse event coverage compared to traditional news reports. However, challenges arise due to the informal and noisy nature of the text, along with the limited availability of ground truth data for training models. This study introduces SOCIAL (Social Media Event Classification using Integrated Artificial Learning and Natural Language Processing), a mathematically grounded framework for real-time social media event detection. SOCIAL integrates a formal representation of social media text with a customized CNN–LSTM architecture, combining convolutional operations for local feature extraction with sequential modeling to capture temporal dependencies, thereby enhancing classification accuracy. Generative AI is employed to create synthetic event-related samples, addressing data scarcity and ensuring a balanced dataset, while the design incorporates quantitative principles to guide embedding selection and model optimization. This study systematically evaluates six experimental configurations with TF-IDF and Word2Vec embeddings. The TF-IDF-based CNN–LSTM model achieved top performance with 98.59% accuracy, 98.13% precision, 99.06% recall, and 0.9719 MCC. Additionally, the F0.5, F1, and F2 scores were 98.31%, 98.59%, and 98.87%, respectively, confirming the model’s strong predictive capabilities. TF-IDF integration enhanced event-specific term recognition, reducing misclassifications and improving reliability. These results demonstrate that SOCIAL is not only a fast, accurate, and scalable tool for crisis event detection, but also a formally principled framework for modeling and analyzing social media signals. Full article
(This article belongs to the Special Issue Deep Representation Learning for Social Network Analysis)
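The TF-IDF representation that drove the best-performing SOCIAL configuration can be sketched in a few lines. The smoothing convention below (scikit-learn style, log((1+n)/(1+df))+1) is a common default, not necessarily the paper's exact choice:

```python
import math
from collections import Counter

def tfidf(docs):
    """Minimal TF-IDF: term frequency within each document times a
    smoothed inverse document frequency across the corpus."""
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))
    idf = {t: math.log((1 + n) / (1 + c)) + 1 for t, c in df.items()}
    out = []
    for doc in tokenized:
        tf = Counter(doc)
        out.append({t: (cnt / len(doc)) * idf[t] for t, cnt in tf.items()})
    return out

docs = ["flood warning issued", "flood waters rising", "concert tickets on sale"]
weights = tfidf(docs)
```

Event-specific terms that appear in few documents ("warning") outweigh broadly shared ones ("flood"), which is the property credited with reducing misclassifications.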
22 pages, 10122 KB  
Article
Salient Object Detection with Semantic-Aware Edge Refinement and Edge-Guided Cross-Attention Feature Aggregation
by Yitong Lu and Ziguan Cui
Sensors 2026, 26(8), 2439; https://doi.org/10.3390/s26082439 - 16 Apr 2026
Abstract
Hybrid multi-backbone architectures and the utilization of edge cues for auxiliary training have become two major research trends in salient object detection (SOD). It is widely acknowledged that CNNs can effectively model local spatial structures, while Transformers can capture long-range global dependencies. However, the representation discrepancy between CNN and Transformer features, together with boundary-detail degradation during multi-scale fusion, remains a major challenge. In addition, how to effectively leverage edge cues as reliable structural guidance without introducing texture-induced false boundaries or boundary leakages remains an open issue. In this paper, we present SECA-Net, a unified framework that establishes a profound synergy between CNN and Transformer representations. It explicitly bridges their inherent discrepancies through level-dependent interaction strategies, while resolving structural degradation via a sequential “purify-and-guide” mechanism. This approach enables the network to extract and utilize edge cues effectively, thereby alleviating boundary degradation and texture-induced false contours. Specifically, we design a dual-encoder structure to extract features. A level-wise feature interaction (LFI) module is introduced to perform discrepancy-aware fusion across feature levels, stabilizing CNN–Transformer aggregation. Meanwhile, the features extracted from the CNN branch are projected into a semantic-aware edge refinement (SAER) module to produce clean multi-scale edge priors under high-level semantic guidance, suppressing texture-induced spurious edges. Finally, we design an edge-guided cross-attention feature aggregation (ECFA) module, which progressively injects refined edge priors as structural constraints into multi-scale saliency decoding via cascaded cross-attention, enabling effective structural refinement. 
Overall, LFI reduces cross-branch discrepancy, SAER purifies boundary priors, and ECFA integrates semantics and structure in a progressive decoding manner, forming a unified SECA-Net framework. Extensive experimental results on five benchmark SOD datasets show that SECA-Net outperforms 19 state-of-the-art methods, demonstrating its effectiveness. Specifically, our proposed method ranks first in Fβ and BDE across all datasets, notably improving Fβ by 1.54% on the challenging DUTS-TE dataset. Full article
(This article belongs to the Section Sensing and Imaging)
Show Figures

Figure 1

21 pages, 11025 KB  
Article
A Multi-Step RUL Prediction Method for Lithium-Ion Batteries Based on Multi-Scale Temporal Features and Frequency-Domain Spectral Interaction
by Ye Tu, Shixiong Xu, Jie Wang and Mengting Jin
Batteries 2026, 12(4), 137; https://doi.org/10.3390/batteries12040137 - 14 Apr 2026
Abstract
With the rapid development of new energy vehicles and energy storage systems, accurate prediction of the remaining useful life (RUL) of lithium-ion batteries is of great importance for predictive maintenance and operational safety. However, battery degradation during cycling usually exhibits multi-scale characteristics, including long-term degradation trends, stage-wise drifts, and stochastic disturbances, so that existing methods still face significant challenges in multi-step forecasting and cross-domain generalization. To address this issue, this paper proposes a time–frequency fusion model for multi-step RUL prediction, termed TF-RULNet (Time-Frequency RUL Network). The model takes cycle-level feature sequences as input and consists of three components: a multi-scale temporal convolution encoder (MSTC) for parallel extraction of degradation cues at different temporal scales; a multi-head spectral interaction module (MHSI), which performs 1D-FFT along the temporal dimension for each head and further applies adaptive band-wise mask refinement to capture local spectral structures and hierarchical band patterns with a computational complexity of O(L log L); and a cross-gated fusion module (CGF), which generates gating signals from the summary of one domain to modulate the features of the other domain, thereby enabling dynamic balancing and complementary enhancement of time–frequency information. Experiments are conducted on the NASA dataset (B005/B007) for in-domain evaluation, and further cross-dataset tests from NASA to the Maryland dataset (CS-35/CS-37) are carried out to verify the robustness of the proposed model under distribution shifts. The results show that, compared with the strongest baseline PatchTST, TF-RULNet reduces RMSE and MAE by more than 38.23% and 50.51%, respectively, in cross-dataset generalization, while achieving an additional RMSE reduction of about 24% in in-domain prediction. 
In summary, TF-RULNet can effectively characterize the multi-scale time–frequency degradation patterns of batteries and improve cross-domain generalization, providing a high-accuracy and scalable modeling solution for practical battery health management and life prognostics. Full article
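The MHSI idea, FFT along the temporal dimension followed by band-wise masking, can be sketched with NumPy's real FFT. Equal-width bands and a hard 0/1 mask are deliberate simplifications of the paper's adaptive, learned refinement:

```python
import numpy as np

def bandwise_mask(x, keep_bands, n_bands=4):
    """1D-FFT along time, zero out all but the selected frequency bands,
    then inverse-transform back to the time domain."""
    spec = np.fft.rfft(x)
    edges = np.linspace(0, len(spec), n_bands + 1).astype(int)
    mask = np.zeros(len(spec))
    for b in keep_bands:
        mask[edges[b]:edges[b + 1]] = 1.0
    return np.fft.irfft(spec * mask, n=len(x))

t = np.linspace(0, 1, 256, endpoint=False)
# Slow "degradation trend" at 3 Hz plus a fast disturbance at 60 Hz.
x = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 60 * t)
low = bandwise_mask(x, keep_bands=[0])   # keep only the lowest band
```

Keeping only the lowest band recovers the slow component, illustrating how band selection separates long-term trends from stochastic disturbances.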
24 pages, 9284 KB  
Article
Shock-Aware Constrained Optimization of the RAE2822 Transonic Airfoil via a Two-Channel vSDF Surrogate with Closed-Loop CFD Verification
by Yuxin Huo, Bo Wang and Xiaoping Ma
Aerospace 2026, 13(4), 352; https://doi.org/10.3390/aerospace13040352 - 10 Apr 2026
Abstract
Shock-aware aerodynamic shape optimization of transonic airfoils requires surrogate models that capture both integral aerodynamic trends and shock-relevant pressure distribution features. This study addresses drag-oriented optimization of the RAE2822 transonic airfoil under a lift-targeted condition with baseline relative thickness feasibility, rather than strict target pressure inverse design. Each airfoil is parameterized by a 16-dimensional CST vector and mapped to a two-channel vertical signed distance field representation of the upper- and lower-surface Cp curves, from which shock descriptors, including the shock location indicator xs and the pressure jump magnitude ΔCp, are extracted in a deterministic, implementation-consistent manner. To quantify the reliability of surrogate-derived shock metrics, a held-out uncertainty analysis is performed on 500 samples. The surrogate achieves MAE/RMSE values of 0.00474/0.00602 for CL and 4.66×10⁻⁴/6.33×10⁻⁴ for CD, while the recovered shock-related quantities yield 0.00201/0.01598 for xs and 0.00200/0.00336 for ΔCp. Scatter plots and error histograms show tight one-to-one trends for most samples, with limited outliers mainly associated with locally ambiguous pressure gradient patterns. Overall, the surrogate is more reliable for capturing shock intensity trends than for prescribing an exact shock location; accordingly, xs is interpreted as a trend-level descriptor, whereas ΔCp is treated as the more stable engineering indicator inside the optimization loop. The trained surrogate is embedded in a differential evolution optimizer with soft penalties on lift deviation and thickness feasibility violation, and selected designs are re-evaluated through closed-loop SU2 RANS simulations. CFD verification shows that the optimized design reduces drag from CD=0.01463 to CD=0.01229 (a 16.0% reduction) and reduces the shock jump from ΔCp=0.239 to ΔCp=0.046 (an 80.7% reduction). 
For the optimized design, the prediction-to-CFD differences are ΔCL=+0.0042 and ΔCD=+0.00012. These results support an engineering-oriented and auditable shock-aware closed-loop optimization workflow, with final design conclusions established by CFD verification rather than surrogate-predicted shock location alone. Full article
(This article belongs to the Special Issue Aerodynamic Optimization of Flight Wing)
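The surrogate-in-the-loop scheme described in the abstract (a learned drag model minimized by differential evolution under soft penalties on lift deviation and thickness feasibility) can be sketched as follows. The quadratic surrogates, penalty weights, and DE hyperparameters here are illustrative assumptions, not the paper's trained models:

```python
import random

# Toy stand-ins for the paper's learned surrogates (assumptions, not the
# actual RAE2822 model): smooth functions over a 2-D design vector.
def surrogate_cd(x):  # predicted drag coefficient
    return 0.0147 - 0.002 * x[0] + 0.004 * x[0] ** 2 + 0.003 * x[1] ** 2

def surrogate_cl(x):  # predicted lift coefficient
    return 0.74 + 0.05 * x[0] - 0.02 * x[1]

def thickness(x):     # relative-thickness proxy
    return 0.121 + 0.01 * x[1]

CL_TARGET, T_MIN = 0.74, 0.121   # lift target and baseline thickness (assumed)
W_LIFT, W_THICK = 1.0, 1.0       # soft-penalty weights (assumed)

def objective(x):
    """Drag plus soft penalties on lift deviation and thickness violation."""
    pen_lift = abs(surrogate_cl(x) - CL_TARGET)
    pen_thick = max(0.0, T_MIN - thickness(x))
    return surrogate_cd(x) + W_LIFT * pen_lift + W_THICK * pen_thick

def differential_evolution(obj, dim=2, pop_size=20, gens=100,
                           f=0.7, cr=0.9, bounds=(-0.5, 0.5), seed=0):
    """Minimal rand/1/bin differential evolution with greedy selection."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [min(hi, max(lo, a[k] + f * (b[k] - c[k])))
                     if rng.random() < cr else pop[i][k]
                     for k in range(dim)]
            if obj(trial) <= obj(pop[i]):  # keep the better of target/trial
                pop[i] = trial
    return min(pop, key=obj)

best = differential_evolution(objective)
```

In the paper's workflow the winning design would then be re-evaluated with SU2 RANS, so the CFD result, not the surrogate prediction, settles the final comparison.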

38 pages, 1937 KB  
Review
Cavitation Monitoring in Rotating Hydraulic Machines Using Machine Learning—A Review
by Elisa Sanchez and Axel Busboom
Appl. Sci. 2026, 16(7), 3566; https://doi.org/10.3390/app16073566 - 6 Apr 2026
Abstract
Cavitation in rotating hydraulic machinery—such as industrial pumps and hydropower turbines—can cause blade and casing erosion, excessive vibration, noise and efficiency loss, posing significant operational and economic risks across industrial sectors. Reliable and scalable monitoring strategies are therefore essential, particularly under variable operating conditions in real-world environments. Recent advances in machine learning (ML) and deep learning (DL) have enabled data-driven approaches for cavitation detection based on operational sensor signals, yet a structured synthesis of these developments is lacking. This scoping review systematically analyzes measurement-based ML and DL approaches for cavitation monitoring, with the aim of identifying key trends, challenges and future research directions. Following PRISMA-ScR and JBI guidelines, 52 peer-reviewed studies published between 1996 and 2025 were evaluated, covering laboratory and field investigations across pumps and turbines and a wide range of model architectures. The analysis reveals that most studies are laboratory-based (∼80%), focus on pumps (∼70%) and rely on single-machine datasets (>80%), limiting generalization across machines and operating conditions. Classical ML approaches remain relevant due to interpretability and robustness with limited data, while DL enables end-to-end learning from raw or time–frequency transformed signals, frequently achieving diagnostic accuracy above 95%. Hybrid frameworks combining DL-based feature extraction with classical classifiers are increasingly adopted. Key limitations across the literature include domain shifts between laboratory and field data, scarce or inconsistent labeling and a predominant focus on categorical cavitation severity levels. Full article
(This article belongs to the Special Issue New Trends in Sustainable Energy Technology)
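The classical-ML branch surveyed above typically computes hand-crafted statistics from vibration or acoustic signals before classification. A minimal sketch, using synthetic traces (an assumption, not measured pump data) and three common time-domain features:

```python
import math
import random

def features(signal):
    """Classical time-domain features often used for cavitation detection:
    RMS energy, kurtosis (impulsiveness), and crest factor."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((s - mean) ** 2 for s in signal) / n
    rms = math.sqrt(sum(s * s for s in signal) / n)
    kurt = sum((s - mean) ** 4 for s in signal) / (n * var ** 2)
    crest = max(abs(s) for s in signal) / rms
    return [rms, kurt, crest]

def synth(cavitating, rng, n=2048):
    """Synthetic vibration trace: Gaussian background noise plus, when
    cavitating, sparse high-amplitude bursts mimicking bubble-collapse
    impacts."""
    sig = [rng.gauss(0.0, 1.0) for _ in range(n)]
    if cavitating:
        for _ in range(20):
            sig[rng.randrange(n)] += rng.choice([-1, 1]) * rng.uniform(6, 10)
    return sig

rng = random.Random(1)
cav = features(synth(True, rng))      # impulsive bursts raise kurtosis/crest
normal = features(synth(False, rng))  # near-Gaussian baseline
```

A classifier (e.g., an SVM) would then be trained on such feature vectors; the hybrid frameworks noted in the review replace the hand-crafted `features` step with DL-based embeddings while keeping a classical classifier on top.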

24 pages, 3942 KB  
Article
Robust Looming Spatial Localization in Dim Light via Daubechies Wavelet-Fused ON/OFF Pathways
by Zefang Chang, Guangrong Wu, Hao Chen, He Zhang, Hao Luan and Zhijian Yang
Biomimetics 2026, 11(4), 244; https://doi.org/10.3390/biomimetics11040244 - 3 Apr 2026
Abstract
Computational models of the MLG1 neurons in the crab Neohelice granulata have been developed to detect and spatially localize looming stimuli. However, existing models suffer from significant performance degradation in dim scenarios, primarily due to visual signal corruption by stochastic noise such as photon shot noise. To address this challenge, we propose a computational framework that embeds the Daubechies wavelet directly into the ON/OFF visual pathways. The ON/OFF mechanism separates the input signals in parallel based on luminance changes to capture dynamic differences between target and background. Embedding the Daubechies wavelet enables multi-scale frequency decomposition, allowing the model to suppress high-frequency noise while enhancing low-frequency looming trends. This process extracts low-frequency components and high-frequency details, providing the MLG1 neuron with more discriminative feature inputs. Experimental results demonstrate that the model achieves reliable looming spatial localization under extremely low-contrast conditions, offering a robust methodology for bionic vision in extreme dim-light environments. Full article
(This article belongs to the Special Issue Bio-Inspired and Biomimetic Intelligence in Robotics: 3rd Edition)
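The two mechanisms named in the abstract, ON/OFF pathway splitting and Daubechies wavelet decomposition, can both be sketched compactly. For brevity this uses db1 (the Haar wavelet, the simplest Daubechies member) rather than the higher-order filters the paper may employ:

```python
def on_off_split(prev_frame, frame):
    """ON/OFF pathway: half-wave rectify the luminance change into parallel
    brightening (ON) and darkening (OFF) channels."""
    delta = [c - p for p, c in zip(prev_frame, frame)]
    on = [max(d, 0.0) for d in delta]
    off = [max(-d, 0.0) for d in delta]
    return on, off

def haar_dwt(x):
    """Single-level db1 (Haar) wavelet step: approximation coefficients keep
    the low-frequency looming trend, detail coefficients isolate the
    high-frequency noise to be attenuated."""
    s = 2 ** -0.5
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x) - 1, 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x) - 1, 2)]
    return approx, detail

def denoise(x, thresh=0.5):
    """Zero small detail coefficients, then invert the Haar transform,
    suppressing noise while preserving the slow looming signal."""
    a, d = haar_dwt(x)
    d = [0.0 if abs(c) < thresh else c for c in d]
    s = 2 ** -0.5
    out = []
    for ai, di in zip(a, d):  # inverse single-level Haar transform
        out.extend([s * (ai + di), s * (ai - di)])
    return out
```

In the proposed framework each ON/OFF channel would pass through such a decomposition before reaching the MLG1 stage, so the neuron receives the denoised, trend-dominated signal.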

18 pages, 5346 KB  
Article
MFT-PTM: A Multisource-Fused and Temporally-Aware Framework for Evolutionary Analysis of Rare Earth Patent Topics Model
by Haofei Zhang, Jingyu Wang, Jinling Yu and Lixin Liu
Information 2026, 17(4), 345; https://doi.org/10.3390/info17040345 - 2 Apr 2026
Abstract
Rare-earth elements are critical to a wide range of high-technology applications, and analyzing patents involving rare-earth elements is essential for understanding technological progress and innovation trends. Traditional topic models cannot fully exploit patent network structures and temporal information, limiting their ability to capture the dynamic evolution of technology topics. To overcome these limitations, we propose a novel multisource-fused framework (MFT-PTM), which integrates textual, network, and temporal features via the time-aware TemporalK-Means algorithm. Specifically, we use SciBERT to extract text embeddings, TransR to generate network embeddings, and derive temporal scalars from patent data. After fusing and reducing these features with Uniform Manifold Approximation and Projection (UMAP), we apply TemporalK-Means clustering with a time-decay mechanism to capture evolutionary trends. Experiments on 43,322 rare-earth-related patents indicate that the proposed framework outperforms traditional methods such as LDA and BERTopic in terms of topic coherence, cluster quality, and cluster separation. Furthermore, the analysis suggests a noticeable technological transition in rare-earth applications, gradually shifting from environmental catalysis toward advanced energy and biomedical domains. Overall, the framework provides a quantitative approach for integrating multisource patent information and exploring technological evolution patterns. Full article
(This article belongs to the Section Information Applications)
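The abstract does not specify the internals of TemporalK-Means, so the following is a hypothetical sketch of one plausible reading: standard k-means assignment, but with centroid updates weighted by an exponential time-decay so that newer patents dominate and clusters track topic drift. All names and parameters here are illustrative:

```python
import math
import random

def temporal_kmeans(points, times, k=2, decay=0.1, iters=20, seed=0):
    """Hypothetical time-aware k-means: Euclidean assignment, but each point
    contributes to its centroid with weight exp(-decay * age), where age is
    measured back from the newest timestamp."""
    rng = random.Random(seed)
    t_max = max(times)
    w = [math.exp(-decay * (t_max - t)) for t in times]  # recency weights
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid by plain squared distance
        labels = [min(range(k), key=lambda j: sum(
            (p - c) ** 2 for p, c in zip(pt, centroids[j])))
            for pt in points]
        # update step: recency-weighted mean of the assigned points
        for j in range(k):
            idx = [i for i, lab in enumerate(labels) if lab == j]
            if not idx:
                continue  # leave an empty cluster's centroid unchanged
            tot = sum(w[i] for i in idx)
            centroids[j] = [sum(w[i] * points[i][d] for i in idx) / tot
                            for d in range(len(points[0]))]
    return labels, centroids
```

In the paper's pipeline the `points` would be UMAP-reduced fusions of SciBERT text embeddings and TransR network embeddings, with `times` taken from patent filing dates.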
