Search Results (4,832)

Search Parameters:
Keywords = compact modeling

20 pages, 1950 KB  
Article
Anomalous Sound Detection by Fusing Spectral Enhancement and Frequency-Gated Attention
by Zhongqin Bi, Jun Jiang, Weina Zhang and Meijing Shan
Mathematics 2026, 14(3), 530; https://doi.org/10.3390/math14030530 (registering DOI) - 2 Feb 2026
Abstract
Unsupervised anomalous sound detection aims to learn acoustic features solely from the operational sounds of normal equipment and identify potential anomalies based on these features. Recent self-supervised classification frameworks based on machine ID metadata have achieved promising results, but they still face two challenges in industrial acoustic scenarios: Log-Mel spectrograms tend to weaken high-frequency details, leading to insufficient spectral characterization, and when normal sounds from different machine IDs are highly similar, classification constraints alone struggle to form clear intra-class structures and inter-class boundaries, resulting in false positives. To address these issues, this paper proposes FGASpecNet, an anomaly detection model integrating spectral enhancement and frequency-gated attention. For feature modeling, a spectral enhancement branch is designed to explicitly supplement spectral details, while a frequency-gated attention mechanism highlights key frequency bands and temporal intervals conditioned on temporal context. Regarding loss design, a joint training strategy combining classification loss and metric learning loss is adopted. Multi-center prototypes enhance intra-class compactness and inter-class separability, improving detection performance in scenarios with similar machine IDs. Experimental results on the DCASE 2020 Challenge Task 2 for anomalous sound detection demonstrate that FGASpecNet achieves 95.04% average AUC and 89.68% pAUC, validating the effectiveness of the proposed approach. Full article
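
As a rough illustration of the frequency-gated attention idea described in this abstract (per-band weights predicted from temporal context and applied to a log-Mel spectrogram), the sketch below uses a GRU context encoder and a sigmoid gate; the layer sizes and the class name FrequencyGate are assumptions, not the authors' FGASpecNet implementation.

```python
# Minimal frequency-gated attention sketch: a temporal-context encoder predicts
# a per-frame, per-band gate in [0, 1] that reweights a log-Mel spectrogram.
import torch
import torch.nn as nn

class FrequencyGate(nn.Module):
    def __init__(self, n_mels: int, hidden: int = 64):
        super().__init__()
        self.context = nn.GRU(input_size=n_mels, hidden_size=hidden, batch_first=True)
        self.to_gate = nn.Linear(hidden, n_mels)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, time, n_mels)
        ctx, _ = self.context(spec)              # temporal context per frame
        gate = torch.sigmoid(self.to_gate(ctx))  # per-band attention weights
        return spec * gate                       # emphasize informative bands

x = torch.randn(8, 313, 128)                     # 8 clips, 313 frames, 128 mel bands
print(FrequencyGate(128)(x).shape)               # torch.Size([8, 313, 128])
```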

21 pages, 3364 KB  
Article
Modeling the Performance of Glass-Cover-Free Parabolic Trough Collector Prototypes for Solar Water Disinfection in Rural Off-Grid Communities
by Fernando Aricapa, Jorge L. Gallego, Alejandro Silva-Cortés, Claudia Díaz-Mendoza and Jorgelina Pasqualino
Physchem 2026, 6(1), 9; https://doi.org/10.3390/physchem6010009 (registering DOI) - 2 Feb 2026
Abstract
In regions with abundant solar energy, solar water disinfection (SODIS) offers a sustainable strategy to improve drinking water access, especially in rural, off-grid communities. This study presents a numerical modeling approach to assess the thermal and microbial disinfection performance of glass-free parabolic trough collectors (PTCs). The model integrates geometric sizing, one-dimensional thermal energy balance, and first-order microbial inactivation kinetics, supported by optical simulations in SolTRACE 3.0. Simulations applied to a representative case in the Colombian Caribbean (Gambote, Bolívar) highlight the influence of rim angle, focal length, and optical properties on system efficiency. Results show that compact PTCs can achieve fluid temperatures above 70 °C and effective pathogen inactivation within short exposure times. Sensitivity analysis identifies key geometric and environmental factors that optimize performance under variable conditions. The model provides a practical tool for guiding the design and local adaptation of SODIS systems, supporting decentralized, low-cost water treatment solutions aligned with sustainable development goals. Furthermore, it offers a framework for future assessments of PTC implementations in different climatic scenarios. Full article
(This article belongs to the Section Thermochemistry)
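
The model described here couples a one-dimensional thermal energy balance with first-order inactivation kinetics; the toy Euler integration below shows that coupling in its simplest form. The collector gain, loss coefficient, water mass, and rate-constant law are placeholder assumptions, not the paper's calibrated parameters.

```python
# Coupled energy balance and first-order microbial inactivation, integrated with
# a simple forward-Euler loop. All numerical values are illustrative only.
import math

def simulate(minutes=120, dt=1.0):
    T = 25.0                      # water temperature, degC
    N = 1.0                       # surviving pathogen fraction
    m_cp = 4186.0 * 2.0           # 2 kg of water, J/K
    q_gain = 200.0                # absorbed solar power, W (assumed)
    UA = 4.0                      # overall heat-loss coefficient, W/K (assumed)
    for _ in range(int(minutes * 60 / dt)):
        T += (q_gain - UA * (T - 25.0)) / m_cp * dt                     # dT/dt
        k = 0.002 * math.exp(0.15 * (T - 45.0)) if T > 45.0 else 0.0    # 1/s, assumed
        N *= math.exp(-k * dt)                                          # dN/dt = -k N
    return T, N

print(simulate())   # temperature above 70 degC and N driven far below 1 after 2 h
```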

34 pages, 6747 KB  
Article
Lightweight Semantic Segmentation for Fermentation Foam Monitoring: A Comparative Study of U-Net, DeepLabV3+, Fast-SCNN, and SegNet
by Maksym Vihuro, Andriy Malyar, Grzegorz Litawa, Kamila Kluczewska-Chmielarz, Tatiana Konrad and Piotr Migo
Appl. Sci. 2026, 16(3), 1487; https://doi.org/10.3390/app16031487 - 2 Feb 2026
Abstract
This study aims to identify an effective neural network architecture for the task of semantic segmentation of the surface of beer wort at the stage of primary fermentation, using deep learning methodologies. Four contemporary architectures were evaluated and contrasted. The following networks are presented in both baseline and optimized forms: U-Net, DeepLabV3+, Fast-SCNN, and SegNet. The models were trained on a dataset of images depicting real beer surfaces at the primary fermentation stage. This was followed by the validation of the models using key metrics, including pixel classification accuracy, Mean Intersection over Union (mIoU), Dice Coefficient, inference time per image, and Graphics Processing Unit (GPU) resource utilization. Results indicate that the optimized U-Net achieved the optimal balance between performance and efficiency, attaining a validation accuracy of 88.85%, mIoU of 76.72%, and a Dice score of 86.71%. With an inference time of 49.5 milliseconds per image, coupled with minimal GPU utilization (18%), the model proves suitable for real-time deployment in production environments. Conversely, complex architectures, such as DeepLabV3+, did not yield the anticipated benefits, thereby underscoring the viability of utilizing compact models for highly specialized industrial tasks. This study establishes a novel quantitative metric for the assessment of fermentation. This is based on the characteristics of the foam surface and thus offers an objective alternative to traditional subjective inspections. The findings emphasize the potential of adapting optimized deep learning architectures to quality control tasks within the food industry, particularly in the brewing sector, and they pave the way for further integration into automated computer vision systems. Full article
(This article belongs to the Special Issue Advances in Machine Vision for Industry and Agriculture)
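
Since the comparison is ranked on mIoU and Dice, a minimal reference implementation of those two metrics for binary foam/background masks is sketched below; this is the generic formulation, not the authors' evaluation code.

```python
# IoU and Dice for a single binary mask pair; eps avoids division by zero.
import numpy as np

def iou_and_dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = (inter + eps) / (union + eps)
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    return float(iou), float(dice)

pred = np.zeros((4, 4), dtype=int); pred[:2] = 1   # predicted foam region
gt = np.zeros((4, 4), dtype=int); gt[:3] = 1       # ground-truth foam region
print(iou_and_dice(pred, gt))                      # (~0.667, ~0.8); Dice >= IoU
```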

16 pages, 2861 KB  
Article
Parametric Model Order Reduction for Large-Scale Circuit Models Using Extended and Asymmetric Extended Krylov Subspace
by Chrysostomos Chatzigeorgiou, Pavlos Stoikos, George Floros, Nestor Evmorfopoulos and George Stamoulis
Electronics 2026, 15(3), 640; https://doi.org/10.3390/electronics15030640 (registering DOI) - 2 Feb 2026
Abstract
The increasing complexity of modern Very Large-Scale Integration (VLSI) circuits, combined with unavoidable variations in physical and manufacturing parameters, poses significant challenges for accurate and efficient circuit simulation. Parametric model order reduction (PMOR) provides a viable solution by enabling the construction of compact reduced-order models that remain valid across a prescribed parameter space. However, the computational cost of generating such models can become prohibitive for large-scale circuits, particularly when high-fidelity projection subspaces are required. In this work, we present an efficient PMOR framework based on the Asymmetric Extended Krylov Subspace (AEKS). The proposed approach exploits structural sparsity imbalances between system matrices to guide the subspace expansion toward computationally favorable directions, thereby significantly reducing the cost of repeated linear system solves. By integrating AEKS within a concatenation-of-basis PMOR strategy, this method enables the rapid construction of accurate parametric reduced-order models for large-scale circuit systems. The proposed AEKS-PMOR framework is evaluated on industrial power distribution network benchmarks, where it demonstrates substantial reductions in model construction time compared to conventional EKS-based PMOR, while maintaining high approximation accuracy over the entire parameter space. Full article
(This article belongs to the Special Issue Modern Circuits and Systems Technologies (MOCAST 2024))
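
For orientation, the sketch below builds a small extended Krylov-type block basis and performs a Galerkin projection, which is the general pattern the EKS/AEKS framework above refines; the asymmetric expansion and the exploitation of circuit sparsity are not reproduced, and the matrices are random stand-ins.

```python
# Projection-based model order reduction with a two-sided {A^-1 B, B, A B, A^-2 B}
# block basis, followed by Galerkin projection of the system matrices.
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 2
A = np.diag(np.linspace(1.0, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))

blocks = [np.linalg.solve(A, B), B, A @ B,
          np.linalg.solve(A, np.linalg.solve(A, B))]
V, _ = np.linalg.qr(np.hstack(blocks))      # orthonormal basis of the subspace

A_r, B_r = V.T @ A @ V, V.T @ B             # reduced-order model
print(A.shape, "->", A_r.shape)             # (200, 200) -> (8, 8)
```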

29 pages, 1055 KB  
Article
An Interpretable Multi-Dataset Learning Framework for Breast Cancer Prediction Using Clinical and Biomedical Tabular Data
by Muhammad Ateeb Ather, Abdullah, Zulaikha Fatima, José Luis Oropeza Rodríguez and Grigori Sidorov
Computers 2026, 15(2), 97; https://doi.org/10.3390/computers15020097 (registering DOI) - 2 Feb 2026
Abstract
Despite advances in treatment and management, breast cancer remains a leading cause of mortality among women worldwide, underscoring the need for reliable diagnostic assistance tools that can detect the disease at an early stage. This work proposes a prediction framework and, alongside it, performs a comprehensive comparative assessment of traditional machine learning, deep learning, and transformer-based models for breast cancer prediction in a multi-dataset environment. To increase diversity and reduce dataset bias, three datasets are combined: breast cancer biopsy morphology (WDBC), biochemical and metabolic properties (Coimbra), and cytological attributes (WBCO). This exposes the models to heterogeneous feature domains and allows robustness to be evaluated under distributional variation. Building on this comparison, a hybrid FT-Transformer-Attention-LSTM-SVM architecture is designed for tabular biomedical data. The proposed model achieves 99.90% accuracy on the primary test set, a mean accuracy of 99.56% under 10-fold cross-validation, and 98.50% accuracy on the WBCO test set, with p < 0.0001 in the paired two-sample t-test comparisons. Feature-importance analysis with SHAP and LIME shows that the model's decisions rely on clinically meaningful attributes such as radius, concavity, perimeter, compactness, and texture, and an ablation study confirms the contribution of the FT-Transformer component. Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain (3rd Edition))
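
The evaluation protocol reported here (10-fold cross-validation plus a paired two-sample t-test between models) can be reproduced in miniature with stand-in classifiers, as sketched below on the WDBC data bundled with scikit-learn; the hybrid FT-Transformer-Attention-LSTM-SVM model itself is not reproduced.

```python
# 10-fold cross-validation of two stand-in models and a paired t-test on the
# per-fold accuracies, mirroring the comparison protocol in the abstract.
from scipy.stats import ttest_rel
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)           # WDBC, one of the three datasets
acc_a = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=10)
acc_b = cross_val_score(LogisticRegression(max_iter=5000), X, y, cv=10)
t_stat, p_value = ttest_rel(acc_a, acc_b)             # paired over the same folds
print(acc_a.mean(), acc_b.mean(), p_value)
```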

22 pages, 1267 KB  
Article
Application of a Hybrid Explainable ML–MCDM Approach for the Performance Optimisation of Self-Compacting Concrete Containing Crumb Rubber and Calcium Carbide Residue
by Musa Adamu, Shrirang Madhukar Choudhari, Ashwin Raut, Yasser E. Ibrahim and Sylvia Kelechi
J. Compos. Sci. 2026, 10(2), 76; https://doi.org/10.3390/jcs10020076 (registering DOI) - 2 Feb 2026
Abstract
The combined incorporation of crumb rubber (CR) and calcium carbide residue (CCR) in self-compacting concrete (SCC) induces competing and nonlinear effects on its fresh and hardened properties, making the simultaneous optimisation of workability, strength, durability, and stability challenging. CR reduces density and enhances deformability and flow stability but adversely affects strength, whereas CCR improves particle packing, cohesiveness, and early-age strength up to an optimal replacement level. To systematically address these trade-offs, this study proposes an integrated multi-criteria decision-making (MCDM)–explainable machine learning–global optimisation framework for sustainable SCC mix design. A composite performance score encompassing fresh, mechanical, durability, and thermal indicators is constructed using a weighted MCDM scheme and learned through surrogate machine-learning models. Three learners—glmnet, ranger, and xgboost—are tuned using v-fold cross-validation, with xgboost demonstrating the highest predictive fidelity. Given the limited experimental dataset, bootstrap out-of-bag validation is employed to ensure methodological robustness. Model-agnostic interpretability, including permutation importance, SHAP analysis, and partial-dependence plots, provides physical transparency and reveals that CR and CCR exert strong yet opposing influences on the composite response, with CCR partially compensating for CR-induced strength losses through enhanced cohesiveness. Differential Evolution (DEoptim) applied to the trained surrogate identifies optimal material proportions within a continuous design space, favouring mixes with 5–10% CCR and limited CR content. Among the evaluated mixes, 0% CR–5% CCR delivers the best overall performance, while 20% CR–5% CCR offers a balanced strength–ductility compromise. Overall, the proposed framework provides a transparent, interpretable, and scalable data-driven pathway for optimising SCC incorporating circular materials under competing performance requirements. Full article
(This article belongs to the Special Issue Sustainable Cementitious Composites)
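
The optimisation stage pairs a fitted surrogate with Differential Evolution (DEoptim in R); the Python sketch below shows the same pattern with an invented quadratic surrogate, so the response surface, bounds, and optimum are illustrative rather than the study's composite performance score.

```python
# Differential Evolution over a toy surrogate of the composite mix score,
# searching crumb rubber (CR) and calcium carbide residue (CCR) contents.
from scipy.optimize import differential_evolution

def negative_score(x):
    cr, ccr = x                                   # CR %, CCR % (replacement levels)
    # Toy response: CCR helps up to ~7.5%, CR trades strength for deformability.
    return -(1.0 - 0.03 * cr - 0.02 * (ccr - 7.5) ** 2)

result = differential_evolution(negative_score, bounds=[(0, 20), (0, 15)], seed=1)
print(result.x, -result.fun)    # optimum at low CR and mid-range CCR, score ~1.0
```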

22 pages, 967 KB  
Article
GRU-Based Short-Term Forecasting for Microgrid Operation: Modeling and Simulation Using Simulink
by Yu-Kuei Liu, Goran Rafajlovski and Saiful Islam
Algorithms 2026, 19(2), 116; https://doi.org/10.3390/a19020116 - 2 Feb 2026
Abstract
This paper examines how hour-ahead forecasting uncertainty propagates to microgrid operation under intermittent renewable generation. Using hourly public data for Ontario and focusing on the FSA K0K in 2018, we evaluate four representative months (January, April, July, and December) to capture seasonal dynamics. We benchmark three univariate forecasting approaches (GRU, LSTM, and a persistence baseline) for load demand, photovoltaic (PV) generation, and wind generation under a consistent 24-to-1 input setup. We report point-forecast metrics (RMSE, MAE, and R²) and also provide 90% prediction intervals (PI90) using conformal calibration to quantify uncertainty. To assess downstream impact, forecasts are coupled with a dual-branch MATLAB/Simulink microgrid model. One branch uses True profiles and the other uses forecast-driven Pred inputs, while both branches share the same rule-based EMS and BESS constraints. System performance is evaluated using time-series comparisons and monthly key performance indicators (KPIs) covering grid import and export, grid peak power, battery throughput, and state-of-charge (SoC) statistics. We further report an illustrative cost sensitivity under a flat tariff and a throughput-based degradation proxy. Results show that forecasting performance is target-dependent. GRU achieves the best overall point accuracy for load and PV, whereas wind is strongly driven by short-term persistence at the one-hour horizon, and in this measurement-only setup without meteorological covariates the persistence baseline can match or outperform the deep learning models. In the microgrid simulations, Pred and True trajectories remain qualitatively consistent, and SoC-related indicators and peak power remain comparatively consistent across months. In contrast, energy-flow indicators, especially grid export and battery throughput, show larger deviations and dominate the observed cost sensitivity. Overall, the findings suggest that compact hour-ahead forecasts can be adequate to preserve operational reliability under a constraint-driven EMS, while forecast improvements mainly translate into economic efficiency gains rather than reliability-critical benefits. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
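
The PI90 intervals mentioned above come from conformal calibration; a minimal split-conformal construction around a persistence forecast on synthetic hourly data is sketched below (the GRU/LSTM forecasters and the Ontario dataset are not reproduced).

```python
# Split-conformal 90% prediction interval around a persistence baseline
# (y_hat[t] = y[t-1]) on a synthetic hourly series with a daily cycle.
import numpy as np

rng = np.random.default_rng(3)
y = 10 + np.sin(np.arange(2000) * 2 * np.pi / 24) + 0.3 * rng.standard_normal(2000)

pred, truth = y[:-1], y[1:]                   # hour-ahead persistence forecast
cal, test = slice(0, 1000), slice(1000, None)

q = np.quantile(np.abs(truth[cal] - pred[cal]), 0.90)   # calibrated half-width
lo, hi = pred[test] - q, pred[test] + q
coverage = np.mean((truth[test] >= lo) & (truth[test] <= hi))
print(f"PI90 half-width {q:.3f}, empirical coverage {coverage:.2%}")   # ~90%
```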

16 pages, 4008 KB  
Article
Novel Titanium Matrix Composite Stator Sleeve for Enhanced Efficiency in Underwater Shaftless Propulsion
by Hanghang Wang, Lina Yang, Junquan Chen, Yapeng Jiang, Xin Jiang and Jinrui Guo
J. Mar. Sci. Eng. 2026, 14(3), 290; https://doi.org/10.3390/jmse14030290 - 1 Feb 2026
Abstract
Shaftless Pump-jet Thrusters (SPTs), which integrate the propulsion motor directly with impellers, provide a compact design and high propulsion efficiency. Despite this, their performance is significantly hampered by eddy current losses in conductive stator sleeves. This study introduces Titanium Matrix Composites (TMC) as superior alternatives to conventional titanium alloys (Ti-6Al-4V, Ti64), leveraging their tailorable anisotropic electromagnetic properties to effectively suppress eddy current losses. Through simulations and experimental validation, the electromagnetic performance of an SPT equipped with a TMC stator sleeve is systematically investigated. Electromagnetic simulations predict a dramatic reduction in eddy current loss of 53.5–79.8% and an improvement in motor efficiency of 5.8–8.5% across the 1500–2900 rpm operational range compared to the Ti64 baseline. Experimental measurements on prototype motors confirm the performance advantage, demonstrating a 3.5–5.7% reduction in input power under equivalent output conditions across the same speed range. After accounting for manufacturing tolerances and control strategies, the refined model demonstrated a markedly improved agreement with the experimental results. This research conclusively establishes TMCs as a high-performance containment sleeve material, which is promising not only for SPTs but also for a broad range of canned motor applications, where an optimal balance between electromagnetic and structural performance is critical. Full article

28 pages, 11414 KB  
Article
Monitoring and Prediction of Subsidence in Mining Areas of Liaoyuan Northern New District Based on InSAR Technology
by Menghao Li, Yichen Zhang, Jiquan Zhang, Zhou Wen, Jintao Huang and Haoying Li
GeoHazards 2026, 7(1), 17; https://doi.org/10.3390/geohazards7010017 - 1 Feb 2026
Abstract
Ground subsidence in mined-out areas has irreversible impacts on residents’ lives and infrastructure, making its monitoring and prediction crucial for ensuring safety, protecting the ecological environment, and promoting sustainable development. This study employed the Small Baseline Subset Interferometric Synthetic Aperture Radar (SBAS-InSAR) technique to process Sentinel-1A satellite images of Liaoyuan’s Northern New District from August 2022 to March 2025, deriving ground deformation data. The SBAS-InSAR results were validated using unmanned aerial vehicle (UAV) measurements. Monitoring revealed deformation rates ranging from −26.80 mm/year (subsidence) to 13.12 mm/year (uplift) in the area, with a maximum cumulative subsidence of 59.59 mm observed near the Xi’an Sixth District. Based on spatiotemporal patterns, most mining-induced subsidence in the study area is in its late stage, primarily caused by progressive compaction of fractured rock masses and voids within the collapse and fracture zones. Using subsidence data from August 2022 to March 2024, three prediction models—LSTM, GRU, and TCN-GRU—were trained and subsequently applied to forecast subsidence from March 2024 to August 2025. Comparisons between the predictions and SBAS-InSAR measurements showed that all models achieved high accuracy. Among them, the TCN-GRU model yielded predictions closest to the actual values, with a correlation coefficient exceeding 0.95, validating its potential for application in time-series settlement monitoring. Full article
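
The deformation rates quoted in mm/year are, in essence, linear trends fitted to each pixel's displacement time series; the sketch below shows that step on a synthetic series (the SBAS-InSAR processing chain and the TCN-GRU predictor are not reproduced).

```python
# Fit a linear deformation rate (mm/year) to a synthetic displacement series
# sampled at a Sentinel-1-like 12-day revisit interval.
import numpy as np

days = np.arange(0, 950, 12)
disp_mm = -26.8 * days / 365.25 + np.random.default_rng(1).normal(0, 2, days.size)
rate_mm_per_year = np.polyfit(days / 365.25, disp_mm, 1)[0]
print(round(rate_mm_per_year, 2))   # close to the -26.8 mm/year generating trend
```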

20 pages, 5726 KB  
Article
Towards Practical Object Detection with Limited Data: A Feature Distillation Framework
by Wei Liu, Shi Zhang and Shouxu Zhang
J. Mar. Sci. Eng. 2026, 14(3), 289; https://doi.org/10.3390/jmse14030289 - 1 Feb 2026
Abstract
Underwater structural surface defect detection—such as identifying cavities and spalling—faces significant challenges due to complex environments, scarce annotated data, and the reliance of modern detectors on large-scale datasets. While current approaches often combine large-data training with fine-tuning or image enhancement, they still require extensive underwater samples and are typically too computationally heavy for resource-constrained robotic platforms. To address these issues, we introduce a defect detection model based on feature distillation, which achieves high detection accuracy with limited samples. We tackle three key challenges: enhancing sample diversity under data scarcity, selecting and training a baseline model that balances accuracy and efficiency, and improving lightweight model performance using augmented samples under computational constraints. By integrating a feature distillation mechanism with a sample augmentation strategy, we develop a compact detection strategy and framework that delivers notable performance gains in limited data, offering a practical and efficient solution for real-world underwater inspection. Full article
(This article belongs to the Special Issue Intelligent Measurement and Control System of Marine Robots)
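
A generic feature-distillation loss of the kind this framework builds on is sketched below: the student's intermediate feature map is passed through a 1x1 adapter and regressed onto the teacher's. Shapes, channel counts, and the adapter are illustrative; the authors' detector and augmentation strategy are not shown.

```python
# Feature-distillation term added to the usual detection loss during training.
import torch
import torch.nn as nn
import torch.nn.functional as F

adapter = nn.Conv2d(128, 256, kernel_size=1)   # match student channels to teacher's

def distillation_loss(student_feat, teacher_feat, alpha=0.5):
    # student_feat: (B, 128, H, W); teacher_feat: (B, 256, H, W), frozen teacher
    return alpha * F.mse_loss(adapter(student_feat), teacher_feat.detach())

s = torch.randn(2, 128, 40, 40)
t = torch.randn(2, 256, 40, 40)
print(distillation_loss(s, t))
```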

19 pages, 1787 KB  
Article
Event-Based Machine Vision for Edge AI Computing
by Paul K. J. Park, Junseok Kim, Juhyun Ko and Yeoungjin Chang
Sensors 2026, 26(3), 935; https://doi.org/10.3390/s26030935 (registering DOI) - 1 Feb 2026
Abstract
Event-based sensors provide sparse, motion-centric measurements that can reduce data bandwidth and enable always-on perception on resource-constrained edge devices. This paper presents an event-based machine vision framework for smart-home AIoT that couples a Dynamic Vision Sensor (DVS) with compute-efficient algorithms for (i) human/object detection, (ii) 2D human pose estimation, (iii) hand posture recognition for human–machine interfaces. The main methodological contributions are timestamp-based, polarity-agnostic recency encoding that preserves moving-edge structure while suppressing static background, and task-specific network optimizations (architectural reduction and mixed-bit quantization) tailored to sparse event images. With a fixed downstream network, the recency encoding improves action recognition accuracy over temporal accumulation (0.908 vs. 0.896). In a 24 h indoor monitoring experiment (640 × 480), the raw DVS stream is about 30× smaller than conventional CMOS video and remains about 5× smaller after standard compression. For human detection, the optimized event processing reduces computation from 5.8 G to 81 M FLOPs and runtime from 172 ms to 15 ms (more than 11× speed-up). For pose estimation, a pruned HRNet reduces model size from 127 MB to 19 MB and inference time from 70 ms to 6 ms on an NVIDIA Titan X while maintaining a comparable accuracy (mAP from 0.95 to 0.94) on MS COCO 2017 using synthetic event streams generated by an event simulator. For hand posture recognition, a compact CNN achieves 99.19% recall and 0.0926% FAR with 14.31 ms latency on a single i5-4590 CPU core using 10-frame sequence voting. These results indicate that event-based sensing combined with lightweight inference is a practical approach to privacy-friendly, real-time perception under strict edge constraints. Full article
(This article belongs to the Special Issue Next-Generation Edge AI in Wearable Devices)
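
The timestamp-based, polarity-agnostic recency encoding described above can be sketched as follows: each pixel stores how recently it last fired, so moving edges stay bright while static background decays. The exponential decay constant and the 640x480 resolution are assumptions for illustration.

```python
# Convert a list of DVS events (t, x, y) into a recency image in [0, 1],
# ignoring polarity: newer events map to values near 1, stale pixels fade.
import numpy as np

def recency_frame(events, t_now, shape=(480, 640), tau=0.05):
    last_ts = np.zeros(shape)                    # most recent event time per pixel
    for t, x, y in events:
        last_ts[int(y), int(x)] = max(last_ts[int(y), int(x)], t)
    frame = np.exp(-(t_now - last_ts) / tau)     # exponential recency weighting
    frame[last_ts == 0] = 0.0                    # pixels that never fired
    return frame

evts = np.array([[0.98, 100, 50], [0.99, 101, 50], [0.50, 300, 200]])
print(recency_frame(evts, t_now=1.0)[[50, 50, 200], [100, 101, 300]])
```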

29 pages, 4838 KB  
Article
Braking Force Control for Direct-Drive Brake Units Based on Data-Driven Adaptive Control
by Chunrong He, Xiaoxiang Gong, Haitao He, Huaiyue Zhang, Yu Liu, Haiquan Ye and Chunxi Chen
Machines 2026, 14(2), 163; https://doi.org/10.3390/machines14020163 - 1 Feb 2026
Abstract
To address the increasing demands for faster response and higher control accuracy in the braking systems of electric and intelligent vehicles, a novel brake-by-wire actuation unit and its braking force control methods are proposed. The braking unit employs a permanent-magnet linear motor as the driving actuator and utilizes the lever-based force-amplification mechanism to directly generate the caliper force. Compared with the “rotary motor and motion conversion mechanism” configuration in other electromechanical braking systems, the proposed scheme significantly simplifies the force-transmission path, reduces friction and structural complexity, thereby enhancing the overall dynamic response and control accuracy. Due to the strong nonlinearity, time-varying parameters, and significant thermal effects of the linear motor, the braking force is prone to drift. As a result, achieving accurate force control becomes challenging. This paper proposes a model-free adaptive control method based on compact-form dynamic linearization. This method does not require an accurate mathematical model. It achieves dynamic linearization and direct control of complex nonlinear systems by online estimation of pseudo partial derivatives. Finally, the proposed control method is validated through comparative simulations and experiments against the fuzzy PID controller. The results show that the model-free adaptive control method exhibits significantly faster braking force response, smaller steady-state error, and stronger robustness against external disturbances. It enables faster dynamic response and higher braking force tracking accuracy. The study demonstrates that the proposed brake-by-wire scheme and its control method provide a potentially new approach for next-generation high-performance brake-by-wire systems. Full article
(This article belongs to the Section Vehicle Engineering)
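
Compact-form dynamic linearization MFAC has a well-known generic form: the pseudo partial derivative is estimated online from the latest input/output increments and used directly in the control update. The loop below runs that form on a toy nonlinear plant; the plant, tuning constants, and setpoint are assumptions, not the paper's brake-unit model or gains.

```python
# Compact-form dynamic linearization MFAC on a toy SISO plant: estimate the
# pseudo partial derivative phi online, then update the control input.
import math

eta, mu, rho, lam = 0.8, 1.0, 0.6, 2.0     # estimator and controller gains (assumed)
phi, u_prev, du_prev, y_prev = 1.0, 0.0, 0.0, 0.0
y, target = 0.0, 1.0                       # normalized clamping-force setpoint

for k in range(200):
    # Pseudo-partial-derivative estimate from the last input/output increments.
    phi += eta * du_prev / (mu + du_prev**2) * ((y - y_prev) - phi * du_prev)
    u = u_prev + rho * phi / (lam + phi**2) * (target - y)
    du_prev, u_prev, y_prev = u - u_prev, u, y
    # Unknown nonlinear plant standing in for the linear-motor brake unit.
    y = 0.6 * y + 0.5 * math.tanh(u)

print(round(y, 3))                         # settles near the setpoint of 1.0
```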

63 pages, 6866 KB  
Review
Efficient Feature Extraction for EEG-Based Classification: A Comparative Review of Deep Learning Models
by Louisa Hallal, Jason Rhinelander, Ramesh Venkat and Aaron Newman
AI 2026, 7(2), 50; https://doi.org/10.3390/ai7020050 (registering DOI) - 1 Feb 2026
Abstract
Feature extraction (FE) is an important step in electroencephalogram (EEG)-based classification for brain–computer interface (BCI) systems and neurocognitive monitoring. However, the dynamic and low-signal-to-noise nature of EEG data makes achieving robust FE challenging. Recent deep learning (DL) advances have offered alternatives to traditional manual feature engineering by enabling end-to-end learning from raw signals. In this paper, we present a comparative review of 88 DL models published over the last decade, focusing on EEG FE. We examine convolutional neural networks (CNNs), Transformer-based mechanisms, recurrent architectures including recurrent neural networks (RNNs) and long short-term memory (LSTM), and hybrid models. Our analysis focuses on architectural adaptations, computational efficiency, and classification performance across EEG tasks. Our findings reveal that efficient EEG FE depends more on architectural design than model depth. Compact CNNs offer the best efficiency–performance trade-offs in data-limited settings, while Transformers and hybrid models improve long-range temporal representation at a higher computational cost. Thus, the field is shifting toward lightweight hybrid designs that balance local FE with global temporal modeling. This review aims to guide BCI developers and future neurotechnology research toward efficient, scalable, and interpretable EEG-based classification frameworks. Full article

16 pages, 5134 KB  
Article
Development of a Compact Laser Collimating and Beam-Expanding Telescope for an Integrated ⁸⁷Rb Atomic Fountain Clock
by Fan Liu, Hui Zhang, Yang Bai, Jun Ruan, Shaojie Yang and Shougang Zhang
Photonics 2026, 13(2), 142; https://doi.org/10.3390/photonics13020142 - 31 Jan 2026
Abstract
In the rubidium-87 atomic fountain clock, the laser collimating and beam-expanding telescope plays a key role in atomic cooling and manipulation, as well as in realizing the cold-atom fountain. To address the bulkiness of conventional laser collimating and beam-expanding telescopes, which limits system integration and miniaturization, we design and implement a compact laser collimating and beam-expanding telescope. The design employs a Galilean beam-expanding optical path to shorten the optical path length. Combined with optical modeling and optimization, this approach reduces the mechanical length of the telescope by approximately 50%. We present the mechanical structure of a five-degree-of-freedom (5-DOF) adjustment mechanism for the light source and the associated optical elements and specify the corresponding tolerance ranges to ensure their precise alignment and mounting. Based on this 5-DOF adjustment mechanism, we further propose a method for tuning the output beam characteristics, enabling precise and reproducible control of the emitted beam. The experimental results demonstrate that, after adjustment, the divergence angle of the output beam is better than 0.25 mrad, the coaxiality is better than 0.3 mrad, the centroid offset relative to the mechanical axis is less than 0.1 mm, and the output beam diameter is approximately 35 mm. Furthermore, long-term monitoring over 45 days verified the system’s robustness, maintaining fractional power fluctuations within ±1.2% without manual realignment. Compared with the original telescope, all of these beam characteristics are significantly improved. The proposed telescope therefore has broad application prospects in integrated atomic fountain clocks, atomic gravimeters, and cold-atom interferometric gyroscopes. Full article
(This article belongs to the Special Issue Progress in Ultra-Stable Laser Source and Future Prospects)
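
The Galilean layout mentioned above removes the internal focus and, for the same magnification and output lens, shortens the lens separation by twice the magnitude of the negative input lens's focal length; the quick arithmetic below uses placeholder focal lengths (the roughly 50% length reduction reported comes from the full optical and mechanical redesign, not from this difference alone).

```python
# Keplerian vs. Galilean beam expander: same magnification, different length.
def expander(f_in_mm, f_out_mm):
    magnification = abs(f_out_mm / f_in_mm)
    length_mm = f_out_mm + f_in_mm        # lens separation = algebraic sum of focals
    return magnification, length_mm

print("Keplerian:", expander(+40.0, 280.0))   # (7.0, 320.0): both lenses positive
print("Galilean: ", expander(-40.0, 280.0))   # (7.0, 240.0): negative input lens
```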

28 pages, 12486 KB  
Article
Sustainability-Focused Evaluation of Self-Compacting Concrete: Integrating Explainable Machine Learning and Mix Design Optimization
by Abdulaziz Aldawish and Sivakumar Kulasegaram
Appl. Sci. 2026, 16(3), 1460; https://doi.org/10.3390/app16031460 - 31 Jan 2026
Abstract
Self-compacting concrete (SCC) offers significant advantages in construction due to its superior workability; however, optimizing SCC mixture design remains challenging because of complex nonlinear material interactions and increasing sustainability requirements. This study proposes an integrated, sustainability-oriented computational framework that combines machine learning (ML), SHapley Additive exPlanations (SHAP), and multi-objective optimization to improve SCC mixture design. A large and heterogeneous publicly available global SCC dataset, originally compiled from 156 independent peer-reviewed studies and further enhanced through a structured three-stage data augmentation strategy, was used to develop robust predictive models for key fresh-state properties. An optimized XGBoost model demonstrated strong predictive accuracy and generalization capability, achieving coefficients of determination of R² = 0.835 for slump flow and R² = 0.828 for T50 time, with reliable performance on independent industrial SCC datasets. SHAP-based interpretability analysis identified the water-to-binder ratio and superplasticizer dosage as the dominant factors governing fresh-state behavior, providing physically meaningful insights into mixture performance. A cradle-to-gate life cycle assessment was integrated within a multi-objective genetic algorithm to simultaneously minimize embodied CO₂ emissions and material costs while satisfying workability constraints. The resulting Pareto-optimal mixtures achieved up to 3.9% reduction in embodied CO₂ emissions compared to conventional SCC designs without compromising performance. External validation using independent industrial data confirms the practical reliability and transferability of the proposed framework. Overall, this study presents an interpretable and scalable AI-driven approach for the sustainable optimization of SCC mixture design. Full article
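
As a pointer to how the XGBoost-plus-SHAP step described above is typically wired together, the sketch below fits a regressor on synthetic mix-design data and ranks features by mean absolute SHAP value; the feature names, the toy response, and the resulting ranking are assumptions, not the study's 156-source dataset or its findings.

```python
# XGBoost surrogate for slump flow on synthetic data, interpreted with SHAP.
import numpy as np
import pandas as pd
import shap
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "w_b_ratio": rng.uniform(0.30, 0.50, 500),
    "superplasticizer": rng.uniform(2.0, 12.0, 500),
    "filler": rng.uniform(50, 250, 500),
})
slump_flow = 300 + 900 * X["w_b_ratio"] + 10 * X["superplasticizer"] \
             + rng.normal(0, 10, 500)

model = XGBRegressor(n_estimators=300, max_depth=3).fit(X, slump_flow)
shap_values = shap.TreeExplainer(model).shap_values(X)
print(dict(zip(X.columns, np.abs(shap_values).mean(axis=0))))  # w/b ratio ranks first
```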