Search Results (48,375)

Search Parameters:
Keywords = automation

18 pages, 672 KB  
Systematic Review
Carbonation and Chloride Attack in 3D-Printed Cementitious Materials: A Systematic Durability Review
by Rui Reis, Francisca Aroso, Aires Camões, Filipe Brandão, Bruno Figueiredo and Paulo J. S. Cruz
Sci 2026, 8(4), 93; https://doi.org/10.3390/sci8040093 - 20 Apr 2026
Abstract
3D Concrete Printing (3DCP) is increasingly explored as a digital fabrication technology offering design freedom, automation, and material efficiency. Nevertheless, its application in reinforced and long-life structures remains limited by insufficient understanding and poor comparability of durability performance, as previous reviews have not systematically linked methodologies to transport-related results. This study presents a systematic and critical review of carbonation and chloride ingress in 3DCP cementitious materials, conducted in accordance with the PRISMA methodology. Following a structured database search and two-stage screening process, the selected studies are subjected to qualitative analysis. Experimental methodologies, specimen typologies, exposure conditions, and attack directions are compiled and qualitatively compared. The review highlights pronounced methodological heterogeneity and frequent under-reporting of key parameters, particularly attack direction, sealing conditions, CO2 concentration, and indicator methods, limiting cross-study comparison. Despite these limitations, consistent qualitative trends are identified. Printed specimens generally exhibit poorer durability performance than cast specimens, while cold joints are associated with increased penetration depth and result dispersion. Directional effects are non-negligible, although they are systematically addressed in only a limited number of studies. Overall, the findings emphasise the critical role of process-induced features and the need for harmonised testing methods to enable reliable durability assessment.
(This article belongs to the Section Materials Science)
20 pages, 1296 KB  
Article
CATS: Context-Aware Traffic Signal Control with Road Navigation Service for Connected and Automated Vehicles
by Yiwen Shen
Electronics 2026, 15(8), 1747; https://doi.org/10.3390/electronics15081747 - 20 Apr 2026
Abstract
Urban intersection traffic signals play a crucial role in managing traffic flow and ensuring road safety. However, traditional actuated signal controllers make phase-switching decisions based on limited local traffic information, without leveraging network-wide context from navigation services. In this paper, we propose CATS, a Context-Aware Traffic Signal control system that jointly optimizes intersection signal control and road navigation for Connected and Automated Vehicles (CAVs). CATS integrates two key components: a Best-Combination CTR (BC-CTR) scheme and the Self-Adaptive Interactive Navigation Tool (SAINT). BC-CTR enhances the original Cumulative Travel-Time Responsive (CTR) scheme through a two-step selection procedure: it first identifies the phase with the highest cumulative travel time (CTT) and then selects the compatible phase combination with the greatest group CTT, providing an explicit improvement over the single-combination evaluation of the original CTR that allows for a more accurate response to real-time intersection demand. SAINT provides congestion-aware route guidance via a congestion-contribution step function, directing vehicles away from congested segments while signal timings simultaneously adapt to incoming traffic. Under a 100% CAV penetration setting, SUMO-based simulations across moderate-to-heavy traffic conditions (vehicle inter-arrival times of 5 to 9 s) show that CATS reduces the mean end-to-end travel time by up to 23.72% and improves the throughput by up to 93.19% over three baselines (fixed-time navigation with enhanced signal control, congestion-aware navigation with original signal control, and fixed-time navigation with original signal control), confirming that the co-design of navigation and signal control produces complementary benefits.
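The two-step BC-CTR selection the abstract describes can be sketched in a few lines. Everything below is illustrative: the phase names, the compatibility table, and the CTT values are assumptions for the sketch, not taken from the paper.

```python
# Hedged sketch of a two-step phase selection in the spirit of BC-CTR:
# step 1 picks the phase with the highest cumulative travel time (CTT),
# step 2 picks the compatible combination with the greatest group CTT.

COMPATIBLE = {  # hypothetical non-conflicting phase pairings at a 4-leg intersection
    "NS_through": ["NS_left"],
    "NS_left": ["NS_through"],
    "EW_through": ["EW_left"],
    "EW_left": ["EW_through"],
}

def select_phase_combination(ctt):
    """ctt: dict mapping phase name -> cumulative travel time (s)."""
    lead = max(ctt, key=ctt.get)                       # step 1: highest-CTT phase
    return max(
        ([lead, partner] for partner in COMPATIBLE[lead]),
        key=lambda combo: sum(ctt[p] for p in combo),  # step 2: greatest group CTT
    )

demand = {"NS_through": 120.0, "NS_left": 45.0,
          "EW_through": 90.0, "EW_left": 70.0}
print(select_phase_combination(demand))  # ['NS_through', 'NS_left']
```

With these toy numbers the north-south through phase dominates, so its compatible pairing is served next; a real controller would recompute this at every decision point from connected-vehicle reports.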
33 pages, 1296 KB  
Article
The Hidden Burden of Keywords: Cognitive Load and Language Differences in Novice Python Programming
by Raina Mason and Carolyn Seton
Educ. Sci. 2026, 16(4), 657; https://doi.org/10.3390/educsci16040657 - 20 Apr 2026
Abstract
Keyword recognition represents a fundamental skill in programming, yet little research has examined how novices develop this ability or how language background affects keyword learning. This study investigated cognitive load and keyword recognition accuracy amongst 27 novice programming students (15 English as an additional language [EAL] and 12 English as a native language [ENL]) during an intensive six-week Python course. Students completed a keyword recognition task at Weeks 1 and 6, identifying and classifying 23 Python keywords while reporting cognitive load using the Klepsch instrument. The results revealed no significant improvement in identification accuracy (Week 1: 39.80%; Week 6: 48.16%) or classification accuracy (40% at both time points) despite intensive instruction. The reported extraneous cognitive load significantly increased from Week 1 to Week 6 (p = 0.039, d = 0.99), contradicting Cognitive Load Theory predictions that schema automation reduces extraneous load with experience. EAL students reported a significantly higher intrinsic cognitive load (p = 0.030, d = 0.91) and a marginally lower keyword identification accuracy (p = 0.058, d = −0.54) than ENL students. All students (100%) who identified keywords also missed duplicate instances, indicating universal incomplete processing. These findings challenge assumptions about schema development timelines in programming education and document measurable linguistic barriers that persist even after substantial instruction, with implications for inclusive computing pedagogy.
(This article belongs to the Special Issue Cognitive and Developmental Psychology in STEM Education)
29 pages, 2055 KB  
Article
Resilience Assessment and Enhancement Strategy for Transmission Lines Based on Distributed Fibre Optic Sensing
by Menghao Zhang, Qingwu Gong, Xiuyi Li and Hui Qiao
Electronics 2026, 15(8), 1739; https://doi.org/10.3390/electronics15081739 - 20 Apr 2026
Abstract
Typhoon-induced wind loads pose severe threats to transmission systems. However, existing resilience assessment approaches typically rely on sparse meteorological station data and assume spatially uniform wind speed distributions along transmission corridors, which fail to capture the span-level spatial difference of wind fields. To address this limitation, this paper proposes a distributed optical fiber sensing (DOFS)-driven span-level resilience assessment and hardening optimization framework for transmission networks. First, a phase-sensitive optical time domain reflectometry (Φ-OTDR)-based distributed optical fiber sensing system is employed, utilizing optical fibers embedded in existing OPGW cables as sensing media. By capturing vibration responses of the fiber induced by wind–structure interaction, real-time spatiotemporal wind speed sequences at the individual span level are reconstructed through signal processing and inversion algorithms, providing high-spatial-resolution environmental input data for resilience evaluation. Second, a span-level failure probability quantification method is established using a load–strength interference model. On this basis, a resilience evaluation framework—“span-level asset damage cost—line-level critical corridor identification—system-level load shedding assessment”—is constructed, enabling cross-scale resilience quantification from component damage to system-level performance degradation. Third, a span-level gradient hardening optimization model is developed. By adopting a scenario pre-calculation and iterative updating strategy, coordinated solving of reinforcement decisions and failure scenarios is achieved, thereby maximizing resilience enhancement benefits. The proposed framework is validated using DOFS-measured wind speed data collected from a 500 kV transmission line along the Fujian coast during three real typhoon events—Typhoon Shantuo, Typhoon Trami, and Typhoon Koinu—supporting the reliability of the acquired span-level wind speed information. Case studies conducted on a modified IEEE RTS-24 system demonstrate that the proposed span-level hardening strategy can substantially reduce reinforcement cost compared with the conventional line-level hardening strategy. In the reported benchmark case, it achieves zero load-shedding penalty with a markedly lower hardening cost, and under the same budget constraint, it further yields lower expected load shedding and lower expected asset damage.
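The load–strength interference idea behind the span-level failure probability can be illustrated with the classic Gaussian form: a span fails when wind-induced load exceeds structural strength. The Gaussian assumption and all parameter values below are illustrative, not taken from the paper.

```python
# Minimal load-strength interference sketch: for independent Gaussian load L
# and strength S, P(failure) = P(L > S) = Phi(-beta), where
# beta = (mu_S - mu_L) / sqrt(sd_L^2 + sd_S^2) is the reliability index.
import math

def failure_probability(mu_load, sd_load, mu_strength, sd_strength):
    beta = (mu_strength - mu_load) / math.sqrt(sd_load**2 + sd_strength**2)
    return 0.5 * math.erfc(beta / math.sqrt(2))  # standard-normal tail Phi(-beta)

# hypothetical span: mean wind load 40 (sd 8), mean strength 70 (sd 6)
p_fail = failure_probability(40.0, 8.0, 70.0, 6.0)  # beta = 3 -> about 1.35e-3
```

Per-span probabilities like this, driven by measured rather than assumed wind speeds, are what the framework aggregates up to line- and system-level resilience metrics.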
26 pages, 9631 KB  
Article
A Multi-Teacher Knowledge Distillation Framework for Enhancing the Robustness of Automated Sperm Morphology Assessment
by Osman Emre Tutay, Hamza Osman Ilhan, Hakkı Uzun, Merve Huner Yigit and Gorkem Serbes
Diagnostics 2026, 16(8), 1230; https://doi.org/10.3390/diagnostics16081230 - 20 Apr 2026
Abstract
Background/Objectives: The manual analysis of sperm morphology, crucial for male infertility diagnosis, is subjective and time-consuming. Automated methods using deep learning offer a promising alternative; however, standard deep models are prone to overfitting when applied to small, heavily unbalanced clinical datasets, limiting their generalization capability. This study proposes a knowledge distillation approach that functions as a strong regularizer, improving the robustness of automated sperm morphology analysis. Methods: We utilize soft distillation to transfer knowledge from a set of high-capacity teacher models to a smaller student model (SwinV2-base). The teacher architectures include SwinV2-large, EfficientNetV2-m, and ConvNeXtV2-large. To maximize performance, we investigated two distillation strategies: a single-teacher approach, where the student learns from one specific architecture, and a multi-teacher approach, where the student learns from an averaged response of multiple teachers. The models were trained on the imbalanced Hi-LabSpermMorpho dataset, which comprises 18 different sperm morphology categories derived from three differently stained (BesLab, Histoplus, GBL) sample sets. We adopted a cross-dataset training approach in which the teacher models were fine-tuned using the combination of two stained datasets, and the student model was trained on the third, distinct stained dataset. The global loss function combined cross-entropy loss with Kullback–Leibler divergence, employing the teacher’s soft probabilities to prevent the student from over-confidence. Results: The experimental results demonstrate that the student model trained in a multi-teacher setup with augmentation and soft distillation attains higher accuracies (70.94% on BesLab, 73.61% on Histoplus, 71.63% on GBL) than the baseline models. Conclusions: This approach mitigates challenges associated with data scarcity and heavily unbalanced sperm morphology datasets, providing consistent improvements and offering a highly generalizable solution for clinical diagnostics.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
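The loss the abstract describes, cross-entropy on the hard label plus a Kullback–Leibler term against the averaged teacher probabilities, can be sketched numerically. The weighting `alpha`, the toy distributions, and the omission of temperature scaling are assumptions of this sketch, not details from the paper.

```python
# Hedged numerical sketch of a multi-teacher soft-distillation loss:
# L = alpha * CE(student, label) + (1 - alpha) * KL(avg_teacher || student).
import math

def kl(p, q):
    """Kullback-Leibler divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def distill_loss(student_probs, teacher_prob_list, label, alpha=0.5):
    ce = -math.log(student_probs[label])  # cross-entropy on the hard label
    k = len(teacher_prob_list)
    avg_teacher = [sum(t[i] for t in teacher_prob_list) / k
                   for i in range(len(student_probs))]  # multi-teacher average
    return alpha * ce + (1 - alpha) * kl(avg_teacher, student_probs)

student = [0.7, 0.2, 0.1]
teachers = [[0.6, 0.3, 0.1], [0.8, 0.1, 0.1]]
loss = distill_loss(student, teachers, label=0)
```

Because the teacher average here happens to match the student distribution, the KL term is essentially zero and the loss reduces to the weighted cross-entropy; in training, a nonzero KL term is what pulls the student toward the teachers' softer, better-calibrated predictions.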
21 pages, 2917 KB  
Article
Validity of a Commercially Available Inertial Measurement Unit for Artificial Intelligence-Based Trick Detection and Kinematic Performance Assessment in Skateboarding
by Birte Scholz, Niklas Noth, Maren Witt and Olaf Ueberschär
Sensors 2026, 26(8), 2537; https://doi.org/10.3390/s26082537 - 20 Apr 2026
Abstract
Inertial measurement units (IMUs) present promising avenues for performance diagnostics in skateboarding, yet systematic validation of their accuracy and applicability remains limited. This study validates the commercially available Spinnax Freak IMU system in the context of skateboarding, with a focus on selected trick detection and classification, distance measurement, maximal horizontal speed, maximal vertical height of the skateboard and airtime during a jump trick. A total of 23 skateboarders (4 females, 19 males; 27.4 ± 10.9 years) participated in this study. Validation methods included comparisons with established reference systems such as laser ranging for maximal horizontal speed (LAVEG), 2D video analysis for maximal vertical height of the skateboard (Kinovea), light barrier measurements for airtime detection (OptoJump Next), and a fixed metric reference (10 m) for rolling distance measurements. The evaluation was supported by statistical analyses including mean absolute error (MAE), root mean-square error (RMSE), mean absolute percentage error (MAPE), t-tests, Bland–Altman plots, linear regression, and ICC(3,1). The Spinnax Freak system demonstrated high validity in detecting trick events and in providing distance measurements that were statistically equivalent to the reference. Trick classification, maximal horizontal speed, maximal vertical height of the skateboard and airtime showed substantial errors, indicating that these outputs are not reliable for biomechanical interpretation at this point. These findings highlight both the potential and the current constraints of single-sensor setups for field-based motion capture in skateboarding. Future developments should prioritize algorithmic refinement, improved temporal resolution, and optimized event classification to enhance measurement accuracy and expand applicability in biomechanical analysis and automated training documentation in skateboarding.
(This article belongs to the Special Issue Wearable Sensors in Biomechanics and Human Motion)
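Three of the agreement metrics named in the abstract (MAE, RMSE, MAPE) have simple closed forms; a minimal sketch, with hypothetical sensor and reference values rather than the study's data:

```python
# Standard agreement metrics between a reference system and a sensor.
import math

def mae(ref, meas):
    return sum(abs(r - m) for r, m in zip(ref, meas)) / len(ref)

def rmse(ref, meas):
    return math.sqrt(sum((r - m) ** 2 for r, m in zip(ref, meas)) / len(ref))

def mape(ref, meas):
    return 100.0 * sum(abs((r - m) / r) for r, m in zip(ref, meas)) / len(ref)

reference = [10.0, 12.0, 11.0, 13.0]  # e.g. laser-measured speeds (m/s), illustrative
sensor    = [10.5, 11.5, 11.0, 13.5]  # hypothetical IMU estimates

err_mae, err_rmse, err_mape = mae(reference, sensor), rmse(reference, sensor), mape(reference, sensor)
```

MAE and RMSE share the measurement's units, so they are easy to judge against a task-specific tolerance; MAPE is unitless, which is why validation studies often report all three alongside Bland–Altman limits of agreement.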
13 pages, 555 KB  
Essay
Governing Generative AI in Healthcare: A Normative Conceptual Framework for Epistemic Authority, Trust, and the Architecture of Responsibility
by Fatma Eren Akgün and Metin Akgün
Healthcare 2026, 14(8), 1098; https://doi.org/10.3390/healthcare14081098 - 20 Apr 2026
Abstract
Background/Objectives: Large language models (LLMs) such as ChatGPT are rapidly being integrated into healthcare for tasks ranging from clinical documentation to diagnostic support. Current ethical discussions focus predominantly on bias, privacy, and accuracy, leaving three critical governance questions unresolved: What kind of knowledge does an LLM output represent in clinical reasoning? When is a clinician’s or patient’s trust in that output justified? Who bears responsibility when an AI-informed decision leads to patient harm? This study proposes the Epistemic Authority–Trust–Responsibility (ETR) Architecture, a normative conceptual framework that addresses these three questions as an integrated governance challenge. Methods: The framework was developed through normative conceptual analysis—a method that constructs governance proposals by synthesising philosophical principles, ethical theories, and empirical evidence. The literature was identified through structured searches of PubMed, PhilPapers, and EUR-Lex (January 2020–March 2026), drawing on the philosophy of medical knowledge, the ethics of trust and testimony, and the moral philosophy of responsibility. Results: The ETR Architecture produces four outputs: (i) a four-tier classification system that distinguishes LLM outputs—from administrative drafts to clinical evidence claims—and matches each tier to appropriate verification requirements; (ii) the concept of the ‘epistemic placebo’, formally defined as a governance measure that creates a documented appearance of compliance while lacking at least one operative element of genuine oversight; (iii) a model specifying four conditions under which trust in healthcare AI is justified; (iv) four testable hypotheses with associated research designs connecting governance design to trust calibration and patient safety. Conclusions: The 2025–2027 regulatory transition period offers a critical window for shaping how healthcare institutions govern AI. We argue that deploying LLMs without explicitly classifying their outputs and building appropriate oversight risks allowing governance norms to be set by technology vendors rather than by evidence-informed, patient-centred policy.
(This article belongs to the Special Issue AI-Driven Healthcare: Transforming Patient Care and Outcomes)
26 pages, 2242 KB  
Article
Optimal Sizing and Hourly Scheduling of Wind-PV-Battery Systems for Islanded Expressway Service Area Microgrids Under Tiered Electricity Pricing
by Yaguang Shi, Zhangjie Liu and Mandi He
Energies 2026, 19(8), 1985; https://doi.org/10.3390/en19081985 - 20 Apr 2026
Abstract
External electricity supplementation for islanded microgrids at expressway service areas is often settled under tiered electricity pricing based on cumulative energy consumption, where marginal prices increase discontinuously once tier thresholds are exceeded. This mechanism reshapes battery dispatch behavior and may alter economically optimal storage sizing. This paper proposes a unified planning–operation optimization framework for wind–PV–battery microgrids that jointly determines the storage capacity and hourly scheduling while enforcing power balance, battery state-of-charge dynamics, and tiered settlement costs. By introducing tier-wise energy allocation variables and tier cap constraints, the nonlinear settlement rule is reformulated into an equivalent piecewise-linear structure, leading to a mixed-integer linear programming (MILP) model that can be solved using standard optimization solvers. A season-weighted annualized case study using four typical seasonal days reveals critical cross-tier dispatch behaviors, where charging–discharging schedules shift near tier boundaries and external electricity purchases are actively suppressed from entering higher-priced tiers. The proposed framework quantifies the premium-avoidance value of storage and provides a practical decision support tool for premium risk-aware sizing and operation of islanded expressway service-area microgrids.
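The tier-wise energy allocation that linearizes the settlement rule can be sketched directly: cumulative purchased energy is split into per-tier variables, each capped at its tier width and billed at that tier's marginal price. The tier widths and prices below are illustrative assumptions, not the paper's tariff.

```python
# Hedged sketch of tiered settlement via per-tier allocation variables:
# in the MILP these allocations are decision variables with cap constraints;
# here they are filled greedily, which is optimal for monotone tier prices.
def tiered_cost(energy_kwh, tiers):
    """tiers: list of (tier_width_kwh, price_per_kwh); last width may be inf."""
    cost, remaining = 0.0, energy_kwh
    for width, price in tiers:
        alloc = min(remaining, width)  # tier-wise allocation, capped by tier width
        cost += alloc * price
        remaining -= alloc
        if remaining <= 0:
            break
    return cost

TIERS = [(200.0, 0.50), (200.0, 0.75), (float("inf"), 1.25)]  # illustrative tariff
print(tiered_cost(150.0, TIERS))  # 75.0  (all energy stays in tier 1)
print(tiered_cost(450.0, TIERS))  # 312.5 (200*0.50 + 200*0.75 + 50*1.25)
```

The jump in marginal price at each boundary is exactly what makes storage dispatch shift near tier thresholds: discharging a little more can keep purchases out of the next, more expensive tier.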
35 pages, 2051 KB  
Article
Leakage-Controlled Horizon-Specific Model Selection for Daily Equity Forecasting: An Automated Multi-Model Pipeline
by Francisco Augusto Nuñez Perez, Francisco Javier Aguilar Mosqueda, Adrian Ramos Cuevas, Jaqueline Muñoz Beltran and Jose Cruz Nuñez Perez
Forecasting 2026, 8(2), 34; https://doi.org/10.3390/forecast8020034 - 20 Apr 2026
Abstract
Short-horizon equity forecasting remains challenging because daily prices are noisy, heavy-tailed, and subject to structural breaks and regime shifts. We develop a fully automated, reproducible, and leakage-controlled multi-model pipeline for daily forecasting with horizon-specific configuration selection. The task is formulated as predicting cumulative H-day log-returns from OHLCV-derived information and converting them to implied price forecasts. All model families share a homologated design: causal feature construction, a strictly chronological split with an explicit purging rule to prevent label-window overlap for multi-day targets, training-only robustification (winsorization and adaptive clipping), and a unified metric suite computed consistently in return and price spaces. The framework benchmarks transparent baselines (zero- and mean-return), gradient-boosted trees (XGBoost), and deep temporal models (LSTM and CNN/TCN). Lookback length L ∈ {60, 180, 500} is selected via an internal walk-forward procedure on the pre-evaluation block, and final performance is reported on an external hold-out segment (last 15% of instances). Experiments on daily data for MT, DELL, and the S&P 500 index (through 3 February 2026) show that all families achieve similarly strong price-level fit at H=1, largely driven by persistence in the price process, while separation across families becomes more visible at H=5. However, predictive performance in return space remains weak, with R2 close to zero or negative, and Diebold–Mariano tests do not provide consistent evidence of statistical superiority over naive benchmarks. Under an operational rule that minimizes hold-out RMSE on the price scale, selected models are asset- and horizon-dependent, supporting horizon-wise selection rather than a single global architecture. Overall, the primary contribution lies in the proposed leakage-controlled evaluation and benchmarking framework rather than in demonstrating consistent predictive gains in financial time series forecasting.
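The target construction and the purging rule described in the abstract can be sketched concretely: the cumulative H-day log-return is the label, and the last H−1 training rows are dropped so that no training label window reaches into the hold-out block. The prices, H, and the 15% split below are illustrative (the split fraction matches the abstract; the rest is made up).

```python
# Hedged sketch of a cumulative H-day log-return target plus a chronological
# split with purging to prevent label-window overlap.
import math

def h_day_log_returns(prices, H):
    """Target y_t = log(P_{t+H} / P_t) for every t with a complete window."""
    return [math.log(prices[t + H] / prices[t]) for t in range(len(prices) - H)]

def purged_split(n_targets, H, eval_frac=0.15):
    """Chronological split; the last H-1 training indices are purged so no
    training label window overlaps the evaluation block."""
    split = int(n_targets * (1 - eval_frac))
    train_idx = list(range(max(0, split - (H - 1))))
    eval_idx = list(range(split, n_targets))
    return train_idx, eval_idx

prices = [100, 101, 103, 102, 105, 107, 106, 108, 110, 109]  # toy daily closes
y = h_day_log_returns(prices, H=2)
train_idx, eval_idx = purged_split(len(y), H=2)
```

Without the purge, the label of the last training row would be computed from prices inside the evaluation period, which is exactly the leakage the pipeline is designed to rule out.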
8 pages, 3306 KB  
Proceeding Paper
Automated Response Surface Methodology: Computational Replication and Validation Framework for Optimizing Supercapattery Materials
by Thiago Ferro de Oliveira and Simoni Margareti Plentz Meneghetti
Eng. Proc. 2026, 138(1), 2; https://doi.org/10.3390/engproc2026138002 - 20 Apr 2026
Abstract
Combining Response Surface Methodology (RSM) with Central Composite Design (CCD) is a powerful statistical approach to optimizing materials in energy storage systems. This study presents an open-source Python (v3.8+) framework that replicates and validates the RSM-based optimization of NiCo2S4–graphene supercapattery materials. We validated the framework by replicating a 20-experiment CCD analyzing graphene/NCS ratios, hydrothermal time, and S/Ni molar ratios. Advanced optimization using the Differential Evolution algorithm was integrated to efficiently solve the high-dimensional response surface space. The model explained 97.16% of the variance, and comprehensive diagnostic tests confirmed the assumptions of normality and residual independence. This approach provides an open-source methodology that supports reproducible and scalable data-driven material design and facilitates transparent computational materials science studies.
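The core RSM step, evaluating a fitted second-order model and locating its optimum, can be sketched as follows. The coefficients are made up, and a coarse grid search stands in for the Differential Evolution step the paper uses; this is a sketch of the idea, not the paper's code.

```python
# Hedged sketch: a fitted two-factor second-order response surface and a
# brute-force search for its maximum over the coded design region [-2, 2].
def rsm(x1, x2, b):
    # y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
    return b[0] + b[1]*x1 + b[2]*x2 + b[3]*x1*x1 + b[4]*x2*x2 + b[5]*x1*x2

b = [10.0, 2.0, 1.0, -1.0, -0.5, 0.2]  # hypothetical fitted coefficients

grid = [i / 50.0 for i in range(-100, 101)]  # step 0.02 over [-2, 2]
best = max(((x1, x2, rsm(x1, x2, b)) for x1 in grid for x2 in grid),
           key=lambda t: t[2])
```

For this concave quadratic the analytic optimum is near (1.122, 1.224), and the grid search lands on the nearest grid point; Differential Evolution earns its keep once the factor count grows and exhaustive search becomes infeasible.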
25 pages, 14275 KB  
Article
TC-KAN: Time-Conditioned Kolmogorov–Arnold Networks with Time-Dependent Activations for Long-Term Time Series Forecasting
by Ziyu Shen, Yifan Fu, Liguo Weng, Keji Han and Yiqing Xu
Sensors 2026, 26(8), 2538; https://doi.org/10.3390/s26082538 - 20 Apr 2026
Abstract
Long-term time series forecasting (LTSF) is critical for modern power systems, energy management, and grid planning. Yet virtually all existing forecasting models employ stationary activation functions that apply identical nonlinear mappings regardless of temporal context—a fundamental mismatch with real-world load data, which exhibits strongly regime-dependent dynamics such as summer demand peaks, winter heating patterns, and overnight low-load periods. We address this gap by proposing TC-KAN (Time-Conditioned Kolmogorov–Arnold Network), the first forecasting architecture to augment KAN activation functions with position-aware coefficient parameterisation. The core innovation replaces the static polynomial coefficients in standard KAN activations with position-conditioned coefficients produced by a lightweight positional-embedding MLP, providing additional learnable capacity beyond standard KAN while adding negligible parameter overhead. TC-KAN further integrates a dual-pathway processing block—combining depthwise convolution for local temporal pattern extraction with the time-conditioned KAN layer for enhanced nonlinear transformation—within a channel-independent framework with Reversible Instance Normalisation. Experiments were conducted on four standard ETT benchmark datasets and the high-dimensional Weather dataset. TC-KAN achieves superior or competitive accuracy in most configurations while requiring merely 51K parameters—approximately 40% of DLinear and ∼100× fewer than iTransformer. On ETTh2, TC-KAN reduces the mean squared error by up to 61.4% over DLinear, and matches the current state-of-the-art iTransformer on ETTm2 at a fraction of the computational cost. This extreme parameter reduction circumvents the steep memory bottlenecks endemic to massive Transformer models, positioning TC-KAN as a highly practical architecture tailored precisely for resource-constrained edge deployments—such as on-device load forecasting inside smart grid sensors and industrial IoT controllers.
(This article belongs to the Section Industrial Sensors)
29 pages, 772 KB  
Review
Early Sepsis Diagnosis as a Global Imperative: The Role of Raman Spectroscopy
by Andrea Piccioni, Fabio Spagnuolo, Marina Sebastiani, Alberto Valentini, Giuseppe Pezzotti, Marcello Candelli, Marcello Covino, Marco De Spirito, Antonio Gasbarrini and Francesco Franceschi
J. Clin. Med. 2026, 15(8), 3138; https://doi.org/10.3390/jcm15083138 - 20 Apr 2026
Abstract
Background/Objectives: Sepsis is a leading cause of hospital mortality and represents a time-sensitive medical emergency. Current diagnostic strategies rely on clinical assessment, severity scores, biomarkers, and blood cultures. However, blood cultures require 24–72 h for pathogen identification and demonstrate limited sensitivity, while biomarkers such as procalcitonin and C-reactive protein lack optimal specificity. These limitations support the widespread empirical use of broad-spectrum antibiotics and highlight the need for rapid, sensitive, and culture-independent diagnostic tools. Methods: A narrative literature review was conducted using PubMed and Google Scholar, including 28 studies published over the past 10 years, encompassing observational and preclinical investigations. Current evidence on the application of Raman spectroscopy in sepsis was summarized, with a dual focus on pathogen identification and the assessment of the host response. Results: Raman spectroscopy has demonstrated the ability to detect early molecular alterations in circulating immune cells and mitochondrial redox status, potentially preceding conventional biomarkers. For pathogen identification, Raman techniques have achieved diagnostic accuracies comparable to automated systems, but with significantly shorter turnaround times. Integration with microfluidics, optical tweezers, and deep learning algorithms has further enhanced performance, although these applications remain largely experimental. Conclusions: Despite these promising results, the lack of methodological standardization, spectral overlap among phylogenetically related species, limited large-scale validation, and challenges in interpreting certain spectral signatures remain unresolved. Most available evidence originates from preclinical, single-center, and controlled studies, underscoring the need for prospective multicenter trials and harmonized protocols.
(This article belongs to the Special Issue Sepsis and Septic Shock: Diagnosis, Treatment, and Prognosis)
25 pages, 3443 KB  
Article
Improved Parameter-Driven Automated Three-Class Segmentation for Concrete CT: A Reproducible Pipeline for Large-Scale Dataset Production
by Youxi Wang, Tianqi Zhang and Xinxiao Chen
Buildings 2026, 16(8), 1620; https://doi.org/10.3390/buildings16081620 - 20 Apr 2026
Abstract
The automated production of large-scale labeled datasets from concrete X-ray computed tomography (CT) images is a fundamental prerequisite for training and validating deep learning-based segmentation models. However, existing methods either require extensive manual annotation or rely on domain-specific deep learning models that themselves demand labeled data—a circular dependency. This paper presents a parameter-driven three-class segmentation framework that automatically classifies each pixel in a concrete CT slice into one of three material phases: void (air pores and cracks), coarse aggregate, and mortar matrix, generating annotation masks suitable for large-scale dataset production without manual labeling. The proposed method combines: (1) fixed-threshold void detection calibrated to concrete CT grayscale characteristics; (2) adaptive percentile-based initial segmentation responsive to image-specific statistics; (3) multi-criteria connected component scoring based on area, shape descriptors (circularity, solidity, compactness, extent, aspect ratio), intensity distribution, and boundary gradient; (4) material science-informed size constraints aligned with concrete phase volume fractions; and (5) a material continuity enforcement module that applies topological hole-filling and conditional convex-hull consolidation to eliminate internal contamination within accepted aggregate regions, reducing boundary roughness by 7.6% and recovering misclassified boundary pixels. All parameters are centralized in a configuration file, enabling reproducible batch processing of 224 × 224 pixel CT slices at 0.07–1.12 s per image. 
Evaluated on 1007 224 × 224 concrete CT patches cropped from 200 representative scan frames, the framework produces three-class segmentation masks with physically consistent void fractions (mean 3.2%), aggregate fractions (mean 32.4%), and mortar fractions (mean 64.4%), all within ranges reported in the concrete CT literature (used as a dataset-scale QC screen, not a validation metric). Primary outputs and the archived image–mask pairs for this work are provided as an 8-bit patch archive. For pixel-wise validation, we report IoU, Dice, and pixel accuracy on an independently labeled subset that can be unambiguously paired with the released predictions: averaged over 57 matched patches, mean pixel accuracy is 88.6%, macro-mean IoU is 74.7%, and macro-mean Dice is 84.9%. The framework provides a fully automated annotation pipeline for dataset production, eliminating manual labeling costs for concrete CT image collections. The generated datasets are suitable for training semantic segmentation networks such as U-Net and its variants. Full article
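The adaptive percentile-based initial segmentation (step 2, combined with the fixed-threshold void detection of step 1) can be sketched as follows. This is a minimal illustration of the idea, not the authors' calibrated pipeline: the void threshold of 60 and the aggregate percentile of 70 are hypothetical placeholder values, and the assumption that aggregate appears brighter than mortar is illustrative.

```python
import numpy as np

def three_class_initial_segmentation(img, void_thresh=60, agg_percentile=70):
    """Label each pixel of a grayscale concrete CT patch as
    0 = void, 1 = mortar matrix, 2 = coarse aggregate.

    Voids are taken from a fixed grayscale threshold; the aggregate/mortar
    split adapts to the per-image intensity statistics of the non-void pixels.
    """
    seg = np.zeros(img.shape, dtype=np.uint8)        # default label: void (0)
    void = img < void_thresh                          # step 1: fixed threshold
    rest = img[~void]                                 # non-void intensities
    t = np.percentile(rest, agg_percentile)           # step 2: adaptive cut
    seg[~void & (img >= t)] = 2                       # brighter phase: aggregate
    seg[~void & (img < t)] = 1                        # darker phase: mortar
    return seg
```

A subsequent component-scoring stage (step 3) would then accept or reject the connected regions of class 2 based on area, circularity, solidity, and the other shape descriptors listed above.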
(This article belongs to the Section Building Materials, and Repair & Renovation)
33 pages, 3687 KB  
Article
MulPViT-SimAM: An Electronic Substrate Defect Detection Framework for Addressing Class Imbalance Problems
by Yuting Wang, Liming Sun, Bang An and Ruiyun Yu
Machines 2026, 14(4), 456; https://doi.org/10.3390/machines14040456 - 20 Apr 2026
Abstract
As the cornerstone of contemporary electronics, the quality of electronic substrates—including Printed Circuit Boards (PCBs) and Ceramic Package Substrates (CPSs)—is intrinsic to product reliability. However, automated inspection is currently impeded by two persistent obstacles: the drastic multi-scale variation in defects and the acute class imbalance within defect datasets. Conventional deep learning approaches often fail to reconcile these challenges simultaneously, leading to suboptimal recognition of rare defect categories. To bridge this gap, we propose Multi-scale Partial Vision Transformer—Simple, Parameter-free Attention Module (MulPViT-SimAM), a robust framework designed for class-imbalanced electronic substrate defect detection. Our method features a novel multi-scale backbone (MulPViT) that synergizes partial convolutions with hierarchical attention mechanisms, facilitating the efficient extraction of both fine-grained local textures and global contextual dependencies. Additionally, we embed the Simple, Parameter-free Attention Module (SimAM) into the feature fusion stage to adaptively highlight defect-specific features while dampening background noise. To further mitigate data imbalance, we utilize the Equalized Focal Loss (EFL) function, which employs a category-specific modulating factor to dynamically equilibrate the learning focus across different classes. Comprehensive benchmarking reveals state-of-the-art performance, achieving mAP@0.5 scores of 95.7% on the standard PKU-MARKET-PCB dataset and 54.2% on the highly challenging CPS2D-AD dataset. Significantly, our approach effectively mitigates class imbalance, narrowing the performance deviation of rare categories to just 4.3% on the PKU-Market-PCB dataset and 1.4% on the CPS2D-AD dataset, compared to 11.8% and 7.5% in baseline models. These findings position MulPViT-SimAM as a viable and efficient solution for industrial quality control. Full article
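SimAM itself is parameter-free and straightforward to reproduce. A minimal NumPy sketch of the published formulation (Yang et al.), applied per channel, is shown below; it illustrates the general module, not the authors' specific integration into the feature fusion stage, and the regularizer lam follows the commonly used default.

```python
import numpy as np

def simam(x, lam=1e-4):
    """Simple, Parameter-free Attention Module (SimAM) over a (C, H, W) feature map.

    Each activation is gated by a sigmoid of its inverse "energy", which measures
    how strongly it stands out from the other activations in the same channel.
    """
    c, h, w = x.shape
    n = h * w - 1                                    # neighbors per channel
    mu = x.mean(axis=(1, 2), keepdims=True)          # per-channel mean
    d = (x - mu) ** 2                                # squared deviation
    var = d.sum(axis=(1, 2), keepdims=True) / n      # per-channel variance
    e_inv = d / (4.0 * (var + lam)) + 0.5            # inverse energy per position
    return x / (1.0 + np.exp(-e_inv))                # sigmoid-gated features
```

Because the gate is computed entirely from channel statistics, the module adds attention without introducing any learnable parameters, which is what makes it attractive for embedding into a fusion stage.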
26 pages, 2942 KB  
Review
Application of Large Language Models in Geotechnical Engineering: A Movement Towards Safe and Sustainable Future
by Kaustav Chatterjee, Mohak Desai and Joshua Li
Geotechnics 2026, 6(2), 38; https://doi.org/10.3390/geotechnics6020038 - 20 Apr 2026
Abstract
Over the last two decades, there has been a paradigm shift in geotechnical engineering driven by advances in sensing, communication, and data-driven techniques. These advancements have enhanced the safety and reliability of geotechnical infrastructure through real-time monitoring and automated decision-making. More recently, Large Language Models (LLMs) have emerged as advanced data-driven techniques contributing to automated risk assessment of geotechnical infrastructure. LLMs are advanced deep learning models widely used to solve complex numerical problems, analyze large volumes of data, and generate human language. This paper presents a critical review of the application of LLMs in geotechnical engineering. The integration of LLMs into geotechnical engineering has demonstrated significant advances in slope stability analysis, bearing capacity computation, numerical analysis, soil–structure interaction, and underground infrastructure. By summarizing the latest research findings and practical applications, this paper underscores the potential of LLMs to advance and automate various processes in geotechnical engineering. The findings presented here not only provide insights into current LLM-based geotechnical practices but also emphasize the instrumental role that LLMs can play in advancing geotechnical engineering, ultimately ensuring a safer and more sustainable future. Lastly, this paper highlights the different LLM capabilities that can be used to empower geotechnical engineers. Full article
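As a concrete example of the kind of routine numerical task an LLM assistant might be asked to automate or verify, bearing capacity computation for a strip footing reduces to a few closed-form factors. The sketch below uses the standard textbook formulation (Reissner/Prandtl Nq and Nc, with the Vesic approximation for Nγ); it is a generic illustration, not a method taken from the paper.

```python
import math

def bearing_capacity_factors(phi_deg):
    """General bearing capacity factors for friction angle phi (degrees).

    Nq: Reissner/Prandtl; Nc: derived from Nq; Ngamma: Vesic (1973) approximation.
    """
    phi = math.radians(phi_deg)
    Nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.pi / 4 + phi / 2) ** 2
    Nc = (Nq - 1.0) / math.tan(phi) if phi_deg > 0 else 5.14  # Prandtl limit at phi = 0
    Ng = 2.0 * (Nq + 1.0) * math.tan(phi)
    return Nc, Nq, Ng

def q_ult(c, gamma, D, B, phi_deg):
    """Ultimate bearing capacity of a strip footing, q_ult = c*Nc + gamma*D*Nq + 0.5*gamma*B*Ngamma.

    No shape, depth, or load-inclination correction factors are applied.
    """
    Nc, Nq, Ng = bearing_capacity_factors(phi_deg)
    return c * Nc + gamma * D * Nq + 0.5 * gamma * B * Ng
```

For phi = 30°, this reproduces the familiar tabulated values Nq ≈ 18.4, Nc ≈ 30.1, and Nγ ≈ 22.4; checking an LLM's output against such closed-form anchors is one practical safeguard when using these models for engineering computation.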