Search Results (307)

Search Parameters:
Keywords = 3D realistic structure model

28 pages, 12791 KB  
Article
Empirical Validation of Fitts’ Law in Virtual Reality: Modeling, Prediction, and Modality Comparison
by Nikolina Rodin, Dario Ogrizović, Luka Batistić and Sandi Ljubic
Multimodal Technol. Interact. 2026, 10(5), 49; https://doi.org/10.3390/mti10050049 - 1 May 2026
Abstract
Fitts’ law is a foundational model for predicting pointing performance and has been increasingly explored in immersive virtual reality (VR) environments. This paper presents a controlled experimental framework for deriving modality-specific Fitts’ law models in VR and evaluating their predictive transfer to applied interaction tasks. The framework comprises two scenarios. The first replicates a standardized ISO 9241 pointing task in a 3D virtual environment to derive predictive movement time models by systematically varying target distance (20–50 cm), target size (2.5–5 cm), and spatial configuration (0°, 45°, 90°, 135°). The second simulates an applied warehouse-inspired task involving tool sorting and structured placement actions to evaluate the generalizability of the derived models in more ecologically valid VR interactions. Thirty-two participants completed all tasks using the Meta Quest 3 headset and two interaction modalities: a handheld controller and hand tracking with gesture recognition. Results show that Fitts’ law remains a strong predictor of movement time for 3D pointing in VR, with high linear fits for both the controller (R² = 0.9615) and hand tracking (R² = 0.9668). However, models derived from standardized pointing tasks showed limited transferability to applied object-manipulation scenarios, producing prediction errors of approximately 27–35% and systematically underestimating movement times. Additionally, both objective metrics and subjective evaluations indicated that controller-based interaction outperformed hand tracking in efficiency, accuracy, perceived workload, and usability. These findings highlight both the robustness and limitations of Fitts-based performance modeling in realistic VR interaction contexts. Full article
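The Fitts’ law regression described above can be sketched in a few lines. The distance and width conditions below span the paper’s reported ranges (20–50 cm, 2.5–5 cm), but the movement times, coefficients, and noise level are illustrative, not the study’s data; the Shannon formulation of the index of difficulty is assumed.

```python
import numpy as np

def fitts_id(distance_cm, width_cm):
    """Shannon formulation of the index of difficulty (bits)."""
    return np.log2(distance_cm / width_cm + 1.0)

# Illustrative condition set spanning the paper's ranges.
D = np.array([20, 30, 40, 50, 20, 30, 40, 50], dtype=float)
W = np.array([2.5, 2.5, 2.5, 2.5, 5, 5, 5, 5], dtype=float)
ID = fitts_id(D, W)

# Hypothetical mean movement times (ms) following MT = a + b*ID plus noise.
rng = np.random.default_rng(0)
MT = 150.0 + 220.0 * ID + rng.normal(0, 10, ID.size)

# Ordinary least squares for MT = a + b*ID (polyfit returns slope first).
b, a = np.polyfit(ID, MT, 1)
pred = a + b * ID
r2 = 1 - np.sum((MT - pred) ** 2) / np.sum((MT - MT.mean()) ** 2)
print(f"a = {a:.1f} ms, b = {b:.1f} ms/bit, R² = {r2:.4f}")
```

With clean synthetic data the fit recovers the generating coefficients and an R² in the high-0.99s, mirroring the near-linear fits the paper reports for both modalities.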
30 pages, 4432 KB  
Article
Unsupervised Acoustic Anomaly Detection for Rotating Machinery Under Submarine-Like Environments: Considering Data Scarcity and Background Noise via Proxy Data Generation
by Kwang Sik Kim and Jang Hyun Lee
Sensors 2026, 26(9), 2659; https://doi.org/10.3390/s26092659 - 24 Apr 2026
Abstract
This study proposes a noise-robust unsupervised acoustic anomaly detection framework for early identification of abnormal operating conditions in rotating machinery under submarine-like environments with severe data scarcity. In such environments, underwater background noise and onboard interference sources significantly degrade signal quality, while limited computing resources constrain the deployment of high-complexity deep learning models. To address the lack of labeled fault data, the publicly available MIMII dataset was adopted as a proxy platform, and representative submarine interference sources were physically modeled, including colored background noise, structure-borne resonance, band-limited auxiliary noise, tonal components, and sensor noise. These components were combined and scaled to predefined SNR levels (−6 to 6 dB) to generate realistic noise-augmented data. Three unsupervised approaches were compared under edge deployment constraints: (i) Gaussian Mixture Model (GMM) with statistical MFCC features, (ii) statistical-feature-based Ensemble Autoencoder, and (iii) Conv1D-based Ensemble Autoencoder using 1-s log Mel-spectrogram segments. Performance was evaluated in terms of AUC, F1-score, and computational cost. Results show that GMM provides competitive detection performance with minimal computational burden, whereas Conv1D achieves superior accuracy when temporal fault patterns dominate, at the expense of higher complexity. The study provides practical design guidelines for acoustic anomaly detection under multi-noise and resource-constrained conditions. Full article
(This article belongs to the Special Issue AI-Enabled Smart Sensors for Industry Monitoring and Fault Diagnosis)
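The noise augmentation above scales interference components to predefined SNR levels (−6 to 6 dB) before mixing. A minimal sketch of that scaling step, using a stand-in machinery tone and Gaussian noise rather than the MIMII recordings or the paper’s physically modeled submarine interference:

```python
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so the clean/noise power ratio equals `snr_db`,
    then return the mixture (equal-length signals assumed)."""
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    # Required noise power for the target SNR: P_n' = P_c / 10^(SNR/10)
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10.0)))
    return clean + scale * noise

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 16000, endpoint=False)
machine = np.sin(2 * np.pi * 120 * t)      # stand-in machinery tone
background = rng.normal(0, 1, t.size)      # stand-in broadband noise

for snr in (-6, 0, 6):
    mixed = mix_at_snr(machine, background, snr)
    achieved = 10 * np.log10(np.mean(machine ** 2) /
                             np.mean((mixed - machine) ** 2))
    print(f"target {snr:+d} dB -> achieved {achieved:+.2f} dB")
```

Because the noise is rescaled deterministically, the achieved SNR matches the target exactly; in practice each modeled interference source would be scaled and summed before this step.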
18 pages, 3449 KB  
Article
Reproducibility of 3D-Printed Breast Phantoms in Mammography and Breast Tomosynthesis
by Kristina Bliznakova, Vencislav Nastev, Nikolay Dukov, Ivan Buliev, Zhivko Bliznakov, Valentina Dobreva, Chavdar Bachvarov, Georgi Todorov and Deyan Grancharov
Technologies 2026, 14(5), 251; https://doi.org/10.3390/technologies14050251 - 23 Apr 2026
Abstract
The development of realistic breast phantoms is critical for the evaluation of imaging systems and quantitative image analysis methods. In this work, breast samples derived from the same digital model were produced using 3D printing technology and evaluated for structural similarity and reproducibility. Four independently manufactured phantoms were imaged using mammography and breast tomosynthesis. Radiomic features were extracted from regions of interest in order to assess inter-phantom variability. The results showed very good agreement between the four printed phantoms. Most first-order and GLCM radiomic features exhibited very low inter-phantom variability, indicating consistent structural and intensity characteristics. Neighborhood-based texture features showed slightly higher variability, reflecting their sensitivity to local structural differences. Fractal and power spectrum analyses also confirmed the high structural similarity of the phantoms. These results indicate that the proposed manufacturing approach can produce reproducible breast imaging phantoms suitable for mammography and tomosynthesis imaging studies, with potential applications in imaging system evaluation and radiomic research. Full article
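Inter-phantom variability of a radiomic feature, as assessed above, is commonly summarized as a coefficient of variation across the printed copies. The four feature values below are hypothetical, purely to show the computation:

```python
import numpy as np

# Hypothetical values of one first-order radiomic feature (e.g. ROI mean
# intensity) measured on the four independently printed phantoms.
feature = np.array([102.3, 101.8, 102.9, 102.1])

# Coefficient of variation: sample standard deviation over the mean.
cv = feature.std(ddof=1) / feature.mean() * 100  # percent
print(f"inter-phantom CV = {cv:.2f}%")
```

A CV well below a few percent, as in this toy example, is the kind of result the paper describes as "very low inter-phantom variability".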
25 pages, 3884 KB  
Article
Deep-Learning-Based 3D Dose Distribution Prediction for VMAT Lung Cancer Treatment Using an Enhanced UNet3D Architecture with Composite Loss Functions
by Philip Chung Yin Mak, Luoyi Kong and Lawrence Wing Chi Chan
Bioengineering 2026, 13(5), 490; https://doi.org/10.3390/bioengineering13050490 - 23 Apr 2026
Abstract
The high complexity of radiation therapy for lung cancer necessitates effective planning of advanced treatments such as Volumetric Modulated Arc Therapy (VMAT) by radiation oncologists. The current VMAT treatment planning process typically involves extensive manual interaction and a time-consuming, trial-and-error, iterative approach that requires planners’ experience. This can lead to varying levels of plan quality. To improve the quality of radiotherapy treatment plans quickly and accurately, this research presents a new architecture, Enhanced UNet3D, to generate three-dimensional (3-D) dose distributions for lung cancer patients. Enhanced UNet3D utilises a symmetric encoder–decoder architecture with residual connections and a target region-attention module to achieve high accuracy in dose shaping within the PTV. A new composite objective function, Enhanced Combined Loss (ECLoss), that includes both SharpLoss, a structure-aware DVH-guided loss, and 3D gradient regularisation, has been developed to address voxel-level class imbalance and achieve realistic spatial dose falloff. This research utilised a retrospective dataset of 170 VMAT plans to train and validate the proposed model. On the test set (n = 14), the model demonstrated exceptional overall accuracy, with a Mean Absolute Error (MAE) of 0.238 ± 0.075 Gy and a structural similarity index measure (SSIM) of 0.970 ± 0.005. Moreover, the model can perform near-real-time inference at approximately 0.5 s per patient, representing a significant reduction in computational resources compared to other architectures. Therefore, these results demonstrate that the Enhanced UNet3D model with ECLoss is a clinically feasible tool for the rapid evaluation and quality assurance of radiotherapy treatment plans and may reduce the need for manual trial-and-error in VMAT workflows. Full article
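The abstract does not give ECLoss in closed form. The sketch below shows one plausible reading of an MAE term plus a 3D gradient-matching regularizer for dose falloff, with an illustrative weight `lam`; the paper’s SharpLoss term and exact formulation may differ.

```python
import numpy as np

def mae(pred, target):
    """Mean absolute error between predicted and planned dose volumes."""
    return np.mean(np.abs(pred - target))

def gradient_loss_3d(pred, target):
    """Mean absolute difference of finite-difference gradients along each
    spatial axis, encouraging realistic dose falloff (one plausible reading
    of the 3D gradient regularisation term)."""
    loss = 0.0
    for axis in range(3):
        gp = np.diff(pred, axis=axis)
        gt = np.diff(target, axis=axis)
        loss += np.mean(np.abs(gp - gt))
    return loss / 3.0

def composite_loss(pred, target, lam=0.5):
    # lam is an illustrative weight, not taken from the paper.
    return mae(pred, target) + lam * gradient_loss_3d(pred, target)

rng = np.random.default_rng(2)
target = rng.random((8, 8, 8))                    # toy dose volume
pred = target + rng.normal(0, 0.05, target.shape)  # noisy prediction
print(f"composite loss = {composite_loss(pred, target):.4f}")
```

In a training loop the same structure would be expressed with differentiable tensor operations; the numpy version only illustrates what the two terms measure.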
25 pages, 13764 KB  
Article
A 3D Fold-Modeling Method Based on Multiple-Point Statistics and Long Short-Term Memory Networks
by Xueye Chen, Gang Liu, Hongfeng Fang, Qiyu Chen, Ce Zhang, Zhesi Cui, Zhenwen He, Wenyao Fan and Junping Xiong
Appl. Sci. 2026, 16(9), 4079; https://doi.org/10.3390/app16094079 - 22 Apr 2026
Abstract
Accurate fold models are of great significance for mineralization control, resource exploration, and underground engineering. However, existing automated modeling methods show difficulty in quantitatively describing fold development patterns and lack the available reference models required for multiple-point statistics and intelligent modeling techniques. This study proposes a novel three-dimensional (3D) fold-modeling method that integrates multiple-point-statistics-based pattern library construction with a long short-term memory (LSTM) network-based modeling framework. The multiple-point geostatistic is employed to quantify spatial distributions and correlations in geological data, thereby identifying the intrinsic structural patterns of folds. The extracted patterns are transformed into a training library that effectively represents the geological semantics and morphological diversity of folds, providing a reliable dataset for LSTM-based model training. An optimized ConvLSTM network is designed to ensure robust representation of fold complexity and variability. Based on the network, 3D models can be rapidly generated from geological profiles. Multiple experiments demonstrate that the proposed method can automatically produce 3D models that conform to realistic geological conditions and accurately reflect true fold geometries. The approach significantly improves modeling efficiency and geological feature representation, providing a reliable tool for geological engineering applications. Full article
(This article belongs to the Special Issue Advances in Geostatistical Information Analysis and Mapping)
26 pages, 4793 KB  
Article
Analysis of Dewatering Characteristics of Deep Foundation Pit in Anisotropic Permeability Coefficient Stratum
by Wentao Shang, Xinru Wang, Yu Tian, Xiao Zheng and Jianzhe Shi
Buildings 2026, 16(8), 1639; https://doi.org/10.3390/buildings16081639 - 21 Apr 2026
Abstract
Permeability anisotropy, which is widely present in natural soil deposits, plays an important role in controlling groundwater flow patterns and ground deformation during deep excavation dewatering. However, isotropic assumptions are still commonly adopted in engineering practice, making it difficult to accurately capture realistic subsurface hydraulic conditions. In this study, a deep foundation pit of a metro station in Jinan, China, is taken as a case study. A three-dimensional excavation–dewatering model incorporating permeability anisotropy is established using PLAXIS 3D to systematically investigate the influence of the permeability ratio (Kx/Kz) ranging from 0.1 to 10 on the seepage field evolution, dewatering influence radius, ground surface settlement, and consolidation time history. The results indicate that increasing permeability anisotropy promotes a fundamental transition of the seepage regime from vertically concentrated recharge to laterally dominated radial flow. Correspondingly, the dewatering influence radius exhibits a pronounced non-monotonic response to Kx/Kz, decreasing significantly with increasing permeability ratio and reaching a minimum at approximately Kx/Kz ≈ 5, followed by a slight rebound. Meanwhile, surface settlement profiles evolve from a localized concentration pattern to a widely distributed form as permeability anisotropy increases, accompanied by a remarkable outward expansion of the settlement influence zone. Both the magnitude and spatial distribution of settlement show high sensitivity to variations in permeability anisotropy. Based on these findings, a three-stage conceptual seepage structure model accounting for permeability anisotropy is proposed, characterized by vertically dominated flow, a transitional competition regime, and horizontally dominated flow. 
The staged evolution of seepage structures is shown to govern the non-monotonic variation in the dewatering influence radius and the spatial–temporal response of ground settlement. The results indicate a dual-scale influence mechanism of permeability anisotropy on dewatering-induced hydro-mechanical behavior, providing a theoretical basis for refined dewatering design and environmental impact assessment in deep excavation projects. Full article
15 pages, 3291 KB  
Article
Automated Segmentation of Digital Artifacts in Intraoral Photostimulable Phosphor Radiographs
by Ceyda Gizem Topal, Osman Yalçın, Hatice Tetik, Murat Ünal, Necla Bandirmali Erturk and Cemile Özlem Üçok
Diagnostics 2026, 16(8), 1194; https://doi.org/10.3390/diagnostics16081194 - 16 Apr 2026
Abstract
Background/Objectives: Intraoral radiographs acquired using photostimulable phosphor (PSP) plates are inherently susceptible to a wide spectrum of artifacts that can compromise diagnostic reliability and lead to unnecessary repeat exposures. Although structured taxonomies describing these artifacts have been proposed, automated methods capable of detecting and localizing multiple artifact types at the pixel level remain limited, particularly under realistic multi-class conditions. In this study, we address the problem of fine-grained, multi-class PSP artifact segmentation by systematically evaluating a deep learning-based framework and establishing a realistic baseline for this inherently challenging task. Methods: A retrospective, multi-center dataset comprising 1497 intraoral PSP radiographs (bitewing and periapical) collected from three institutions was analyzed. Pixel-level annotations were generated by expert oral and maxillofacial radiologists according to a standardized taxonomy consisting of four major artifact groups and 29 artifact classes, together with a background class. A 2D nnU-Net v2 architecture was employed as a baseline segmentation model. Model development was performed using 5-fold cross-validation, and performance was evaluated on an independent test set using Dice coefficient, Intersection over Union (IoU), Precision, and Recall. Results: Across all classes, the model achieved a mean Dice score of 0.0894 ± 0.0084 in cross-validation and 0.0952 on the independent test set, reflecting the intrinsic complexity of the task. Class-wise analysis revealed substantial variability, with higher performance in larger and visually distinctive artifacts, whereas small-scale, low-contrast, and underrepresented classes exhibited markedly reduced performance. Notably, several artifact categories were absent from the training data, resulting in a zero-shot scenario that directly constrained model generalization. 
Furthermore, segmentation performance demonstrated a strong dependency on class frequency, measured in terms of pixel distribution, underscoring the impact of severe class imbalance. Group-based evaluation showed relatively higher performance for pre-exposure and exposure-related artifacts compared to post-exposure and scanner-related categories. Conclusions: These findings demonstrate that large-scale, multi-class pixel-level segmentation of PSP artifacts represents a fundamentally challenging problem shaped by the combined effects of class imbalance, small object size, heterogeneous artifact morphology, and incomplete training representation. While the proposed framework confirms the feasibility of automated artifact localization, its current performance suggests greater immediate value as a quality control or screening support tool rather than a fully autonomous diagnostic system. By providing a comprehensive baseline and systematic analysis, this study establishes a benchmark for future research and highlights the critical need for imbalance-aware learning strategies, hierarchical modeling, and data-centric approaches to advance this field. Full article
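The Dice and IoU metrics reported above are computed per class from binary masks. A minimal sketch on a toy 8×8 mask pair (shapes and values are illustrative, not from the dataset):

```python
import numpy as np

def dice_and_iou(pred, gt):
    """Binary Dice coefficient and IoU for one artifact class."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    dice = 2 * inter / total if total else 1.0   # both empty -> perfect
    iou = inter / union if union else 1.0
    return dice, iou

gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True   # 16-px artifact
pr = np.zeros((8, 8), dtype=bool); pr[3:7, 3:7] = True   # shifted prediction
d, i = dice_and_iou(pr, gt)
print(f"Dice = {d:.3f}, IoU = {i:.3f}")
```

For non-empty masks the two metrics are linked by Dice = 2·IoU/(1 + IoU), which is why class-wise rankings under either metric largely agree.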
20 pages, 1491 KB  
Systematic Review
Digital Imaging Technologies for Forensic Orofacial Identification: A Systematic Review and Research Agenda
by Sofia Viegas, Rodrigo Azenha-Gomes, João Abreu, Tiago Nunes and Ana Corte-Real
Appl. Sci. 2026, 16(8), 3766; https://doi.org/10.3390/app16083766 - 12 Apr 2026
Abstract
This systematic review critically examines the use of 2D and 3D digital imaging technologies of the face and teeth, with and without integration of artificial intelligence, for human identification in forensic and medicolegal contexts. Following PRISMA 2020 guidelines, Scopus, PubMed and Web of Science were systematically searched, identifying 26 studies published between 2011 and 2025 that met predefined eligibility criteria framed by a PECO-style question. Eighteen studies focused on facial imaging, six on dental imaging and two on integrated orofacial workflows, using digital photography, CCTV/video, 3D surface imaging, intraoral scanners, and three-dimensional superimposition methods, sometimes combined with classical algorithms and deep learning models. In controlled or semi-controlled settings, state-of-the-art facial algorithms often reported very high accuracy, with values up to 99.85%. By contrast, studies using real CCTV or other challenging forensic imagery showed more variable performance, with accuracies ranging from about 72.8% to 96.6%. Dental and orofacial studies reported 100% correct identifications for 3D superimposition of intraoral scans in small samples, and around 83% accuracy for automated AI-based dental identification. Crucially, fulfilling the promise of a true orofacial approach, this review proposes a structured research agenda focused on creating realistic multi-modal databases, standardizing protocols, and implementing probabilistic reporting (likelihood ratios) to guide future validation and legal admissibility. Full article
(This article belongs to the Special Issue State-of-the-Art Digital Dentistry)
20 pages, 13035 KB  
Article
Development of Wideband Circular Microstrip Patch Antenna for Use in Microwave Imaging for Brain Tumor Detection
by Hüseyin Özmen, Mengwei Wu and Mariana Dalarsson
Sensors 2026, 26(7), 2062; https://doi.org/10.3390/s26072062 - 25 Mar 2026
Abstract
This work presents the design of a compact, wideband circular microstrip patch antenna for microwave imaging-based brain tumor detection. The main contribution is the development of a compact antenna structure incorporating enhanced ground-plane slot modifications, which significantly improves impedance bandwidth while maintaining a small electrical size, making it highly suitable for medical imaging systems. In addition, the study integrates antenna design, safety evaluation, and microwave imaging analysis within a unified framework to assess tumor localization feasibility using a realistic head model in CST Microwave Studio. The proposed antenna is fabricated on an FR-4 substrate with dimensions of 37 × 54.5 × 1.6 mm³, corresponding to an electrical size of 0.176λ × 0.260λ × 0.0076λ at the lowest operating frequency of 1.43 GHz. Ground-plane slot enhancements are introduced to achieve wideband performance, resulting in an impedance bandwidth from 1.43 to 4 GHz and a fractional bandwidth of 94.7%. The antenna exhibits a maximum realized gain of 3.7 dB. To evaluate its suitability for medical applications, specific absorption rate (SAR) analysis is performed using a realistic human head model at multiple antenna positions and at 1.5, 2.1, 2.5, 3.3, and 3.9 GHz frequencies. The computed SAR values range from 0.109 to 1.56 W/kg averaged over 10 g of tissue, satisfying the IEEE C95.1 safety guideline limit of 2 W/kg. For tumor detection assessment, time-domain simulations are conducted in CST Microwave Studio using a monostatic radar configuration, where the antenna operates as both transmitter and receiver at twelve angular positions around the head with 30° increments. The collected scattered signals are processed using the Delay-and-Sum (DAS) beamforming algorithm to reconstruct dielectric contrast maps and localize the tumor. 
It should be noted that the tumor-imaging demonstrations presented in this work are based on numerical simulations, while experimental validation is limited to the characterization of the fabricated antenna. Nevertheless, the findings indicate that the proposed antenna is a promising candidate for noninvasive, low-cost microwave brain tumor imaging applications. Full article
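The quoted fractional bandwidth and electrical size follow from standard definitions and the figures in the abstract, which makes them easy to cross-check:

```python
def fractional_bandwidth(f_low_ghz, f_high_ghz):
    """FBW = 2 (f_h - f_l) / (f_h + f_l), in percent."""
    return 200.0 * (f_high_ghz - f_low_ghz) / (f_high_ghz + f_low_ghz)

fbw = fractional_bandwidth(1.43, 4.0)
print(f"fractional bandwidth = {fbw:.1f}%")   # matches the reported 94.7%

c = 299_792_458.0
lam_mm = c / 1.43e9 * 1e3                     # free-space wavelength, mm
print(f"37 mm = {37 / lam_mm:.3f} lambda")    # ≈ 0.176λ, as reported
```

Both reported values reproduce from the band edges and substrate dimensions alone, with no antenna-specific assumptions.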
14 pages, 1100 KB  
Article
Three-Dimensional Displacement Patterns in Maxillary Molar Distalization: A Comparative Finite Element Study
by Roland Kmeid, Joseph Bouserhal, Allahyar Geramy, Maria Daccache and Moschos Papadopoulos
Dent. J. 2026, 14(3), 187; https://doi.org/10.3390/dj14030187 - 23 Mar 2026
Abstract
Objectives: This study aimed to analyze the three-dimensional displacement of maxillary first molars using a finite element model with two headgear configurations, namely cervical and horizontal pull headgears, as well as pendulum, infrazygomatic miniscrews, Bollard miniplates, Advanced Molar Distalization Appliance (AMDA), and Beneslider. The goal was to clarify how variations in anchorage design and force direction influence molar movement across the sagittal, vertical, and transverse planes. Methods: A three-dimensional finite element model of the maxillary dentition and supporting structures was constructed using reference anatomical data and standardized material properties. Each appliance was virtually simulated under its clinically recommended force magnitude and direction to ensure realistic biomechanical conditions. The orientation of each force vector relative to the molar’s center of resistance (CR) was analyzed, and resulting tooth displacements were quantified along the sagittal (Z), vertical (Y), and transverse (X) axes using 49-node reference paths connecting key anatomical landmarks. Results: Appliances applying forces through or above the molar CR, such as the AMDA, infrazygomatic miniscrews, and Bollard miniplates, produced nearly bodily distalization with minimal tipping (<0.6° (range 0.3–0.6°)) and slight intrusion (−0.12 to −0.18 mm). Conversely, systems delivering forces below the CR, such as the cervical headgear and pendulum, resulted in greater crown tipping and extrusion. The Beneslider exhibited an intermediate displacement pattern with moderate vertical control. Conclusions: Force vector height and direction relative to the molar CR critically determine 3D displacement behavior. Skeletal anchorage and adjustable systems, particularly the AMDA, demonstrated the most controlled distalization pattern with minimal tipping, whereas conventional tooth-borne designs induced more tipping and extrusion. Full article
(This article belongs to the Special Issue Accelerated Orthodontics: The Modern Innovations in Orthodontics)
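The link between force line of action and tipping, central to the findings above, reduces to the basic moment relation M = F·d about the center of resistance: zero offset gives bodily movement, larger offsets give larger tipping moments. The force and offset values below are illustrative only, not taken from the simulations:

```python
def moment_about_cr(force_n, offset_mm):
    """Tipping moment (N·mm) of a distalizing force whose line of action
    passes offset_mm below (+) or above (-) the center of resistance."""
    return force_n * offset_mm

# Illustrative comparison: force through the CR vs. a cervical-pull-style
# line of action passing below it (numbers are hypothetical).
for name, offset in [("through CR", 0.0), ("below CR", 6.0)]:
    print(f"{name}: {moment_about_cr(2.0, offset):.1f} N·mm")
```

This is why the appliances whose vectors pass through or above the CR (AMDA, infrazygomatic miniscrews, Bollard miniplates) show near-bodily distalization, while those loading below it tip and extrude the crown.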
23 pages, 352 KB  
Article
Performance Comparison of Python-Based Complex Event Processing Engines for IoT Intrusion Detection: Faust Versus Streamz
by Maryam Abbasi, Filipe Cardoso, Paulo Váz, José Silva, Filipe Sá and Pedro Martins
Computers 2026, 15(3), 200; https://doi.org/10.3390/computers15030200 - 23 Mar 2026
Abstract
The proliferation of Internet of Things (IoT) devices has intensified the need for efficient real-time anomaly and intrusion detection, making the selection of an appropriate Complex Event Processing (CEP) engine a critical architectural decision for security-aware data pipelines. Python-based CEP frameworks offer compelling advantages through the seamless integration with data science and machine learning ecosystems; however, rigorous comparative evaluations of such frameworks under realistic IoT security workloads remain absent from the literature. This study presents the first systematic comparative evaluation of Faust and Streamz—two Python-native CEP engines representing fundamentally different architectural philosophies—specifically in the context of IoT network intrusion detection. Faust was selected for its actor-based stateful processing model with native Kafka integration and distributed table support, while Streamz was selected for its reactive, lightweight pipeline design targeting high-throughput stateless processing, making them representative of the two dominant paradigms in Python stream processing. Although both engines target different application niches, their performance characteristics under realistic CEP workloads have never been rigorously compared, leaving practitioners without empirical guidance. The primary evaluation employs an IoT network intrusion dataset comprising 583,485 events from 83 heterogeneous devices. To assess whether the observed performance characteristics are specific to this single dataset or generalize across different workload profiles, a secondary IoT-adjacent benchmark is included: the PaySim financial transaction dataset (6.4 million records), selected because its event schema, fraud-pattern temporal structure, and volume differ substantially from the intrusion dataset, providing a stress test for cross-workload robustness rather than a claim of domain equivalence. 
A second IoT-specific intrusion dataset (such as TON_IoT or Bot-IoT) would provide a more directly comparable validation and is identified as a priority for future work. The load levels used in scalability experiments (up to 5000 events per second) intentionally exceed the dataset’s natural rate to stress-test each engine’s architectural ceiling and identify saturation thresholds relevant to large-scale or multi-sensor IoT deployments. We conducted controlled experiments with comprehensive statistical analysis. Our results demonstrate that Streamz achieves superior throughput at 4450 events per second with 89% efficiency and minimal resource consumption (40 MB memory, 12 ms median latency), while Faust provides robust intrusion pattern detection with 93–98% accuracy and stable, predictable resource utilization (1.4% CPU standard deviation). A multi-framework comparison including Apache Kafka Streams and offline scikit-learn baselines confirms that Faust achieves detection quality competitive with JVM-based alternatives (Faust: 96.2%; Kafka Streams: 96.8%; absolute difference of 0.6 percentage points, not statistically significant at p = 0.318) while retaining the Python ecosystem advantages. Statistical analysis confirms significant performance differences across all metrics (p < 0.001, Cohen’s d > 0.8). Critical scalability thresholds are identified: Streamz maintains efficiency above 95% up to 3500 events per second, while Faust degrades beyond 2500 events per second. These findings provide IoT security engineers and system architects with actionable, empirically grounded guidance for CEP engine selection, establish reproducible benchmarking methodology applicable to future Python-based stream processing evaluations, and advance theoretical understanding of the accuracy–throughput trade-off in stateful versus stateless Python CEP architectures. Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)

15 pages, 2312 KB  
Article
Magnetodynamic Characteristics of QGP Energy Dissipation in RMHD Framework with Relativistic Heavy-Ion Collisions
by Huang-Jing Zheng and Sheng-Qin Feng
Particles 2026, 9(1), 29; https://doi.org/10.3390/particles9010029 - 19 Mar 2026
Abstract
Relativistic heavy-ion collisions generate ultra-strong magnetic fields that interact with the quark–gluon plasma (QGP), a key focus of high-energy physics research. This study investigates QGP energy density evolution under time-dependent magnetic fields within a (1 + 1)D relativistic magnetohydrodynamic (RMHD) framework integrated with Bjorken flow. Three magnetic field temporal evolution models (Type-1, Type-2, Type-3) are analyzed for two different equations of state: (1) p = c_s^2 e (simplified ultra-relativistic), and (2) p = c_s^2 e − 2MB (magnetized conformal), incorporating a temperature-dependent magnetic susceptibility derived from lattice QCD. Results show that stronger magnetic fields consistently suppress QGP energy density decay, with suppression magnitude dependent on the magnetic field’s temporal profile. Ultra-relativistic fluids exhibit slowed energy decay due to magnetic pressure counteracting hydrodynamic expansion. In contrast, magnetized conformal fluids display faster energy dissipation under identical conditions, arising from the synergistic effect of enhanced magnetic fluid coupling, increased energy dissipation during interaction, and QGP’s perfect fluid expansion at elevated temperatures. Temperature-dependent magnetic susceptibility reveals a transition from diamagnetic (confined phase) to paramagnetic (deconfined QGP phase) behavior, introducing a feedback mechanism that strengthens energy retention at higher temperatures. This work clarifies the interplay between magnetic field dynamics, QCD phase structure, and hydrodynamic expansion, providing key observational signatures for distinguishing fluid types in heavy-ion collisions and advancing realistic modeling of magnetized QGP. Full article
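For the first equation of state, the field-free Bjorken baseline has the closed-form solution e(τ) = e₀ (τ₀/τ)^(1 + c_s²). The sketch below checks a forward-Euler integration against that solution; it covers only the unmagnetized limit, with the paper's magnetic source terms and susceptibility omitted, and all numerical values are illustrative.

```python
def bjorken_energy(e0, tau0, tau_end, cs2, steps=100_000):
    """Forward-Euler integration of de/dtau = -(1 + cs2) * e / tau,
    i.e. Bjorken flow with the ultra-relativistic EoS p = cs2 * e."""
    dtau = (tau_end - tau0) / steps
    e, tau = e0, tau0
    for _ in range(steps):
        e -= (1.0 + cs2) * e / tau * dtau
        tau += dtau
    return e

e0, tau0, tau_end, cs2 = 10.0, 0.5, 5.0, 1.0 / 3.0  # illustrative units
numeric = bjorken_energy(e0, tau0, tau_end, cs2)
analytic = e0 * (tau0 / tau_end) ** (1.0 + cs2)  # e0 * (tau0/tau)^(1 + cs2)
```

A magnetic pressure contribution would add a positive source term on the right-hand side, slowing this decay, which is the suppression effect the paper quantifies.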

17 pages, 539 KB  
Article
Wavelet-Based Error-Correcting Codes: Performance Comparison with BCH in Modern Channels
by Alla Levina and Sergey Boyko
Mathematics 2026, 14(6), 993; https://doi.org/10.3390/math14060993 - 14 Mar 2026
Abstract
Reliable data transmission over noisy channels requires effective error-correcting codes. While classical algebraic constructions, such as Bose–Chaudhuri–Hocquenghem (BCH) codes, remain industry standards, structured alternatives based on discrete wavelet transforms offer potential benefits in terms of implementation complexity and error resilience. This study presents a comparative analysis of BCH and wavelet-based linear block codes, focusing on their error-correction capability and overall performance under realistic wireless channel conditions. This work evaluates both coding schemes across five channel models: additive white Gaussian noise (AWGN), Rayleigh fading, sinusoidal attenuation, multiplicative Gaussian noise, and a composite Rayleigh-plus-sinusoid channel. Performance is assessed using bit error rate (BER), frame error rate (FER), and decoding reliability across a range of signal-to-noise ratios. Results show that wavelet codes achieve error-correction performance comparable to or slightly better than BCH in most channels. Notably, they demonstrate a consistent advantage in scenarios with periodic or slow-varying interference, outperforming BCH starting from the 1.5 dB SNR threshold where the wavelet code achieves a BER reduction of up to 48% and a 37.5% improvement in FER, significantly enhancing decoding reliability in structured noise environments. These findings indicate that wavelet-based codes are not only viable but, in specific practical environments characterized by structured noise, represent a superior alternative for robust and reliable communication systems. Full article
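As a point of reference for the reported BER curves, uncoded BPSK over AWGN has the closed-form bit error rate BER = ½·erfc(√(E_b/N_0)), the baseline any coded scheme (BCH or wavelet) must improve on. The sketch below evaluates it at a few SNRs, including the 1.5 dB threshold mentioned in the abstract; this is the textbook baseline, not the paper's simulation.

```python
import math

def bpsk_awgn_ber(ebno_db):
    """Theoretical bit error rate of uncoded BPSK over an AWGN channel:
    BER = 0.5 * erfc(sqrt(Eb/N0)), with Eb/N0 given in dB."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(ebno))

for snr_db in (0.0, 1.5, 3.0, 6.0):
    print(f"{snr_db:4.1f} dB -> BER {bpsk_awgn_ber(snr_db):.3e}")
```

The steep fall-off of this curve is why small coding-gain differences (such as the 48% BER reduction reported for the wavelet code) translate into substantial reliability gains near the waterfall region.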
(This article belongs to the Section E1: Mathematics and Computer Science)

32 pages, 12219 KB  
Article
Stochastic Mechanical Response and Failure Mode Transition of Corroded Buried Pipelines Subjected to Reverse Faulting
by Tianchong Li, Kaihua Yu, Yachao Hu, Ruobing Wu, Yuchao Yang and Feng Liu
Materials 2026, 19(5), 1033; https://doi.org/10.3390/ma19051033 - 8 Mar 2026
Abstract
Buried oil and gas pipelines, the critical arteries of global energy infrastructure, are increasingly vulnerable to severe geological hazards such as reverse faulting, yet their structural integrity is often pre-compromised by stochastic corrosion damage accumulated during service. However, quantifying the coupled impact of spatial corrosion heterogeneity and large ground deformation remains a formidable challenge due to the complex nonlinearities involved in soil–structure interactions and wall thinning. This study establishes a probabilistic assessment framework integrating random field theory, nonlinear finite element analysis, and a generative conditional diffusion model to characterize realistic 2D non-Gaussian corrosion morphologies. The numerical results reveal a significant geometric stiffening effect induced by internal pressure, where moderate operating levels effectively suppress cross-sectional distortion by counteracting the Brazier effect. Consequently, this mechanism facilitates a fundamental transition in failure modes from localized tensile rupture to ductile buckling, significantly extending the critical fault displacement threshold. Furthermore, probabilistic fragility analysis demonstrates that the spatial dispersion of pitting, rather than just average wall thinning, governs the initiation of premature failure. Mechanistic analysis indicates that high internal pressure, while providing pneumatic support, exacerbates tensile strain localization at corrosion pits, leading to a heightened probability of premature rupture under minor fault deformations, a critical hazard that traditional deterministic models significantly underestimate. These findings provide a quantitative theoretical foundation for the reliability-based design and maintenance of energy lifelines traversing active tectonic zones. Full article
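The fragility logic described above can be illustrated with a toy Monte Carlo loop: sample a random pit-depth ratio, compare a displacement-driven strain demand against a corrosion-reduced capacity, and count failures. Every distribution and coefficient below is a made-up placeholder standing in for the paper's random-field model and nonlinear finite element analysis.

```python
import math
import random

def failure_probability(fault_disp_m, n_samples=20_000, seed=1):
    """Monte Carlo estimate of P(failure) for a toy limit state."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        # Hypothetical lognormal pit-depth ratio (median 0.15, log-std 0.4)
        depth_ratio = min(0.8, math.exp(rng.gauss(math.log(0.15), 0.4)))
        capacity = 0.03 * (1.0 - depth_ratio)   # allowable tensile strain
        demand = 0.004 * fault_disp_m           # strain demand per metre of offset
        failures += demand > capacity
    return failures / n_samples
```

Sweeping fault_disp_m traces a fragility curve; widening the log-std while holding the median fixed raises the failure probability at small displacements, which mirrors the abstract's point that pit dispersion, not just average wall thinning, governs premature failure.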
(This article belongs to the Section Materials Simulation and Design)

22 pages, 8497 KB  
Article
Influence of Retrofitting by Clamps on the Behaviour of Dry Stone Historical Masonry Structures Under Seismic Load
by Nikolina Živaljić, Ivan Balić, Hrvoje Smoljanović, Boris Trogrlić and Ante Munjiza
Buildings 2026, 16(5), 1062; https://doi.org/10.3390/buildings16051062 - 7 Mar 2026
Abstract
Dry stone structures, especially in the Mediterranean area, often constitute cultural heritage buildings. The strategic goal is to preserve significant structures; it is therefore necessary to understand, as well as possible, how they behave under the expected actions, so that appropriate decisions can be made when retrofitting is necessary. One of the most destructive actions on structures is an earthquake. This paper therefore assessed the behaviour of three dry stone historical structures under seismic loading in the historic centre of the city of Split in Croatia: the bell tower of St. Domnius Cathedral, the Eastern colonnade, and the Prothyron in Diocletian’s Palace. The presented numerical analyses were performed using the Y-2D computer programme, based on the combined finite-discrete element method. The structures were modelled with plane models in which stone blocks were modelled as discrete elements. In addition to allowing the estimation of seismic resistance, this numerical model provides a very realistic expected failure mechanism, which is its significant advantage: this information is crucial for determining appropriate measures when structural repairs become necessary for these types of structures, and it was used here to determine where the structures need to be strengthened. By incrementally increasing the ground acceleration, the seismic resistance of the structures with the original geometry was first analysed for all three earthquakes. After the failure mechanism was obtained, the structures were strengthened with clamps, and the influence of retrofitting on the seismic resistance and failure mechanism was analysed for the most unfavourable earthquake load. Full article
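The incremental procedure described in the abstract (scale the ground acceleration, rerun, check for collapse) amounts to a simple capacity scan. A minimal sketch is shown below, with a trivial lambda standing in for the actual Y-2D finite-discrete element run, whose computational cost is the real bottleneck; the step sizes and threshold are illustrative assumptions.

```python
def seismic_capacity(collapses, a_start=0.05, a_step=0.05, a_max=1.0):
    """Scale peak ground acceleration (in g) upward until the model
    collapses; return the last acceleration the structure survived.
    `collapses(a)` stands in for a full nonlinear simulation at PGA a."""
    survived = 0.0
    i = 0
    while True:
        a = a_start + i * a_step
        if a > a_max or collapses(a):
            return survived
        survived = a
        i += 1

# Toy collapse criterion standing in for the Y-2D run:
capacity = seismic_capacity(lambda a: a > 0.42)  # -> approximately 0.40
```

Running the same scan on the retrofitted model (a different `collapses` callable) and comparing the two capacities quantifies the benefit of the clamps.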
(This article belongs to the Special Issue Challenges in Structural Repairs and Renovations)
