Search Results (1,831)

Search Parameters:
Keywords = black-boxing

25 pages, 7158 KB  
Article
Self-Consistency-Based Fake Media Detection Using Multi-Perspective LLM Reasoning
by Zeinab Shahbazi, Sara Behnamian, Zahra Shahbazi and Sadiqa Jafari
Electronics 2026, 15(9), 1822; https://doi.org/10.3390/electronics15091822 - 24 Apr 2026
Abstract
The rapid proliferation of synthetic and misleading media has intensified the need for robust fake media detection systems. While large language models (LLMs) have recently been employed as classifiers for misinformation detection, most existing approaches treat them as black-box predictors, overlooking their internal reasoning dynamics. In this paper, we propose a novel framework for fake media detection based on self-consistency divergence across multi-perspective LLM reasoning. Instead of generating a single verdict, the proposed method prompts an LLM to analyze a given media item from multiple independent reasoning perspectives, including factual consistency, logical coherence, emotional manipulation, and source credibility. By sampling multiple reasoning chains under controlled stochasticity, semantic divergence and logical instability across the generated explanations are quantified. We hypothesize, and empirically show, that fake media induces significantly greater reasoning variance than genuine content because fabricated narratives often lack stable factual grounding. Experiments conducted on benchmark fake news datasets show that reasoning divergence serves as a strong discriminative signal, improving detection robustness and interpretability compared to standard single-pass LLM classifiers. The findings suggest that internal reasoning instability can function as an intrinsic reliability metric, opening a new direction for explainable and model-centric fake media detection. Full article
(This article belongs to the Special Issue Multimodal Learning for Multimedia Content Analysis and Understanding)
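The self-consistency divergence idea in the abstract above can be illustrated with a toy score: sample several independent verdicts for one media item and measure how much they disagree. The `reasoning_divergence` helper and the choice of label entropy as the divergence metric are illustrative assumptions, not the paper's actual method, which scores semantic divergence across full reasoning chains.

```python
from collections import Counter
import math

def reasoning_divergence(verdicts):
    """Entropy of the verdict distribution across sampled reasoning
    chains: 0.0 means perfect self-consistency, higher values mean
    the chains disagree (the hypothesized 'fake media' signal)."""
    total = len(verdicts)
    counts = Counter(verdicts)
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values()) + 0.0

# Consistent chains (stable factual grounding) vs. unstable ones:
print(reasoning_divergence(["real"] * 8))          # 0.0
print(reasoning_divergence(["real", "fake"] * 4))  # 1.0
```

In the paper's framing, each verdict would come from an LLM prompted under a different reasoning perspective (factual consistency, logical coherence, etc.) with sampling temperature above zero.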
18 pages, 3912 KB  
Article
Beyond the Black Box: Resin Viscosity and Tensile Strength as Fabrication Guides for VPP 3D-Printed Microfluidic Molds
by Rifat Hussain Chowdhury, Shunya Okamoto, Takayuki Shibata, Tuhin Subhra Santra and Moeto Nagai
Micro 2026, 6(2), 29; https://doi.org/10.3390/micro6020029 - 24 Apr 2026
Abstract
Resin 3D-printed molds are being increasingly favored for PDMS microfluidics across many disciplines. However, resin diversity, as well as secret manufacturer formulations, leads to a lack of standardization when using 3D printing for microscale applications. The impact of physical resin properties, both in the monomeric concoction and polymerized lattices at 100 µm or lower scales, needs quantification. We tested the performance of locally available resin formulations, isolating the impact of resin pigments and how they affected the resins’ properties and performance. Lower resin viscosity improved feature fidelity (edge filleting < 25 µm) and the resolution limit for recessed features, while cured polymer mechanical strength impacted the limit for positive mold features. We combined our findings to fabricate quality negative and positive structures in the mold and determined the best protocols associated with limitations during the fabrication of such structures. The methodologies in this study are expected to be widely applicable across various resin types and simplify the adoption of 3D printing protocols for specific feature fabrication in microscale molds for PDMS devices. Full article
11 pages, 387 KB  
Article
Depth Fragility and Skeletal Universality: Decoupling Topology and Function in Deep Neural Networks
by Quang Nguyen, Hai Ha Pham, Davide Cassi and Michele Bellingeri
Mathematics 2026, 14(9), 1438; https://doi.org/10.3390/math14091438 - 24 Apr 2026
Abstract
Deep neural networks (DNNs) are traditionally analyzed as black-box function approximators, yet their internal structure exhibits phase transitions characteristic of complex physical systems. In this study, we investigate topological–functional decoupling—the phenomenon whereby a network retains full graph connectivity while losing computational function—in trained neural networks through the lens of percolation theory. By subjecting three distinct architectures (Shallow, Deep, and Wide MLPs) to a unified edge-pruning analysis on Fashion-MNIST, we uncover a fundamental divergence between structural integrity and computational capacity in this experimental setting. We report three key phenomena observed in these experiments: (1) the zombie network state under stochastic pruning, where the system retains global connectivity (P ≈ 1.0) yet suffers a catastrophic functional collapse (accuracy falls below 50% of baseline at pruning ratios p_f ≈ 0.35–0.68, depending on depth), proves that graph reachability does not imply computational capability; (2) depth fragility, where increased network depth triggers multiplicative signal decay (the avalanche effect), rendering deep architectures exponentially more vulnerable to random edge removal than shallow ones (p_f ≈ 0.35 for deep vs. p_f ≈ 0.68 for shallow networks); and (3) scale-free universality, observed under magnitude-based pruning, where a robust functional skeleton maintains accuracy near the baseline (∼89%) up to extreme sparsity (p_f ≈ 0.85–0.95) across all three architectures. Robustness stems not from holographic redundancy in the overall connection count but from the emergent heavy-tailed rich-club organization of weight magnitudes—a sparse set of high-magnitude synapses that form the functional backbone of the network, decoupled from the redundant topological mass. These findings offer new physical constraints for the design of resilient neuromorphic hardware. Full article
(This article belongs to the Section E: Applied Mathematics)
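The contrast between the stochastic and magnitude-based pruning regimes described above can be sketched on a flat weight list. This is a toy illustration under assumed names (`prune`), not the paper's experimental pipeline, but it shows why magnitude pruning preserves the high-magnitude "functional skeleton" that random pruning destroys:

```python
import random

def prune(weights, ratio, mode="random", seed=0):
    """Zero out a fraction `ratio` of edges and return the result.

    mode="random"    -- stochastic pruning (the 'zombie' regime)
    mode="magnitude" -- remove the smallest-|w| edges first, keeping
                        the high-magnitude rich-club skeleton intact.
    """
    n_remove = int(len(weights) * ratio)
    if mode == "random":
        rng = random.Random(seed)
        idx = set(rng.sample(range(len(weights)), n_remove))
    else:
        order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
        idx = set(order[:n_remove])
    return [0.0 if i in idx else w for i, w in enumerate(weights)]

w = [0.01, -0.02, 0.9, -1.1, 0.03, 0.8, -0.05, 1.2]
kept = prune(w, 0.5, mode="magnitude")
# Magnitude pruning retains exactly the four largest-|w| edges:
print([x for x in kept if x != 0.0])  # [0.9, -1.1, 0.8, 1.2]
```

Random pruning at the same ratio hits large and small weights indiscriminately, which is the mechanism behind the depth-dependent avalanche collapse the paper reports.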
26 pages, 1015 KB  
Article
AI-Driven Biopsychosocial Screening for Breast Cancer: Enhancing Risk Prediction via Differential Evolutionary Linear Discriminant Analysis for Feature Extraction
by José Luis Llaguno-Roque, Adriana Laura López-Lobato, Juan Carlos Pérez-Arriaga, Héctor Gabriel Acosta-Mesa, Ángel J. Sánchez-García, Gabriel Gutiérrez-Ospina, Antonia Barranca-Enríquez and Tania Romo-González
Math. Comput. Appl. 2026, 31(3), 66; https://doi.org/10.3390/mca31030066 - 24 Apr 2026
Abstract
In Mexico, the high prevalence and mortality rates associated with breast cancer (BC) constitute a critical public health challenge that demands context-specific preventive measures. This study proposes an integrative framework for predicting BC risk based on a biopsychosocial model. We hypothesize that emotional suppression and repression act as key neuroendocrine disruptors and predisposing factors within the Mexican female population. To test this, we systematically compared the predictive performance of various machine learning classification models using the clinical, psychological, and combined profiles of 110 women. These models were evaluated with and without the application of a robust evolutionary algorithm: Differential Evolutionary Linear Discriminant Analysis for Feature Extraction (DELDAFE). The results demonstrated that integrating clinical and psychological data into a combined latent space significantly improved the performance of the classification algorithms. The Artificial Neural Network achieved the highest metrics (0.9975 Precision; 0.9976 F1-score). However, due to the inherent “black-box” nature of these models (limited clinical interpretability), the Decision Tree emerged as the optimal practical alternative, providing highly competitive (0.8874 Precision; 0.8853 F1-score) and interpretable results. These findings provide empirical evidence that psychological factors, rather than being mere incidental comorbidities, could be associated with the etiology of breast cancer and be used as risk factors in predicting the disease. Ultimately, this AI-driven biopsychosocial screening model offers a scalable, low-cost, and context-adapted risk assessment tool for early BC diagnosis in Mexican women. Full article
(This article belongs to the Special Issue New Trends in Computational Intelligence and Applications 2025)
23 pages, 24540 KB  
Article
Landscape Drivers of Trail Formation in Peri-Urban Mountains: Insights from an Explainable Machine Learning Approach
by Qin Guo, Shili Chen, Xueyue Bai and Yue Zhang
Land 2026, 15(5), 715; https://doi.org/10.3390/land15050715 - 24 Apr 2026
Abstract
The rapid growth of hiking tourism presents a critical challenge for balancing visitor safety with the sustainable management of ecologically fragile mountain environments. Traditional models developed in urban settings struggle to capture the highly non-linear, heterogeneous, and zero-inflated characteristics of wilderness trekking behavior. In order to quantify the nonlinear and threshold-based effects of environmental variables on hikers’ spatial decisions in unstructured wilderness and to identify distinct behavioral regimes for segmented management, this study introduces an explainable machine learning framework to reconstruct hikers’ spatial decision-making in a complex mountainous system in Inner Mongolia, China. Random Forest (RF), XGBoost, and LightGBM were compared in predicting trail density and the Euclidean distance to the nearest trail. Results show that transforming behavioral traces into continuous proximity surfaces dramatically improves model performance, with XGBoost achieving the highest predictive accuracy for Trail_Dist. By integrating the SHapley Additive exPlanations framework, this study moves beyond black-box prediction to reveal the nonlinear mechanisms driving hiker behavior. Key findings include: (1) Nighttime light range exhibits a U-shaped threshold effect as the primary anthropogenic attractor. (2) Elevation shows an exponential inhibitory trend above 1238 m. (3) Strong spatial coupling exists between elevation and slope, alongside a landscape compensation effect where high Normalized Difference Vegetation Index (NDVI) areas attract off-trail movements. This research provides a robust methodological pathway for predicting behavior in unstructured outdoor environments. It offers a scientific foundation for smart scenic area management, including optimized route planning, precise ecological protection zoning, and targeted emergency rescue preparedness. Full article
19 pages, 2579 KB  
Article
Black-Box Hyperparameter Optimization for Financial RAG Retrieval: An Efficiency–Effectiveness Trade-Off Study
by Yangyang Jin, Xindi Wang and Qianli Dong
Information 2026, 17(5), 405; https://doi.org/10.3390/info17050405 - 24 Apr 2026
Abstract
This study examines black-box hyperparameter optimization for financial retrieval-augmented generation (RAG) retrieval under limited budget constraints. Using FinQA as the primary dataset, it compares Grid Search, Random Search, and Bayesian Optimization under a unified search space, evaluation protocol, and multi-seed setting, and further uses FinanceBench for external validation. The results show that Random Search and Bayesian Optimization can approach the Grid reference at substantially lower cost, but the small development-set advantage of Bayesian Optimization does not remain stable on the test set or across repeated runs. A more consistent finding is that high-performing configurations are concentrated in a limited parameter region. Overall, the results suggest that, in budget-constrained financial RAG retrieval tuning, identifying stable high-performing parameter regions may be more useful than relying on increasingly complex optimization methods. Full article
(This article belongs to the Section Information Processes)
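The budget trade-off examined above can be made concrete with a minimal sketch: exhaustive grid search against random search over the same space, scored by a hypothetical dev-set metric (the `retrieval_score` function and its parameter names are illustrative assumptions, not the paper's FinQA setup). Note how random search touches a small fraction of the grid's budget:

```python
import itertools
import random

def retrieval_score(top_k, chunk_overlap):
    """Hypothetical dev-set metric with a plateau of good configs
    around top_k = 8 and chunk_overlap = 0.2."""
    return 1.0 - 0.002 * (top_k - 8) ** 2 - 1.5 * (chunk_overlap - 0.2) ** 2

# Full grid: 20 x 11 = 220 evaluations (the Grid reference).
grid = list(itertools.product(range(1, 21), [i / 20 for i in range(11)]))
best_grid = max(grid, key=lambda c: retrieval_score(*c))

# Random search: 25 evaluations from the same space.
rng = random.Random(42)
sample = rng.sample(grid, 25)
best_rand = max(sample, key=lambda c: retrieval_score(*c))

print("grid  :", best_grid, retrieval_score(*best_grid))
print("random:", best_rand, retrieval_score(*best_rand))
```

Because high-scoring configurations cluster in one region of the space (as the study also finds), a small random sample usually lands near the grid reference at roughly a tenth of the cost.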
24 pages, 750 KB  
Article
Adversarial Evaluation of Large Language Models for Building Robust Offensive Language Detection in Moroccan Arabic
by Soufiyan Ouali, Kanza Raisi, Asmaa Mourhir, El Habib Nfaoui and Said El Garouani
Big Data Cogn. Comput. 2026, 10(5), 132; https://doi.org/10.3390/bdcc10050132 - 24 Apr 2026
Abstract
Offensive language detection is crucial for ensuring safe and inclusive digital environments. Identifying harmful content protects users and supports healthier online interactions. Despite advances in transformer-based models, particularly Large Language Models (LLMs), their application to this task remains underexplored for low-resource languages such as Moroccan Arabic, especially compared with high-resource languages. This study evaluates the performance of various open- and closed-source LLMs for offensive language detection in Moroccan Darija. The evaluated models include general-purpose LLMs such as LLaMA, Mistral, and Gemma, as well as Arabic-focused models such as ArabianGPT, Falcon Arabic, and Atlas-Chat. We also experiment with reasoning models such as DeepSeek and GPT-4. Beyond traditional evaluation metrics, we investigate the robustness of these LLMs and examine the impact of adversarial training on their performance. Moreover, we contribute to the field by creating a large, high-quality dataset. Our evaluation revealed that GPT-4o Mini achieved the best overall performance, reaching an F1-score of 88%. However, robustness testing under black-box and white-box adversarial attacks exposed notable vulnerabilities, with attack success rates reaching 30%, thereby highlighting the need for enhancement. Despite the complex morphology and linguistic variability of Moroccan Darija, adversarial training resulted in a notable improvement in both overall model performance and robustness against adversarial attacks, yielding an average increase of 20.89% in resistance to attacks. Furthermore, this approach enabled GPT-4o Mini to achieve an F1-score of 91%, surpassing the current state-of-the-art performance by 6%. These results highlight the importance of incorporating adversarial approaches in low-resource dialectal settings to effectively address linguistic variability and data scarcity. Full article
(This article belongs to the Special Issue Natural Language Processing Applications in Big Data)
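The black-box attack setting mentioned above can be caricatured with a character-swap loop against a toy keyword detector. Everything here (the `char_swap_attack` helper and the stand-in classifier) is a deliberately simplified assumption; real attacks of the kind the paper evaluates use far richer perturbation spaces, especially for Moroccan Darija's orthographic variability:

```python
import random

def char_swap_attack(text, classifier, max_tries=50, seed=0):
    """Black-box attack sketch: apply random adjacent-character swaps
    until the classifier's verdict flips or the budget runs out."""
    rng = random.Random(seed)
    original = classifier(text)
    for _ in range(max_tries):
        i = rng.randrange(len(text) - 1)
        candidate = text[:i] + text[i + 1] + text[i] + text[i + 2:]
        if classifier(candidate) != original:
            return candidate  # successful adversarial example
        text = candidate      # keep walking from the perturbed text
    return None               # attack failed within budget

# Toy keyword-based 'detector' standing in for an LLM classifier:
toy = lambda t: "offensive" if "badword" in t else "clean"
adv = char_swap_attack("this contains badword here", toy)
print(adv)
```

Adversarial training, as used in the paper, would fold examples like `adv` back into the fine-tuning set so the model learns to resist such surface perturbations.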
20 pages, 2352 KB  
Article
Experimental Analysis of an AZ31 Magnesium Alloy Structural FPV Drone Frame: Comparison with Aluminum and Carbon Fiber
by Andrij Milenin
Processes 2026, 14(9), 1361; https://doi.org/10.3390/pr14091361 - 24 Apr 2026
Abstract
This study investigates the thermal and vibration-attenuation performance of a novel 7-inch FPV drone frame manufactured from cast AZ31 magnesium alloy (MG), compared to 6061-T6 aluminum (AL) and carbon fiber (CF) composite structures under an extreme payload of 2 kg. Using quantitative spectral analysis of Blackbox flight logs, the research demonstrates that the MG frame provides superior system-level vibration damping, particularly under high-stress conditions. Under a 2 kg payload, the MG frame exhibited a 49% reduction in vibration power compared to the AL frame. Spectral data identified primary resonance peaks for the MG frame at 147 Hz (0 kg) and 204 Hz (2 kg), whereas the AL frame showed significantly higher frequency peaks at 179.5 Hz (0 kg) and 239.4 Hz (2 kg). Comparative modal hammer tests further validated these findings, with the magnesium design exhibiting lower impulse energy (0.22 mW/Hz) and faster decay than aluminum (0.24 mW/Hz). Thermal imaging analysis showed better motor cooling for the metallic frames; average motor temperatures on the magnesium frame (51.8 °C) and AL frame (50.3 °C) were significantly lower than on the CF structure (77.5 °C). The findings establish that AZ31 magnesium alloy offers an excellent synergy of lightweight stiffness and damping capacity, making it a viable alternative for heavy-duty FPV platforms requiring high signal integrity. Full article
(This article belongs to the Section Materials Processes)
23 pages, 1602 KB  
Article
Evaluation of Water Vapor Feedback Using a Two-Layer Atmospheric Box Model
by Kazuma Morimoto, Hiroshi Kobayashi and Hiroyuki Shima
Mod. Math. Phys. 2026, 2(2), 4; https://doi.org/10.3390/mmphys2020004 - 23 Apr 2026
Abstract
Massive-scale, ultra-high-resolution numerical simulations for climate change prediction provide data of exceptional accuracy and reliability. However, this comes at the cost of enormous computational resources, and the underlying processes often remain a “black box”. In contrast to these sophisticated methods, we theoretically analyzed the water vapor feedback effect using a highly simplified model that focuses exclusively on the most critical physical factors governing climate change. Specifically, we formulated a two-layer box model by dividing the entire atmosphere into layers of equal optical thickness. Using this model, we quantitatively verified the extent to which the water vapor feedback effect—a key driver of global warming—can be theoretically reproduced. Full article
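For reference, the textbook radiative-equilibrium balance for a two-layer gray atmosphere (equal optical thickness per layer) has a closed form; this is the standard derivation, not necessarily the authors' exact formulation, which additionally couples the optical thickness to water vapor:

```latex
% Top-of-atmosphere balance fixes the emission temperature T_e:
\sigma T_2^4 = \tfrac{1}{4}(1-\alpha)S_0 \equiv \sigma T_e^4
% Each layer absorbs from its neighbors and emits both up and down:
2\sigma T_2^4 = \sigma T_1^4
  \quad\Rightarrow\quad T_1^4 = 2\,T_e^4
2\sigma T_1^4 = \sigma T_s^4 + \sigma T_2^4
  \quad\Rightarrow\quad T_s^4 = 2T_1^4 - T_2^4 = 3\,T_e^4
% Hence T_s = 3^{1/4} T_e \approx 1.32\,T_e. Water vapor feedback
% enters by letting the optical depth, and thus the effective layer
% count, increase with T_s via Clausius--Clapeyron.
```

The surface balance is consistent with these equations: \(\sigma T_s^4 = \sigma T_e^4 + \sigma T_1^4 = (1 + 2)\,\sigma T_e^4\).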
30 pages, 1167 KB  
Article
Does CSR Implementation Transfer into Better Performance? Empirical Evidence from Chinese Construction SMEs
by Yunxia Ran, Azlan Shah Ali, Liyin Shen, Haowei Yu, Tao Wang, Fuchuan Zhou and Bucai Hu
Buildings 2026, 16(9), 1653; https://doi.org/10.3390/buildings16091653 - 23 Apr 2026
Abstract
Due to acute resource constraints and environmental turbulence, many small and medium-sized construction enterprises (SMEs) prioritize short-term survival over corporate social responsibility (CSR) initiatives. Grounded in social exchange theory (SET), this study investigates how CSR implementation drives financial performance (FP) via the mediating role of non-financial performance (NP), aiming to deconstruct the “psychological black box” of this transformation. Drawing on a sequential mixed-methods design involving PLS-SEM analysis of 380 responses and 10 semi-structured interviews, the results confirm that CSR practices, particularly ethical practices and community engagement, can be effectively translated into improved NP, which acts as a vital strategic conduit for enhancing FP. However, skills development and training showed limited immediate impact due to a systemic “digital mismatch” and significant time-lag effects. Theoretically, this research refines SET by identifying a hierarchical transition where socio-emotional assets serve as compensatory resources in volatile and resource-constrained environments. Practically, the findings offer a strategic roadmap for SMEs to mitigate technological and systemic barriers, providing novel pathways for fostering CSR to achieve sustainable growth. Full article
26 pages, 1283 KB  
Article
BlackBoxTestGen: An Automatic Black-Box Test Case Generation Framework
by Adisak Intana, Kuljaree Tantayakul and Pongsakorn Kaewnaka
Computers 2026, 15(5), 263; https://doi.org/10.3390/computers15050263 - 22 Apr 2026
Abstract
Software testing is essential for software engineering practices, as it ensures that the final software product is reliable and satisfies all requirements before delivery. However, manually designing black-box test cases is time-consuming, inconsistent, and difficult to maintain in accordance with changing specifications. Therefore, this paper presents BlackBoxTestGen, an automatic framework that unifies three specification-driven black-box testing techniques, including rule-based Equivalence Class Partitioning (ECP), syntax, and state transition testing. The framework utilises a redesigned XML structure for test case generation to be shared among a data dictionary, decision tree, and state machine, used by each testing technique. The degree of testing coverage is cumulatively calculated during the test case generation process. The practical value of our proposed framework was demonstrated with the development of a web-based prototype tool. We rigorously evaluated its performance in terms of accuracy, computational efficiency, and scalability through a multidimensional approach. This included assessment by professional experts, algorithmic stress testing via parameter scaling, and application to close-to-realistic case studies. The results indicate that BlackBoxTestGen provides a robust integration of testing techniques. By automating the generation of compact and reproducible test cases, the framework substantially reduces manual effort and minimises drift between techniques. Full article
(This article belongs to the Special Issue Advancing Software Engineering with Artificial Intelligence)
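The equivalence-class-partitioning technique named above can be sketched in a few lines: each valid input range yields representative and boundary values, and values just outside the boundaries form the invalid classes. The `ecp_test_cases` helper and its input format are illustrative assumptions, not BlackBoxTestGen's actual XML-driven generator:

```python
def ecp_test_cases(partitions):
    """Generate representative + boundary test values per equivalence
    class. `partitions` maps a field name to its (lo, hi) valid range;
    values just outside each boundary form the invalid classes."""
    cases = {}
    for field, (lo, hi) in partitions.items():
        cases[field] = {
            "valid": [lo, (lo + hi) // 2, hi],   # boundaries + midpoint
            "invalid": [lo - 1, hi + 1],         # just outside each end
        }
    return cases

spec = {"age": (18, 65), "quantity": (1, 99)}
print(ecp_test_cases(spec)["age"])
# {'valid': [18, 41, 65], 'invalid': [17, 66]}
```

The framework's other two techniques (syntax and state transition testing) would be driven analogously from the shared data dictionary and state machine.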
70 pages, 5036 KB  
Review
A Review of Mathematical Reduced-Order Modeling of PCM-Based Latent Heat Storage Systems
by John Nico Omlang and Aldrin Calderon
Energies 2026, 19(9), 2017; https://doi.org/10.3390/en19092017 - 22 Apr 2026
Abstract
Phase change material (PCM)-based latent heat storage (LHS) systems help address the mismatch between renewable energy supply and thermal demand. However, their practical implementation is constrained by the strongly nonlinear and multiphysics nature of phase change, which makes high-fidelity simulations and real-time applications computationally expensive. This review examines mathematical reduced-order modeling (ROM) as an effective strategy to overcome this limitation by combining physics-based simplifications, projection methods, interpolation techniques, and data-driven models for PCM-based LHS systems. While physical simplifications (such as dimensional reduction and effective property approximations) represent an important first layer of model reduction, the primary focus of this work is on the mathematical ROM methodologies that operate on the governing equations after such physical simplifications have been applied. The review covers approaches including two-temperature non-equilibrium and analytical thermal-resistance models, Proper Orthogonal Decomposition (POD), CFD-derived look-up tables, kriging and ε-NTU grey/black-box metamodels, and machine-learning methods such as artificial neural networks and gradient-boosted regressors trained from CFD data. These ROM techniques have been applied to packed beds, PCM-integrated heat exchangers, finned enclosures, triplex-tube systems, and solar thermal components, achieving speed-ups from tens to over 80,000 times faster than full CFD simulations while maintaining prediction errors typically below 5% or within sub-Kelvin temperature deviations. A critical comparative analysis exposes the fundamental trade-off between interpretability, data dependence, and computational efficiency, leading to a practical decision-making framework that guides method selection for specific applications such as design optimization, real-time control, and system-level simulation. 
Remaining challenges—including accurate representation of phase change nonlinearity, moving phase boundaries, multi-timescale dynamics, generalization across geometries, experimental validation, and integration into industrial workflows—motivate a structured roadmap for future hybrid physics–machine learning developments, standardized validation protocols, and pathways toward industrial deployment. Full article
(This article belongs to the Section D: Energy Storage and Application)
96 pages, 2106 KB  
Article
A Random Field Theory of Electromagnetic Information
by Said Mikki
Entropy 2026, 28(5), 481; https://doi.org/10.3390/e28050481 - 22 Apr 2026
Abstract
As a rigorous and comprehensive foundation for electromagnetic information theory (EIT), we develop a general theory that elucidates the universal stochastic structure of radiated electromagnetic (EM) fields and induced currents in generic EM information transmission systems. The framework encompasses arbitrary random scatterers, input information fields, and EM mutual coupling. The system is modeled as a multiply connected, arbitrary Riemannian manifold within the language of differential geometry. Our approach exploits exact Green’s functions (GFs) on manifolds to construct a novel electromagnetic random field theory (EM-RFT). Interpreted as response functions localized on the surfaces of transceivers and scatterers, the GFs allow us to treat the internal physical details of the EM system as a black box, redirecting analytical attention toward external input–output relations in line with signal processing and communication theory. This integration of random fields (RFs), electromagnetics, and GFs yields a unified framework for deriving and characterizing the stochastic structure of arbitrary EM information transmission systems. We rigorously establish that EM random fields satisfying Maxwell’s equations can always be constructed using system GFs driven by external information fields. The theory further decouples stochastic input RFs from random fluctuations associated with the communication medium (e.g., scatterers), and introduces general correlation propagators valid for arbitrary EM links. Using the Karhunen–Loève expansion, all EM random fields are represented as sums of random variables, providing both a simulation framework for arbitrary EM RFs and a basis for evaluating mutual information between input and output spatial domains at arbitrary locations in the system. Full article
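The Karhunen–Loève expansion invoked above has a standard form, sketched here in generic notation (the symbols are generic, not the paper's):

```latex
% Mercer eigenproblem for the correlation kernel over the domain D:
\int_{D} C(\mathbf{r}, \mathbf{r}')\,\varphi_n(\mathbf{r}')\,d\mathbf{r}'
  = \lambda_n\,\varphi_n(\mathbf{r})
% KL expansion of the random field as a countable sum of uncorrelated
% unit-variance random variables \xi_n:
E(\mathbf{r}, \omega)
  = \sum_{n=1}^{\infty} \sqrt{\lambda_n}\,\xi_n(\omega)\,\varphi_n(\mathbf{r}),
\qquad \mathbb{E}[\xi_m \xi_n] = \delta_{mn}
```

Truncating the sum at N terms is what yields both the simulation framework for arbitrary EM random fields and a finite basis for evaluating mutual information between input and output spatial domains.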
20 pages, 4107 KB  
Article
Surface Fractal Characterization of Granite Cut by Diamond Wire Saw
by Yihe Liu, Yufei Gao and Jiahao Xu
Fractal Fract. 2026, 10(5), 276; https://doi.org/10.3390/fractalfract10050276 - 22 Apr 2026
Abstract
The surface quality of granite cut by diamond wire saw significantly impacts the cost of subsequent processes such as grinding and polishing. Traditional evaluation parameters like surface roughness (Ra) or peak-to-valley value (PV) face challenges in characterizing the surface morphology. This study introduces fractal dimension (FD) as a potential auxiliary parameter for evaluating the surface quality of sawn granite. Cutting experiments were conducted on Shanxi Black granite using varying wire speeds, feed speeds, and workpiece sizes. The box-counting method was employed to extract the three-dimensional fractal dimension (3D FD) of the granite surface, which characterizes the overall surface complexity, as well as the distribution of two-dimensional fractal dimensions (2D FD) for granite surface cross-sectional profiles at different angles. The results indicate that the granite-sawn surface exhibits complex micro-morphology featuring brittle micro-pits and wavelike saw marks along the feed direction. A strong negative correlation exists between the 3D FD and both surface roughness Ra and PV value, suggesting that 3D FD can serve as an indicator of granite surface quality, with higher FD values corresponding to better surface quality. Moreover, compared to the PV value constrained by material heterogeneity, 3D FD more effectively represents the true surface quality of the granite. Additionally, the distribution characteristics of 2D FD at different angles effectively reveal surface anisotropy and damage. The results suggest that a more symmetrical 2D FD distribution is associated with consistent surface integrity in the evaluated samples. This suggests that FD has the potential to serve as a meaningful auxiliary parameter for characterizing granite surface quality. The findings hold significant importance for the accurate evaluation of diamond wire-saw-cut granite surfaces and provide a basis for the formulation of subsequent grinding process. Full article
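The box-counting method used above to extract fractal dimension admits a compact sketch for a planar point set: count occupied grid boxes N(s) at several grid resolutions s, then fit the slope of log N against log s. The `box_counting_dimension` helper is an illustrative minimal implementation, not the paper's 3D surface pipeline:

```python
import math

def box_counting_dimension(points, scales=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a planar point set in [0,1)^2.

    Counts occupied boxes N(s) on an s x s grid and returns the
    least-squares slope of log N(s) vs. log s."""
    logs, logn = [], []
    for s in scales:
        boxes = {(int(x * s), int(y * s)) for x, y in points}
        logs.append(math.log(s))
        logn.append(math.log(len(boxes)))
    n = len(scales)
    mx, my = sum(logs) / n, sum(logn) / n
    return (sum((a - mx) * (b - my) for a, b in zip(logs, logn))
            / sum((a - mx) ** 2 for a in logs))

# A densely sampled filled square recovers dimension ~2:
pts = [(i / 64, j / 64) for i in range(64) for j in range(64)]
print(round(box_counting_dimension(pts), 2))  # 2.0
```

A rough sawn surface profile would land between the integer dimensions of a smooth line (1) and a filled plane (2), which is what makes the fractal dimension a sensitive roughness indicator.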
23 pages, 2737 KB  
Article
Multimodal and Explainable Deep Learning for Occupational Accident Classification Using Transformer-LSTM Architectures
by Esin Ayşe Zaimoğlu
Buildings 2026, 16(9), 1642; https://doi.org/10.3390/buildings16091642 - 22 Apr 2026
Abstract
Occupational safety analytics is increasingly moving toward data-driven methodologies; however, existing models often struggle to capture the multidimensional nature of accident causation. This study presents a multimodal Hybrid Transformer-LSTM framework for classifying occupational fatalities by jointly modeling unstructured narratives, cyclical temporal features, and [...] Read more.
Occupational safety analytics is increasingly moving toward data-driven methodologies; however, existing models often struggle to capture the multidimensional nature of accident causation. This study presents a multimodal Hybrid Transformer-LSTM framework for classifying occupational fatalities by jointly modeling unstructured narratives, cyclical temporal features, and regional spatial indicators. Utilizing a large-scale dataset of 14,914 OSHA fatality records, the proposed architecture leverages BERT-based embeddings for semantic extraction and Bidirectional LSTMs as non-linear pattern encoders for spatiotemporal context. Conceptually grounded in the Swiss Cheese Model, the framework treats different data modalities as proxies for distinct layers of system risk, ranging from proximal unsafe acts to environmental preconditions. Experimental results show that the multimodal architecture achieves an accuracy of 84.56%, representing a 5.33% gain over unimodal BERT baselines. To address the inherent “black-box” nature of deep learning, a SHAP-based explainability framework is incorporated to quantify the contributions of both textual tokens and environmental features to the model’s decision-making process. The results indicate that integrating narrative semantics with temporal and spatial context enhances discriminative performance and enables context-aware classification within a weakly supervised setting. By providing a scalable and interpretable classification framework, this study offers a data-driven decision-support approach for safety professionals and regulatory bodies seeking to implement evidence-based risk management strategies in high-risk industrial sectors. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
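The cyclical temporal features mentioned above are commonly produced with a sine/cosine mapping onto the unit circle, so that period boundaries (hour 23 and hour 0, December and January) stay adjacent in feature space. This is the generic encoding, assumed here for illustration rather than taken from the paper:

```python
import math

def cyclical_encode(value, period):
    """Map a cyclic feature (hour, weekday, month) onto the unit
    circle; raw values 23 and 0 become nearby points instead of
    opposite ends of a linear scale."""
    angle = 2 * math.pi * value / period
    return math.sin(angle), math.cos(angle)

# Hour 23 and hour 0 are close in encoded space, unlike raw values:
dist = math.dist(cyclical_encode(23, 24), cyclical_encode(0, 24))
print(round(dist, 3))  # 0.261
```

Feeding such pairs alongside BERT text embeddings and regional indicators is one simple way a multimodal model like the one above can treat time of day as genuinely cyclic.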