Search Results (1,616)

Search Parameters:
Keywords = connectivity probability

49 pages, 6577 KB  
Review
The Path from PCA to Autoencoders to Variational Autoencoders: Building Intuition for Deep Generative Modeling
by Alaa Tharwat and Mahmoud M. Eid
Stats 2026, 9(2), 23; https://doi.org/10.3390/stats9020023 - 28 Feb 2026
Abstract
This tutorial provides a comprehensive and intuitive journey through the evolution of deep generative models, tracing a clear path from the foundations of Principal Component Analysis (PCA) to modern Variational Autoencoders (VAEs), showing how each method solves the limitations of the previous one. We begin with PCA, a linear tool for reducing data dimensions. Its inability to model non-linear patterns motivates the use of Autoencoders (AEs), which use neural networks to learn flexible, compressed representations. However, AEs lack a probabilistic framework, preventing them from generating new data. VAEs address this by treating the latent space as a probability distribution, enabling data generation. We compare the three methods through theoretical analysis, experiments, and step-by-step numerical examples that show exactly how each model compresses data, a detail often missing elsewhere. Unlike resources that treat these topics separately, we connect them into a single narrative, building intuition progressively from linear to probabilistic deep generative models.
(This article belongs to the Section Applied Statistics and Machine Learning Methods)
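The tutorial's starting point can be made concrete in a few lines: PCA is exactly the optimal linear autoencoder, in the sense that projecting onto the top-k principal directions and mapping back minimizes squared reconstruction error among rank-k linear maps. A minimal numpy sketch (not the authors' code; the data and dimensions are invented):

```python
import numpy as np

def pca_reconstruct(X, k):
    """Encode X into k principal components, then decode (reconstruct)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # Rows of Vt are principal directions (right singular vectors).
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k].T               # linear "encoder"; decoder reuses W.T
    Z = Xc @ W                 # latent codes, shape (n, k)
    return Z @ W.T + mu        # rank-k reconstruction

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Xk = pca_reconstruct(X, k=5)   # full rank: reconstruction is exact
print(np.allclose(Xk, X))      # True
```

Lowering k increases the reconstruction error monotonically, which is the limitation (no non-linearity, no generative distribution) that the tutorial uses to motivate AEs and then VAEs.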
42 pages, 7988 KB  
Article
Topology Reconstruction Algorithm Design for Multi-Node Failure Scenarios in FANET
by Jia-Wang Chen, Hua-Min Chen, Shaofu Lin, Shoufeng Wang and Hui Li
Drones 2026, 10(3), 159; https://doi.org/10.3390/drones10030159 - 26 Feb 2026
Abstract
With the advancement of UAV (Unmanned Aerial Vehicle) technology, flying ad-hoc networks (FANETs), composed of multiple coordinating UAVs, demonstrate tremendous application potential in disaster relief, environmental monitoring and intelligent logistics. However, inherent resource constraints and unpredictable operating environments make UAV failures a frequent and critical challenge. Particularly in mission-critical applications, simultaneous or consecutive failures of multiple UAVs can severely disrupt network topology, leading to catastrophic consequences such as network fragmentation and service interruptions. Furthermore, traditional topology reconstruction algorithms suffer from high computational overhead and significant communication delays. Primarily designed for single-node failure recovery, they are ill-equipped to address the challenge of concurrent multi-node failures. To address these challenges, this paper proposes a topology reconstruction algorithm tailored for multi-node failure scenarios in FANETs. The core objective of this algorithm is to minimize communication overhead and secondary damage to the network during the reconstruction process while ensuring basic reconstruction results, thereby improving the system's energy efficiency and robustness. The proposed framework integrates three key phases: First, overlapping communication coverage areas among neighbors of failed nodes are leveraged to define first and second regions, enabling rapid identification of connection restoration candidate positions and avoiding computationally intensive global calculations. Second, a comprehensive importance evaluation mechanism is constructed based on the topological and functional attributes of nodes, categorizing nodes into different importance types. For failed nodes of varying importance, differentiated search ranges and retry strategies are employed to ensure the most suitable nodes are selected for reconstruction tasks. Third, the inflexibility of repulsion ranges in the traditional artificial potential field (APF) method is addressed by introducing dynamic repulsion influence zones and a composite repulsion model. The improved APF algorithm enhances safety in high-speed scenarios and reduces the probability of UAVs becoming trapped in local minima. Finally, extensive simulations validate that the proposed algorithm accurately identifies critical network nodes and promptly implements effective reconstruction measures to minimize network damage.
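The paper's improved APF model is not reproduced here, but the classical repulsive-force scheme it modifies is easy to sketch. The velocity-dependent influence radius below is a hypothetical stand-in for the "dynamic repulsion influence zones" idea; all constants are illustrative:

```python
import numpy as np

def repulsive_force(pos, obstacle, speed, eta=1.0, rho_base=10.0, k_v=0.5):
    """Khatib-style APF repulsion with a speed-scaled influence radius."""
    diff = pos - obstacle
    rho = np.linalg.norm(diff)        # distance to the obstacle
    rho0 = rho_base + k_v * speed     # influence zone grows with UAV speed
    if rho >= rho0 or rho == 0.0:
        return np.zeros_like(pos)     # outside the influence zone: no force
    # Magnitude grows without bound as rho -> 0; direction is away
    # from the obstacle.
    mag = eta * (1.0 / rho - 1.0 / rho0) / rho**2
    return mag * (diff / rho)

f_near = repulsive_force(np.array([1.0, 0.0]), np.zeros(2), speed=0.0)
f_far = repulsive_force(np.array([20.0, 0.0]), np.zeros(2), speed=0.0)
print(np.linalg.norm(f_near) > 0, np.linalg.norm(f_far) == 0.0)
```

Enlarging rho0 at high speed makes the repulsion act earlier, which is one simple way to improve safety in fast scenarios; the paper's composite repulsion model goes further than this sketch.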
24 pages, 5580 KB  
Article
DF-TransVAE: A Deep Fusion Network for Binary Classification-Based Anomaly Detection in Internet User Behavior
by Huihui Fan, Yuan Jia, Wu Le, Zhenhong Jia, Hui Zhao, Congbing He, Hedong Jiang, Zeyu Hu, Xiaoyi Lv, Jianting Yuan and Xiaohui Huang
Appl. Sci. 2026, 16(5), 2243; https://doi.org/10.3390/app16052243 - 26 Feb 2026
Abstract
User behavior anomaly detection plays a vital role in network security for identifying malicious access and abnormal activities in high-dimensional internet user behavior data. Although Transformer architectures have been widely adopted in anomaly detection tasks, and their integration with Variational Autoencoders (VAEs) has often been used to further improve detection accuracy, existing integration methods have failed to effectively balance global feature dependency modeling and generative data distribution learning. This results in limited capability in identifying complex anomalous patterns. To address this issue, this paper proposes DF-TransVAE, a novel deeply integrated framework that advances the integration of a Transformer and a VAE for supervised anomaly detection. The framework first fuses global contextual representations from the Transformer encoder with original input features, then maps the fused representation into the latent space via the VAE encoder. A cross-attention mechanism is introduced as the core of deep integration, enabling dynamic, bidirectional interaction between the fused features and latent variables to enhance information fusion. Lastly, a fully connected classifier equipped with residual connections outputs anomaly probabilities for supervised binary classification. Experimental results on two public datasets demonstrate that the proposed framework achieves better performance than existing deep learning methods in terms of accuracy, precision, recall, and F1-score, particularly in detecting complex anomalous patterns. Our results indicate that the deep integration mechanism we propose effectively addresses the limitations of conventional Transformer–VAE combinations.
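The cross-attention mechanism at the core of the framework is standard scaled dot-product attention with queries from one stream and keys/values from the other. A minimal numpy sketch, with shapes and the choice of streams (fused features as queries, latent variables as keys/values) as illustrative assumptions rather than the paper's exact architecture:

```python
import numpy as np

def cross_attention(Q_src, KV_src, Wq, Wk, Wv):
    """Scaled dot-product cross-attention between two token streams."""
    Q, K, V = Q_src @ Wq, KV_src @ Wk, KV_src @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])       # similarity logits
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)            # row-wise softmax
    return w @ V                                  # attended values

rng = np.random.default_rng(0)
fused = rng.normal(size=(8, 16))    # 8 tokens of fused Transformer features
latent = rng.normal(size=(4, 16))   # 4 latent-variable "tokens"
W = [rng.normal(size=(16, 16)) * 0.1 for _ in range(3)]
out = cross_attention(fused, latent, *W)
print(out.shape)                    # (8, 16)
```

Running the same operator with the roles of the two streams swapped is one simple way to realize the bidirectional interaction the abstract describes.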
15 pages, 1916 KB  
Article
Evaluation of Starlink Low Earth Orbit Satellite Internet Connectivity to Support Smart Forestry Applications in Varying Stand Conditions in the Inland Northwest
by Axel N. Wall, Robert F. Keefe and Eloise G. Zimbelman
Forests 2026, 17(3), 290; https://doi.org/10.3390/f17030290 - 25 Feb 2026
Abstract
The global push to advance smart and digital forestry relies on emerging technologies to support efficient, AI-assisted, and data-driven forest management, but many forest operations occur in remote forests where reliable internet connectivity is unavailable. Low Earth Orbit (LEO) satellite constellations such as Starlink may provide reliable connectivity where cellular networks are unavailable. The performance of LEO-based solutions remains poorly understood under forest canopies, and empirical evaluations linking canopy characteristics to connectivity performance are largely lacking. In this study, the effect of forest vegetation on Starlink performance below the canopy was evaluated by placing a satellite receiver at thirty randomly selected permanent single tree inventory plots on the University of Idaho Experimental Forest and measuring connection success, connection time, and upload and download speeds along 50 m transects in all cardinal directions. LiDAR-derived stand density index (SDI), leaf area index (LAI), rumple index (RI), and vegetation cover (VC) were used to quantify canopy structure. Principal Component Analysis and survival analysis showed that higher values of PC1, primarily driven by SDI, LAI, and RI, reduced the probability of establishing a connection. Linear regression analysis indicated that higher SDI increased connection time, indicating that denser stands slowed or prevented connectivity. Linear mixed-effects models demonstrated that internet speed primarily declined with increasing distance, with download and upload rates dropping beyond 40 m from the router. LAI, RI, and VC did not influence connection time or speed, suggesting that overall stand density rather than leaf area per unit ground area has a greater impact on signal obstruction. Overall, dense forest structure and distance are the main constraints on LEO satellite connectivity and performance, and understanding these limitations supports the development and deployment of satellite-based networking to advance smart forestry operations. These results provide one of the first quantitative assessments of LEO satellite connectivity constraints in operational forest conditions, offering practical guidance for deploying satellite-based networks to support smart forestry applications in remote environments.
(This article belongs to the Section Forest Operations and Engineering)
29 pages, 877 KB  
Article
A Mathematical Framework for Radio Resource Assignment in UAV-Aided Vehicular Communications
by Francesca Conserva and Chiara Buratti
Drones 2026, 10(3), 156; https://doi.org/10.3390/drones10030156 - 24 Feb 2026
Abstract
Unmanned Aerial Vehicles (UAVs), when equipped as communication relays, offer a flexible solution to extend Vehicle-to-Vehicle (V2V) communications beyond fixed infrastructure and Non-Line-of-Sight constraints. In this setting, the allocation of radio resources, across time, frequency and space through beamforming, is challenged by the mobility of Connected and Autonomous Vehicles (CAVs) and their temporal dependencies, as access opportunities depend on prior transmission outcomes such as queue backlog or failed attempts. This paper proposes a Radio Resource Assignment (RRA) framework for UAV-aided V2V networks with beamforming-capable UAV relays. The model discretizes time and space to account for mobility and to track the movement of groups of CAVs across beam segments. The model also incorporates Time Division Multiple Access (TDMA)-based scheduling, beam activation constraints, and realistic traffic generation patterns. Analytical expressions are derived for per-user success probability and system throughput under both ideal and realistic conditions, and they are validated against simulations, confirming the accuracy of the proposed approximations. Numerical results highlight trade-offs involving UAV altitude and resource allocation interval, while a heuristic beam-activation optimization strategy is shown to further enhance performance, achieving up to 12% throughput gain over uniform activation.
(This article belongs to the Section Drone Communications)
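The paper's analytical expressions are not reproduced here, but the general shape of a per-user success probability under slot-based access is easy to illustrate. A toy composition, assuming independent per-slot successes (an assumption the paper's queue-dependent model deliberately relaxes):

```python
def success_probability(p, K):
    """P(at least one of K independent TDMA slot attempts succeeds)."""
    return 1.0 - (1.0 - p) ** K

def system_throughput(n_users, rate, p, K):
    """Aggregate throughput: per-user rate scaled by success probability."""
    return n_users * rate * success_probability(p, K)

# With a 60% per-slot success rate and up to 3 slots per packet:
print(round(success_probability(0.6, 3), 3))   # 1 - 0.4**3 = 0.936
```

In the paper, p itself depends on geometry (UAV altitude, beam segment) and on prior outcomes, which is exactly why the independence assumption above is only a starting point.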
28 pages, 7101 KB  
Article
Rainfall–Surface Runoff Estimation Using SCS-CN Model and Geospatial Techniques: A Case Study of the Shatt Al-Arab Region, Iraq–Iran
by Hadi Allafta, Christian Opp and Buraq Al-Baldawi
Earth 2026, 7(1), 32; https://doi.org/10.3390/earth7010032 - 19 Feb 2026
Abstract
Accurate quantification of surface runoff is required for the appropriate design of storage structures, irrigation patterns, waterways, erosion control structures, water harvesting projects, and groundwater development schemes. However, the paucity of runoff data in Iraq and Iran is a serious obstacle. The soil conservation service–curve number (SCS–CN) method is applied in conjunction with remote sensing (RS) and geographic information system (GIS) techniques to predict the surface runoff in the Shatt Al-Arab Region. In the present study, the Shatt Al-Arab Region is defined as the drainage areas and lateral sub-basins that contribute direct surface runoff to the main channel between Qurna city and the Arabian Gulf. Rainfall, land use/land cover (LULC), hydrologic soil group (HSG), and slope maps are developed in a GIS platform and processed to produce surface runoff for 35 years (1979–2013). The surface runoff ranges between 163 mm (2008) and 300 mm (1982) with an average of 233 mm yr⁻¹. The average annual surface runoff in the study area is 33.657 km³. A scatter plot constructed to visualize the connection between annual rainfall and annual runoff reveals a significant positive relation (coefficient of determination (r²) = 0.67, probability value (p) < 0.05). The runoff potential is low in the southern parts of the study area and gradually rises towards the northern parts. Cross-validation of the modeled annual runoff with the annual runoff data shows reasonably close matches (r² = 0.73, p < 0.001), demonstrating the procedure's suitability.
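The SCS-CN relation the study applies is a standard textbook formula, shown here in its metric form; the curve number and rainfall depth below are illustrative values, not the study's inputs:

```python
def scs_cn_runoff(P_mm, CN, lam=0.2):
    """SCS curve-number runoff depth (mm) for rainfall depth P_mm (mm)."""
    S = 25400.0 / CN - 254.0       # potential maximum retention (mm)
    Ia = lam * S                   # initial abstraction, usually 0.2 * S
    if P_mm <= Ia:
        return 0.0                 # rainfall absorbed before runoff starts
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

# CN = 100 is fully impervious: retention S = 0, so all rain runs off.
print(round(scs_cn_runoff(100.0, CN=100), 1))   # 100.0
```

Applying this cell by cell over the rainfall, LULC, HSG, and slope rasters is what the GIS workflow in the study automates.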
23 pages, 14906 KB  
Article
Stability Assessment of Reservoir Bank Anti-Dip Slopes Using a Modified Goodman–Bray Method and Monte Carlo Simulation
by Junheng Chen, Jiawen Zhou, Nan Jiang, Haibo Li, Yuxiang Hu, Hongyu Luo and Jieyuan Zhang
Water 2026, 18(4), 505; https://doi.org/10.3390/w18040505 - 18 Feb 2026
Abstract
Toppling failure is a fundamental mode of instability in rock slopes and occurs predominantly in reservoir bank anti-dip bedded rock masses. Reservoir impoundment changes seepage conditions and weakens slopes, whereas discontinuity non-persistence introduces uncertainty and complicates the identification of coupled toppling–sliding mechanisms. To address this, a probabilistic framework using the Goodman–Bray limit equilibrium method is developed. Equivalent strength parameters are introduced to unify the strength contrast between unsaturated and saturated segments along a common basal surface. Basal discontinuity connectivity is modeled as a random variable, and a Monte Carlo simulation is used to derive failure mode probabilities and a probability-weighted factor of safety. The framework is applied to the Huangcaoping anti-dip slope in the Dagangshan reservoir area at a normal water level of 1130 m. The most probable scenario has a probability of 0.116, involving sliding at 1120–1420 m and toppling at 1420–1550 m, with a probability-weighted mean factor of safety of 0.978. Predicted failure characteristics and deformation intervals are consistent with engineering observations, confirming the method's effectiveness. This integration enables the simultaneous characterization of stability levels and the evolution mechanism. The approach provides mechanism-explicit mode likelihoods and a robust stability metric to support hazard assessment, monitoring placement, and reinforcement design.
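The probabilistic step the abstract describes, sampling discontinuity connectivity and aggregating factors of safety, can be illustrated with a toy Monte Carlo loop. The planar-sliding formula, the Beta distribution for connectivity, and all parameter values below are hypothetical stand-ins, not the paper's Goodman–Bray model:

```python
import numpy as np

def weighted_fos(n=100_000, seed=0):
    """Toy Monte Carlo: random connectivity k -> distribution of FoS."""
    rng = np.random.default_rng(seed)
    k = rng.beta(4, 2, size=n)              # connectivity ratio in [0, 1]
    W, theta = 5000.0, np.radians(40.0)     # block weight (kN), basal dip
    c, phi, L = 60.0, np.radians(30.0), 25.0  # cohesion, friction, length
    # Only the intact (non-connected) fraction of the base keeps cohesion.
    resist = c * (1.0 - k) * L + W * np.cos(theta) * np.tan(phi)
    drive = W * np.sin(theta)
    fos = resist / drive
    return fos.mean(), (fos < 1.0).mean()   # mean FoS, failure probability

mean_fos, p_fail = weighted_fos()
print(mean_fos > 0 and 0.0 <= p_fail <= 1.0)
```

The paper goes further by classifying each realization into a failure mode (sliding, toppling, coupled) before weighting, which yields the mode probabilities quoted in the abstract.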
19 pages, 2621 KB  
Article
Defective Photovoltaic Module Detection Using EfficientNet-B0 in the Machine Vision Environment
by Minseop Shin, Junyoung Seo, In-Bae Lee and Sojung Kim
Machines 2026, 14(2), 232; https://doi.org/10.3390/machines14020232 - 16 Feb 2026
Abstract
Machine vision based on artificial intelligence technology is being actively utilized to reduce defect rates in the photovoltaic module production process. This study aims to propose a machine vision approach using EfficientNet-B0 for defective photovoltaic module detection. In particular, the proposed approach is applied to the electroluminescence (EL) operation, which identifies microcracks in PV modules by using polarization current. The proposed approach extracts low-level structures and local brightness variations, such as busbars, fingers, and cell boundaries, from a single convolutional block. Furthermore, the mobile inverted bottleneck convolution (MBConv) block progressively transforms defect patterns, such as microcracks and dark spots, that appear at various shooting angles into high-level feature representations. The converted image is then processed using global average pooling (GAP), Dropout, and a final fully connected layer (Dense) to calculate the probability of a defective module. A sigmoid activation function is then used to determine whether a PV module is defective. Experiments show that the proposed EfficientNet-B0-based methodology can stably achieve defect detection accuracy comparable to AlexNet and GoogLeNet, despite its relatively small number of parameters, while maintaining fast processing speed. Therefore, this study will contribute to increasing the efficiency of EL operation in industrial fields and improving the productivity of PV modules.
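The classification head the abstract describes (GAP, then a dense layer, then a sigmoid) is simple enough to sketch in numpy. The feature-map shape matches EfficientNet-B0's final stage, but the weights are random placeholders, not the trained model:

```python
import numpy as np

def defect_probability(feature_map, w, b):
    """GAP -> dense -> sigmoid: probability that a PV module is defective."""
    # feature_map: (H, W, C) activations from the last MBConv stage.
    gap = feature_map.mean(axis=(0, 1))     # global average pooling -> (C,)
    logit = gap @ w + b                     # fully connected layer, 1 unit
    return 1.0 / (1.0 + np.exp(-logit))     # sigmoid squashes to (0, 1)

rng = np.random.default_rng(0)
fmap = rng.normal(size=(7, 7, 1280))        # B0-like 7x7x1280 feature map
w, b = rng.normal(size=1280) * 0.01, 0.0
p = defect_probability(fmap, w, b)
print(0.0 < p < 1.0)                        # always a valid probability
```

Thresholding p at 0.5 gives the defective/non-defective decision; Dropout is omitted here because it is active only during training.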
18 pages, 324 KB  
Article
Simplicity and Complexity in Combinatorial Optimization
by Kamal Dingle and Marcus Hutter
Entropy 2026, 28(2), 226; https://doi.org/10.3390/e28020226 - 15 Feb 2026
Abstract
Many problems in physics and computer science can be framed in terms of combinatorial optimization. Due to this, it is interesting and important to study theoretical aspects of such optimization. Here, we study connections between Kolmogorov complexity, optima, and optimization. We argue that (1) optima and complexity are connected, with extrema being more likely to have low complexity (under certain circumstances); (2) optimization by sampling candidate solutions according to algorithmic probability may be an effective optimization method; and (3) coincidences in extrema to optimization problems are a priori more likely as compared to a purely random null model.
20 pages, 3193 KB  
Article
A Geological Modeling Workflow for Shale Reservoirs: A Case Study of the F2 Member in the Qintong Sag
by Maozhou Han, Siyu Yu, Shaohua Li, Changsheng Lu, Chijun Huang, Kailong Wei and Shengze Li
Appl. Sci. 2026, 16(4), 1759; https://doi.org/10.3390/app16041759 - 10 Feb 2026
Abstract
Shale reservoirs provide critical storage space for unconventional oil and gas, yet their frequent vertical facies alternations and complex spatial architectures make it difficult for conventional two-point geostatistical methods to reproduce thin interbedding and reservoir-scale continuity. Multiple-point geostatistics can incorporate structural information through training images (TIs), but practical 3D shale modeling is often hindered by the limited availability of representative 3D TIs. Using the F2 Member in the Qintong Sag, Subei Basin, eastern China, as a case study, we propose a hierarchical 2D-to-3D geological modeling workflow that combines mixed-point geostatistical simulation (MIXSIM) for generating vertical 2D facies sections and a sequential 2D simulation strategy with conditioning data (s2Dcd) for propagating section-based patterns into 3D space under hard well constraints. In the workflow, vertical sections serve as TI carriers to explicitly capture bedding-scale alternations, while well data are imposed as hard conditioning information during 3D simulation. Quantitative evaluation is performed in terms of (i) conditioning-data consistency, (ii) vertical facies-transition statistics quantified by transition counts and Markov transition probability matrices, (iii) global facies proportions summarized as the mean of 10 realizations, and (iv) connectivity characterized by connected geobody analysis. The realizations honor the conditioning data exactly, reproduce vertical transition behavior with a transition-matrix discrepancy of D_MAE = 0.0396, and maintain global facies proportions close to well-based estimates with a maximum deviation of 2.36%. These results demonstrate that the proposed MIXSIM–s2Dcd workflow provides a practical solution for well-data-driven, high-resolution 3D shale facies modeling when 3D training images are unavailable.
(This article belongs to the Section Earth Sciences)
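One of the evaluation metrics above, the Markov transition probability matrix, is estimated by counting upward facies transitions along a well and normalizing each row. A small sketch with an invented facies log (the codes 0/1/2 are placeholders, not the study's facies scheme):

```python
import numpy as np

def transition_matrix(log, n_facies):
    """Row-normalized counts of a -> b transitions along a facies log."""
    C = np.zeros((n_facies, n_facies))
    for a, b in zip(log[:-1], log[1:]):
        C[a, b] += 1.0                     # count each upward transition
    rows = C.sum(axis=1, keepdims=True)
    # Rows with no observed transitions stay all-zero instead of NaN.
    return np.divide(C, rows, out=np.zeros_like(C), where=rows > 0)

log = [0, 0, 1, 0, 1, 1, 2, 1, 0]          # e.g. 0=shale, 1=silt, 2=sand
P = transition_matrix(log, 3)
print(np.allclose(P.sum(axis=1), [1, 1, 1]))
```

A discrepancy like the quoted D_MAE can then be taken as the mean absolute difference between the matrix of a realization and that of the well data.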
44 pages, 940 KB  
Article
A Two-Level Relative-Entropy Theory for Isotropic Turbulence Spectra: Fokker–Planck Semigroup Irreversibility and WKB Selection of Dissipation Tails
by Shin-ichi Inage
Mathematics 2026, 14(4), 620; https://doi.org/10.3390/math14040620 - 10 Feb 2026
Abstract
We propose a two-level theory that connects Lin-equation-based dynamical coarse-graining of the turbulence cascade with an information-theoretic selection principle in logarithmic wavenumber space. This framework places the dissipation-range spectral shape on a verifiable logical basis rather than on ad hoc fitting. At the first (dynamical) level, we formulate an autonomous conservative Fokker–Planck equation for the normalized density and probability current. Under sufficient boundary decay and a strictly positive effective diffusion, the sign-reversed Kullback–Leibler divergence is shown to be a Lyapunov functional, yielding a rigorous H-theorem and fixing the arrow of time in scale space. At the second (selection) level, the dissipation range is treated as a stationary boundary-value problem for an open system by introducing a killing term for an unnormalized scale density. A WKB (Liouville–Green) analysis restricts the admissible tail to a stretched-exponential form and links the tail exponent to the high-wavenumber scaling of the effective diffusion. The exponential prefactor is fixed by dissipation-rate consistency, and the remaining degree of freedom is determined by one-dimensional Kullback–Leibler minimization (Hyper-MaxEnt) against a globally constructed reference distribution. The resulting exponent range is validated against the high-resolution DNS spectra reported in the literature.
(This article belongs to the Special Issue Mathematical Fluid Dynamics: Theory, Analysis and Emerging Trends)
31 pages, 2458 KB  
Article
On the Number of Spanning Trees of New Graph Families Created from the Star Graph and the Examination of Their Entropies
by Salama Nagy Daoud and Ahmad Asiri
Axioms 2026, 15(2), 122; https://doi.org/10.3390/axioms15020122 - 6 Feb 2026
Abstract
Complexity (number of spanning trees) is an essential and significant component in the design of communication networks (graphs). Improving a network's quality and robustness increases the number of trees that span it, ensuring strong resistance and stiffness and enhancing the probability of a connection between two vertices. Using block matrices and linear algebra techniques, we derive explicit formulas for the number of spanning trees of new graph families that are produced from star graphs in this study. The number of spanning trees in a graph is measured by the entropy of spanning trees, also known as asymptotic complexity, a graph theory metric that assesses the network's structural robustness and dependability. Increased flexibility, stronger diverse connections, and improved resistance to random structural changes are all indicated by higher entropy. We also investigate the entropy of spanning trees on our graphs at the end of this study. Lastly, we compare the entropy of our graphs to that of other previously studied graphs with average degrees of four and five.
(This article belongs to the Special Issue Graph Invariants and Their Applications)
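The quantity being counted has a standard computational route: by Kirchhoff's matrix tree theorem, the number of spanning trees equals any cofactor of the graph Laplacian. The small graphs below are sanity checks, not the star-based families the paper derives formulas for:

```python
import numpy as np

def spanning_trees(adj):
    """Count spanning trees via a cofactor of the graph Laplacian."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A           # Laplacian: degrees minus adjacency
    return round(np.linalg.det(L[1:, 1:]))   # delete row/col 0, take determinant

K4 = 1 - np.eye(4)                           # complete graph on 4 vertices
star5 = np.zeros((5, 5))
star5[0, 1:] = star5[1:, 0] = 1              # star graph: hub plus 4 leaves
print(spanning_trees(K4), spanning_trees(star5))  # 16 1
```

A star graph is itself a tree, hence the count of 1; the paper's constructions add edges to star graphs precisely to drive this count, and thus the spanning-tree entropy, upward.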
23 pages, 11570 KB  
Article
Geometric Graph Learning Network for Node Classification
by Lei Wang, Xitong Xu and Zhuqiang Li
Electronics 2026, 15(3), 696; https://doi.org/10.3390/electronics15030696 - 5 Feb 2026
Abstract
Graph attention improves neighbor discrimination, but it remains limited by local receptive fields and by a strong dependence on the input topology, which is often unreliable on heterophilous graphs. We propose Geometric Graph Learning Network (G2LNet), a structure-learning framework that infers message-passing probabilities from an explicit geometric topology learned in latent Euclidean or hyperbolic spaces. G2LNet combines (i) a geometric mapping module, (ii) distance- or inner-product-based relation operators with perceptual connectivity to control the influence of the given graph, and (iii) end-to-end constraint objectives enforcing stability, sparsity, and (optional) symmetry of the learned topology. This design yields unified local, non-local, and graph-free neighborhoods, enabling systematic analysis of when non-local aggregation helps. Experiments on node classification across nine publicly available benchmark datasets demonstrate that G2LNet's controlled variant consistently achieves higher accuracy than representative strong baseline models, both local and non-local, on most datasets. This establishes a robust alternative for smaller-scale node classification tasks.
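The core idea of inferring message-passing probabilities from learned geometry can be sketched with a softmax over negative pairwise distances in latent space. The embedding, temperature, and Euclidean choice below are illustrative; the paper's relation operators, perceptual connectivity, and hyperbolic variant are not reproduced:

```python
import numpy as np

def message_passing_probs(Z, tau=1.0):
    """Row-stochastic message-passing weights from latent positions Z."""
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)  # squared distances
    logits = -d2 / tau                   # closer nodes get higher logits
    np.fill_diagonal(logits, -np.inf)    # forbid self-messages
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    return w / w.sum(axis=1, keepdims=True)  # each row is a distribution

rng = np.random.default_rng(0)
P = message_passing_probs(rng.normal(size=(6, 3)))
print(np.allclose(P.sum(axis=1), 1.0), float(P.diagonal().max()))
```

Because the weights come from latent positions rather than the input edge list, aggregation can be non-local or even graph-free, which is the behavior the paper analyzes systematically.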
25 pages, 5062 KB  
Article
A Two-Vector Framework for MRI Knee Diagnostics: Fuzzy Risk Modeling, Digital Maturity, and Finite-Element Wear Assessment
by Akerke Tankibayeva, Saule Kumargazhanova, Bagdat Azamatov, Zhanerke Azamatova, Nail Beisekenov and Marzhan Sadenova
Appl. Sci. 2026, 16(3), 1554; https://doi.org/10.3390/app16031554 - 3 Feb 2026
Abstract
Knee disorders are a major indication for musculoskeletal imaging, yet MRI reliability remains constrained by signal nonuniformity, motion artefacts, protocol variability, and reader-dependent effects. This study presents an integrated two-vector framework that couples (i) a fuzzy diagnostic control-risk model with (ii) a quantitative digital-maturity assessment to strengthen MRI-based diagnosis of knee pathology. The vertical vector characterizes organizational readiness through a weighted fuzzy aggregation of six capability agents (technical, information and analytical, mathematical/model, metrological, human resources, and software support). The horizontal vector estimates producer's and consumer's risks as misclassification probabilities relative to an acceptance boundary, driven by measurement/interpretation uncertainty, variability of the decision threshold, and the ratio of instrumental to physiological dispersion. Simulation results indicate that error probabilities increase sharply when threshold uncertainty exceeds 20–25% and rise by approximately 15–20% as the standard-deviation ratio approaches unity. To connect diagnostic reliability with downstream mechanics, an FE analysis of the tibial insert in total knee arthroplasty (TKA) under F = 1150 N at 0° flexion predicts a peak contact pressure of 85.449 MPa and a maximum UHMWPE von Mises stress of 43.686 MPa, identifying wear-critical contact zones. Overall, the proposed framework provides interpretable quantitative targets for QA, protocol refinement, and resource allocation in radiology services undergoing digital transformation, and offers a reproducible pathway for linking imaging reliability to biomechanical risk.
(This article belongs to the Special Issue Advanced Techniques and Applications in Magnetic Resonance Imaging)
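The abstract's producer's and consumer's risks (a good case wrongly rejected, a bad case wrongly accepted) can be illustrated with a minimal Monte Carlo sketch. This is not the paper's fuzzy model; it assumes, purely for illustration, Gaussian physiological variability and additive Gaussian instrumental noise, with acceptance decided by comparing the *measured* value to a fixed threshold. All parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def misclassification_risks(mu, sigma_phys, sigma_instr, threshold, n=200_000):
    """Monte Carlo estimate of producer's and consumer's risks.

    True (physiological) values ~ N(mu, sigma_phys); the measurement adds
    instrumental noise ~ N(0, sigma_instr). A case is accepted when the
    *measured* value falls below `threshold`.
    """
    true = rng.normal(mu, sigma_phys, n)
    measured = true + rng.normal(0.0, sigma_instr, n)
    truly_ok = true < threshold
    accepted = measured < threshold
    producers_risk = np.mean(truly_ok & ~accepted)   # good case rejected
    consumers_risk = np.mean(~truly_ok & accepted)   # bad case accepted
    return producers_risk, consumers_risk

# Both risks grow as the instrumental/physiological dispersion ratio
# approaches unity, consistent with the trend reported in the abstract.
for ratio in (0.2, 0.5, 1.0):
    pr, cr = misclassification_risks(mu=0.0, sigma_phys=1.0,
                                     sigma_instr=ratio, threshold=1.0)
    print(f"ratio={ratio:.1f}  producer's risk={pr:.3f}  consumer's risk={cr:.3f}")
```

The sketch only reproduces the qualitative mechanism (misclassification probability rising with the dispersion ratio), not the paper's specific 15–20% figures.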
17 pages, 2537 KB  
Article
Numerical Analysis of In-Plane Stiffness of Light-Timber-Framed Wall Elements with Various Sheathing Materials
by Jelena Vilotijević and Miroslav Premrov
Buildings 2026, 16(3), 629; https://doi.org/10.3390/buildings16030629 - 2 Feb 2026
Viewed by 180
Abstract
This paper numerically analyses the parameters with the most sensitive impact on the in-plane lateral behaviour of light-timber-framed (LTF) wall elements. Different types of sheathing material (fibre-plaster boards, OSB) are studied for parametrically chosen fastener spacings using three calculation procedures: (a) a previously developed semi-analytical procedure based on the Modified Gamma Method (MGM), which simultaneously accounts for bending, shear, and sheathing-to-frame connection flexibility; (b) a previously developed FEM Spring Model, the most accurate approach; and (c) an innovative FEM 2D Hinge Model developed in this study, which uses a two-dimensional hinge layer to simulate the deformability between the sheathing boards and the timber frame and enables significantly faster FEM analysis than the FEM Spring Model, in turn allowing much faster analysis of real multi-storey timber structures. To isolate the influence of the sheathing material and fastener disposition, the tensile and compressive vertical supports are in all cases considered stiff, as prescribed by the valid Eurocode 5 standard; however, all three possible supporting flexibilities could additionally be included. The study places particular emphasis on the slip deformation of the fasteners between the sheathing boards and the timber frame, which arises from fastener flexibility and can significantly reduce the overall in-plane stiffness of LTF wall elements. For the selected parametric values of fastener spacing (s = 20, 37.5, 75, and 150 mm), a parametric FEM analysis using the special 2D hinge layer is additionally performed to validate the previously developed semi-analytical MGM expressions for the in-plane wall stiffness, which appear the most appropriate for engineering design implementation.
All applied wall-element modelling approaches considered the same parameters for evaluating the stiffness of an individual wall element, which is a fundamental input parameter when modelling frame wall elements within the overall structure. The aim of the study is to determine the most suitable and accurate model, since the response of the entire structure to horizontal loading depends on the design of the individual wall element. It has been demonstrated that the thickness of the load-bearing timber frame and the type of resisting LTF wall (internal or external) have practically no significant effect on the in-plane stiffness of such wall elements. Consequently, the type of sheathing material (FPB or OSB) and especially the spacing between the fasteners are much more sensitive parameters, which would probably need further consideration in future FEM studies. Full article
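The idea that bending, shear, and fastener slip each contribute flexibility to the wall's in-plane stiffness can be sketched as springs in series, with the fastener contribution scaling inversely with spacing. This is a generic illustration, not the paper's MGM expressions; the stiffness values, the per-fastener slip modulus `k_ser`, and the panel perimeter used below are hypothetical placeholders.

```python
# Illustrative series-spring model of an LTF wall element: the flexibilities
# (1/k) of frame bending, panel shear, and sheathing-to-frame fastener slip
# add up. All numerical values are hypothetical.

def fastener_row_stiffness(k_ser, spacing_mm, perimeter_mm):
    """Sheathing-to-frame connection stiffness: slip modulus per fastener
    times the number of fasteners along the panel perimeter (in parallel)."""
    n_fasteners = perimeter_mm / spacing_mm
    return k_ser * n_fasteners

def wall_stiffness(k_bending, k_shear, k_connection):
    """Combine the three deformation mechanisms acting in series [N/mm]."""
    return 1.0 / (1.0 / k_bending + 1.0 / k_shear + 1.0 / k_connection)

# Denser fastener spacing -> stiffer connection -> stiffer wall element,
# mirroring the sensitivity to fastener spacing reported in the abstract.
for s in (20.0, 37.5, 75.0, 150.0):          # spacings studied in the paper
    k_conn = fastener_row_stiffness(k_ser=700.0, spacing_mm=s,
                                    perimeter_mm=7000.0)
    k = wall_stiffness(k_bending=30_000.0, k_shear=40_000.0,
                       k_connection=k_conn)
    print(f"s = {s:5.1f} mm  ->  wall stiffness ~ {k:8.0f} N/mm")
```

The series form makes the abstract's observation plausible: once the connection flexibility term 1/k_conn dominates, the fastener spacing, not the frame thickness, governs the overall stiffness.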
