Search Results (18,615)

Search Parameters:
Keywords = large-scale modeling

27 pages, 7263 KB  
Article
LEViM-Net: A Lightweight EfficientViM Network for Earthquake Building Damage Assessment
by Qing Ma, Dongpu Wu, Yichen Zhang, Jiquan Zhang, Jinyuan Xu and Yechi Yao
Remote Sens. 2026, 18(10), 1592; https://doi.org/10.3390/rs18101592 - 15 May 2026
Abstract
Building damage and collapse are the main sources of serious casualties and financial losses during earthquakes, which are among the most destructive natural disasters that endanger human life and property. Therefore, quick and precise post-earthquake building damage assessment is essential for risk assessment and emergency action. Although deep learning-based techniques enable automated damage identification, convolutional neural networks (CNNs) primarily concentrate on local features and frequently ignore global contextual information within and across buildings. Transformer-based approaches, on the other hand, are good at capturing global dependencies, but their large memory and processing costs restrict their usefulness. As a result, existing networks still struggle to achieve an effective balance between accuracy and efficiency. To address this issue, this study proposes a lightweight and efficient network for post-earthquake building damage assessment. Specifically, we develop a two-stage method based on EfficientViM with an encoder–decoder architecture. In the encoder, Mamba is introduced to extract multi-scale change features with long-range dependencies, leveraging the state space model to preserve global modeling capability while significantly reducing computational complexity. In the decoder, two lightweight modules are designed to further enhance discriminative capability and computational efficiency. The network finally outputs building localization and pixel-level building damage maps. Experiments were conducted on four earthquake events from the BRIGHT dataset using a three-for-training and one-for-testing cross-event rotation evaluation strategy. The results demonstrate that LEViM-Net requires only 30.94 M parameters and 27.10 G FLOPs. In addition, for the Türkiye earthquake event, the proposed method achieves an F1 score of 80.49%, an overall accuracy (OA) of 88.17%, and a mean intersection over union (mIoU) of 49.73%.
The proposed model enables efficient remote-sensing-based mapping of macroscopic and image-visible building damage, providing timely support for early-stage emergency response. Full article
(This article belongs to the Special Issue Advances in AI-Driven Remote Sensing for Geohazard Perception)
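The F1, OA, and mIoU figures quoted above are standard segmentation metrics and can all be computed from a single class confusion matrix. A minimal sketch follows; the 3-class layout and the counts are invented for illustration and are not taken from the paper:

```python
import numpy as np

def damage_metrics(conf):
    """Overall accuracy, macro F1, and mean IoU from a confusion
    matrix (rows = ground truth, columns = prediction)."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp          # predicted as class c but wrong
    fn = conf.sum(axis=1) - tp          # true class c but missed
    oa = tp.sum() / conf.sum()
    f1 = np.mean(2 * tp / (2 * tp + fp + fn))
    miou = np.mean(tp / (tp + fp + fn))
    return oa, f1, miou

# Toy 3-class example (e.g., background / intact / damaged).
oa, f1, miou = damage_metrics([[50, 2, 3],
                               [4, 30, 6],
                               [1, 5, 20]])
```

Note that mIoU is always the strictest of the three, which is consistent with the paper reporting an mIoU well below its F1 and OA.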
14 pages, 1732 KB  
Article
Estimation of Inter-Scale Transfer Rates Within a Compressor Flowfield Using High-Fidelity Data
by Pawel Jan Przytarski, Matteo Dellacasagrande and Davide Lengani
Int. J. Turbomach. Propuls. Power 2026, 11(2), 23; https://doi.org/10.3390/ijtpp11020023 - 15 May 2026
Abstract
To better understand the impact that multi-scale unsteadiness has on industrial flows, we use Large Eddy Simulation (LES) data representative of a midspan compressor section operating in an idealized multi-stage environment. We collect a large number of three-dimensional flow snapshots and perform a large-scale flow decomposition using a parallel framework based on the Proper Orthogonal Decomposition (POD). Once the flow is split into orthogonal modes, we quantify kinetic energy budgets on a mode-by-mode basis. This enables us to characterize energy exchanges between these modes and analyze the flow in a multi-scale manner. As a result we are able to reconstruct an approximate energy cascade within the domain. The results provide insights into the role that various scales play in modulating the energy transfer within the flow. This work is a stepping stone towards utilizing all the information embedded in the 3D unsteady flowfield and its evolution for the purpose of informing turbulence modeling. Full article
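As a hedged illustration of the snapshot-POD step described above: POD of a mean-subtracted snapshot matrix reduces to a thin SVD, with squared singular values giving the modal kinetic-energy fractions. Synthetic random snapshots stand in here for the 3D LES data, and the sizes are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic snapshot matrix: each column is one (flattened) flow snapshot.
n_points, n_snaps = 200, 40
X = rng.standard_normal((n_points, n_snaps))
X -= X.mean(axis=1, keepdims=True)           # subtract the mean flow

# POD via thin SVD: columns of U are spatial modes; s**2 ~ modal energy.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
energy = s**2 / np.sum(s**2)                 # energy fraction per mode

# Reconstruction from the leading r modes
r = 10
X_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
err = np.linalg.norm(X - X_r) / np.linalg.norm(X)
```

With the flow split into orthogonal modes like this, per-mode kinetic-energy budgets (and hence inter-mode transfer estimates) can be evaluated mode by mode, as the paper does on LES data.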
36 pages, 2777 KB  
Article
ZeroTrustEdu: A Lightweight Post-Quantum Cryptography Framework with Adaptive Trust Scoring for Secure Cloud-IoT E-Learning Platforms
by Weam Gaoud Alghabban
Electronics 2026, 15(10), 2132; https://doi.org/10.3390/electronics15102132 - 15 May 2026
Abstract
The rapid proliferation of Internet of Things (IoT) devices in cloud-based e-learning platforms has posed significant security risks, particularly in protecting learner information, authenticating devices, and securing communication in highly heterogeneous learning settings. Current cryptographic solutions are largely based on classical public-key infrastructure (PKI) protocols such as RSA and ECC, which will become vulnerable with the advent of large-scale quantum computers capable of executing Shor’s algorithm. In addition, traditional perimeter-based security models are inadequate for handling the dynamic, distributed, and resource-limited characteristics of IoT-enabled educational systems. As a solution to these problems, this paper introduces ZeroTrustEdu, a scalable zero-trust cryptographic solution that combines lightweight post-quantum key management with adaptive trust scoring for cloud-connected IoT e-learning infrastructure. The proposed framework makes three fundamental contributions: (1) a hierarchical zero-trust security model with no implicit trust, operating across device, edge, and cloud layers; (2) a lightweight key distribution protocol based on the Module-Lattice Key Encapsulation Mechanism (ML-KEM), compliant with the NIST FIPS 203 standard; and (3) an adaptive behavioral trust scoring engine that dynamically adjusts device and user trust levels based on real-time interaction analytics. The architecture is evaluated using extensive NS-3 network simulations with up to 100,000 concurrent IoT nodes, together with formal security analysis under Chosen Plaintext Attack (CPA) and Chosen Ciphertext Attack (CCA) threat models.
Comparative evaluation against RSA-2048, ECC-P256, and AES-256 baselines demonstrates that ZeroTrustEdu delivers a 62% ± 3% (95% CI, 10 independent runs) reduction in ML-KEM encapsulation latency (12.8 ms for key encapsulation/decapsulation, contributing to a complete device authentication latency of 47.3 ms including ML-DSA signature operations), a 45% reduction in communication overhead, and a 38% reduction in energy consumption on constrained ARM Cortex-M4 devices relative to RSA-2048, and achieves provable post-quantum security reducible to the hardness of the Module Learning With Errors (MLWE) problem. These findings demonstrate that the proposed architecture provides a viable, scalable, and quantum-resilient security solution for next-generation IoT-enabled e-learning environments. The cryptographic security of ZeroTrustEdu is guaranteed at the primitive level through NIST-standardized ML-KEM (FIPS 203) and ML-DSA (FIPS 204), with IND-CCA2 and EUF-CMA security formally proven in the respective standards; full protocol-level formal verification using automated theorem provers (ProVerif, Tamarin) is identified as valuable future work to rule out protocol-composition vulnerabilities beyond primitive-level guarantees. Full article
(This article belongs to the Section Computer Science & Engineering)
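The abstract does not detail how the adaptive trust-scoring engine works internally. One common realization, offered here purely as a hedged sketch, is exponential smoothing of per-interaction outcomes mapped to tiered access decisions; the `alpha` value and the tier thresholds below are invented for illustration, not taken from the paper:

```python
def update_trust(trust, interaction_ok, alpha=0.2):
    """Exponentially smoothed trust score in [0, 1].
    interaction_ok: 1.0 for a benign interaction, 0.0 for an anomalous one."""
    return (1 - alpha) * trust + alpha * interaction_ok

def access_tier(trust):
    """Map a trust score to a zero-trust access decision
    (thresholds are illustrative)."""
    if trust >= 0.8:
        return "full"
    if trust >= 0.5:
        return "restricted"
    return "deny"

# A previously trusted device exhibits a burst of anomalous behaviour:
trust = 0.9
for ok in [1, 1, 0, 0, 0, 0]:
    trust = update_trust(trust, ok)
```

The key zero-trust property this captures is that trust is continuously re-earned: a few anomalous interactions are enough to drop a device from full access to denial, with no implicit standing trust.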
37 pages, 4112 KB  
Review
Digitisation of Procurement and Information Modelling—Literature Review on e-Procurement
by Eliana Basile, Francesca Porcellini, Enrico Pasquale Zitiello, Sonia Lupica Spagnolo, Antonio Salzano and Salvatore Antonio Biancardo
Buildings 2026, 16(10), 1969; https://doi.org/10.3390/buildings16101969 - 15 May 2026
Abstract
In recent decades, the introduction of e-procurement has profoundly transformed the methods of procuring goods, services, and works, redefining traditional procurement processes and significantly impacting global economic, operational, and regulatory dynamics. The construction sector has also been affected by this transition, which has altered the operating models of public procurement and favoured the adoption of digital tools aimed at more efficient, transparent, and automated process management. This study proposes a systematic literature review based on the analysis of 95 scientific contributions, with the aim of outlining the evolution of the e-procurement paradigm in the construction sector and identifying the main directions for research development. Despite the widespread dissemination of studies on the topic, it emerges that the actual maturity of e-procurement systems is still limited, often amounting to document dematerialization rather than full process digitalization. In this context, the review critically analyses the role of Building Information Modelling as an enabling factor for the evolution of e-procurement, exploring the potential of its integration into procurement flows. Particular attention is paid to the contribution of the Digital Building Logbook, an information tool capable of extending the value of data generated during the tender phase throughout the building’s entire life cycle, supporting advanced management and maintenance strategies. The results highlight how, despite the significant potential of integrating e-procurement and BIM, substantial technological, regulatory, and cultural issues persist that limit its large-scale adoption. This underscores the need to develop shared and interoperable methodological approaches capable of transforming procurement from a document-based process to an integrated information system, oriented toward value creation throughout the entire life cycle of projects. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
20 pages, 4630 KB  
Article
Deep Neural Network-Based Optimal Transmission Switching Method for Enhancing Power System Flexibility
by Dawei Huang, Yang Wang, Na Yu, Lingguo Kong and Miao Guo
Electronics 2026, 15(10), 2131; https://doi.org/10.3390/electronics15102131 - 15 May 2026
Abstract
With the large-scale grid integration of renewable energy sources such as wind power and photovoltaics, power system net load fluctuations have become significantly more severe, imposing higher demands on system flexibility. Traditional optimal transmission switching (OTS) models require the simultaneous optimization of continuous and discrete variables, resulting in high computational complexity that renders them unsuitable for daily real-time scheduling in large-scale power systems. This paper develops a flexible real-time rolling optimization scheduling model that incorporates OTS and proposes a two-stage fast solution framework based on deep neural networks (DNNs). In the offline training phase, a multilayer perceptron-based DNN is trained using load and renewable generation data to rapidly and accurately predict the optimal line switching scheme. In the online application phase, the network topology predicted by the DNN transforms the original mixed-integer linear programming problem into a standard linear programming problem, substantially reducing computational complexity and solution time. Case studies on the modified IEEE 118-bus and IEEE 300-bus systems show that the proposed method achieves high prediction accuracy, reduces solution time by a factor of up to 117, and maintains nearly identical system operating costs to the physics-driven approach in the majority of cases. The results demonstrate that the proposed approach effectively balances computational efficiency and economic performance, verifying the practical value of optimal transmission switching in enhancing large-scale renewable energy accommodation and overall power system flexibility. Full article
(This article belongs to the Special Issue Design and Control of Renewable Energy Systems in Smart Cities)
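The two-stage idea, an offline-trained network whose predicted line statuses fix the discrete variables so that only a linear program remains online, can be sketched as follows. The tiny MLP below uses random weights purely as a placeholder for the trained model, and the system sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy MLP mapping a net-load vector to per-line close probabilities.
# In the paper the weights are trained offline on load/renewable data.
n_buses, n_lines, hidden = 6, 8, 16
W1 = rng.standard_normal((hidden, n_buses))
W2 = rng.standard_normal((n_lines, hidden))

def predict_switching(net_load):
    h = np.maximum(0.0, W1 @ net_load)        # ReLU hidden layer
    p = 1.0 / (1.0 + np.exp(-(W2 @ h)))       # sigmoid close probability
    return (p >= 0.5).astype(int)             # fix the binary statuses

status = predict_switching(rng.standard_normal(n_buses))
# With the line statuses fixed, the remaining dispatch problem contains
# only continuous variables: a standard LP instead of a MILP.
```

The speed-up reported in the paper comes precisely from this substitution: the combinatorial search over line statuses is replaced by a single forward pass, leaving a problem that off-the-shelf LP solvers handle quickly.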
19 pages, 1800 KB  
Article
Reliability Limits of Hydrogen Storage Systems Under Variable Production: A Dimensionless Regime Map Approach
by Thanh Dam Pham, Dong Trong Nguyen, Du Van Toan, Bui Tri Tam, Do Van Chanh and Pham Quy Ngoc
Sustainability 2026, 18(10), 5008; https://doi.org/10.3390/su18105008 - 15 May 2026
Abstract
Large-scale hydrogen storage is expected to play a critical role in balancing the variability of renewable energy systems, particularly those driven by wind power. However, the combined influence of storage capacity and deliverability on supply reliability remains insufficiently characterized. This study investigates the reliability limits of hydrogen storage systems operating under variable hydrogen production and time-varying demand. A dimensionless modeling framework is developed to map system performance across a wide range of storage capacities and deliverability levels. The results reveal a clear transition between reliable and unreliable operating regimes. Reliable operation requires a minimum deliverability level approximately equal to the mean hydrogen production rate, corresponding to a value of about 1.05–1.10 times the average production across the range of intermittency conditions considered in this study (from moderate to highly variable production). Below this threshold, increasing storage capacity alone cannot prevent supply shortfalls. Once this threshold is exceeded, further increases in deliverability provide diminishing returns and storage capacity becomes the dominant factor governing reliability. In this regime, the required storage capacity approaches a plateau on the order of 10–30 days of average hydrogen throughput, depending on the level of production variability. The proposed regime-based framework provides a practical tool for evaluating storage feasibility and guiding preliminary capacity design in renewable hydrogen systems. Full article
(This article belongs to the Special Issue Sustainability and Challenges of Underground Gas Storage Engineering)
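The regime transition described above, a deliverability floor below which no amount of storage capacity helps, can be illustrated with a toy mass-balance simulation. The alternating production profile and all parameter values below are purely illustrative, not taken from the study:

```python
def supply_reliability(production, demand, capacity, deliverability):
    """Fraction of timesteps at which demand is fully met from production
    plus storage withdrawals; the withdrawal rate is capped by
    `deliverability` and the state of charge by `capacity`."""
    soc, met = 0.0, 0
    for p, d in zip(production, demand):
        shortfall = d - p
        if shortfall > 0:
            draw = min(shortfall, deliverability, soc)
            soc -= draw
            if draw >= shortfall - 1e-12:
                met += 1
        else:
            soc = min(capacity, soc - shortfall)   # store the surplus
            met += 1
    return met / len(demand)

# Alternating feast/famine production against constant unit demand.
production = [2.0, 0.0] * 100
demand = [1.0] * 200

# Deliverability below demand: unreliable no matter how large the store.
low = supply_reliability(production, demand, capacity=1e6, deliverability=0.3)
# Deliverability >= demand plus one period of storage: fully reliable.
high = supply_reliability(production, demand, capacity=1.0, deliverability=1.0)
```

Even with effectively unlimited capacity, the low-deliverability case fails on every famine step, which is the "storage alone cannot prevent shortfalls" regime the paper maps.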
16 pages, 2350 KB  
Article
Peatland Stratigraphy as a Proxy for Long-Term Carbon Dynamics: A Case Study from Estonia
by Jüri Liiv, Peep Miidla, Merrit Shanskiy and Ergo Rikmann
Sustainability 2026, 18(10), 5004; https://doi.org/10.3390/su18105004 - 15 May 2026
Abstract
Sustainable management of peatlands is one of the key global strategies for mitigating climate change. The balance between carbon (C) sequestration and emission in peatlands reflects environmental conditions over time and can provide insight into long-term ecosystem dynamics. However, current methods for estimating greenhouse gas (GHG) fluxes are often labor-intensive, costly, and site-specific. In this study, we propose a simplified and cost-efficient method to estimate long-term carbon balance in peatlands based on the inorganic (mineral) content of drill core samples. The approach uses exponential decay equations to approximate peat accumulation and decomposition processes over time. A conceptual model is applied that accounts for both anaerobic transformation of organic matter of varying molecular complexity and enhanced aerobic decomposition resulting from anthropogenic drainage during the last century. The model was applied to more than 100 drill cores from four peatland systems in Estonia. The resulting trends were compared qualitatively with known climatic fluctuations of the last millennium, including periods associated with the Little Ice Age. The results suggest that, in many cases, carbon losses from decomposition in deeper peat layers may exceed carbon accumulation in upper layers, even in peatlands that appear to be well preserved. The proposed method provides a rapid, low-cost, first-order approximation of peatland carbon dynamics and may serve as a complementary tool for large-scale assessments where detailed process-based models are not feasible. Full article
(This article belongs to the Section Air, Climate Change and Sustainability)
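The core idea can be sketched in a few lines: mineral (ash) content is conserved while organic matter decays exponentially, so a layer's inorganic mass fraction grows with its age and degree of decomposition and can be inverted as a proxy. The decay constant and initial fractions below are placeholders, not values from the study:

```python
import math

def remaining_organic(m0, k, t):
    """Organic mass left after t years of first-order decay (rate k)."""
    return m0 * math.exp(-k * t)

def inorganic_fraction(organic0, mineral, k, t):
    """Mineral content is conserved while organics decay, so the
    inorganic mass fraction of a layer rises with its age t."""
    org = remaining_organic(organic0, k, t)
    return mineral / (mineral + org)

# Illustrative layer: 95% organic / 5% mineral at deposition, with a
# slow anaerobic decay rate (k is a placeholder value).
f_young = inorganic_fraction(organic0=0.95, mineral=0.05, k=1e-4, t=100)
f_old = inorganic_fraction(organic0=0.95, mineral=0.05, k=1e-4, t=5000)
```

Measured down a core, a profile of such fractions constrains the cumulative organic loss per layer, which is the first-order carbon-balance information the method extracts from drill cores.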
25 pages, 2253 KB  
Article
Monocular Visual Pose Estimation Method Based on Spherical Cooperative Target
by Yanyu Ding, Chaoran Zhang, Yongbin Zhang, Fujin Yang, Zhiyuan Tang, Shipeng Li, Xinran Liu and Xiaojun Zhao
Sensors 2026, 26(10), 3139; https://doi.org/10.3390/s26103139 - 15 May 2026
Abstract
In close-range monocular visual measurement and cooperative target pose estimation, conventional planar targets are constrained by viewpoint changes and are prone to perspective distortion. Although spherical targets provide omnidirectional observability, their PnP-based pose estimation may still suffer from large errors under limited fields of view and sparse feature observations. To address this issue, this paper proposes an integrated visual measurement framework covering both high-precision spherical target construction and robust pose estimation. First, a composite marker layout based on adaptively scaled latitude–longitude topology is designed. To suppress cumulative distortion caused by long-sequence multi-view rigid registration, a center-to-pole point-cloud stitching strategy is developed, and multiple observations are fused using geometric-consistency weighting to accurately reconstruct the feature-point coordinate system of the target. Second, a joint optimization method is proposed by combining feature-point reprojection error with a contour center consistency constraint. Specifically, the theoretical contour center is predicted from the analytical projection model of the sphere and constrained to agree with the observed contour center fitted from the image. In addition, an SQPnP-based sequential reinitialization mechanism is introduced to improve robustness under sparse-point observations. Simulation results demonstrate that the proposed method achieves higher accuracy and robustness under continuous pose changes, sparse feature points, and different noise levels, compared with EPnP, EPnP+LM, LM, and SQPnP, while real-image experiments further demonstrate its practical feasibility. Full article
(This article belongs to the Section Sensing and Imaging)
30 pages, 1421 KB  
Article
Optimization of Cold-Chain Logistics Unitization Strategies Under Dynamic Temperature Constraints
by Jing Wang, Xianfeng Zhao, Xueqiang Du, Jichun Li and Shibo Xu
Sustainability 2026, 18(10), 5002; https://doi.org/10.3390/su18105002 - 15 May 2026
Abstract
The decoupling of physical loading configurations from dynamic temperature control in cold-chain logistics exposes supply chains to severe thermal compliance risks and exponential cost penalties. To address this structural gap, this study formulates the Cold Chain Unitization Loading Optimization Problem (CCULP). We propose a mixed-integer linear programming (MILP) model that integrates continuous-time heat-transfer dynamics (including door-opening impulse disturbances) and Q10-driven quality-decay kinetics as endogenous constraints within the hierarchical assignment of perishable goods to insulated containers, pallets, and vehicles. By treating container thermal resistance as a core decision variable, the model operationalizes a “prevention-first” economic strategy. To solve this NP-hard problem, we developed a Temperature-Aware Heuristic Algorithm (TAHA) that embeds a forward-Euler temperature simulation loop directly into the combinatorial search. Computational experiments on instances up to 100 SKU types demonstrate that TAHA achieves near-optimal solutions (within 0.7% of the MILP proven optimum) while converging 63 times faster than a genetic algorithm benchmark. Moreover, compared with traditional geometry-centric heuristics, TAHA’s proactive container-polarization strategy effectively eliminates the “penalty cliff,” yielding up to a 25.9% reduction in total system cost on large-scale instances, almost entirely attributable to the elimination of temperature-violation penalties. Sensitivity analyses further confirm TAHA’s robustness under extreme environmental stress (e.g., 40 °C ambient temperatures) and frequent logistical disturbances, offering an integrated framework for proactive risk mitigation and for reducing food loss in sustainable temperature-controlled distribution. Full article
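The forward-Euler temperature loop that TAHA embeds can be sketched as a Newton-cooling update with instantaneous door-opening impulses. The time constant, step size, and impulse magnitude below are illustrative placeholders, not the paper's parameters:

```python
def simulate_cargo_temp(t_init, t_ambient, tau, dt, steps, door_open):
    """Forward-Euler integration of Newton cooling/heating:
        dT/dt = (T_ambient - T) / tau
    Door openings (a set of step indices) add an instantaneous
    temperature jump; the +2.0 degC impulse is an illustrative value."""
    temps = [t_init]
    T = t_init
    for i in range(steps):
        T += dt * (t_ambient - T) / tau     # explicit Euler step
        if i in door_open:
            T += 2.0                         # door-opening impulse
        temps.append(T)
    return temps

# Refrigerated unit at 4 degC in a 30 degC environment, two door openings.
temps = simulate_cargo_temp(t_init=4.0, t_ambient=30.0, tau=600.0,
                            dt=10.0, steps=120, door_open={30, 60})
```

Running such a trajectory inside the combinatorial search lets a candidate loading plan be rejected as soon as its simulated temperature profile would breach a product's compliance band, which is how thermal feasibility becomes endogenous to the optimization.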
20 pages, 413 KB  
Article
Multi-Attribute Utility Analysis of Sustainable Supplier Selection Based on Optimized Genetic Algorithm
by Jinxiu Yi and Weijun Shan
Sustainability 2026, 18(10), 5000; https://doi.org/10.3390/su18105000 - 15 May 2026
Abstract
With the global emphasis on sustainable development, supply chain management is facing new challenges and opportunities. Enterprises often face a large number of candidate suppliers, which makes the selection process complex. Considering the crucial role of supplier selection in sustainable supply chains, a sustainable supplier selection model based on multi-attribute utility analysis and a fuzzy approximation-to-ideal-solution ranking method is proposed to reduce carbon emissions and environmental pollution. This model helps companies scientifically evaluate and select suppliers by comprehensively considering three aspects: environment, economy, and society. Meanwhile, the study utilizes an optimized genetic algorithm-based order allocation model to improve the efficiency and fairness of order allocation. Reducing procurement costs often relies on improving resource utilization and reducing production waste, which directly lowers the energy consumption and carbon emission intensity per unit of product. At the same time, reducing product damage and delivery delay rates can avoid additional greenhouse gas emissions caused by rework, abandonment, and emergency transportation. By improving supplier productivity and optimizing order allocation, the developed model can not only reduce economic costs but also control environmental pollution and carbon footprints from the source of the supply chain. The outcomes indicate that technological level is a crucial factor influencing supplier selection, with a significant positive effect on selection willingness (standardized path coefficient of 0.199, significant at the 0.001 level). Meanwhile, the optimized genetic algorithm exhibits strong stability and convergence in order allocation, and handles large-scale orders efficiently.
This provides strong support for the decision-making of enterprises in sustainable supply chain management and a valuable reference for China’s exploration and practice in the field of sustainable development. Full article
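A genetic-algorithm order-allocation loop of the kind described above can be sketched minimally: encode an allocation as integer order shares per supplier, keep mutations demand-preserving, and select on cost. The suppliers, costs, capacities, and GA settings below are all invented for illustration, and the encoding is one simple choice rather than the paper's:

```python
import random

random.seed(3)

# Toy problem: split a fixed demand across 4 suppliers to minimise cost.
COSTS = [3.0, 2.5, 4.0, 3.5]       # unit cost per supplier (illustrative)
CAPACITY = [40, 35, 50, 45]        # per-supplier capacity
DEMAND = 100

def fitness(alloc):
    """Total cost; infeasible allocations are penalised to infinity."""
    if sum(alloc) != DEMAND or any(a > c for a, c in zip(alloc, CAPACITY)):
        return float("inf")
    return sum(a * c for a, c in zip(alloc, COSTS))

def random_alloc():
    """Rejection-sample a feasible starting allocation."""
    while True:
        alloc = [random.randint(0, c) for c in CAPACITY]
        if sum(alloc) == DEMAND:
            return alloc

def mutate(alloc):
    """Move one unit between two suppliers (demand-preserving)."""
    a = alloc[:]
    i, j = random.sample(range(len(a)), 2)
    if a[i] > 0 and a[j] < CAPACITY[j]:
        a[i] -= 1
        a[j] += 1
    return a

# Elitist GA: keep the 10 cheapest, refill with mutants of survivors.
pop = [random_alloc() for _ in range(20)]
for _ in range(300):
    pop.sort(key=fitness)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
best = min(pop, key=fitness)
```

For this toy instance the cost-optimal allocation fills the cheapest suppliers first (cost 295.0), giving a lower bound the GA's best solution can approach but never beat.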
17 pages, 323 KB  
Review
Toward a Molecular Reclassification of Myalgic Encephalomyelitis/Chronic Fatigue Syndrome: Integrating Multi-Omics, Machine Learning, and Precision Medicine
by Joshua Frank, Nicole Nesterovitch, Chetana Movva, Nancy G. Klimas and Lubov Nathanson
Int. J. Mol. Sci. 2026, 27(10), 4436; https://doi.org/10.3390/ijms27104436 - 15 May 2026
Abstract
Myalgic Encephalomyelitis/Chronic Fatigue Syndrome (ME/CFS) is a complex, multi-system disease characterized by a multitude of symptoms across various organ systems. Diagnosis has relied heavily on heterogeneous clinical symptom presentation and evolving case definitions, with treatment focused on addressing presenting symptoms due to the paucity of validated biomarkers. Meanwhile, advances have been made in understanding the underlying pathophysiology through strong epidemiologic, clinical, and basic science studies. This narrative review synthesizes recent advances that are likely to drive a shift in understanding from symptom-based classification toward a molecularly defined understanding of the disease. This shift in understanding will likely provide the foundation for future research efforts focused on targeting diagnosis and treatment more effectively. Specifically, we reference the identification of rare genetic risk variants through the HEAL2 deep learning framework, the large-scale DecodeME genome-wide association study, and dynamic epigenetic markers of disease state. In addition, the findings revealed the downstream consequences of this genetic and epigenetic priming: chronic innate immune activation, CD8+ T cell exhaustion characterized by upregulation of the exhaustion-driving transcription factors Thymocyte Selection-Associated HMG Box (TOX) and Eomesodermin (EOMES), and a cellular energy crisis centered on mitochondrial dysfunction. Furthermore, results of recent studies have revealed sex-specific transcriptomic and proteomic signatures of maladaptive recovery. We also highlight the role of machine learning and artificial intelligence integrations in translating high-dimensional multi-omics data into actionable biological insights, including the identification of monocyte subsets via Positive Unlabeled Learning, circulating cell-free RNA diagnostic signatures, and integrated multi-modal disease models such as BioMapAI. 
The combination of these findings, which highlight multiple identifiable mechanisms of molecular activity, supports the feasibility of molecular subtyping, precision diagnostics, and targeted therapeutic strategies for ME/CFS. Full article
22 pages, 2402 KB  
Article
A Two-Stage Transformer Framework for Sparse-Array Direction-of-Arrival Estimation via Correlation Vector Recovery
by Wenchao He, Yiran Shi, Hongxi Zhao, Hongliang Zhu and Chunshan Bao
Sensors 2026, 26(10), 3132; https://doi.org/10.3390/s26103132 - 15 May 2026
Abstract
Accurate direction-of-arrival (DOA) estimation with high resolution is fundamental to many array sensing applications. In practice, however, sparse arrays with missing sensors and snapshot-limited observations often lead to incomplete and noisy second-order statistics, which substantially degrades the performance of conventional eigendecomposition-based estimators. In this paper, we propose a two-stage Transformer framework for sparse-array DOA estimation that explicitly separates correlation recovery from angle inference. The first stage operates in the correlation domain and learns to reconstruct a clean and complete correlation vector from partially observed measurements using masking-aware tokenization and global-context modeling. The recovered representation can be further converted into a structured covariance matrix, providing an interpretable interface to classical signal processing back-ends. Based on the recovered features, the second stage adopts a Transformer regressor to directly predict multi-source DOAs. Extensive simulations on a large-scale dataset with SNRs from −5 to 10 dB and various snapshot numbers demonstrate that the proposed method delivers robust accuracy and improved stability in low-SNR and snapshot-limited regimes, while maintaining competitive performance at higher SNRs. Additional evaluations with an ESPRIT back-end further confirm that the recovery-based covariance yields more reliable DOA estimation than conventional difference–coarray processing, with particularly evident gains under challenging noise conditions. Full article
(This article belongs to the Section Electronic Sensors)
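The masked correlation-domain input that the first (recovery) stage consumes can be illustrated with a small numpy sketch. The array geometry, source angles, and SNR below are hypothetical choices for illustration, not taken from the paper; the sketch only shows how a sparse array's sample covariance is embedded into the full aperture with a binary observation mask.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): a 7-element half-wavelength
# ULA aperture with sensors 2 and 5 missing, two far-field narrowband sources.
N_FULL = 7
active = np.array([0, 1, 3, 4, 6])           # indices of working sensors
doas_deg = np.array([-10.0, 20.0])
snapshots = 50                               # snapshot-limited regime
snr_db = 0.0

def steering(doa_deg, n):
    """Steering vectors for an n-element half-wavelength ULA."""
    k = np.pi * np.sin(np.deg2rad(doa_deg))
    return np.exp(1j * np.outer(np.arange(n), k))

A = steering(doas_deg, N_FULL)               # (N_FULL, 2)
s = (rng.standard_normal((2, snapshots)) +
     1j * rng.standard_normal((2, snapshots))) / np.sqrt(2)
noise_std = np.sqrt(10 ** (-snr_db / 10) / 2)
x_full = A @ s + noise_std * (rng.standard_normal((N_FULL, snapshots)) +
                              1j * rng.standard_normal((N_FULL, snapshots)))

# Only the active sensors are observed.
x = x_full[active]
R = x @ x.conj().T / snapshots               # sample covariance, sparse array

# Embed into the full-aperture covariance together with a binary mask that
# flags which entries were actually observed -- the kind of partially
# observed correlation input a learned recovery stage would complete.
R_obs = np.zeros((N_FULL, N_FULL), dtype=complex)
mask = np.zeros((N_FULL, N_FULL), dtype=bool)
R_obs[np.ix_(active, active)] = R
mask[np.ix_(active, active)] = True

print("observed entries:", mask.sum(), "of", N_FULL * N_FULL)
```

Rows and columns of the missing sensors are entirely unobserved, which is why the recovery stage needs masking-aware tokenization rather than treating zeros as data.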
33 pages, 9924 KB  
Article
Impact of Environmental Factors on Efficiency of Rooftop Solar Energy in Built-Up Areas: Investigation at Regional, National and City Levels
by Ashraf Mohamed Soliman and Huma Mohammad Khan
Buildings 2026, 16(10), 1962; https://doi.org/10.3390/buildings16101962 - 15 May 2026
Abstract
Rooftop photovoltaic systems are a key component of sustainable urban energy strategies; however, their performance is strongly influenced by environmental variability across spatial scales. This study develops and validates a mathematical model to quantify the influence of Global Horizontal Irradiation (GHI), air temperature, wind speed, and dust on rooftop solar energy efficiency at country, regional, and city levels. The model is applied to environmental and energy data from 96 countries and 17 regions and further validated using four large-scale rooftop PV projects in Bahrain. The results show strong agreement between predicted and actual solar energy production, with coefficients of determination of R² = 0.77 at the country level, R² = 0.84 at the regional level, and R² = 0.998 at the city level, while mean absolute percentage errors generally remain below 10%. Regression and sensitivity analyses showed that at least one environmental factor exerts a statistically significant influence on rooftop solar energy yield, supporting the alternative research hypothesis. GHI is identified as the most influential driver at the national scale, whereas temperature and dust effects become more pronounced at finer spatial resolutions. Deployment gap analysis further reveals substantial untapped rooftop solar potential, highlighting the importance of non-environmental constraints in shaping real-world solar adoption. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
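The combined effect of the four environmental drivers can be sketched with a generic PV derating model. The coefficients below (NOCT, efficiency, temperature coefficient, module area) are standard textbook values chosen for illustration, not the fitted parameters of the study's model.

```python
# All coefficients are generic illustrative values, not fitted parameters
# from the study.
def cell_temp(ghi_w_m2, t_air_c, wind_m_s, noct_c=45.0):
    """NOCT-based cell-temperature estimate with a mild wind correction."""
    return t_air_c + ghi_w_m2 * (noct_c - 20.0) / 800.0 / (1.0 + 0.05 * wind_m_s)

def pv_power_w(ghi, t_air, wind, dust_loss, area_m2=1.6,
               eta_ref=0.20, gamma=-0.004):
    """DC output of one module: irradiance x efficiency x derating terms."""
    tc = cell_temp(ghi, t_air, wind)
    eta = eta_ref * (1.0 + gamma * (tc - 25.0))   # temperature derating
    return area_m2 * ghi * eta * (1.0 - dust_loss)

# A hot, dusty, low-wind site vs. a temperate, clean, breezy one,
# at the same irradiance level:
hot = pv_power_w(ghi=800, t_air=40, wind=1, dust_loss=0.08)
temperate = pv_power_w(ghi=800, t_air=20, wind=4, dust_loss=0.01)
print(f"hot/dusty: {hot:.0f} W, temperate/clean: {temperate:.0f} W")
```

Even at identical GHI, the hot and dusty site yields noticeably less power, which mirrors the paper's finding that temperature and dust effects become more visible at finer spatial resolutions where irradiance no longer dominates the variation.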
41 pages, 15944 KB  
Article
Comparative Study of Different Time Integration Algorithms for Solving Kinematic Problems
by Wei Xu, Yi-Fan Li and Yong-Ou Zhang
Modelling 2026, 7(3), 95; https://doi.org/10.3390/modelling7030095 (registering DOI) - 15 May 2026
Abstract
This study selects five numerical methods: the explicit Leap-Frog scheme, the implicit Crank–Nicolson scheme, the explicit second-order Runge–Kutta scheme, the implicit Newmark-β scheme, and the implicit Bathe scheme. These methods are compared through representative dynamic cases in terms of solution accuracy and computational efficiency. The results demonstrate that implicit schemes maintain numerical convergence even with relatively large time steps. The findings also indicate that, although the actual convergence accuracy of the given schemes varies slightly among motion models of different dimensions, it remains close to the theoretical second-order accuracy. Different time integration schemes exhibit distinct numerical accuracies when applied to multi-dimensional motion problems. Overall, under identical time step sizes, the Bathe time integration scheme demonstrates slightly superior computational accuracy and error stability compared to other schemes considered. The numerical efficiency of time integration schemes also varies across dimensions and problem types. The actual computational time does not scale linearly with the time step size and is partially influenced by the complexity of the solution algorithm employed. In general, when solution accuracy is comparable, the Leap-Frog scheme shows marginally higher efficiency in explicit simulations, whereas the Crank–Nicolson scheme proves more efficient in implicit simulations. Full article
19 pages, 659 KB  
Article
An Association Among Problematic Smartphone Use, Sleep Quality, Behavioral Engagement and Academic Achievements: A Multiple Mediation Model Based on Large-Scale Data
by Da Zhou, Shuting Liu, Jinqing Liu, Helin Li and Yue Ma
Behav. Sci. 2026, 16(5), 781; https://doi.org/10.3390/bs16050781 (registering DOI) - 15 May 2026
Abstract
The purpose of this study is to investigate the mediating effects of sleep quality and mathematics behavioral engagement on the relationship between problematic smartphone use (PSU) and mathematics achievement while controlling for gender and socio-economic status (SES). A sample of 1645 fourth-grade students from southern China completed a math test and an online questionnaire assessing PSU, sleep quality, and mathematics behavioral engagement. Confirmatory factor analysis, descriptive statistics, Pearson’s correlation, and structural equation modeling were used in the data analysis. The findings showed that: (1) Boys exhibited higher levels of PSU than girls, and students from lower socioeconomic backgrounds were more prone to PSU. (2) PSU was negatively correlated with mathematics achievement. (3) Between PSU and mathematics achievement, mathematics behavioral engagement was a significant mediator. (4) Sleep quality and mathematics behavioral engagement played a significant sequential mediating role in the association between PSU and mathematics performance. This study focused on exploring the mechanisms at play in the relationship between PSU, sleep quality, behavioral engagement and mathematics achievement for primary school students. Full article
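The sequential mediation pathway (PSU → sleep quality → behavioral engagement → achievement) can be sketched with the classical product-of-coefficients approach. The data below are simulated and the path coefficients are arbitrary illustrative values, not estimates from the study, which used structural equation modeling rather than the plain OLS regressions shown here.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1645   # same sample size as the study; the data here are simulated

# Hypothetical standardized path coefficients, chosen only to illustrate
# the a1 -> d21 -> b2 sequential pathway.
psu = rng.standard_normal(n)
sleep = -0.30 * psu + rng.standard_normal(n)                 # a1
engage = 0.25 * sleep - 0.20 * psu + rng.standard_normal(n)  # d21
math = 0.35 * engage + 0.10 * sleep - 0.15 * psu + rng.standard_normal(n)

def ols(y, *xs):
    """Slope coefficients (intercept dropped) of OLS regression of y on xs."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a1 = ols(sleep, psu)[0]                 # PSU -> sleep quality
d21 = ols(engage, sleep, psu)[0]        # sleep -> engagement, PSU controlled
b2 = ols(math, engage, sleep, psu)[0]   # engagement -> math, rest controlled

sequential_indirect = a1 * d21 * b2     # PSU -> sleep -> engagement -> math
print(f"sequential indirect effect ~ {sequential_indirect:.3f}")
```

With the (hypothetical) negative PSU-to-sleep path, the recovered sequential indirect effect is negative, mirroring the direction of the association reported in the abstract; in practice such effects are tested with bootstrapped confidence intervals inside an SEM framework.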