Search Results (1,479)

Search Parameters:
Keywords = consistent topology

30 pages, 62173 KB  
Article
SwathSel: A Swath-Based Optimal Remote Sensing Image Selection Method with Visual Consistency for Large-Scale Mapping
by Bai Zhang, Zongyu Xu, Yunhe Liu, Wenhao Ai, Liming Fan, Yuan An and Shuhai Yu
Remote Sens. 2026, 18(8), 1212; https://doi.org/10.3390/rs18081212 - 17 Apr 2026
Abstract
With advancements in Earth observation capabilities, the demand for large-scale mapping using remote sensing images has increased significantly. However, selecting an optimal image set for the area of interest (AOI) from a large collection of remote sensing images remains challenging. On the one hand, it is crucial to select images with minimal redundancy and low cloud cover to enhance production efficiency and the effective coverage of mapping products. On the other hand, adjacent selected images should transition naturally so that the resulting mapping products appear visually cohesive. Unfortunately, most existing remote sensing image selection algorithms focus only on the former, with little attention to visual consistency. Meanwhile, images from the same swath inherently offer advantages in both redundancy reduction and visual consistency. However, a larger coverage area also carries the potential for greater variation in cloud cover, and cloud distribution within a swath can be highly complex. Managing the relationships among swaths, images, and cloud cover is also challenging. To address these issues, this paper proposes a novel image selection model, SwathSel. Candidate images are grouped through a composite grouping strategy based on swaths, cloud cover, and topological connectivity, thereby expanding the fundamental unit for image selection from individual scenes to connected image subsets. A dynamic adjustment mechanism is introduced to enhance grouping flexibility. Additionally, local and global swath consistency constraints are designed to strengthen visual consistency among images, and a subset evaluation module is used to comprehensively assess swath consistency, coverage, cloud cover, and metadata information. Through a greedy strategy combined with a rapid refinement technique, the final selected image set is obtained. 
Experiments were conducted on four datasets, and four quantitative metrics were designed to evaluate the visual consistency of the results. Compared with baseline models, SwathSel achieves lower redundancy and cloud cover while delivering superior visual consistency. Full article
(This article belongs to the Section Remote Sensing Image Processing)
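The greedy strategy the abstract describes, selecting image subsets that balance coverage gain against cloud cover, can be sketched in miniature. The scoring function and data layout below are hypothetical illustrations, not SwathSel's actual evaluation module:

```python
# Hypothetical sketch of greedy image-set selection: repeatedly pick the
# candidate that adds the most uncovered AOI cells, penalized by cloud cover.
# The scoring rule and grid representation are illustrative only.

def greedy_select(candidates, aoi_cells, cloud_weight=0.5):
    """candidates: list of (image_id, frozenset_of_grid_cells, cloud_fraction)."""
    covered, selected = set(), []
    remaining = list(candidates)
    while covered != aoi_cells and remaining:
        def score(c):
            _, cells, cloud = c
            gain = len((cells & aoi_cells) - covered)
            return gain - cloud_weight * cloud * len(cells)
        best = max(remaining, key=score)
        if score(best) <= 0:
            break  # no remaining candidate improves coverage enough
        selected.append(best[0])
        covered |= best[1] & aoi_cells
        remaining.remove(best)
    return selected, covered

aoi = frozenset(range(10))
imgs = [
    ("A", frozenset(range(0, 6)), 0.1),
    ("B", frozenset(range(4, 10)), 0.2),
    ("C", frozenset(range(0, 10)), 0.9),  # covers everything but very cloudy
]
sel, cov = greedy_select(imgs, set(aoi))
```

Note how the cloud penalty steers the selection away from the single cloudy image that would otherwise cover the whole AOI in one pick.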
44 pages, 8887 KB  
Article
CEEMDAN–SST-GraphPINN-TimesFM Model Integrating Operating-State Segmentation and Feature Selection for Interpretable Prediction of Gas Concentration in Coal Mines
by Linyu Yuan
Sensors 2026, 26(8), 2476; https://doi.org/10.3390/s26082476 - 17 Apr 2026
Abstract
Gas concentration series in coal mining faces are jointly affected by multiple coupled factors, including geological conditions, mining disturbances, ventilation organization, and gas drainage intensity, and therefore exhibit pronounced nonstationarity, strong fluctuations, spatiotemporal correlations across multiple monitoring points, and occasional abrupt spikes. To address these challenges, this study proposes a gas concentration prediction and early-warning method that integrates CEEMDAN–SST with GraphPINN-TimesFM (Graph Physics-Informed Neural Network–Time Series Foundation Model). First, based on multi-source monitoring data such as wind speed, gas concentrations at multiple monitoring points, and equipment operating status, anomaly removal, operating-condition segmentation, and change-point detection are performed to construct stable operating-state labels. Feature selection is then conducted by combining optimal time-lag correlation, Shapley value contribution, and dynamic time warping. Second, WGAN-GP is employed to augment samples from minority operating conditions, while CEEMDAN–SST is used to decompose and reconstruct the target series so as to reduce the interference of nonstationary noise and enhance sequence predictability. On this basis, TimesFM is adopted as the backbone for long-sequence forecasting to capture long-term dependency features in gas concentration evolution. Furthermore, GraphPINN is introduced to embed the topological associations among monitoring points, airflow transmission delays, and convection–diffusion mechanisms into the training process, thereby enabling collaborative modeling that integrates data-driven learning with physical constraints. Finally, the predictive performance, early-warning capability, and interpretability of the proposed model are systematically evaluated through regression forecasting, warning discrimination, and Shapley-based interpretability analysis. 
The results demonstrate that the proposed method can effectively improve the accuracy, robustness, and physical consistency of gas concentration prediction under complex operating conditions, thereby providing a new technical pathway for gas over-limit early warning and safety regulation in coal mining faces. Full article
(This article belongs to the Section Environmental Sensing)
24 pages, 1136 KB  
Review
Explainable Deep Learning for Research on the Synergistic Mechanisms of Multiple Pollutants: A Critical Review
by Chang Liu, Anfei He, Jie Gu, Mulan Ji, Jie Hu, Shufeng Qiao, Fenghe Wang, Jing Hua and Jian Wang
Toxics 2026, 14(4), 335; https://doi.org/10.3390/toxics14040335 - 16 Apr 2026
Abstract
The synergistic control of multiple pollutants is critically challenged by complex nonlinear interactions, strong spatiotemporal heterogeneity, and the difficulty of tracing causal drivers. Deep learning offers high predictive power but suffers from the “black-box” problem, limiting its acceptance in environmental decision-making. Explainable Deep Learning (XDL) integrates physical mechanisms with interpretable algorithms, achieving both prediction accuracy and explanatory transparency. This review systematically evaluates the effectiveness and limitations of XDL in analyzing multi-pollutant interactions, with a comparative focus on atmospheric and aquatic environments. Key techniques, including SHAP, attention mechanisms, and physics-informed neural networks, are examined for their roles in synergistic monitoring, source apportionment, and regulatory optimization. The main findings reveal that: (1) XDL, particularly the “tree model + SHAP” paradigm, has become a dominant tool for quantifying driving factors, yet most attributions remain correlational rather than causal; (2) physics-informed fusion (soft vs. hard constraints) improves physical consistency but faces unresolved conflicts between data and physical laws, with current models lacking a conflict detection mechanism; (3) cross-media comparison shows a unified technical logic of “physical mechanism guidance + post hoc feature attribution”, but atmospheric applications lead in embedding advection–diffusion constraints, while aquatic research excels in spatial topology modeling via graph neural networks; (4) critical bottlenecks include the lack of causal inference, uncertainty-unaware interpretations, and data scarcity. Future directions demand a shift from correlation-only to causal-aware attribution, from blind fusion to conflict-detecting systems, and from no evaluation standards to domain-specific validation benchmarks. 
XDL is poised to transform multi-pollutant governance from experience-driven to intelligence-driven approaches, provided that verifiable interpretability and physical consistency become core design principles. Full article
21 pages, 361 KB  
Article
Enhancing Distribution Network Performance with Coordinated PV and D-STATCOM Compensation Under Fixed and Variable Reactive Power Modes
by Oscar Danilo Montoya, Luis Fernando Grisales-Noreña and Diego Armando Giral-Ramírez
Technologies 2026, 14(4), 234; https://doi.org/10.3390/technologies14040234 - 16 Apr 2026
Abstract
This paper addresses the optimal management of photovoltaic (PV) systems and distribution static synchronous compensators (D-STATCOMs) in modern electrical distribution networks. A mixed-integer nonlinear programming (MINLP) model is formulated which co-optimizes device placement, sizing, and multi-period dispatch to minimize the total annualized system costs while satisfying AC power flow and operational constraints. To solve this challenging problem, a decomposition methodology is proposed, wherein the binary location decisions for the PVs and D-STATCOMs are treated as predefined inputs, upon the basis of site selections commonly reported in the literature. With the integer variables fixed, the problem is reduced to a continuous nonlinear programming (NLP) subproblem for optimal capacity sizing and operational scheduling, which is solved using the interior point optimizer (IPOPT) via the Julia/JuMP environment. The core contribution of this work lies in its comprehensive demonstration of the economic superiority of variable reactive power injection over conventional fixed compensation schemes. Through numerical validation on standard 33- and 69-bus test systems, it is shown that a variable D-STATCOM operation yields substantial and consistent economic gains. Compared to optimized fixed-injection solutions, variable injection provides additional annual savings averaging USD 120,516 (33-bus feeder) and USD 125,620 (69-bus grid), corresponding to a further 3.4% reduction in total costs. These benefits prove robust across different device location sets identified by various metaheuristic algorithms, and they scale effectively to larger network topologies. The results demonstrate that transitioning to variable power injection is not merely an incremental improvement but a fundamental advancement for achieving techno-economic optimality in distribution system planning. 
The proposed methodology provides utilities with a computationally efficient framework for determining near-optimal PV and D-STATCOM management strategies by first fixing deployment locations based on established planning insights and then rigorously optimizing sizing and dispatch, in order to maximize economic returns while ensuring reliable network operation. Full article
(This article belongs to the Special Issue Innovative Power System Technologies)
21 pages, 4648 KB  
Article
M-GNN: A Topology-Enhanced Multi-Modal Graph Neural Network for Cancer Driver Gene Prediction
by Lu Qin, Wen Zhu, Xinyi Liao and Yujing Zhang
Metabolites 2026, 16(4), 268; https://doi.org/10.3390/metabo16040268 - 16 Apr 2026
Abstract
Background: Accurate identification of cancer driver genes is essential for understanding tumorigenesis and developing targeted therapies. Although graph neural networks (GNNs) have advanced multi-omics integration, existing methods often simply concatenate omics features and underutilize the topological information of biological networks. Methods: We propose M-GNN, a multi-modal GNN framework for cancer driver gene prediction. It employs separate Graph Convolutional Network (GCN) encoders to process four types of omics data (mutation, expression, methylation, copy number variation (CNV)), each represented as a 16-dimensional vector. We incorporate knowledge distillation by using soft labels from a pre-trained teacher model to enhance feature representation. An attention mechanism adaptively fuses the encoded omics features, and a dual-path classifier combining a GCN and a Multilayer Perceptron (MLP) preserves both intrinsic gene properties and network topology. Results: Experiments on three public protein–protein interaction (PPI) networks show that M-GNN consistently achieves the highest or second-highest AUPRC compared to five state-of-the-art methods. Ablation studies confirm the contribution of each module, and biological interpretability analysis—including analysis of GO enrichment and drug sensitivity—validates the reliability of the predicted genes. Conclusions: M-GNN provides a robust and interpretable computational tool for systematic cancer driver gene identification, effectively integrating multi-omics and network data. Full article
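The attention-based fusion of per-omics feature vectors can be illustrated with a toy softmax weighting; the scalar modality scores below are given directly, standing in for what M-GNN would learn:

```python
import math

# Minimal sketch of attention fusion across omics modalities: one scalar
# score per modality is softmax-normalized and used to weight that
# modality's feature vector. Scores and vectors here are toy values.

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def attention_fuse(features, scores):
    """features: list of equal-length vectors, one per omics modality."""
    w = softmax(scores)
    dim = len(features[0])
    return [sum(w[k] * features[k][i] for k in range(len(features)))
            for i in range(dim)]

# four modalities (mutation, expression, methylation, CNV), toy 4-dim vectors
feats = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
fused = attention_fuse(feats, [0.0, 0.0, 0.0, 0.0])  # equal scores -> mean
```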
36 pages, 16246 KB  
Article
A Compliance-Driven Generative Framework for Zhejiang-Style Rural Facades
by Chengzong Wu, Liping He, Shishu Tong, Jun Zhao and Yun Wu
Buildings 2026, 16(8), 1544; https://doi.org/10.3390/buildings16081544 - 14 Apr 2026
Abstract
Under the background of the Rural Revitalization Strategy, Zhejiang Province is promoting “Zhejiang-style Vernacular Dwellings” as a crucial measure to enhance the rural living environment and architectural appearance. However, traditional stylistic control tools, such as standardized rural housing design atlases, exhibit limitations including weak responsiveness to villagers’ individualized needs and high professional thresholds. Consequently, they struggle to address the bottlenecks in grassroots governance efficiency caused by massive and personalized housing demands. Meanwhile, when applied to architectural design, general generative AI technologies often suffer from “structural hallucinations” and the weakening of regional characteristics due to a lack of physical tectonic constraints. Oriented towards the governance requirements of the Zhejiang Provincial Rural Housing Design Guidelines, this study proposes a compliance evaluation-driven “Contour-Semantic-Image” hierarchical generative control framework. This aims to construct a visual scheme generation and pre-screening workflow that deeply adapts to the logic of rural governance. At the data level, this research aggregates multi-source materials, including official standardized atlases, government stylistic guidelines, and real-world photographs. Through expert screening and standardized processing of 596 schemes, a dataset of 333 high-quality, finely annotated structured samples is constructed. Furthermore, a human-guided, machine-segmented workflow assisted by Segment Anything Model 2 (SAM 2) is employed to establish a semantic label system comprising 4 major categories and 13 subcategories of components, thereby achieving the structural deconstruction of architectural prior knowledge. 
At the generation level, a two-stage model is trained based on Stable Diffusion and ControlNet: Stage I utilizes contour conditions and “layout prompts” to generate semantic label maps, aiming to strengthen component topology and layout consistency; Stage II employs the semantic label maps and “style prompts” as conditions to generate photorealistic facade images. By utilizing explicit semantic constraints to guide the model from pixel synthesis to logical generation, it achieves the controllable rendering of stylistic details and material expressions. At the evaluation level, an automated verification system featuring “clause translation–metric calculation–comprehensive scoring” is proposed. It conducts scoring, re-ranking, and diagnostic feedback on the generated variants across three dimensions: Design Rationality (Q), General Compliance (G), and Jiangnan water-town Regional Characteristics (P-J), forming a closed-loop “Generation-Evaluation-Feedback” workflow. Overall, this framework provides a “visualizable, evaluable, and explainable” pathway for scheme generation and pre-screening in the digital governance of rural architectural appearance. Full article
(This article belongs to the Special Issue Data-Driven Intelligence for Sustainable Urban Renewal)
32 pages, 25872 KB  
Article
Layered Spatial Articulation and Base Spatial Graph: Formalizing Structural Preconditions of Architectural Spatial Analysis
by Daegeon Lee and Jaewoo Yoo
Buildings 2026, 16(8), 1536; https://doi.org/10.3390/buildings16081536 - 14 Apr 2026
Abstract
Graph-based spatial analysis formalizes relations among spatial units, but the formation of these units and their boundary correspondences remains under-specified. This study defines the structural stage preceding relational abstraction and establishes the conditions under which spatial units and boundary correspondences become analytically determinate. It then develops a layered spatial articulation procedure that derives spatial objects from plan-encoded architectural information by differentiating topographic substrate, building frame, spatial enclosure, and relational boundary conditions. These are organized into a base spatial graph. The topology of this graph is fixed by articulation, and its edges encode admissible relational mode combinations. Using traditional Korean housing (hanok) as an illustrative reference for the proposed methodology, the study shows that heterogeneous spatial conditions can be consistently articulated into a unified structural domain prior to relational abstraction. The resulting base spatial graph defines a finite but combinatorially extensive space of admissible relational configurations. Within this domain, graph-domain operations act without expanding the articulated structure, while certain operations may reduce it through structural transformation. The study shows that spatial units cannot be treated as pre-given entities but must be structurally constituted. By formalizing this prior stage, the study establishes explicit structural preconditions for graph-based spatial analysis and provides a consistent analytical domain for subsequent spatial interpretation. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
24 pages, 2871 KB  
Article
Multi-Terminal Flexible Interconnection for Distribution Networks Using VSC-Based Hybrid Bidirectional Power Converter
by Shuoyang Li, Mingyuan Liu and Chengxi Liu
Electronics 2026, 15(8), 1602; https://doi.org/10.3390/electronics15081602 - 12 Apr 2026
Abstract
The large-scale integration of distributed energy resources poses numerous challenges to distribution networks. At present, multi-terminal flexible interconnection has become a key development trend for active distribution networks integrated with high-penetration distributed energy resources. Conventional unified power flow controllers (UPFCs) are mainly designed for high-voltage transmission networks and lack distribution-adapted control strategies, making it difficult for them to meet the networking requirements for multi-terminal interconnection. Moreover, most existing studies still focus on two-terminal devices, soft open points and improved UPFC topologies for transmission networks. Existing multi-port schemes mostly adopt only shunt-side structures without series compensation branches, which fail to regulate voltage magnitude and phase difference, thus failing to suppress closing inrush currents and mitigate busbar voltage sags. Meanwhile, such schemes struggle with three-phase imbalance, feeder load imbalance and bidirectional power flow fluctuations in distribution networks, and lack adaptive power allocation capability among multiple ports. To solve the above problems, this paper proposes a VSC-based series–shunt hybrid multi-terminal flexible interconnection converter. The proposed topology consists of one series-side VSC and n − 1 shunt-side VSCs connected through a common DC capacitor; it removes the shunt-side transformer, and effectively reduces cost and volume, while achieving phase shifting, voltage regulation and power flow control. Meanwhile, dual closed-loop PI cross-decoupling control and a flexible closing strategy are adopted to independently regulate the active and reactive power of each feeder, adapt to three-phase imbalance and load imbalance conditions, suppress inrush currents, and realize flexible power mutual support among multiple ports, thereby significantly enhancing adaptability to distribution networks. Full article
28 pages, 6857 KB  
Article
Experimental Validation and Reservoir Computing Capability of Spiking Neuron Based on Threshold Selector and Tunnel Diode
by Vasiliy Pchelko, Vladislav Kholkin, Vyacheslav Rybin, Alexander Mikhailov and Timur Karimov
Big Data Cogn. Comput. 2026, 10(4), 115; https://doi.org/10.3390/bdcc10040115 - 10 Apr 2026
Abstract
Despite the success of artificial neural networks in solving numerous tasks, they face significant challenges, including difficulties in online adaptation and rapidly increasing energy consumption. As a biologically plausible alternative, spiking neural networks offer promising capabilities for efficient cognitive computing. Recently, a three-element spiking neuron model consisting of a threshold selector, a tunnel diode, and a capacitor was proposed. In this work, we experimentally validate this model using a threshold selector hardware emulator and demonstrate its dynamical equivalence to the biologically plausible Izhikevich neuron model. To evaluate the novel neuron’s applicability for cognitive computing, we implement a liquid state machine (LSM) reservoir architecture with spatially dependent random topology for synaptic weight distribution. Our simulations on the MNIST and Fashion-MNIST benchmarks demonstrate competitive classification accuracy (97.9% and 89.5%, respectively) while offering estimated energy efficiency and processing speed enhancements compared to existing FPGA-based and memristor-based spiking reservoir implementations. The developed reservoir is feasible for processing neuromorphic sensors output, including visual perception tasks. Full article
31 pages, 464 KB  
Hypothesis
Gravity as a Boundary Condition for the Evolution of Three-Dimensional Multicellularity
by Oliver Ullrich and Cora S. Thiel
Life 2026, 16(4), 638; https://doi.org/10.3390/life16040638 - 10 Apr 2026
Abstract
Life evolved under a persistent 1 g field that is continuous, ubiquitous, and directionally structured. Here, we synthesize evidence across evolutionary biology, mechanobiology, and genome architecture to propose gravity as a mechanical boundary condition that helped canalize the emergence of complex multicellularity. Order-of-magnitude considerations indicate that gravity-derived hydrostatic loads can fall within force/pressure regimes relevant to nuclear and chromatin mechanosensitivity when transmitted through adhesion–cytoskeleton–LINC–lamina coupling. Comparative genomic and imaging frameworks suggest that complex animals increasingly rely on volumetric genome organization (packing domains and higher-order 3D architectures) that supports durable transcriptional memory and stable differentiated cell identities. Integrating these concepts with altered-gravity experiments, we argue that microgravity and hypergravity perturb chromatin topology and region-level transcription in rapid, largely reversible patterns consistent with a mechanically defined 1 g reference state. We advance a boundary-condition thesis: gravity is not a sole driver but a stable reference that likely contributed to the evolvability and long-term robustness of mechanogenomic architectures required for high-dimensional differentiation and tissue homeostasis. Full article
(This article belongs to the Section Cell Biology and Tissue Engineering)
25 pages, 8326 KB  
Article
Research on Restoring Urban Flood Community Resilience Based on Hydrodynamic Models
by Mian Wang, Ruirui Sun, Huanhuan Yang, Hao Wang, Ding Jiao and Gaoqing Lv
Water 2026, 18(8), 903; https://doi.org/10.3390/w18080903 - 9 Apr 2026
Abstract
Global climate change continues to intensify, leading to an increase in extreme meteorological disasters characterized by high intensity, frequency, and extensive impact. Chinese cities are facing increasingly severe flood disaster risks. As the fundamental unit of the urban system, scientifically quantifying a community’s post-disaster recovery capacity provides a crucial basis for formulating disaster prevention and mitigation strategies. Existing research has largely focused on either quantitative resilience assessment of communities or the functional recovery of specific systems within communities, falling short of meeting the quantitative needs for assessing community functional recovery after flood disasters. Given this, this paper aims to construct a community functional recovery model based on different land use types to precisely quantify the recovery trajectory of community functions. First, the MIKE 21 two-dimensional hydrodynamic model is employed to simulate 100-year and 200-year flood scenarios, obtaining dynamic inundation data at the community scale. Subsequently, a semi-Markov process is adopted to model the recovery of individual buildings, with the aggregated building functions within the community summarized to derive building recovery curves. A road network topology model is constructed using the Space L method, and network global efficiency is applied to quantify community road functionality. Green space functional loss is quantified based on the percentage of inundated areas. Finally, calculation is performed based on the proposed dual-layer computational framework consisting of a connectivity layer and a functional layer, and the overall community functional recovery curve after the disaster is generated, thereby achieving precise quantification of the recovery process. The research findings indicate that increased disaster intensity significantly amplifies functional losses and recovery delays. 
Concurrently, distinct land use types exert markedly different impacts on community recovery. This study quantitatively reveals the phased dominant roles of various land use types throughout the community recovery process, providing a scientific basis for formulating phased, prioritized resilience enhancement strategies. Full article
(This article belongs to the Special Issue "Watershed–Urban" Flooding and Waterlogging Disasters)
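The network global efficiency used here to quantify road functionality is the standard Latora–Marchiori measure, E = (1/(n(n-1))) Σ_{i≠j} 1/d_ij, with unreachable pairs contributing zero. A minimal sketch on a toy unweighted network (the adjacency lists are invented, not the paper's road graph):

```python
from collections import deque

# Global efficiency of an unweighted graph: average of 1/d_ij over all
# ordered node pairs, where d_ij is the shortest-path hop count and
# unreachable pairs contribute 0. Toy adjacency lists for illustration.

def bfs_dists(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def global_efficiency(adj):
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for i in nodes:
        d = bfs_dists(adj, i)
        total += sum(1.0 / d[j] for j in nodes if j != i and j in d)
    return total / (n * (n - 1))

# 4-node path graph 0-1-2-3; flooding that severs edge 1-2 lowers E
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
e_before = global_efficiency(path)
severed = {0: [1], 1: [0], 2: [3], 3: [2]}
e_after = global_efficiency(severed)
```

Comparing the measure before and after removing inundated links is one plausible way to trace the road-functionality component of the recovery curve.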
21 pages, 940 KB  
Article
Minimum Vertex Cut with Reachable Set (MVCRS) Problem for Suppressing Botnet Propagation in IoT Networks: Complexity and Algorithms
by Shingo Yamaguchi
Sensors 2026, 26(8), 2324; https://doi.org/10.3390/s26082324 - 9 Apr 2026
Abstract
This paper formulates the “Minimum Vertex Cut with Reachable Set” (MVCRS) problem as an optimization framework to suppress botnet propagation in networked systems, and clarifies its computational complexity and algorithmic solutions. Building a firewall to minimize damage is essential for addressing botnet propagation in Internet of Things (IoT) networks. We define the basic MVCRS problem as minimizing the sum of the weight of the deployed resources and the resulting propagation scope. While we demonstrate that the constrained version of the problem is NP-complete, we show that the fundamental trade-off optimization model can be solved in polynomial time by reducing it to the maximum flow–minimum cut problem. This provides a theoretical baseline for optimal resource allocation in cybersecurity. Experimental evaluations reveal the limitations of conventional heuristics. In community-structured networks, the degree-based greedy algorithm overlooks critical bridge nodes, yielding an optimality gap of up to 72.6% above the theoretical minimum cost. Conversely, our exact algorithm consistently guarantees the optimal minimum cost (a 0% gap) with high statistical stability across diverse topologies. Furthermore, it scales efficiently to solve 100,000-node IoT networks within practical time limits, proving to be a reliable and efficient foundation for botnet suppression in complex real-world systems. Full article
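The reduction of the basic trade-off model to maximum flow–minimum cut can be sketched with the classical node-splitting construction: each vertex v becomes an arc v_in → v_out whose capacity is its removal cost, original edges get infinite capacity, and the s-t max flow then equals the minimum-cost vertex cut. The graph, costs, and pure-Python Edmonds–Karp below are illustrative, not the paper's algorithm:

```python
from collections import deque, defaultdict

# Minimum-cost vertex cut via node splitting + max flow (Edmonds-Karp).
# Each vertex v is split into (v,"in") -> (v,"out") with capacity equal
# to its removal cost; original edges carry infinite capacity.

INF = float("inf")

def max_flow(cap, s, t):
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t  # walk parents back to s, collecting arcs
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= aug
            cap[v][u] = cap[v].get(u, 0) + aug  # residual (reverse) arc
        flow += aug

def min_vertex_cut(edges, costs, s, t):
    cap = defaultdict(dict)
    for v, c in costs.items():
        cap[(v, "in")][(v, "out")] = c
    for u, v in edges:  # undirected edges, infinite capacity both ways
        cap[(u, "out")][(v, "in")] = INF
        cap[(v, "out")][(u, "in")] = INF
    return max_flow(cap, (s, "out"), (t, "in"))

# separating bot source s from asset t must sever both intermediate nodes
edges = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t")]
costs = {"s": INF, "t": INF, "a": 3, "b": 1}
cut_cost = min_vertex_cut(edges, costs, "s", "t")
```

Because every s-t path must traverse a finite-capacity split arc, the max flow here is bounded by the cheapest set of vertices whose removal disconnects s from t, which is the polynomial-time baseline the paper contrasts with the greedy heuristics.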
26 pages, 9517 KB  
Article
SSPRCD: Scene Graph-Based Street-Scene Spatial Positional Relation Change Detection with Graph Differencing and Structural Quantification
by Xian Guo, Wenjing Ding, Yichuan Wang and Jie Jiang
ISPRS Int. J. Geo-Inf. 2026, 15(4), 161; https://doi.org/10.3390/ijgi15040161 - 9 Apr 2026
Viewed by 335
Abstract
Street-view imagery supports fine-grained urban monitoring, but most street-scene change detection methods are pixel-centric or object-centric and cannot explicitly capture the evolution of inter-entity spatial relations needed for interpretable tasks (e.g., compliance inspection and post-disaster assessment). To address this, we propose SSPRCD, a scene graph-based framework that extracts entity-relation triplets with pixel locations, builds spatial knowledge graphs, and achieves stable node alignment via intra-/inter-temporal consistency. Graph differencing then identifies added, removed, and unchanged entities/relations, while nGED and graph2vec jointly quantify structural discrepancies between temporal scenes. Experiments on the TSUNAMI dataset, with comparisons across two object detectors and seven scene graph generation backbones, show that SSPRCD achieves a macro-F1 of 0.65 for the object-level task, an F1 of 0.72 for binary change detection, and an F1 of 0.89 for relation-level detection, consistently outperforming baseline methods. Overall, SSPRCD delivers relation-aware and topology-informed change explanations that improve the interpretability of street-block-level change analysis for geospatial information updating and urban applications. Full article
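Once nodes are aligned across the two time steps, the graph-differencing step reduces to set operations over entity-relation triplets. A toy sketch of that idea is below; the alignment step is outside its scope, and the function name and output layout are illustrative rather than SSPRCD's actual API:

```python
def diff_scene_graphs(before, after):
    """Relation-level scene-graph differencing.

    `before` and `after` are sets of (subject, relation, object) triplets
    whose entity IDs are assumed to be already aligned across the two
    temporal scenes; changes then fall out of plain set algebra."""
    removed = before - after
    added = after - before
    unchanged = before & after
    # Entity-level changes follow from the triplets an entity appears in
    def ents(graph):
        return {e for s, _, o in graph for e in (s, o)}
    return {
        "added": added,
        "removed": removed,
        "unchanged": unchanged,
        "entities_added": ents(after) - ents(before),
        "entities_removed": ents(before) - ents(after),
    }
```

A structural distance such as a (normalized) graph edit distance can then be computed over the same aligned graphs to quantify how much the scene topology shifted.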
20 pages, 5199 KB  
Article
Mesoscale Modeling of Steel Fiber Reinforced Concrete Using Geometric Entity Expansion and Point–Line Topology
by Jutong Li, Lu Zhang, Youkai Li and Chaoqun Sun
Materials 2026, 19(8), 1508; https://doi.org/10.3390/ma19081508 - 9 Apr 2026
Viewed by 341
Abstract
Mesoscale modeling provides an efficient and cost-effective approach for investigating the damage mechanisms of fiber-reinforced concrete. To address the physical distortion in conventional models that arises from neglecting the volumetric effect of steel fibers, and to construct a more realistic random mesoscale model of steel fiber-reinforced concrete (SFRC), this study proposes an efficient modeling method based on geometric entity expansion and point–line topology. First, polygonal aggregates with diverse morphologies are generated using a polar-coordinate perturbation scheme combined with a convex-hull correction algorithm. Next, the traditional zero-thickness line-segment assumption is abandoned: steel fibers are expanded into rectangular entities via rigid-body kinematics to explicitly represent their excluded volume. Furthermore, a vector-cross-product-based Point–Line Method is developed to replace conventional circumscribed-circle screening, enabling accurate discrimination of interference interactions between fiber–aggregate and fiber–fiber pairs. An automated framework consisting of skeleton placement, entity generation, topological discrimination, and mesh mapping is implemented through a Python 3.13.9 scripting interface, allowing efficient batch generation of high-content mesoscale models with aggregate area fractions of up to 70%. The proposed model is then used to simulate the failure process of SFRC specimens under uniaxial compression and benchmarked against experimental results. The results show that the developed mesoscale model accurately reproduces the nonlinear mechanical response and the strengthening–toughening effects of SFRC, achieving a relative error of only 0.31% in peak stress and a root mean square error (RMSE) as low as 1.70 MPa over the full stress–strain curve.
The simulations not only confirm the pronounced strength gain due to steel fiber incorporation (~19.7%), but also reveal, at the mesoscale, the mechanism by which fiber bridging suppresses damage localization, thereby demonstrating the reliability and practical effectiveness of the proposed modeling approach. Full article
(This article belongs to the Section Construction and Building Materials)
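The cross-product screening described above builds on a standard computational-geometry primitive: orientation tests decide segment-segment intersection without any circumscribed-circle approximation. A minimal sketch of that primitive is below; the paper's full Point–Line Method for rectangle and polygon interference is more involved, so this shows only the cross-product building block, with illustrative names:

```python
def cross(o, a, b):
    """z-component of (a - o) x (b - o): >0 left turn, <0 right, 0 collinear."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, q1, q2):
    """Intersection test between segments p1p2 and q1q2 using only
    sign checks of vector cross products (orientation tests)."""
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    if ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0)):
        return True
    # Collinear/touching cases: does an endpoint lie on the other segment?
    def on_seg(a, b, p):
        return (cross(a, b, p) == 0
                and min(a[0], b[0]) <= p[0] <= max(a[0], b[0])
                and min(a[1], b[1]) <= p[1] <= max(a[1], b[1]))
    return any(on_seg(*s) for s in
               ((q1, q2, p1), (q1, q2, p2), (p1, p2, q1), (p1, p2, q2)))
```

Interference between two rectangular fiber entities then reduces to testing their edge pairs with this predicate, plus a point-in-polygon check for full containment.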
34 pages, 8819 KB  
Article
Mitigating Overfitting and Physical Inconsistency in Flood Susceptibility Mapping: A Physics-Constrained Evolutionary Machine Learning Framework for Ungauged Alpine Basins
by Chuanjie Yan, Lingling Wu, Peng Huang, Jiajia Yue, Haowen Li, Chun Zhou, Congxiang Fan, Yinan Guo and Li Zhou
Water 2026, 18(7), 882; https://doi.org/10.3390/w18070882 - 7 Apr 2026
Viewed by 366
Abstract
Flood susceptibility mapping in high-altitude ungauged basins faces a structural dichotomy: physically based models often suffer from systematic biases due to uncertain satellite precipitation, whereas data-driven models are prone to overfitting and lack physical consistency in data-scarce regions. To resolve this, this study proposes a Physically constrained Particle Swarm Optimization–Random Forest (P-PDRF) framework, validated in the Lhasa River Basin. The core innovation lies in coupling a hydrological model with statistical learning by utilizing the maximum daily runoff depth as a “Relative Hydraulic Intensity Index.” This approach leverages the topological correctness of physical simulations to circumvent absolute forcing errors. Furthermore, a Physiographically Constrained Negative Sampling (PCNS) strategy and a PSO-optimized “Shallow Tree” configuration are introduced to enforce structural regularization against stochastic noise. Empirical results demonstrate that P-PDRF achieves superior generalization (AUC = 0.942), significantly outperforming standard Random Forest, Support Vector Machine, and Analytic Hierarchy Process models. Ablation studies confirm that the dynamic index outweighs the static Topographic Wetness Index in feature importance, effectively correcting topographic artifacts where static models misclassify arid depressions as high-risk zones. This study offers a scalable Physics-Informed Machine Learning solution for the global “Prediction in Ungauged Basins” initiative. Full article
(This article belongs to the Special Issue Urban Flood Risk Assessment and Management)
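A PSO-tuned "shallow tree" configuration is, mechanically, a particle-swarm search over hyperparameters such as tree depth and ensemble size. The stdlib-only sketch below shows a plain global-best PSO minimizer; the paper's actual objective (Random Forest cross-validation error) is replaced here by a toy quadratic error surface over a hypothetical (max_depth, n_trees) pair, and all names, coefficients, and swarm settings are illustrative assumptions:

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=60,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best particle swarm minimization over box bounds.

    Standard velocity update: v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x),
    with positions clamped back into the search box after each step."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]          # personal best positions
    pcost = [f(x) for x in X]          # personal best costs
    g = min(range(n_particles), key=pcost.__getitem__)
    gbest, gcost = pbest[g][:], pcost[g]   # swarm-wide best
    for _ in range(iters):
        for i, x in enumerate(X):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - x[d])
                           + c2 * r2 * (gbest[d] - x[d]))
                x[d] = min(max(x[d] + V[i][d], bounds[d][0]), bounds[d][1])
            c = f(x)
            if c < pcost[i]:
                pbest[i], pcost[i] = x[:], c
                if c < gcost:
                    gbest, gcost = x[:], c
    return gbest, gcost

# Toy stand-in for a cross-validated error surface over (max_depth, n_trees):
# it penalizes very deep trees (overfitting) and very small ensembles.
err = lambda p: (p[0] - 4) ** 2 + 0.01 * (p[1] - 200) ** 2
best, cost = pso_minimize(err, [(2, 20), (50, 500)])
```

In a real pipeline, `err` would evaluate a cross-validated Random Forest at the rounded candidate hyperparameters, which keeps the regularizing "shallow" bias explicit in the search bounds.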