Search Results (95)

Search Parameters:
Keywords = vertex similarity

36 pages, 42030 KB  
Article
DStreaM: A Convective Term Approximation Approach That Corresponds to Pure Convection
by Kiril Shterev
Mathematics 2026, 14(3), 389; https://doi.org/10.3390/math14030389 - 23 Jan 2026
Viewed by 59
Abstract
In recent decades, considerable effort has been devoted to developing higher-order schemes for the discretization of convective terms that are both stable and reliable. In this work, the central idea is that the approximation should be made to reflect the physics of pure convection: the transported quantity is advected along streamlines, and information is propagated only in the upwind direction, i.e., the transported property is determined by previous values along the streamline but not by downstream values. In the proposed approach, streamlines on the computational mesh are represented by discrete streamlines, and the method is called the Discrete Streamline Method (DStreaM). A discrete streamline is constructed as a narrow triangle with one vertex at the node where the approximation is sought and two vertices at upstream neighbouring nodes. Discrete streamlines are oriented according to the local flow direction, in a manner similar to skew-upwind schemes, so that consistency with pure convection is ensured for DStreaM. The method is conservative only for uniform meshes with a constant velocity field; for general meshes and non-uniform velocity fields, it is non-conservative, and a non-zero local conservation error remains. The performance of DStreaM is assessed on the following standard test problems: convection of a step profile, a double-step profile, a sinusoidal profile, and the Smith–Hutton problem. DStreaM solutions are compared with those obtained using the first-order upwind scheme and second-order total variation diminishing (TVD) schemes with Minmod, QUICK, and SUPERBEE limiters. Across these benchmarks, high-resolution solution profiles and L1/L2 error levels comparable to those of the considered TVD schemes are produced by DStreaM. In the DStreaM construction, only local node coordinates and mesh connectivity are used; in this work, implementation is performed on both uniform Cartesian meshes and unstructured triangular meshes generated by a Delaunay triangulation. Representative results are reported with a focus on accuracy, iterative convergence, and conservation limitations. Full article
(This article belongs to the Special Issue High-Order Numerical Methods and Computational Fluid Dynamics)
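A minimal sketch may help readers place the comparison above: the snippet below implements only the first-order upwind scheme that the paper uses as a baseline (applied to 1D linear advection of a step profile on a uniform periodic mesh), not the DStreaM discrete-streamline construction itself; the grid size, velocity, and CFL number are illustrative assumptions.

```python
import numpy as np

def upwind_advection_step(phi, u, dx, dt):
    """One explicit step of 1D linear advection with first-order upwind fluxes.

    Information is taken only from the upstream side, mirroring the
    pure-convection principle the abstract emphasises (no downstream influence).
    Periodic boundaries are assumed for simplicity.
    """
    if u >= 0.0:
        dphi = phi - np.roll(phi, 1)   # backward difference (upwind for u > 0)
    else:
        dphi = np.roll(phi, -1) - phi  # forward difference (upwind for u < 0)
    return phi - u * dt / dx * dphi

# Illustrative convection of a step profile, one of the benchmarks listed above.
n, u, dx = 200, 1.0, 1.0 / 200
dt = 0.4 * dx / abs(u)                 # CFL number 0.4 (assumed)
phi = np.where(np.linspace(0.0, 1.0, n) < 0.3, 1.0, 0.0)
for _ in range(100):
    phi = upwind_advection_step(phi, u, dx, dt)
```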
8 pages, 240 KB  
Article
Some Problems and Conjectures About Voloshin Triple Systems
by Mario Gionfriddo
Mathematics 2026, 14(1), 42; https://doi.org/10.3390/math14010042 - 22 Dec 2025
Viewed by 240
Abstract
In this paper, we give a short survey of Voloshin Triple Systems, which are Steiner Triple Systems equipped with a vertex colouring that colours every block with exactly two colours. We also provide two conjectures about VTSs obtained by the construction v → 2v+1, starting from v = 3. Finally, we point out some open problems about Steiner Systems S(2,4,v) with a similar vertex colouring, which could lead to similar conjectures. Full article
(This article belongs to the Special Issue New Perspectives of Graph Theory and Combinatorics)
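As a concrete illustration of the definition quoted above, the sketch below checks that a block set is a Steiner triple system (every pair of points in exactly one triple) and that a vertex colouring gives every block exactly two colours; the STS(7) blocks and the particular colouring are illustrative choices, not taken from the paper.

```python
from itertools import combinations

def is_steiner_triple_system(points, blocks):
    """Every pair of points must lie in exactly one block of size 3."""
    if any(len(b) != 3 for b in blocks):
        return False
    pair_count = {frozenset(p): 0 for p in combinations(points, 2)}
    for b in blocks:
        for p in combinations(b, 2):
            pair_count[frozenset(p)] += 1
    return all(c == 1 for c in pair_count.values())

def blocks_bicoloured(blocks, colouring):
    """Each block must receive exactly two distinct vertex colours."""
    return all(len({colouring[v] for v in b}) == 2 for b in blocks)

# STS(7) (the Fano plane) with an illustrative colouring in which every
# block receives exactly two colours.
points = range(7)
blocks = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
          (1, 4, 6), (2, 3, 6), (2, 4, 5)]
colouring = {0: "red", 1: "red", 2: "green",
             3: "blue", 4: "blue", 5: "blue", 6: "blue"}
print(is_steiner_triple_system(points, blocks), blocks_bicoloured(blocks, colouring))
```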
15 pages, 932 KB  
Systematic Review
Androgenetic Alopecia and Risks of Overall and Aggressive Prostate Cancer: An Updated Systematic Review and Meta-Analysis
by David G. Hanelin, Sapir Amar and Ilir Agalliu
Cancers 2025, 17(21), 3581; https://doi.org/10.3390/cancers17213581 - 6 Nov 2025
Viewed by 2501
Abstract
Background: Androgenetic alopecia, also known as male pattern baldness (MPB), is a common hair loss disorder among middle-aged men. MPB shares similar risk factors with prostate cancer (PrCa), including advancing age, family history, and sex hormones. Several studies have examined the associations between MPB and PrCa; however, the evidence remains unclear. We carried out an updated meta-analysis of epidemiological studies that examined the relationship between age at onset and patterns of MPB (either frontal, vertex, or both) and their associations with risks of total and aggressive PrCa. Methods: A literature search was performed using PubMed and Web of Science databases for epidemiological studies published between 1 January 2000 and 31 December 2024 that examined the associations between MPB and PrCa. From each eligible study, relevant data were extracted on study design and population, sample size, prevalence of MPB at various ages, and their association with PrCa. Pooled relative risks (RR) and corresponding 95% confidence intervals (CI) were calculated using the DerSimonian and Laird random-effects model. Heterogeneity across studies was assessed by the I² statistic, while the quality of studies was evaluated using the Newcastle–Ottawa Scale. Results: A total of 19 observational studies, including 17,810 cases and 146,806 controls/non-cases, were analyzed. The prevalence of MPB increased from 5% to 65% with aging and varied across the studies. Both frontal and vertex MPB were associated with a pooled RR of 1.08 (95% CI 1.02–1.14) for total PrCa, but there was no association with frontal-only MPB. Younger-onset MPB (<40 years old) was also associated with an RR = 1.13 (95% CI 0.96–1.31) for PrCa, although results were not statistically significant. Vertex-only MPB was associated with more aggressive PrCa (pooled RR = 1.14; 95% CI 1.02–1.25); however, there was substantial heterogeneity in the definition of aggressive PrCa across the studies. Conclusions: Men with both frontal and vertex MPB have a modestly elevated risk of PrCa. However, most studies were conducted in Caucasian men and they did not evaluate effect modifications by genetic variations in androgen metabolism pathway genes or changes in serum levels of androgens with aging. Large prospective cohort studies with more accurate longitudinal assessment of hair loss patterns are needed to better understand the complex relationship between genetic susceptibility, endogenous hormones, MPB, and subsequent risk of PrCa. Full article
(This article belongs to the Special Issue Urological Cancer: Epidemiology and Genetics)
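The pooling step named above, the DerSimonian and Laird random-effects model on the log relative-risk scale, can be sketched in a few lines; the per-study estimates below are invented placeholders, not data from the review.

```python
import numpy as np

def dersimonian_laird_pool(log_rr, se):
    """Pool log relative risks with the DerSimonian-Laird random-effects model.

    Returns the pooled RR, its 95% CI, and the between-study variance tau^2.
    """
    log_rr, se = np.asarray(log_rr), np.asarray(se)
    w = 1.0 / se**2                                  # fixed-effect weights
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)            # Cochran's Q
    df = len(log_rr) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (se**2 + tau2)                    # random-effects weights
    pooled = np.sum(w_star * log_rr) / np.sum(w_star)
    se_pooled = 1.0 / np.sqrt(np.sum(w_star))
    ci = np.exp([pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])
    return np.exp(pooled), ci, tau2

# Hypothetical per-study log(RR) values and standard errors (not from the paper).
rr, ci, tau2 = dersimonian_laird_pool([0.10, 0.05, 0.20, -0.02],
                                      [0.06, 0.09, 0.12, 0.08])
```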
28 pages, 3784 KB  
Article
Dicke State Quantum Search for Solving the Vertex Cover Problem
by Jehn-Ruey Jiang
Mathematics 2025, 13(18), 3005; https://doi.org/10.3390/math13183005 - 17 Sep 2025
Viewed by 810
Abstract
This paper proposes a quantum algorithm, named Dicke state quantum search (DSQS), to set qubits in the Dicke state |D^n_k⟩ of D states in superposition to locate the target inputs or solutions of specific patterns among 2^n unstructured input instances, where n is the number of input qubits and D = C(n,k) = O(n^k) for min(k, n−k) ≪ n/2. Compared to Grover's algorithm, a famous quantum search algorithm that calls an oracle and a diffuser O(√(2^n)) times, DSQS requires no diffuser and calls an oracle only once. Furthermore, DSQS does not need to know the number of solutions in advance. We prove the correctness of DSQS with unitary transformations, and show that each solution can be found by DSQS with an error probability less than 1/3 through O(n^k) repetitions, as long as min(k, n−k) ≪ n/2. Additionally, this paper proposes a classical algorithm, named DSQS-VCP, to generate quantum circuits based on DSQS for solving the k-vertex cover problem (k-VCP), a well-known NP-complete (NPC) problem. Complexity analysis demonstrates that DSQS-VCP operates in polynomial time and that the quantum circuit generated by DSQS-VCP has a polynomial qubit count, gate count, and circuit depth as long as min(k, n−k) ≪ n/2. We thus conclude that the k-VCP can be solved by the DSQS-VCP quantum circuit in polynomial time with an error probability less than 1/3 under the condition of min(k, n−k) ≪ n/2. Since the k-VCP is NP-complete, NP and NPC problems can be polynomially reduced to the k-VCP. If the reduced k-VCP instance satisfies min(k, n−k) ≪ n/2, then both the instance and the original NP/NPC problem instance to which it corresponds can be solved by the DSQS-VCP quantum circuit in polynomial time with an error probability less than 1/3. All statements of polynomial algorithm execution time in this paper apply only to VCP instances and similar instances of other problems, where min(k, n−k) ≪ n/2. Thus, they imply neither NP ⊆ BQP nor P = NP. In this restricted regime of min(k, n−k) ≪ n/2, the Dicke state subspace has a polynomial size of D = C(n,k) = O(n^k), and our DSQS algorithm samples from it without asymptotic superiority over exhaustive enumeration. Nevertheless, DSQS may be combined with other quantum algorithms to better exploit the strengths of quantum computation in practice. Experimental results using IBM Qiskit packages show that the DSQS-VCP quantum circuit can solve the k-VCP successfully. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
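The size argument in the abstract — for min(k, n−k) ≪ n/2 the Dicke subspace spans only D = C(n,k) = O(n^k) weight-k basis states, so it can even be enumerated classically — is easy to illustrate with a classical sketch of the k-vertex-cover predicate that the DSQS oracle marks; the 5-cycle instance below is an assumption, and this is of course not the quantum circuit itself.

```python
from itertools import combinations
from math import comb

def is_vertex_cover(edges, subset):
    """A vertex subset covers the graph if every edge has an endpoint in it."""
    s = set(subset)
    return all(u in s or v in s for u, v in edges)

def k_vertex_covers(n, edges, k):
    """Enumerate the C(n, k) weight-k bitstrings spanning the Dicke subspace
    |D^n_k> and keep those the oracle would mark as k-vertex covers."""
    return [c for c in combinations(range(n), k) if is_vertex_cover(edges, c)]

# Illustrative instance: a 5-cycle needs 3 vertices to cover all edges.
n, k = 5, 3
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(comb(n, k), "basis states in the Dicke subspace")   # D = C(5, 3) = 10
print(k_vertex_covers(n, edges, k))                       # the marked solutions
```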
27 pages, 2240 KB  
Article
Hybrid Entropy-Based Metrics for k-Hop Environment Analysis in Complex Networks
by Csaba Biró
Mathematics 2025, 13(17), 2902; https://doi.org/10.3390/math13172902 - 8 Sep 2025
Viewed by 696
Abstract
Two hybrid, entropy-guided node metrics are proposed for the k-hop environment: Entropy-Weighted Redundancy (EWR) and Normalized Entropy Density (NED). The central idea is to couple local Shannon entropy with neighborhood density/redundancy so that structural heterogeneity around a vertex is captured even when classical indices (e.g., degree or clustering) are similar. The metrics are formally defined and shown to be bounded, isomorphism-invariant, and stable under small edge edits. Their behavior is assessed on representative topologies (Erdős–Rényi, Barabási–Albert, Watts–Strogatz, random geometric graphs, and the Zephyr quantum architecture). Across these settings, EWR and NED display predominantly negative correlation with degree and provide information largely orthogonal to standard centralities; vertices with identical degree can differ by factors of two to three in the proposed scores, revealing bridges and heterogeneous regions. These properties indicate utility for vulnerability assessment, topology-aware optimization, and layout heuristics in engineered and quantum networks. Full article
(This article belongs to the Special Issue Graph Theory and Applications, 3rd Edition)
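The abstract does not give the EWR and NED formulas, so the sketch below only illustrates the general recipe it describes — couple the Shannon entropy of the degree distribution inside a vertex's k-hop environment with that environment's density — and the particular combination returned here is an assumption, not the authors' definition.

```python
import math
import networkx as nx

def khop_entropy_density(G, v, k=2):
    """Shannon entropy of the degree distribution in the k-hop environment of v,
    paired with the density of that induced subgraph (an illustrative stand-in
    for hybrid metrics such as EWR/NED)."""
    H = nx.ego_graph(G, v, radius=k)          # induced k-hop neighbourhood of v
    degs = [d for _, d in H.degree()]
    total = sum(degs)
    probs = [d / total for d in degs if d > 0]
    entropy = -sum(p * math.log2(p) for p in probs)
    return entropy, nx.density(H)

G = nx.barabasi_albert_graph(200, 3, seed=1)  # one of the topologies listed above
scores = {v: khop_entropy_density(G, v, k=2) for v in G}
```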
37 pages, 5079 KB  
Article
The Complexity of Classes of Pyramid Graphs Based on the Fritsch Graph and Its Related Graphs
by Ahmad Asiri and Salama Nagy Daoud
Axioms 2025, 14(8), 622; https://doi.org/10.3390/axioms14080622 - 8 Aug 2025
Cited by 1 | Viewed by 596
Abstract
A quantitative study of the complicated three-dimensional structures of artificial atoms in the field of intense matter physics requires a collaborative method that combines a statistical analysis of unusual graph features related to atom topology. Simplified circuits can also be produced by using similar transformations to streamline complex circuits that need laborious mathematical calculations during analysis. These modifications can also be used to determine the number of spanning trees required for specific graph families. The explicit derivation of formulas to determine the number of spanning trees for novel pyramid graph types based on the Fritsch graph, which is one of only six graphs in which every neighborhood is a 4- or 5-vertex cycle, is the focus of our study. We conduct this by utilizing our understanding of difference equations, weighted generating function rules, and the strength of analogous transformations found in electrical circuits. Full article
(This article belongs to the Special Issue Graph Theory and Combinatorics: Theory and Applications)
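Closed-form spanning-tree counts of the kind derived above are commonly cross-checked numerically with Kirchhoff's matrix-tree theorem (the number of spanning trees equals any cofactor of the graph Laplacian); the sketch below shows that check on a small standard example rather than on the pyramid families studied in the paper.

```python
import numpy as np
import networkx as nx

def spanning_tree_count(G):
    """Kirchhoff's matrix-tree theorem: delete one row/column of the Laplacian
    and take the determinant to count spanning trees."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    minor = L[1:, 1:]                      # any cofactor works
    return round(np.linalg.det(minor))

# Sanity check on K4: Cayley's formula gives 4^(4-2) = 16 spanning trees.
print(spanning_tree_count(nx.complete_graph(4)))   # 16
```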
13 pages, 382 KB  
Article
Determination of Stiffness Coefficients at the Internal Vertices of the Tree Based on a Finite Set of Eigenvalues of an Asymmetric Second-Order Linear Differential Operator
by Baltabek Kanguzhin, Zhalgas Kaiyrbek and Mergul Mustafina
Symmetry 2025, 17(8), 1263; https://doi.org/10.3390/sym17081263 - 7 Aug 2025
Viewed by 412
Abstract
A second-order linear differential operator A is defined on a tree of arbitrary topology. Any internal vertex P of the tree divides the tree into m_P branches. The restrictions A_i, i = 1, …, m_P, of the operator A to each of these branches are considered, where the vertex P is taken as the root of the branch and a Dirichlet boundary condition is specified at the root. These restrictions must be, in a sense, asymmetric (not similar) to each other. Thus, the distinguished class of differential operators A turns out to have only simple eigenvalues. Moreover, the matching conditions at the internal vertices of the graph contain a set of parameters. These parameters are interpreted as stiffness coefficients. This paper proves that a finite set of eigenvalues allows one to uniquely restore the set of stiffness coefficients. The novelty of the work is that it is sufficient to know a finite set of eigenvalues of intermediate Weinstein problems to uniquely restore the required stiffness coefficients. We not only describe the results of selected studies but also compare them with each other and establish interconnections. Full article
(This article belongs to the Section Mathematics)
23 pages, 6199 KB  
Article
PDAA: An End-to-End Polygon Dynamic Adjustment Algorithm for Building Footprint Extraction
by Longjie Luo, Jiangchen Cai, Bin Feng and Liufeng Tao
Remote Sens. 2025, 17(14), 2495; https://doi.org/10.3390/rs17142495 - 17 Jul 2025
Viewed by 1006
Abstract
Buildings are a significant component of urban space and are essential to smart cities, catastrophe monitoring, and land use planning. However, precisely extracting building polygons from remote sensing images remains difficult because of the variety of building designs and intricate backgrounds. This paper proposes an end-to-end polygon dynamic adjustment algorithm (PDAA) to improve the accuracy and geometric consistency of building contour extraction by dynamically generating and optimizing polygon vertices. The method first locates building instances through the region of interest (RoI) to generate initial polygons, and then uses four core modules for collaborative optimization: (1) the feature enhancement module captures local detail features to improve the robustness of vertex positioning; (2) the contour vertex tuning module fine-tunes vertex coordinates through displacement prediction to enhance geometric accuracy; (3) the learnable redundant vertex removal module screens key vertices based on a classification mechanism to eliminate redundancy; and (4) the missing vertex completion module iteratively restores missed vertices to ensure the integrity of complex contours. PDAA dynamically adjusts the number of vertices to adapt to the geometric characteristics of different buildings, while simplifying the prediction process and reducing computational complexity. Experiments on public datasets such as WHU, Vaihingen, and Inria show that PDAA significantly outperforms existing methods in terms of average precision (AP) and polygon similarity (PolySim). It is at least 2% higher than existing methods in terms of average precision (AP), and the generated polygonal contours are closer to the real building geometry. Values of 75.4% AP and 84.9% PolySim were achieved on the WHU dataset, effectively solving the problems of redundant vertices and contour smoothing, and providing high-precision building vector data support for scenarios such as smart cities and emergency response. Full article
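The learnable redundant-vertex-removal module is described only at a high level above; as a purely geometric stand-in (an assumption, not the paper's classification-based mechanism), the sketch below drops polygon vertices that are nearly collinear with their neighbours, the kind of redundancy that module targets.

```python
import numpy as np

def drop_collinear_vertices(polygon, tol=1e-3):
    """Remove vertices whose neighbours make them (nearly) collinear.

    polygon: (N, 2) array-like of vertex coordinates, in order.
    tol: twice the triangle area below which the middle vertex is dropped.
    """
    pts = np.asarray(polygon, dtype=float)
    keep = []
    n = len(pts)
    for i in range(n):
        prev_pt, cur, nxt = pts[i - 1], pts[i], pts[(i + 1) % n]
        # 2D cross product of the incoming and outgoing edge vectors (~0 if collinear)
        cross = abs((cur[0] - prev_pt[0]) * (nxt[1] - cur[1])
                    - (cur[1] - prev_pt[1]) * (nxt[0] - cur[0]))
        if cross > tol:
            keep.append(i)
    return pts[keep]

# A rectangle with a redundant midpoint on its bottom edge.
poly = [(0, 0), (1, 0), (2, 0), (2, 1), (0, 1)]
print(drop_collinear_vertices(poly))   # the midpoint (1, 0) is removed
```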
22 pages, 1233 KB  
Article
Radio Mean Labeling Algorithm, Its Complexity and Existence Results
by Meera Saraswathi, K. N. Meera and Yuqing Lin
Mathematics 2025, 13(13), 2057; https://doi.org/10.3390/math13132057 - 20 Jun 2025
Cited by 1 | Viewed by 983
Abstract
Radio mean labeling of a connected graph G is an assignment of distinct positive integers to the vertices of G satisfying a mathematical constraint called the radio mean condition. The maximum label assigned to any vertex of G is called the span of the radio mean labeling. The minimum span over all feasible radio mean labelings of G is the radio mean number of G, denoted by rmn(G). In our previous study, we proved that if G has order n, then rmn(G) ∈ [n, rmn(P_n)], where P_n is a path of order n. All graphs of diameters 1, 2 and 3 have radio mean number equal to the order n. However, they are not the only graphs on n vertices with radio mean number n. Graphs isomorphic to the path P_n are the graphs having the maximum diameter among the set of all graphs of order n, and they possess the maximum feasible radio mean number. In this paper, we show that, for any integer in the range of achievable radio mean numbers, there always exists a graph of order n with the given integer as its radio mean number. This is approached by introducing a special type of tree whose construction is detailed in the article. The task of assigning radio mean labels to a graph can be considered as an optimization problem. This paper critiques the limitations of existing Integer Linear Programming (ILP) models for assigning radio mean labelings to graphs and proposes a new ILP model. The existing ILP model does not guarantee that the vertex labels are distinct, positive and satisfy the radio mean condition, prompting the need for an improved approach. We propose a new ILP model that involves n² constraints when the input graph's order is n. We obtain a radio mean labeling of a cycle of order 10 using the new ILP model. In our previous study, we showed that, for any graph G, we can extend the radio mean labelings of its diametral paths to the vertex set of G and obtain radio mean labelings of G. This insight forms the basis for an algorithm presented in this paper to obtain radio mean labels for a given graph G with n vertices and diameter d. The correctness and complexity of this algorithm are analyzed in detail. Radio mean labelings have been proposed for cryptographic key generation in previous works, and the algorithm presented in this paper is general enough to support similar applications across various graph structures. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
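A small checker makes the labeling constraint concrete: the radio mean condition is commonly stated as d(u, v) + ⌈(f(u) + f(v))/2⌉ ≥ diam(G) + 1 for every pair of distinct vertices, and the sketch below verifies it for a candidate labeling; the stated form of the condition and the 4-cycle example follow the standard definition rather than being copied from the paper.

```python
import math
import networkx as nx

def is_radio_mean_labeling(G, f):
    """Check the radio mean condition for a labeling f: V -> distinct positive ints.

    Commonly stated form: d(u, v) + ceil((f(u) + f(v)) / 2) >= diam(G) + 1
    for every pair of distinct vertices u, v.
    """
    labels = list(f.values())
    if len(set(labels)) != len(labels) or min(labels) < 1:
        return False
    diam = nx.diameter(G)
    dist = dict(nx.all_pairs_shortest_path_length(G))
    return all(
        dist[u][v] + math.ceil((f[u] + f[v]) / 2) >= diam + 1
        for u in G for v in G if u != v
    )

# Any graph of diameter <= 3 has rmn(G) = n; labels 1..n on a 4-cycle illustrate this.
C4 = nx.cycle_graph(4)
print(is_radio_mean_labeling(C4, {v: v + 1 for v in C4}))   # True
```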
17 pages, 3192 KB  
Article
Patterns of Change in Plant Leaf Functional Traits Along an Altitudinal Gradient in a Karst Climax Community
by Yang Wang, Ying Deng, Hong Zhao, Fangbing Li, Zuhong Fan, Tian Tian and Tu Feng
Agronomy 2025, 15(5), 1143; https://doi.org/10.3390/agronomy15051143 - 7 May 2025
Cited by 1 | Viewed by 1512
Abstract
Exploring the changes in plant leaf functional traits in response to altitude across various altitudinal gradients of climax communities in karst regions can elucidate the characteristics of survival strategy adaptations among plant communities. This understanding may also reveal the growth dynamics and driving factors of climax communities in unique habitats. In this study, we examined nine climax communities located in the karst region of Southwest China, categorizing them into three distinct altitude gradients: low-, middle-, and high-altitude communities. By integrating species characteristics and community structure, we analyzed the patterns of change in leaf functional traits among plant communities at different altitudinal gradients and the relationships between these functional traits and environmental factors across the varying altitudes. The results indicated the following: (1) There was a significant difference in the specific leaf area (SLA) of the community as altitude increased, with a gradual decrease observed. The traits exhibiting higher coefficients of variation (CVs) in the leaves of the karst vertex community included the leaf carbon-to-nitrogen ratio (LCN), leaf area (LA), and leaf dry matter content (LDMC). Additionally, the environmental factors with higher CVs included soil organic carbon (SOC), soil phosphorus content (SPC), and the soil carbon-to-phosphorus ratio (SCP). (2) Soil organic carbon content (SOC), total nitrogen content (SNC), carbon-to-phosphorus ratio (SCP), and nitrogen-to-phosphorus ratio (SNP) demonstrated significant differences with increasing altitude. (3) The primary environmental factors influencing plant communities in karst areas included soil nitrogen content (SNC), mean annual temperature (NJW), soil organic carbon content (SOC), soil phosphorus content (SPC), soil water content (SWC), and mean annual precipitation (NJS). Our results indicated that the variation in leaf functional traits with altitude in karst climax communities was inconsistent. Among these traits, the specific leaf area (SLA) showed the most significant variation, and karst climax communities appeared to adapt to environmental changes by regulating traits such as leaf area (LA), leaf dry matter content (LDMC), and leaf carbon-to-nitrogen ratio (LCN). Soil organic carbon (SOC) and soil phosphorus content (SPC) are key factors contributing to habitat heterogeneity in the karst region. The karst climax communities are influenced by both soil and climatic factors along the altitudinal gradient. As altitude increases, these communities tend to adopt a life strategy. Furthermore, high-altitude terminal communities in karst areas are more susceptible to environmental filtering, while low-altitude areas are more affected by limitations in similarity. Full article
13 pages, 1246 KB  
Article
Comparing Auto-Machine Learning and Expert-Designed Models in Diagnosing Vitreomacular Interface Disorders
by Ceren Durmaz Engin, Mahmut Ozan Gokkan, Seher Koksaldi, Mustafa Kayabasi, Ufuk Besenk, Mustafa Alper Selver and Andrzej Grzybowski
J. Clin. Med. 2025, 14(8), 2774; https://doi.org/10.3390/jcm14082774 - 17 Apr 2025
Cited by 1 | Viewed by 1368
Abstract
Background: The vitreomacular interface (VMI) encompasses a group of retinal disorders that significantly impact vision, requiring accurate classification for effective management. This study aims to compare the effectiveness of an expert-designed custom deep learning (DL) model and a code free Auto Machine Learning (ML) model in classifying optical coherence tomography (OCT) images of VMI disorders. Materials and Methods: A balanced dataset of OCT images across five classes—normal, epiretinal membrane (ERM), idiopathic full-thickness macular hole (FTMH), lamellar macular hole (LMH), and vitreomacular traction (VMT)—was used. The expert-designed model combined ResNet-50 and EfficientNet-B0 architectures with Monte Carlo cross-validation. The AutoML model was created on Google Vertex AI, which handled data processing, model selection, and hyperparameter tuning automatically. Performance was evaluated using average precision, precision, and recall metrics. Results: The expert-designed model achieved an overall balanced accuracy of 95.97% and a Matthews Correlation Coefficient (MCC) of 94.65%. Both models attained 100% precision and recall for normal cases. For FTMH, the expert model reached perfect precision and recall, while the AutoML model scored 97.8% average precision, and 97.4% recall. In VMT detection, the AutoML model showed 99.5% average precision with a slightly lower recall of 94.7% compared to the expert model’s 95%. For ERM, the expert model achieved 95% recall, while the AutoML model had higher precision at 93.9% but a lower recall of 79.5%. In LMH classification, the expert model exhibited 95% precision, compared to 72.3% for the AutoML model, with similar recall for both (88% and 87.2%, respectively). Conclusions: While the AutoML model demonstrated strong performance, the expert-designed model achieved superior accuracy across certain classes. AutoML platforms, although accessible to healthcare professionals, may require further advancements to match the performance of expert-designed models in clinical applications. Full article
(This article belongs to the Special Issue Artificial Intelligence and Eye Disease)
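The evaluation metrics quoted above (balanced accuracy, MCC, per-class precision and recall) can be reproduced from predicted and true class labels with standard tooling; the toy label arrays below are placeholders, not data from the study.

```python
from sklearn.metrics import (balanced_accuracy_score, classification_report,
                             matthews_corrcoef)

# Placeholder ground-truth and predicted classes for the five OCT categories.
classes = ["normal", "ERM", "FTMH", "LMH", "VMT"]
y_true = ["normal", "ERM", "FTMH", "LMH", "VMT", "ERM", "LMH", "VMT", "normal", "FTMH"]
y_pred = ["normal", "ERM", "FTMH", "ERM", "VMT", "ERM", "LMH", "VMT", "normal", "FTMH"]

print("Balanced accuracy:", balanced_accuracy_score(y_true, y_pred))
print("MCC:", matthews_corrcoef(y_true, y_pred))
print(classification_report(y_true, y_pred, labels=classes))
```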
16 pages, 293 KB  
Article
Failed Skew Zero Forcing Numbers of Path Powers and Circulant Graphs
by Aidan Johnson, Andrew Vick, Rigoberto Flórez and Darren A. Narayan
AppliedMath 2025, 5(2), 32; https://doi.org/10.3390/appliedmath5020032 - 24 Mar 2025
Viewed by 663
Abstract
For a graph G, the zero forcing number of G, Z(G), is defined to be the minimum cardinality of a set S of vertices for which repeated applications of the forcing rule result in all vertices being in S. The forcing rule is as follows: if a vertex v is in S, and exactly one neighbor u of v is not in S, then the vertex u is added to S in the subsequent iteration. Now, the failed zero forcing number of a graph is defined to be the maximum size of a set of vertices which does not force all of the vertices in the graph. A similar type of forcing is called skew zero forcing, which is defined so that if there is exactly one neighbor u of v that is not in S, then the vertex u is added to S in the next iteration. The key difference is that vertices that are not in S can force other vertices. The failed skew zero forcing number of a graph is denoted by F(G). At its core, the problem we consider is how to identify the tipping point at which information or infection will spread through a network or a population. The graphs we consider are those in which computers/routers or people are arranged in a linear or circular formation with varying proximities for contagion. Here, we present new results for failed skew zero forcing numbers of path powers and circulant graphs. Furthermore, we found that the failed skew zero forcing numbers of these families form interesting sequences with increasing n. Full article
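Because the forcing rules are stated in full in the abstract, they translate directly into code; the sketch below computes the closure of a set under the standard and skew rules and reports whether the set fails to force the whole graph, with a small path power as an illustrative instance.

```python
import networkx as nx

def forcing_closure(G, S, skew=False):
    """Repeatedly apply the (skew) zero forcing rule starting from S.

    Standard rule: a vertex v in S with exactly one neighbour outside S forces it.
    Skew rule: any vertex v (in S or not) with exactly one neighbour outside S forces it.
    """
    filled = set(S)
    changed = True
    while changed:
        changed = False
        forcers = G.nodes if skew else list(filled)
        for v in forcers:
            outside = [u for u in G[v] if u not in filled]
            if len(outside) == 1:
                filled.add(outside[0])
                changed = True
    return filled

def fails_to_force(G, S, skew=False):
    """A set is a 'failed' (skew) zero forcing set if its closure is not all of V."""
    return forcing_closure(G, S, skew=skew) != set(G.nodes)

# Path power P_6^2: vertices 0..5, edges between vertices at distance <= 2.
G = nx.power(nx.path_graph(6), 2)
print(fails_to_force(G, {0, 3}, skew=True))   # does {0, 3} fail to skew-force?
```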
15 pages, 560 KB  
Article
Entropies and Degree-Based Topological Indices of Generalized Sierpiński Graphs
by Si-Ao Xu, Jia-Dong Si and Jia-Bao Liu
Fractal Fract. 2025, 9(3), 190; https://doi.org/10.3390/fractalfract9030190 - 19 Mar 2025
Viewed by 1013
Abstract
Fractals are geometric patterns that appear self-similar across all length scales and are constructed by repeating a single unit on a regular basis. Entropy, as a core thermodynamic function, is an extension based on information theory (such as Shannon entropy) that is used to describe the topological structural complexity or degree of disorder in networks. Topological indices, as graph invariants, provide quantitative descriptors for characterizing global structural properties. In this paper, we investigate two types of generalized Sierpiński graphs constructed on the basis of different seed graphs, and employ six topological indices—the first Zagreb index, the second Zagreb index, the forgotten index, the augmented Zagreb index, the Sombor index, and the elliptic Sombor index—to analyze the corresponding entropy. We utilize the method of edge partition based on vertex degrees and derive analytical formulations for the first Zagreb entropy, the second Zagreb entropy, the forgotten entropy, the augmented Zagreb entropy, the Sombor entropy, and the elliptic Sombor entropy. This research approach, which integrates entropy with Sierpiński network characteristics, furnishes novel perspectives and instrumental tools for addressing challenges in chemical graph theory, computer networks, and other related fields. Full article
(This article belongs to the Special Issue Fractal Functions: Theoretical Research and Application Analysis)
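The general recipe named above — a degree-based edge partition, a topological index, then a Shannon-type entropy — can be shown generically; the sketch below computes the first Zagreb index and the entropy commonly built from its edge weights for an arbitrary graph, and is not the paper's closed-form result for generalized Sierpiński graphs.

```python
import math
import networkx as nx

def first_zagreb_entropy(G):
    """Edge-partition version of the first Zagreb index, M1 = sum over edges of
    (deg(u) + deg(v)), and the Shannon-type entropy commonly built from it:
    ENT = log(M1) - (1/M1) * sum_e w_e * log(w_e), with w_e = deg(u) + deg(v)."""
    weights = [G.degree(u) + G.degree(v) for u, v in G.edges()]
    m1 = sum(weights)
    entropy = math.log(m1) - sum(w * math.log(w) for w in weights) / m1
    return m1, entropy

# Illustrative seed graph; the paper applies this machinery to generalized
# Sierpinski graphs built from such seeds.
print(first_zagreb_entropy(nx.cycle_graph(5)))   # M1 = 20 for C5
```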
19 pages, 5807 KB  
Article
BurgsVO: Burgs-Associated Vertex Offset Encoding Scheme for Detecting Rotated Ships in SAR Images
by Mingjin Zhang, Yaofei Li, Jie Guo, Yunsong Li and Xinbo Gao
Remote Sens. 2025, 17(3), 388; https://doi.org/10.3390/rs17030388 - 23 Jan 2025
Cited by 7 | Viewed by 1126
Abstract
Synthetic Aperture Radar (SAR) is a crucial remote sensing technology with significant advantages. Ship detection in SAR imagery has garnered significant attention. However, existing ship detection methods often overlook feature extraction, and the unique imaging mechanisms of SAR images hinder the direct application of conventional natural image feature extraction techniques. Moreover, oriented bounding box-based detection methods often prioritize accuracy excessively, leading to increased parameters and computational costs, which in turn elevate computational load and model complexity. To address these issues, we propose a novel two-stage detector, Burgs-rooted vertex offset encoding scheme (BurgsVO), for detecting rotated ships in SAR images. BurgsVO consists of two key modules: the Burgs equation heuristics module, which facilitates feature extraction, and the average diagonal vertex offset (ADVO) encoding scheme, which significantly reduces computational costs. Specifically, the Burgs equation module integrates temporal information with spatial data for effective feature aggregation, establishing a strong foundation for subsequent object detection. The ADVO encoding scheme reduces parameters through anchor transformation, leveraging geometric similarities between quadrilaterals and triangles to further reduce computational costs. Experimental results on the RSSDD and RSDD benchmarks demonstrate that the proposed BurgsVO outperforms the state-of-the-art detectors in both accuracy and efficiency. Full article
16 pages, 8853 KB  
Article
The Practical Implications of Re-Referencing in ERP Studies: The Case of N400 in the Picture–Word Verification Task
by Vojislav Jovanović, Igor Petrušić, Vanja Ković and Andrej M. Savić
Diagnostics 2025, 15(2), 156; https://doi.org/10.3390/diagnostics15020156 - 11 Jan 2025
Viewed by 2891
Abstract
Background: The selection of an optimal referencing method in event-related potential (ERP) research has been a long-standing debate, as it can significantly influence results and lead to data misinterpretation. Such misinterpretation can produce flawed scientific conclusions, like the inaccurate localization of neural processes, and in practical applications, such as using ERPs as biomarkers in medicine, it may result in incorrect diagnoses or ineffective treatments. In line with the development and advancement of good scientific practice (GSP) in ERP research, this study sought to address several questions regarding the most suitable digital reference for investigating the N400 ERP component. Methods: The study was conducted on 17 neurotypical participants. Based on previous research, the references evaluated included the common average reference (AVE), mean earlobe reference (EARS), left mastoid reference (L), mean mastoids reference (MM), neutral infinity reference (REST), and vertex reference (VERT). Results: The results showed that all digital references, except for VERT, successfully elicited the centroparietal N400 effect in the picture–word verification task. The AVE referencing method showed the most optimal set of metrics in terms of effect size and localization, although it also produced the smallest difference waves. The most similar topographic dynamics in the N400 window were observed between the AVE and REST referencing methods. Conclusions: As the most optimal regions of interest (ROI) for the picture–word elicited N400 effect, nine electrode sites spanning from superior frontocentral to parietal regions were identified, showing consistent effects across all referencing methods except VERT. Full article
(This article belongs to the Special Issue EEG Analysis in Diagnostics)
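Offline re-referencing is a simple linear operation; the sketch below applies the common average reference (AVE) and a mean-mastoids reference (MM) to a channels × samples array, with made-up channel names and random data standing in for the study's recordings (REST requires a head model and is not shown).

```python
import numpy as np

def rereference(data, channels, method="AVE", mastoids=("M1", "M2")):
    """Re-reference EEG data (channels x samples) offline.

    AVE subtracts the mean across all channels at each sample;
    MM subtracts the mean of the two mastoid channels.
    """
    if method == "AVE":
        ref = data.mean(axis=0, keepdims=True)
    elif method == "MM":
        idx = [channels.index(ch) for ch in mastoids]
        ref = data[idx].mean(axis=0, keepdims=True)
    else:
        raise ValueError("only AVE and MM are sketched here")
    return data - ref

# Hypothetical 6-channel recording, 1000 samples.
channels = ["Fz", "Cz", "Pz", "Oz", "M1", "M2"]
data = np.random.randn(len(channels), 1000)
ave_data = rereference(data, channels, method="AVE")
mm_data = rereference(data, channels, method="MM")
```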