Search Results (61)

Search Parameters:
Keywords = scale-free graph

22 pages, 670 KiB  
Article
LDC-GAT: A Lyapunov-Stable Graph Attention Network with Dynamic Filtering and Constraint-Aware Optimization
by Liping Chen, Hongji Zhu and Shuguang Han
Axioms 2025, 14(7), 504; https://doi.org/10.3390/axioms14070504 - 27 Jun 2025
Viewed by 229
Abstract
Graph attention networks are pivotal for modeling non-Euclidean data, yet they face dual challenges: training oscillations induced by projection-based high-dimensional constraints and gradient anomalies due to poor adaptation to heterophilic structure. To address these issues, we propose LDC-GAT (Lyapunov-Stable Graph Attention Network with Dynamic Filtering and Constraint-Aware Optimization), which jointly optimizes both forward and backward propagation processes. In the forward path, we introduce Dynamic Residual Graph Filtering, which integrates a tunable self-loop coefficient to balance neighborhood aggregation and self-feature retention. This filtering mechanism, constrained by a lower bound on Dirichlet energy, improves multi-head attention via multi-scale fusion and mitigates overfitting. In the backward path, we design the Fro-FWNAdam, a gradient descent algorithm guided by a learning-rate-aware perceptron. An explicit Frobenius norm bound on weights is derived from Lyapunov theory to form the basis of the perceptron. This stability-aware optimizer is embedded within a Frank–Wolfe framework with Nesterov acceleration, yielding a projection-free constrained optimization strategy that stabilizes training dynamics. Experiments on six benchmark datasets show that LDC-GAT outperforms GAT by 10.54% in classification accuracy, which demonstrates strong robustness on heterophilic graphs. Full article
(This article belongs to the Section Mathematical Analysis)
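To make the projection-free idea concrete, the sketch below runs a plain Frank-Wolfe iteration over a Frobenius-norm ball; it is not the authors' Fro-FWNAdam (no Nesterov acceleration, no learning-rate-aware perceptron), and the radius, step rule, and toy loss are assumptions for illustration only.

```python
import numpy as np

def frank_wolfe_frobenius(grad_fn, W0, radius=10.0, steps=100):
    """Projection-free optimization of W subject to ||W||_F <= radius.

    grad_fn(W) returns the loss gradient at W. The linear minimization
    oracle over a Frobenius-norm ball is -radius * grad / ||grad||_F.
    """
    W = W0.copy()
    for t in range(steps):
        g = grad_fn(W)
        norm = np.linalg.norm(g)
        if norm < 1e-12:           # already stationary
            break
        s = -radius * g / norm     # extreme point of the constraint set (LMO)
        gamma = 2.0 / (t + 2.0)    # standard Frank-Wolfe step size
        W = (1.0 - gamma) * W + gamma * s   # convex combination stays in the ball
    return W

# toy usage: minimize ||W - T||_F^2 for a random target T
rng = np.random.default_rng(0)
T = rng.normal(size=(4, 4)) * 5.0
W = frank_wolfe_frobenius(lambda W: 2.0 * (W - T), np.zeros((4, 4)))
print(np.linalg.norm(W))   # stays within the Frobenius-norm budget
```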

22 pages, 7220 KiB  
Article
Identifying Polycentric Urban Structure Using the Minimum Cycle Basis of Road Network as Building Blocks
by Yuanbiao Li, Tingyu Wang, Yu Zhao and Bo Yang
Entropy 2025, 27(6), 618; https://doi.org/10.3390/e27060618 - 11 Jun 2025
Viewed by 366
Abstract
A graph’s minimum cycle basis is the smallest collection of cycles that are linearly independent in the cycle space; these cycles serve as fundamental building blocks for constructing any cyclic structure within the graph. Such bases are useful in many contexts, including the analysis of electrical networks, structural engineering, chemical processes, and surface reconstruction. This study investigates the urban road networks of six Chinese cities to analyze their topological features, node centrality, and robustness (resilience to traffic disruptions) using motif analysis and minimum cycle basis methodologies. Several notable findings are obtained: the frequency of motifs containing cycles exceeds that of random networks with equivalent degree sequences, and the frequency distributions of minimum cycle lengths and surface areas follow power laws. The cycle contribution rate is introduced to measure the centrality of nodes within road networks and has a significant impact on the total number of cycles in the robustness analysis. Finally, we construct two types of cycle-based dual networks for urban road networks by representing cycles as nodes and establishing edges between two cycles sharing a common node or a common edge, respectively. The results show that these cycle-based dual networks exhibit small-world and scale-free properties. The research supports a comprehensive understanding of the cycle structure of urban road networks, thereby providing a theoretical foundation for subsequent transportation network modeling and for optimizing existing road infrastructure. Full article
(This article belongs to the Section Complexity)
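For readers who want to reproduce the basic machinery, the sketch below computes a minimum cycle basis with networkx on a toy grid "road network" (an assumption standing in for real city data) and builds the node-sharing variant of the cycle-based dual network described in the abstract.

```python
import networkx as nx
from itertools import combinations

# toy planar road network; real studies would load actual city road data
G = nx.grid_2d_graph(8, 8)

cycles = nx.minimum_cycle_basis(G)      # list of node lists, one per basis cycle
lengths = [len(c) for c in cycles]

# cycle-based dual network: one node per basis cycle,
# an edge whenever two cycles share at least one road-network node
D = nx.Graph()
D.add_nodes_from(range(len(cycles)))
cycle_sets = [set(c) for c in cycles]
for i, j in combinations(range(len(cycles)), 2):
    if cycle_sets[i] & cycle_sets[j]:
        D.add_edge(i, j)

print("basis cycles:", len(cycles), "mean length:", sum(lengths) / len(lengths))
print("dual network clustering:", nx.average_clustering(D))
```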

17 pages, 3059 KiB  
Article
Helix Folding in One Dimension: Effects of Proline Co-Solvent on Free Energy Landscape of Hydrogen Bond Dynamics in Alanine Peptides
by Krzysztof Kuczera
Life 2025, 15(5), 809; https://doi.org/10.3390/life15050809 - 19 May 2025
Viewed by 511
Abstract
The effects of proline co-solvent on helix folding are explored through a single discrete coordinate: the number of helical hydrogen bonds. The analysis is based on multi-microsecond molecular dynamics simulations of alanine-based helix-forming peptides, (ALA)n, of length n = 4, 8, 15 and 21 residues, in an aqueous solution with a 2 M concentration of proline. The effects of proline addition on the free energy landscape for helix folding were analyzed using the graph-based Dijkstra algorithm, Optimal Dimensionality Reduction kinetic coarse graining, committor functions, and the diffusion of the helix boundary. Viewed at a sufficiently long time scale, helix folding in the coarse-grained hydrogen bond space follows a consecutive mechanism, with well-defined initiation and propagation phases and an interesting set of intermediates. Proline addition slows down the folding relaxation of all four peptides, increases helix content, and induces subtle mechanistic changes compared to pure water solvation. A general trend is a shift of the transition state towards earlier stages of folding in proline relative to water. For ALA5 and ALA8, direct folding is dominant; in ALA8 and ALA15, multiple pathways appear possible; and for ALA21, a simple mechanism emerges, with a single path from helix to coil through a set of intermediates. Overall, this work provides new insights into the effects of proline co-solvent on helix folding, complementary to more standard approaches based on three-dimensional molecular structures. Full article
(This article belongs to the Special Issue Applications of Molecular Dynamics to Biological Systems)
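The one-dimensional free-energy picture used here can be illustrated with a short sketch: given a time series of helical hydrogen-bond counts (synthetic below, not the paper's trajectories), the landscape is F(n) = -kT ln P(n).

```python
import numpy as np

kT = 0.593  # kcal/mol at ~298 K

# synthetic stand-in for a simulation time series of helical hydrogen-bond counts
rng = np.random.default_rng(0)
states = np.arange(12)
weights = np.exp(-0.3 * np.abs(states - 8))
n_hb = rng.choice(states, size=100_000, p=weights / weights.sum())

# free energy of each discrete state: F(n) = -kT ln P(n), shifted so min(F) = 0
counts = np.bincount(n_hb, minlength=12)
prob = counts / counts.sum()
F = -kT * np.log(np.where(prob > 0, prob, np.nan))
F -= np.nanmin(F)

for n, f in enumerate(F):
    print(f"{n:2d} helical H-bonds: F = {f:5.2f} kcal/mol")
```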

23 pages, 2146 KiB  
Article
Large-Scale Hyperspectral Image-Projected Clustering via Doubly Stochastic Graph Learning
by Nian Wang, Zhigao Cui, Yunwei Lan, Cong Zhang, Yuanliang Xue, Yanzhao Su and Aihua Li
Remote Sens. 2025, 17(9), 1526; https://doi.org/10.3390/rs17091526 - 25 Apr 2025
Cited by 1 | Viewed by 410
Abstract
Hyperspectral image (HSI) clustering has drawn increasing attention in recent years because it removes the need for labor-intensive manual annotation. However, current methods cannot fully exploit the rich spatial and spectral information due to redundant spectral signatures and fixed anchor learning. Moreover, the learned graph is often suboptimal because affinity estimation and graph symmetrization are performed separately. To address these challenges, we propose large-scale hyperspectral image-projected clustering via doubly stochastic graph learning (HPCDL). HPCDL is a unified framework that learns a projected space to capture useful spectral information while simultaneously learning a pixel–anchor graph and an anchor–anchor graph. Doubly stochastic constraints are imposed to learn an anchor–anchor graph with strict probabilistic affinity, directly providing anchor cluster indicators via connectivity. Pixel-level clustering results are then obtained through label propagation. An efficient optimization strategy is proposed to solve the HPCDL model, requiring only linear complexity in the number of pixels, so HPCDL can handle large-scale HSI datasets. Experiments on three datasets demonstrate the superiority of HPCDL in both clustering performance and running time. Full article
(This article belongs to the Special Issue Remote Sensing Image Classification: Theory and Application)
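The doubly stochastic constraint can be illustrated independently of the full HPCDL optimization: the sketch below applies Sinkhorn-Knopp alternating normalization to a Gaussian affinity matrix over hypothetical anchor features, which is only a stand-in for the paper's constrained learning.

```python
import numpy as np

def sinkhorn_doubly_stochastic(A, iters=200, eps=1e-9):
    """Alternately normalize rows and columns of a nonnegative matrix
    until every row and column sums to one (Sinkhorn-Knopp)."""
    S = A.astype(float).copy()
    for _ in range(iters):
        S /= S.sum(axis=1, keepdims=True) + eps   # row normalization
        S /= S.sum(axis=0, keepdims=True) + eps   # column normalization
    return S

rng = np.random.default_rng(0)
anchors = rng.normal(size=(50, 8))                     # hypothetical anchor features
dist2 = ((anchors[:, None] - anchors[None]) ** 2).sum(-1)
A = np.exp(-dist2 / dist2.mean())                      # Gaussian affinity
S = sinkhorn_doubly_stochastic(A)
print(abs(S.sum(axis=0) - 1).max(), abs(S.sum(axis=1) - 1).max())  # both near zero
```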

16 pages, 716 KiB  
Article
Efficient Graph Representation Learning by Non-Local Information Exchange
by Ziquan Wei, Tingting Dan, Jiaqi Ding and Guorong Wu
Electronics 2025, 14(5), 1047; https://doi.org/10.3390/electronics14051047 - 6 Mar 2025
Viewed by 765
Abstract
Graphs are an effective data structure for characterizing the ubiquitous connections and evolving behaviors that emerge in intertwined systems. Limited by the stereotype of node-to-node connections, learning node representations is often confined to a graph diffusion process in which local information is excessively aggregated as the random walk of a graph neural network (GNN) explores far-reaching neighborhoods layer by layer. In this regard, tremendous efforts have been made to alleviate feature over-smoothing so that current backbones lend themselves to deep network architectures. However, compared to designing new GNNs, less attention has been paid to the underlying topology through graph re-wiring, which mitigates not only the flaws of the random walk but also the over-smoothing risk by reducing unnecessary diffusion in deep layers. Inspired by non-local mean techniques in image processing, we propose a non-local information exchange mechanism that establishes an express connection to a distant node instead of propagating information along the (possibly very long) original pathway node after node. Since seeking express connections throughout a graph can be computationally expensive in real-world applications, we propose a re-wiring framework (coined the express messenger wrapper) to progressively incorporate express links in a non-local manner, which allows us to capture multi-scale features without using a very deep model; our approach is thus free of the over-smoothing challenge. We integrate our express messenger wrapper with existing GNN backbones (using either graph convolution or a tokenized transformer) and achieve a new record on the Roman-empire dataset as well as SOTA performance on both homophilous and heterophilous datasets. Full article
(This article belongs to the Special Issue Artificial Intelligence in Graphics and Images)
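The sketch below is a toy re-wiring in the same spirit (not the authors' express messenger wrapper): it adds "express" edges between feature-similar nodes that are several hops apart, so one message-passing step can exchange non-local information; the graph, features, and thresholds are assumptions.

```python
import networkx as nx
import numpy as np

def add_express_edges(G, features, k=2, min_dist=4):
    """Connect each node to its k most feature-similar nodes that are at
    least `min_dist` hops away, so non-local information can be exchanged
    in a single propagation step."""
    H = G.copy()
    X = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    sim = X @ X.T
    dist = dict(nx.all_pairs_shortest_path_length(G))
    for u in G.nodes:
        added = 0
        for v in np.argsort(-sim[u]):
            v = int(v)
            if v == u:
                continue
            if dist[u].get(v, np.inf) >= min_dist:
                H.add_edge(u, v)
                added += 1
                if added == k:
                    break
    return H

G = nx.path_graph(30)                               # long chain: worst case for locality
X = np.random.default_rng(0).normal(size=(30, 16))  # stand-in node features
H = add_express_edges(G, X)
print(G.number_of_edges(), "->", H.number_of_edges(), "edges; diameter",
      nx.diameter(G), "->", nx.diameter(H))
```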

15 pages, 9408 KiB  
Article
Graph Isomorphic Network-Assisted Optimal Coordination of Wave Energy Converters Based on Maximum Power Generation
by Ashkan Safari, Afshin Rahimi and Hoda Sorouri
Electronics 2025, 14(4), 795; https://doi.org/10.3390/electronics14040795 - 18 Feb 2025
Viewed by 558
Abstract
Oceans are a major source of clean energy, as the vast and consistent power of waves can be harnessed to generate electricity; today, they are seen as a vital renewable and clean solution for the transition to a fossil-fuel-free future. To make the most of ocean wave potential, Wave Energy Converters (WECs) are used to convert the power of ocean waves into usable electrical energy. To maximize the power generated by WECs, two strategies can be considered: improving WEC design and optimizing their coordination; of these, optimal coordination is the more straightforward to implement. However, most recently developed coordination strategies are dynamics-based and encounter challenges as the system scale grows. Consequently, a novel Graph Isomorphic Network (GIN)-based model is presented in this paper. The proposed model consists of five layers: the input graph, two GIN convolutional layers (GIN Conv. 1 and 2), a mean pooling layer, and the output layer. The target, total generated power, is predicted from the power generated by each WEC and its spatial coordinates {x_i, y_i}. Based on the total power anticipated by the model, the system enables maximum generation: the model performs spatial coordination analyses to determine the optimal coordination of each WEC so as to maximize total generated power. The proposed model is evaluated through several Key Performance Indicators (KPIs) and achieves the lowest errors in both prediction and optimal coordination. Full article
(This article belongs to the Special Issue Advances in Renewable Energy and Electricity Generation)
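Assuming PyTorch Geometric, the five-layer structure described (input graph, two GIN convolutions, mean pooling, output layer) can be sketched as below; the layer sizes, the fully connected farm graph, and the random features are placeholders, not the authors' configuration.

```python
import torch
from torch import nn
from torch_geometric.nn import GINConv, global_mean_pool
from torch_geometric.data import Data

def mlp(i, o):
    return nn.Sequential(nn.Linear(i, o), nn.ReLU(), nn.Linear(o, o))

class WECFarmGIN(nn.Module):
    """Input graph -> GINConv x2 -> mean pooling -> total-power output."""
    def __init__(self, in_dim=3, hidden=64):
        super().__init__()
        self.conv1 = GINConv(mlp(in_dim, hidden))
        self.conv2 = GINConv(mlp(hidden, hidden))
        self.out = nn.Linear(hidden, 1)          # predicted total farm power

    def forward(self, data):
        x = self.conv1(data.x, data.edge_index).relu()
        x = self.conv2(x, data.edge_index).relu()
        x = global_mean_pool(x, data.batch)      # one embedding per farm graph
        return self.out(x).squeeze(-1)

# toy farm: 9 WECs, node features = [power_i, x_i, y_i], fully connected graph
n = 9
x = torch.rand(n, 3)
edge_index = torch.tensor([[i, j] for i in range(n) for j in range(n) if i != j]).t()
data = Data(x=x, edge_index=edge_index, batch=torch.zeros(n, dtype=torch.long))
print(WECFarmGIN()(data))   # predicted total generated power (untrained)
```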

24 pages, 12556 KiB  
Article
Evolutionary Game Strategy Research on PSC Inspection Based on Knowledge Graphs
by Chengyong Liu, Qi Wang, Banghao Xiang, Yi Xu and Langxiong Gan
J. Mar. Sci. Eng. 2024, 12(8), 1449; https://doi.org/10.3390/jmse12081449 - 21 Aug 2024
Cited by 3 | Viewed by 1281
Abstract
Port state control (PSC) inspections, considered a crucial means of maritime safety supervision, are viewed by the industry as a critical line of defense ensuring the stability of the international supply chain. Due to the high level of globalization and strong regional characteristics of PSC inspections, improving the accuracy of these inspections and efficiently utilizing inspection resources have become urgent issues. The construction of a PSC inspection ontology model from top to bottom, coupled with the integration of multisource data from bottom to top, is proposed in this paper. The RoBERTa-wwm-ext model is adopted as the entity recognition model, while the XGBoost4 model serves as the knowledge fusion model to establish the PSC inspection knowledge graph. Building upon an evolutionary game model of the PSC inspection knowledge graph, this study introduces an evolutionary game method to analyze the internal evolutionary dynamics of ship populations from a microscopic perspective. Through numerical simulations and standardization diffusion evolution simulations for ship support, the evolutionary impact of each parameter on the subgraph is examined. Subsequently, based on the results of the evolutionary game analysis, recommendations for PSC inspection auxiliary decision-making and related strategic suggestions are presented. The experimental results show that the RoBERTa-wwm-ext model and the XGBoost4 model used in the PSC inspection knowledge graph achieve superior performance in both entity recognition and knowledge fusion tasks, with the model accuracies surpassing those of other compared models. In the knowledge graph-based PSC inspection evolutionary game, the reward and punishment conditions (n, f) can reduce the burden of the standardization cost for safeguarding the ship. A ship is more sensitive to changes in the detention rate β than to changes in the inspection rate α. To a certain extent, the detention cost C_DC plays a role similar to that of the detention rate β. In small-scale networks, relevant parameters in the ship’s standardization game have a more pronounced effect, with the detention cost C_DC having a greater impact than the standardization cost C_S on ship strategy choice and scale-free network evolution. Based on the experimental results, PSC inspection strategies are suggested. These strategies provide port state authorities with auxiliary decision-making tools for PSC inspections, promote the informatization of maritime regulation, and offer new insights for the study of maritime traffic safety management and PSC inspections. Full article
(This article belongs to the Section Ocean Engineering)
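The sensitivity to the detention rate β can be illustrated with a textbook replicator-dynamics sketch (not the paper's knowledge-graph-driven game); the payoff structure relating α, β, C_S, C_DC, and the fine is an illustrative assumption.

```python
def replicator_share(alpha, beta, C_S, C_DC, fine, steps=2000, dt=0.01, x0=0.2):
    """Share x of ships choosing the 'standardize' strategy over time.

    Illustrative payoffs: standardizing costs C_S; not standardizing risks
    being inspected (rate alpha) and detained (rate beta), which incurs the
    detention cost C_DC plus a fine.
    """
    x = x0
    for _ in range(steps):
        payoff_std = -C_S
        payoff_non = -alpha * beta * (C_DC + fine)
        x += dt * x * (1 - x) * (payoff_std - payoff_non)   # replicator equation
    return x

for beta in (0.2, 0.5, 0.8):
    share = replicator_share(alpha=0.6, beta=beta, C_S=1.0, C_DC=3.0, fine=2.0)
    print(f"detention rate {beta}: standardized share -> {share:.3f}")
```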

25 pages, 626 KiB  
Article
A Novel Design for Joint Collaborative NOMA Transmission with a Two–Hop Multi–Path UE Aggregation Mechanism
by Xinqi Zhao, Hua-Min Chen, Shaofu Lin, Hui Li and Tao Chen
Symmetry 2024, 16(8), 1052; https://doi.org/10.3390/sym16081052 - 15 Aug 2024
Viewed by 1271
Abstract
With the exponential growth of devices, particularly Internet of things (IoT) devices, connecting to wireless networks, existing networks face significant challenges. Spectral efficiency is crucial for uplink, which is the dominant form of asymmetrical network in today’s communication landscape, in large-scale connectivity scenarios. In this paper, an uplink transmission scenario is considered and user equipment (UE) aggregation is employed, wherein some users act as cooperative nodes (CNs), and help to forward received data from other users requiring coverage extension, reliability improvement, and data–rate enhancement. Non–orthogonal multiple access (NOMA) technology is introduced to improve spectral efficiency. To reduce the interference impact to guarantee the data rate, one UE can be assisted by multiple CNs, and these CNs and corresponding assisted UEs are clustered into joint transmission pairs (JTPs). Interference-free transmission can be achieved within each JTP by utilizing different successive interference cancellation (SIC) decoding orders. To explore SIC gains and maximize data rates in NOMA–based UE aggregation, we propose a primary user CN–based channel–sorting algorithm for JTP construction and apply a whale optimization algorithm for JTP power allocation. Additionally, a conflict graph is established among feasible JTPs, and a greedy strategy is employed to find the maximum weighted independent set (MWIS) of the conflict graph for subchannel allocation. Simulation results demonstrate that our joint collaborative NOMA (JC–NOMA) design with two–hop multi–path UE aggregation significantly improves spectral efficiency and capacity under limited spectral resources. Full article
(This article belongs to the Section Computer)
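The greedy maximum weighted independent set step for subchannel allocation can be sketched as below; the conflict graph and JTP weights are synthetic, and the weight-first greedy rule is one simple variant, not necessarily the paper's exact strategy.

```python
import networkx as nx

def greedy_mwis(conflict_graph, weight="weight"):
    """Greedy maximum weighted independent set: repeatedly pick the
    remaining JTP with the largest weight and discard its conflicting
    neighbours."""
    G = conflict_graph.copy()
    chosen = []
    while G.number_of_nodes() > 0:
        best = max(G.nodes, key=lambda v: G.nodes[v][weight])
        chosen.append(best)
        G.remove_nodes_from(list(G.neighbors(best)) + [best])
    return chosen

# synthetic conflict graph: nodes are candidate JTPs, weights are their data rates
G = nx.Graph()
rates = {0: 3.2, 1: 2.7, 2: 4.1, 3: 1.9, 4: 3.8}
G.add_nodes_from((jtp, {"weight": r}) for jtp, r in rates.items())
G.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)])  # JTPs sharing a UE/CN
print("scheduled JTPs:", greedy_mwis(G))
```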

13 pages, 1865 KiB  
Article
Design and Development of a Teaching–Learning Sequence about Deterministic Chaos Using Tracker Software
by Alessio Parlati, Giovanni Giuliana and Italo Testa
Educ. Sci. 2024, 14(8), 842; https://doi.org/10.3390/educsci14080842 - 5 Aug 2024
Viewed by 1357
Abstract
In this paper, we present the design, development, and pilot implementation of a Teaching–Learning Sequence (TLS) about the physics of deterministic chaos. The main aim of the activities is to let students become aware of two key ideas about deterministic chaos: (1) the role of initial conditions and (2) the graphical representation in a momentum–position graph. To do so, the TLS is based on the observation and analysis of the trajectory of the free end of a double pendulum through the modeling software Tracker. In particular, the Tracker-based activities help students understand that, by modifying the well-known simple pendulum dynamic system into a double pendulum, long-time-scale predictability is lost, and a completely new behavior appears. The TLS was pilot tested in a remote teaching setting with about 70 Italian high school students (16–17 years old). The pretest analysis shows that before participating in the activities, students held typical misconceptions about chaotic behavior. Analysis of the written responses collected during and after implementation shows that the proposed activities allowed students to grasp the two key ideas about deterministic chaos. A possible integration of the TLS with an online simulation is finally discussed. Full article

19 pages, 3671 KiB  
Article
A Self-Adaptive Centrality Measure for Asset Correlation Networks
by Paolo Bartesaghi, Gian Paolo Clemente and Rosanna Grassi
Economies 2024, 12(7), 164; https://doi.org/10.3390/economies12070164 - 27 Jun 2024
Viewed by 2208
Abstract
We propose a new centrality measure based on a self-adaptive epidemic model characterized by an endogenous reinforcement mechanism in the transmission of information between nodes. We provide a strategy to assign to each node a centrality score that depends, in an eigenvector centrality scheme, on the scores of all the elements of the network, nodes and edges, connected to it. We parameterize this score as a function of a reinforcement factor, which, for the first time, captures the intensity of the interaction between the network of nodes and that of the edges. In this proposal, a local centrality measure representing the steady state of a diffusion process incorporates the global information encoded in the whole network. This measure proves effective in identifying the most influential nodes in the propagation of rumors/shocks/behaviors in a social network. In the context of financial networks, it allows us to highlight strategic assets on correlation networks. The dependence on a coupling factor between the graph and its line graph also reveals how assets respond differently in terms of ranking, especially on scale-free networks obtained as minimum spanning trees from correlation networks. Full article
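A toy version of the node–edge coupling idea can be sketched as a power iteration in which node scores and line-graph (edge) scores reinforce each other through a coupling factor; this is an illustration, not the authors' self-adaptive epidemic measure.

```python
import networkx as nx
import numpy as np

def coupled_centrality(G, coupling=0.5, iters=200):
    """Power iteration where a node gains from its incident edges' scores
    and an edge's score is the sum of its endpoints' scores, mixed by a
    coupling factor between the graph and its line graph."""
    nodes, edges = list(G.nodes), list(G.edges)
    A = nx.to_numpy_array(G, nodelist=nodes)
    B = np.zeros((len(nodes), len(edges)))          # node-edge incidence
    for j, (u, v) in enumerate(edges):
        B[nodes.index(u), j] = B[nodes.index(v), j] = 1.0
    x = np.ones(len(nodes))
    for _ in range(iters):
        y = B.T @ x                                 # edge (line-graph) scores
        x = (1 - coupling) * (A @ x) + coupling * (B @ y)
        x /= np.linalg.norm(x)
    return dict(zip(nodes, x))

G = nx.barabasi_albert_graph(100, 2, seed=7)        # scale-free test network
scores = coupled_centrality(G)
print(sorted(scores, key=scores.get, reverse=True)[:5])   # most central nodes
```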

21 pages, 13893 KiB  
Article
A Color- and Geometric-Feature-Based Approach for Denoising Three-Dimensional Cultural Relic Point Clouds
by Hongjuan Gao, Hui Wang and Shijie Zhao
Entropy 2024, 26(4), 319; https://doi.org/10.3390/e26040319 - 5 Apr 2024
Viewed by 1663
Abstract
In the acquisition process of 3D cultural relics, it is common to encounter noise. To facilitate the generation of high-quality 3D models, we propose an approach based on graph signal processing that combines color and geometric features to denoise the point cloud. We divide the 3D point cloud into patches based on self-similarity theory and create an appropriate underlying graph with a Markov property. The features of the vertices in the graph are represented using 3D coordinates, normal vectors, and color. We formulate point cloud denoising as a maximum a posteriori (MAP) estimation problem and use a graph Laplacian regularization (GLR) prior to identify the most probable noise-free point cloud. In the denoising process, we moderately simplify the 3D point cloud to reduce the running time of the denoising algorithm. The experimental results demonstrate that our proposed approach outperforms five competing methods in both subjective and objective assessments. It requires fewer iterations and exhibits strong robustness, effectively removing noise from the surface of cultural relic point clouds while preserving fine-scale 3D features such as texture and ornamentation. This results in more realistic 3D representations of cultural relics. Full article
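The GLR-based MAP step has a convenient closed form: minimizing ||x - y||^2 + gamma * x^T L x gives (I + gamma L) x* = y. The sketch below applies it per coordinate to a synthetic noisy curve over a k-NN graph; it omits the paper's patch construction, color features, and simplification.

```python
import numpy as np
from scipy.sparse import identity, csgraph
from scipy.sparse.linalg import spsolve
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 400)
clean = np.column_stack([t, np.sin(t), np.zeros_like(t)])   # stand-in 3D surface curve
noisy = clean + 0.05 * rng.normal(size=clean.shape)

# k-NN graph over the noisy points and its combinatorial Laplacian
W = kneighbors_graph(noisy, n_neighbors=8, mode="connectivity", include_self=False)
W = 0.5 * (W + W.T)                                         # symmetrize
L = csgraph.laplacian(W)

# MAP estimate with a GLR prior and Gaussian noise:
#   x* = argmin ||x - y||^2 + gamma * x^T L x  =>  (I + gamma * L) x* = y
gamma = 1.0
A = (identity(noisy.shape[0]) + gamma * L).tocsc()
denoised = np.column_stack([spsolve(A, noisy[:, d]) for d in range(3)])
print("mean error before:", np.abs(noisy - clean).mean(),
      "after:", np.abs(denoised - clean).mean())
```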

19 pages, 1408 KiB  
Article
Airport Surface Arrival and Departure Scheduling Using Extended First-Come, First-Served Scheduler
by Bae-Seon Park and Hak-Tae Lee
Aerospace 2024, 11(1), 24; https://doi.org/10.3390/aerospace11010024 - 26 Dec 2023
Cited by 3 | Viewed by 2317
Abstract
This paper demonstrates the effectiveness of the Extended First-Come, First-Served (EFCFS) scheduler for integrated arrival and departure scheduling by comparing the scheduling results with the recorded operational data at Incheon International Airport (ICN), Republic of Korea. The EFCFS scheduler can handle multiple capacity- or flow-rate-related constraints along the path of each flight, which is represented by a node–link graph structure, and can solve large-scale problems with low computational cost. However, few studies have attempted a systematic verification of the EFCFS scheduler by comparing the scheduling results with historical operational data. In this paper, flights are scheduled between gates and runways on the airport surface with detailed constraints such as runway wake turbulence separation minima and conflict-free taxiing. The scheduler is tested using historical flight data from 15 August 2022 at ICN. The input schedule is generated based on the flight plan data extracted from the Flight Operation Information System (FOIS) and airport surface detection equipment data, and the results are compared with the times extracted from the FOIS data. The scheduling results for 500 aircraft show that the average takeoff delay is reduced by about 19 min, while the average landing delay is increased by less than one minute when the gate occupancy constraint is not considered. The results also confirm that the EFCFS effectively utilizes the available time slots to reduce delays by switching the original departure or arrival orders for a small number of flights. Full article
(This article belongs to the Section Air Traffic and Transportation)
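The core first-come, first-served idea can be sketched as below: flights are served in order of readiness and each slot is pushed back to honor the wake-turbulence separation from the preceding flight. The separation values and flights are illustrative, and the real EFCFS scheduler handles many more constraints (node–link taxi paths, conflict-free taxiing, gate occupancy).

```python
# first-come, first-served runway slot assignment with wake-turbulence
# separation; times in seconds, separation values are illustrative only
SEPARATION = {                       # (leader class, follower class) -> seconds
    ("H", "H"): 90, ("H", "M"): 120, ("H", "L"): 180,
    ("M", "H"): 60, ("M", "M"): 90,  ("M", "L"): 120,
    ("L", "H"): 60, ("L", "M"): 60,  ("L", "L"): 90,
}

def fcfs_schedule(flights):
    """flights: list of (flight_id, ready_time, wake_class), served FCFS by
    ready_time. Returns runway slots honoring pairwise separation minima."""
    schedule = []
    prev_time, prev_class = None, None
    for fid, ready, wclass in sorted(flights, key=lambda f: f[1]):
        t = ready
        if prev_time is not None:
            t = max(t, prev_time + SEPARATION[(prev_class, wclass)])
        schedule.append((fid, t, t - ready))          # (flight, slot, delay)
        prev_time, prev_class = t, wclass
    return schedule

flights = [("KE001", 0, "H"), ("OZ202", 30, "M"), ("7C303", 45, "L"), ("KE404", 200, "H")]
for fid, slot, delay in fcfs_schedule(flights):
    print(f"{fid}: slot {slot:4d} s, delay {delay:3d} s")
```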

27 pages, 920 KiB  
Article
On Finding Optimal (Dynamic) Arborescences
by Joaquim Espada, Alexandre P. Francisco, Tatiana Rocher, Luís M. S. Russo and Cátia Vaz
Algorithms 2023, 16(12), 559; https://doi.org/10.3390/a16120559 - 6 Dec 2023
Viewed by 2356
Abstract
Let G = (V, E) be a directed and weighted graph with a vertex set V of size n and an edge set E of size m such that each edge (u, v) ∈ E has a real-valued weight w(u, v). An arborescence in G is a subgraph T = (V, E′), with E′ ⊆ E, such that, for a vertex u ∈ V, which is the root, there is a unique path in T from u to any other vertex v ∈ V. The weight of T is the sum of the weights of its edges. In this paper, given G, we are interested in finding an arborescence in G with minimum weight, i.e., an optimal arborescence. Furthermore, when G is subject to changes, namely edge insertions and deletions, we are interested in efficiently maintaining a dynamic arborescence in G. This is a well-known problem with applications in several domains such as network design optimization and phylogenetic inference. In this paper, we revisit the algorithmic ideas proposed by several authors for this problem. We provide detailed pseudocode, as well as implementation details, and we present experimental results regarding large scale-free networks and phylogenetic inference. Our implementation is publicly available. Full article
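For the static version of the problem, networkx already ships Chu-Liu/Edmonds, so an optimal arborescence can be reproduced in a few lines; the dynamic maintenance under edge insertions and deletions discussed in the paper is not covered by this sketch.

```python
import networkx as nx

# small weighted digraph; a minimum-weight spanning arborescence is
# computed with Edmonds' algorithm as shipped in networkx
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("r", "a", 2), ("r", "b", 7), ("a", "b", 1),
    ("b", "c", 3), ("a", "c", 5), ("c", "a", 4),
])

T = nx.minimum_spanning_arborescence(G)      # Chu-Liu/Edmonds
print(sorted(T.edges(data="weight")))
print("total weight:", T.size(weight="weight"))
```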

11 pages, 881 KiB  
Article
Maze Solving Mobile Robot Based on Image Processing and Graph Theory
by Luis A. Avila-Sánchez, Carlos Sánchez-López, Rocío Ochoa-Montiel, Fredy Montalvo-Galicia, Luis A. Sánchez-Gaspariano, Carlos Hernández-Mejía and Hugo G. González-Hernández
Technologies 2023, 11(6), 171; https://doi.org/10.3390/technologies11060171 - 5 Dec 2023
Cited by 3 | Viewed by 4455
Abstract
Advances in collision-free path planning algorithms are needed not only to solve mazes with robotic systems, but also for modern product transportation, green logistics systems, and planning merchandise deliveries inside or outside a factory. The challenge grows as the structural complexity of the task increases. This paper deals with the development of a novel methodology for solving mazes with a mobile robot, using image processing techniques and graph theory. The novelty is that the mobile robot can find the shortest path from a start point to an end point in irregular mazes with abundant irregular obstacles, a situation that is not far from reality. Maze information is acquired from an image and, depending on the size of the mobile robot, a grid of nodes with the same dimensions as the maze is built. Another contribution of this paper is that the size of the maze can be scaled from 1 m × 1 m to 66 m × 66 m while maintaining the essence of the proposed collision-free path planning methodology. Graph theory is then used to find the shortest path within the grid of nodes, reduced by eliminating the nodes absorbed by the irregular obstacles. To prevent the mobile robot from traveling through nodes very close to obstacles and borders, which would result in a collision, the image of each obstacle and border is dilated according to the size of the mobile robot. The methodology was validated with two case studies with a mobile robot in different mazes. We emphasize that the maze solution is found in a single computational step, from the maze image as input to the generation of the Path vector. Experimental results show the usefulness of the proposed methodology, which can be used in applications such as intelligent traffic control, the military, and agriculture. Full article
(This article belongs to the Topic Advances in Mobile Robotics Navigation, 2nd Volume)
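A compact version of the pipeline (binarize, dilate obstacles by the robot's radius, drop blocked grid nodes, run a shortest-path search) can be sketched as below; the toy occupancy image and the radius are assumptions in place of the paper's camera images and robot dimensions.

```python
import numpy as np
import networkx as nx
from scipy.ndimage import binary_dilation

# toy occupancy image: True = obstacle, False = free (a real pipeline would
# threshold a camera image of the maze here)
maze = np.zeros((40, 40), dtype=bool)
maze[10, 5:35] = True
maze[25, 5:40] = True

# inflate obstacles by the robot's radius so planned paths keep clearance
robot_radius_px = 2
inflated = binary_dilation(maze, iterations=robot_radius_px)

# grid graph over free cells only, then shortest path start -> goal
G = nx.grid_2d_graph(*inflated.shape)
G.remove_nodes_from([(int(r), int(c)) for r, c in np.argwhere(inflated)])
start, goal = (0, 0), (39, 39)
path = nx.shortest_path(G, start, goal)       # BFS on the unit-weight grid
print("path length in cells:", len(path))
```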

15 pages, 3004 KiB  
Article
An Extension of the Susceptible–Infected Model and Its Application to the Analysis of Information Dissemination in Social Networks
by Sergei Sidorov, Alexey Faizliev and Sophia Tikhonova
Modelling 2023, 4(4), 585-599; https://doi.org/10.3390/modelling4040033 - 15 Nov 2023
Cited by 2 | Viewed by 1263
Abstract
Social media significantly influences business, politics, and society. Easy access and interaction among users allow information to spread rapidly across social networks. Understanding how information is disseminated through these new publishing methods is crucial for political and marketing purposes. However, modeling and predicting information diffusion is challenging due to the complex interactions between network users. This study proposes an analytical approach based on diffusion models to predict the number of social media users engaging in discussions on a topic. We develop a modified version of the susceptible–infected (SI) model that considers the heterogeneity of interactions between users in complex networks. Our model considers the network structure, abandons the assumption of homogeneous mixing, and focuses on information diffusion in scale-free networks. We provide explicit algorithms for modeling information propagation on different types of random graphs and real network structures. We compare our model with alternative approaches, both those considering network structure and those that do not. The accuracy of our model in predicting the number of informed nodes in simulated information diffusion networks demonstrates its effectiveness in describing and predicting information dissemination in social networks. This study highlights the potential of graph-based epidemic models in analyzing online discussion topics and understanding other phenomena spreading on social networks. Full article
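As a baseline for comparison (not the authors' modified SI model with heterogeneous interactions), a plain discrete-time SI simulation on a scale-free Barabási–Albert graph looks like this; the transmission probability, seed count, and graph size are arbitrary choices.

```python
import random
import networkx as nx

def si_simulation(G, beta=0.05, n_seeds=3, steps=60, seed=0):
    """Discrete-time susceptible-infected spread: at each step every informed
    node passes the topic to each susceptible neighbour with probability beta."""
    rng = random.Random(seed)
    informed = set(rng.sample(list(G.nodes), n_seeds))
    curve = [len(informed)]
    for _ in range(steps):
        newly = {v for u in informed for v in G[u]
                 if v not in informed and rng.random() < beta}
        informed |= newly
        curve.append(len(informed))
    return curve

G = nx.barabasi_albert_graph(2000, 3, seed=1)   # scale-free network
curve = si_simulation(G)
print("informed nodes over time:", curve[::10])
```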
