Search Results (25)

Search Parameters:
Keywords = weighted graph decomposition

30 pages, 2242 KB  
Article
Distributed Integrated Scheduling Algorithm for Identical Two-Workshop Based on the Improved Bipartite Graph
by Yingxin Wei, Wei Zhou, Jinghua Zhao, Zhenjiang Tan and Zhiqiang Xie
Sensors 2025, 25(24), 7500; https://doi.org/10.3390/s25247500 - 10 Dec 2025
Viewed by 434
Abstract
To address the issue of further collaboratively optimizing process continuity, time cost, and equipment utilization in identical two-workshop distributed integrated scheduling, an identical two-workshop distributed integrated scheduling algorithm based on the improved bipartite graph (DISA-IBG) is proposed. The method introduces an improved bipartite graph cyclic decomposition strategy that incorporates both the topological characteristics of the process tree and the dynamic resource constraints of the workshops. Based on the resulting substrings, a multi-substring weight scheduling strategy is constructed to achieve a systematic evaluation of substring priorities. Finally, a substring pre-allocation strategy is designed to simulate the scheduling process through virtual allocation, which enables dynamic adjustments to resource allocation schemes during the actual scheduling process. Experimental results demonstrate that the algorithm reduces the total product makespan to 37 h while improving the overall equipment utilization to 67.8%, thereby achieving the synchronous optimization of “shorter processing time and higher equipment efficiency.” This research provides a feasible scheduling framework for intelligent sensor-enabled manufacturing environments and lays the foundation for data-driven collaborative optimization in cyber-physical production systems. Full article
(This article belongs to the Section Intelligent Sensors)
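
For readers unfamiliar with the building block, the sketch below shows a maximum-weight matching on a small bipartite graph of processes versus machines using networkx; the node names and weights are hypothetical, and the snippet illustrates only the matching primitive, not the DISA-IBG cyclic decomposition or substring strategies themselves.

```python
# Maximum-weight matching on a toy bipartite process/machine graph.
# Names and weights are illustrative, not taken from the paper.
import networkx as nx

G = nx.Graph()
# edge weight ~ preference of scheduling process p on machine m (hypothetical)
weights = {("p1", "m1"): 5, ("p1", "m2"): 2,
           ("p2", "m1"): 3, ("p2", "m2"): 4,
           ("p3", "m2"): 6}
for (p, m), w in weights.items():
    G.add_edge(p, m, weight=w)

matching = nx.max_weight_matching(G, maxcardinality=True)
print(sorted(tuple(sorted(e)) for e in matching))
```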

33 pages, 9222 KB  
Article
Mine Gas Time-Series Data Prediction and Fluctuation Monitoring Method Based on Decomposition-Enhanced Cross-Graph Forecasting and Anomaly Finding
by Linyu Yuan
Sensors 2025, 25(22), 7014; https://doi.org/10.3390/s25227014 - 17 Nov 2025
Viewed by 492
Abstract
Gas disasters in coal mines are the principal constraint on safe operations; accordingly, accurate gas time-series forecasting and real-time fluctuation monitoring are essential for prevention and early warning. A method termed Decomposition-Enhanced Cross-Graph Forecasting and Anomaly Finding is proposed. The Multi-Variate Variational Mode Decomposition (MVMD) algorithm is refined by integrating wavelet denoising with an Entropy Weight Method (EWM) multi-index scheme (seven indicators, including SNR and PSNR; weight-solver error ≤ 0.001, defined as the maximum absolute change between successive weight vectors in the entropy-weight iteration). Through this optimisation, the decomposition parameters are selected as K = 4 (modes) and α = 1000, yielding effective noise reduction on 83,970 multi-channel records from longwall faces; after joint denoising, SSIM reaches 0.9849, representing an improvement of 0.5%–18.7% over standalone wavelet denoising. An interpretable Cross Interaction Refinement Graph Neural Network (CrossGNN) is then constructed. Shapley analysis is employed to quantify feature contributions; the m1t2 gas component attains a SHAP value of 0.025, which is 5.8× that of the wind-speed sensor. For multi-timestep prediction (T0–T2), the model achieves MAE = 0.008705754 and MSE = 0.000242083, which are 8.7% and 12.7% lower, respectively, than those of STGNN and MTGNN. For fluctuation detection, Pruned Exact Linear Time (PELT) with minimum segment length L_min = 58 is combined with a circular block bootstrap test to identify sudden-growth and high-fluctuation segments while controlling FDR = 0.10. Hasse diagrams are further used to elucidate dominance relations among components (e.g., m3t3, the third decomposed component of the T2 gas sensor). Field data analyses substantiate the effectiveness of the approach and provide technical guidance for the intellectualisation of coal-mine safety management. Full article
(This article belongs to the Section Intelligent Sensors)
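
A minimal sketch of the change-point step named in the abstract, assuming the `ruptures` package: PELT is run with a minimum segment length of 58 samples on a toy gas series. The penalty value, cost model, and data are illustrative, and the circular block bootstrap test is not reproduced.

```python
# PELT change-point detection with a minimum segment length of 58 samples.
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(0)
gas = np.concatenate([rng.normal(0.4, 0.02, 300),
                      rng.normal(0.7, 0.05, 300)])   # toy gas-concentration series

algo = rpt.Pelt(model="rbf", min_size=58, jump=1).fit(gas)
breakpoints = algo.predict(pen=5)   # indices at which detected segments end
print(breakpoints)
```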

22 pages, 3419 KB  
Article
A Small-Sample Prediction Model for Ground Surface Settlement in Shield Tunneling Based on Adjacent-Ring Graph Convolutional Networks (GCN-SSPM)
by Jinpo Li, Haoxuan Huang and Gang Wang
Buildings 2025, 15(19), 3519; https://doi.org/10.3390/buildings15193519 - 30 Sep 2025
Viewed by 627
Abstract
In some projects, a lack of data makes it difficult to build an accurate prediction model for surface settlement caused by shield tunneling. Existing models often rely on large volumes of data and struggle to maintain accuracy and reliability in shield tunneling. In particular, the spatial dependency between adjacent rings is overlooked. To address these limitations, this study presents a small-sample prediction framework for settlement induced by shield tunneling, using an adjacent-ring graph convolutional network (GCN-SSPM). Gaussian smoothing, empirical mode decomposition (EMD), and principal component analysis (PCA) are integrated into the model, which incorporates spatial topological priors by constructing a ring-based adjacency graph to extract essential features. A dynamic ensemble strategy is further employed to enhance robustness across layered geological conditions. Monitoring data from the Wuhan Metro project are used to demonstrate that GCN-SSPM yields accurate and stable predictions, particularly in zones facing abrupt settlement shifts. Compared to LSTM+GRU+Attention and XGBoost, the proposed model reduces RMSE by over 90% and 75%, respectively, while achieving an R2 of about 0.71. Notably, the ensemble assigns over 70% of predictive weight to GCN-SSPM in disturbance-sensitive zones, emphasizing its effectiveness in capturing spatially coupled and nonlinear settlement behavior. The prediction error remains within ±1.2 mm, indicating strong potential for practical applications in intelligent construction and early risk mitigation in complex geological conditions. Full article
(This article belongs to the Section Building Structures)
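
A minimal sketch of the kind of ring-based adjacency prior the abstract describes, assuming a simple chain in which each tunnel ring is linked to its neighbouring rings; the normalisation shown is the standard GCN form, not taken from the paper.

```python
# Build a chain adjacency over tunnel rings and normalise it as a GCN would.
import numpy as np

n_rings = 6
A = np.zeros((n_rings, n_rings))
for i in range(n_rings - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0   # adjacent rings share an edge

# Symmetrically normalised adjacency with self-loops: A_hat = D^{-1/2}(A+I)D^{-1/2}
A_tilde = A + np.eye(n_rings)
d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
print(A_hat.round(2))
```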

38 pages, 1930 KB  
Article
Existence, Stability, and Numerical Methods for Multi-Fractional Integro-Differential Equations with Singular Kernel
by Pratibha Verma and Wojciech Sumelka
Mathematics 2025, 13(16), 2656; https://doi.org/10.3390/math13162656 - 18 Aug 2025
Viewed by 1379
Abstract
This work investigates the solutions of fractional integro-differential equations (FIDEs) using a unique kernel operator within the Caputo framework. The problem is addressed using both analytical and numerical techniques. First, the two-step Adomian decomposition method (TSADM) is applied to obtain an exact solution (if it exists). In the second part, numerical methods are used to generate approximate solutions, complementing the analytical approach based on the Adomian decomposition method (ADM), which is further extended using the Sumudu and Shehu transform techniques in cases where TSADM fails to yield an exact solution. Additionally, we establish the existence and uniqueness of the solution via fixed-point theorems. Furthermore, the Ulam–Hyers stability of the solution is analyzed. A detailed error analysis is performed to assess the precision and performance of the developed approaches. The results are demonstrated through validated examples, supported by comparative graphs and detailed error norm tables (L∞, L2, and L1). The graphical and tabular comparisons indicate that the Sumudu-Adomian decomposition method (Sumudu-ADM) and the Shehu-Adomian decomposition method (Shehu-ADM) approaches provide highly accurate approximations, with Shehu-ADM often delivering enhanced performance due to its weighted formulation. The suggested approach is simple and effective, often producing accurate estimates in a few iterations. Compared to conventional numerical and analytical techniques, the presented methods are computationally less intensive and more adaptable to a broad class of fractional-order differential equations encountered in scientific applications. The adopted methods offer high accuracy, low computational cost, and strong adaptability, with potential for extension to variable-order fractional models. They are suitable for a wide range of complex systems exhibiting evolving memory behavior. Full article
(This article belongs to the Section E: Applied Mathematics)
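
For reference, the Caputo operator the abstract works within has the standard definition below; the accompanying integro-differential form is only a generic illustration with assumed symbols g, k, and λ, not the specific singular-kernel problem treated in the paper.

```latex
% Caputo fractional derivative of order alpha (standard definition)
{}^{C}\!D^{\alpha}_{0^{+}} u(t)
  = \frac{1}{\Gamma(n-\alpha)} \int_{0}^{t} (t-s)^{\,n-\alpha-1}\, u^{(n)}(s)\, ds,
  \qquad n-1 < \alpha < n .

% A representative Caputo-type FIDE with kernel k(t,s) (illustrative form only)
{}^{C}\!D^{\alpha}_{0^{+}} u(t) = g(t) + \lambda \int_{0}^{t} k(t,s)\, u(s)\, ds,
  \qquad u(0) = u_{0}.
```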

26 pages, 4899 KB  
Article
SDDGRNets: Level–Level Semantically Decomposed Dynamic Graph Reasoning Network for Remote Sensing Semantic Change Detection
by Zhuli Xie, Gang Wan, Yunxia Yin, Guangde Sun and Dongdong Bu
Remote Sens. 2025, 17(15), 2641; https://doi.org/10.3390/rs17152641 - 30 Jul 2025
Cited by 1 | Viewed by 1283
Abstract
Semantic change detection technology based on remote sensing data holds significant importance for urban and rural planning decisions and the monitoring of ground objects. However, simple convolutional networks are limited by the receptive field, cannot fully capture detailed semantic information, and cannot effectively perceive subtle changes and constrain edge information. Therefore, a dynamic graph reasoning network with layer-by-layer semantic decomposition for semantic change detection in remote sensing data is developed in response to these limitations. This network aims to understand and perceive subtle changes in the semantic content of remote sensing data from the image pixel level. On the one hand, low-level semantic information and cross-scale spatial local feature details are obtained by dividing subspaces and decomposing convolutional layers with significant kernel expansion. Semantic selection aggregation is used to enhance the characterization of global and contextual semantics. Meanwhile, the initial multi-scale local spatial semantics are screened and re-aggregated to improve the characterization of significant features. On the other hand, at the encoding stage, the weight-sharing approach is employed to align the positions of ground objects in the change area and generate more comprehensive encoding information. Meanwhile, the dynamic graph reasoning module is used to decode the encoded semantics layer by layer to investigate the hidden associations between pixels in the neighborhood. In addition, the edge constraint module is used to constrain boundary pixels and reduce semantic ambiguity. The weighted loss function supervises and optimizes each module separately to enable the network to acquire the optimal feature representation. Finally, experimental results on three open-source datasets, such as SECOND, HIUSD, and Landsat-SCD, show that the proposed method achieves good performance, with an SCD score reaching 35.65%, 98.33%, and 67.29%, respectively. Full article

23 pages, 1650 KB  
Article
The EU Public Debt Synchronization: A Complex Networks Approach
by Fotios Gkatzoglou, Emmanouil Sofianos and Amélie Barbier-Gauchard
Economies 2025, 13(7), 186; https://doi.org/10.3390/economies13070186 - 27 Jun 2025
Cited by 3 | Viewed by 1373
Abstract
This study examines the evolution of public debt among the 27 EU member states using Graph Theory tools: the Threshold Weighted–Minimum Dominating Set (TW–MDS) and the k-core decomposition method, alongside a standard network quantitative metric, the density. By separating the data into three distinct periods, pre-crisis (2000–2007), European sovereign debt crisis (2008–2015), and post-crisis (2016–2023), we examine the potential synchronization of the debt ratios among EU countries through cross-correlations of the public debts. The findings reveal that public debt correlation was at its highest level during the 2008–2015 period, reflecting the universal impact of the crisis and the subsequent synchronized fiscal and monetary policy measures taken within the EU. A significantly lower network density is observed in both the pre- and post-crisis periods. These results contribute to the overall debate on fiscal stability and policy coordination by showing how EU countries tend to align their fiscal behaviors during periods of crisis while behaving more independently during stable times. In addition, we provide deeper insight into how economic shocks reorganize public debt interconnections within the crisis period. Finally, this analysis highlights to what extent European economic integration strengthens connections between the fiscal positions (through public debt) of the European Union member countries. Full article
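
A hedged sketch of the network-construction and k-core steps described above, using networkx: debt series are cross-correlated, edges above a threshold are kept, and density and core numbers are read off. The random data and the 0.3 threshold are illustrative, not the study's calibration or the TW–MDS procedure.

```python
# Correlation network of toy debt-ratio series, plus density and k-core decomposition.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
debt = rng.normal(size=(27, 24))            # 27 countries x 24 periods (toy data)
corr = np.corrcoef(debt)

G = nx.Graph()
G.add_nodes_from(range(27))
threshold = 0.3                             # illustrative correlation cut-off
for i in range(27):
    for j in range(i + 1, 27):
        if corr[i, j] >= threshold:
            G.add_edge(i, j, weight=corr[i, j])

print("density:", nx.density(G))
print("core numbers:", nx.core_number(G))
print("3-core nodes:", sorted(nx.k_core(G, k=3).nodes))
```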

19 pages, 2565 KB  
Article
Rolling Bearing Fault Diagnosis via Temporal-Graph Convolutional Fusion
by Fan Li, Yunfeng Li and Dongfeng Wang
Sensors 2025, 25(13), 3894; https://doi.org/10.3390/s25133894 - 23 Jun 2025
Cited by 1 | Viewed by 1501
Abstract
To address the challenge of incomplete fault feature extraction in rolling bearing fault diagnosis under small-sample conditions, this paper proposes a Temporal-Graph Convolutional Fusion Network (T-GCFN). The method enhances diagnostic robustness through collaborative extraction and dynamic fusion of features from time-domain and frequency-domain branches. First, Variational Mode Decomposition (VMD) was employed to extract time-domain Intrinsic Mode Functions (IMFs). These were then input into a Temporal Convolutional Network (TCN) to capture multi-scale temporal dependencies. Simultaneously, frequency-domain features obtained via Fast Fourier Transform (FFT) were used to construct a K-Nearest Neighbors (KNN) graph, which was processed by a Graph Convolutional Network (GCN) to identify spatial correlations. Subsequently, a channel attention fusion layer was designed. This layer utilized global max pooling and average pooling to compress spatio-temporal features. A shared Multi-Layer Perceptron (MLP) then established inter-channel dependencies to generate attention weights, enhancing critical features for more complete fault information extraction. Finally, a SoftMax classifier performed end-to-end fault recognition. Experiments demonstrated that the proposed method significantly improved fault recognition accuracy under small-sample scenarios. These results validate the strong adaptability of the T-GCFN mechanism. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
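
A minimal sketch of the frequency-domain branch described above, assuming scikit-learn: FFT magnitudes of toy vibration segments are turned into a K-nearest-neighbour graph of the kind a GCN would consume. Segment sizes and k are illustrative; the VMD/TCN branch and the attention fusion are not reproduced.

```python
# Build a KNN graph over FFT-magnitude features of vibration segments.
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
segments = rng.normal(size=(32, 1024))            # 32 toy vibration segments
spectra = np.abs(np.fft.rfft(segments, axis=1))   # frequency-domain features

A = kneighbors_graph(spectra, n_neighbors=5, mode="connectivity", include_self=False)
print(A.shape, "edges:", A.nnz)
```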

19 pages, 2467 KB  
Article
Wind Power Forecasting Based on Multi-Graph Neural Networks Considering External Disturbances
by Xiaoyin Xu, Zhumei Luo and Menglong Feng
Energies 2025, 18(11), 2969; https://doi.org/10.3390/en18112969 - 4 Jun 2025
Cited by 1 | Viewed by 1363
Abstract
Wind power forecasting is challenging because of complex, nonlinear relationships between inherent patterns and external disturbances. Though much progress has been achieved in deep learning approaches, existing methods cannot effectively decompose and model intertwined spatio-temporal dependencies. Current methods typically treat wind power as a unified signal without explicitly separating inherent patterns from external influences, so they have limited prediction accuracy. This paper introduces a novel framework GCN-EIF that decouples external interference factors (EIFs) from inherent wind power patterns to achieve excellent prediction accuracy. Our innovation lies in the physically informed architecture that explicitly models the mathematical relationship P(t) = P_inherent(t) + EIF(t). The framework adopts a three-component architecture consisting of (1) a multi-graph convolutional network using both geographical proximity and power correlation graphs to capture heterogeneous spatial dependencies between wind farms, (2) an attention-enhanced LSTM network that weights temporal features differentially based on their predictive significance, and (3) a specialized Conv2D mechanism to identify and isolate external disturbance patterns. A key methodological contribution is our signal decomposition strategy during the prediction phase, where an EIF is eliminated from historical data to better learn fundamental patterns, and then a predicted EIF is reintroduced for the target period, significantly reducing error propagation. Extensive experiments across diverse wind farm clusters and different weather conditions indicate that GCN-EIF achieves an 18.99% lower RMSE and 5.08% lower MAE than state-of-the-art methods. Meanwhile, real-time performance analysis confirms the model’s operational viability as it maintains excellent prediction accuracy (RMSE < 15) even at high data arrival rates (100 samples/second) while ensuring processing latency below critical thresholds (10 ms) under typical system loads. Full article
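
A toy numeric sketch of the decomposition P(t) = P_inherent(t) + EIF(t) used during prediction: the external-influence estimate is removed from history before fitting, and a predicted EIF is added back for the target step. The moving-average stand-in and all series below are placeholders for the GCN/LSTM components, not the GCN-EIF model.

```python
# Strip an assumed external influence from history, forecast the inherent part,
# then reintroduce the predicted external influence for the target period.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
eif_hist = 0.3 * np.sin(t / 10.0)                  # assumed external disturbance
p_hist = 2.0 + 0.5 * np.sin(t / 25.0) + eif_hist   # observed wind power (toy)

inherent = p_hist - eif_hist                       # remove EIF from history
forecast_inherent = inherent[-24:].mean()          # naive stand-in for the learned model
eif_forecast = 0.3 * np.sin((t[-1] + 1) / 10.0)    # predicted EIF for the next step

p_forecast = forecast_inherent + eif_forecast      # reintroduce the EIF
print(round(p_forecast, 3))
```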

32 pages, 6565 KB  
Article
Sparse Feature-Weighted Double Laplacian Rank Constraint Non-Negative Matrix Factorization for Image Clustering
by Hu Ma, Ziping Ma, Huirong Li and Jingyu Wang
Mathematics 2024, 12(23), 3656; https://doi.org/10.3390/math12233656 - 22 Nov 2024
Cited by 2 | Viewed by 1234
Abstract
As an extension of non-negative matrix factorization (NMF), graph-regularized non-negative matrix factorization (GNMF) has been widely applied in data mining and machine learning, particularly for tasks such as clustering and feature selection. Traditional GNMF methods typically rely on predefined graph structures to guide the decomposition process, using fixed data graphs and feature graphs to capture relationships between data points and features. However, these fixed graphs may limit the model’s expressiveness. Additionally, many NMF variants face challenges when dealing with complex data distributions and are vulnerable to noise and outliers. To overcome these challenges, we propose a novel method called sparse feature-weighted double Laplacian rank constraint non-negative matrix factorization (SFLRNMF), along with its extended version, SFLRNMTF. These methods adaptively construct more accurate data similarity and feature similarity graphs, while imposing rank constraints on the Laplacian matrices of these graphs. This rank constraint ensures that the resulting matrix ranks reflect the true number of clusters, thereby improving clustering performance. Moreover, we introduce a feature weighting matrix into the original data matrix to reduce the influence of irrelevant features and apply an L2,1/2 norm sparsity constraint in the basis matrix to encourage sparse representations. An orthogonal constraint is also enforced on the coefficient matrix to ensure interpretability of the dimensionality reduction results. In the extended model (SFLRNMTF), we introduce a double orthogonal constraint on the basis matrix and coefficient matrix to enhance the uniqueness and interpretability of the decomposition, thereby facilitating clearer clustering results for both rows and columns. However, enforcing double orthogonal constraints can reduce approximation accuracy, especially with low-rank matrices, as it restricts the model’s flexibility. To address this limitation, we introduce an additional factor matrix R, which acts as an adaptive component that balances the trade-off between constraint enforcement and approximation accuracy. This adjustment allows the model to achieve greater representational flexibility, improving reconstruction accuracy while preserving the interpretability and clustering clarity provided by the double orthogonality constraints. Consequently, the SFLRNMTF approach becomes more robust in capturing data patterns and achieving high-quality clustering results in complex datasets. We also propose an efficient alternating iterative update algorithm to optimize the proposed model and provide a theoretical analysis of its performance. Clustering results on four benchmark datasets demonstrate that our method outperforms competing approaches. Full article
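
As a point of reference for the factorisations discussed above, the sketch below runs plain NMF (X ≈ WH) with scikit-learn, i.e. the baseline that GNMF and SFLRNMF extend with graph regularisation, feature weighting, and rank constraints; the data and rank are illustrative.

```python
# Baseline non-negative matrix factorisation X ≈ W H.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(100, 50)))       # non-negative data matrix (toy)

model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)                   # basis matrix
H = model.components_                        # coefficient matrix
print("reconstruction error:", round(model.reconstruction_err_, 3))
```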

10 pages, 390 KB  
Article
The High Relative Accuracy of Computations with Laplacian Matrices
by Héctor Orera and Juan Manuel Peña
Mathematics 2024, 12(22), 3491; https://doi.org/10.3390/math12223491 - 8 Nov 2024
Viewed by 1070
Abstract
This paper provides an efficient method to compute an LDU decomposition of the Laplacian matrix of a connected graph with high relative accuracy. Several applications of this method are presented. In particular, it can be applied to efficiently compute the eigenvalues of the mentioned Laplacian matrix. Moreover, the method can be extended to graphs with weighted edges. Full article
(This article belongs to the Special Issue Numerical Analysis and Matrix Computations: Theory and Applications)
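
A hedged sketch of the object being factorised, assuming SciPy: the weighted Laplacian L = D − W of a small connected graph is built and factored as LDLᵀ with the standard routine. This illustrates only the decomposition itself, not the paper's high-relative-accuracy algorithm.

```python
# Weighted graph Laplacian and its LDL^T factorisation.
import numpy as np
from scipy.linalg import ldl

W = np.array([[0, 2, 1, 0],
              [2, 0, 0, 3],
              [1, 0, 0, 1],
              [0, 3, 1, 0]], dtype=float)    # weighted adjacency of a connected graph (toy)
L = np.diag(W.sum(axis=1)) - W               # graph Laplacian

lower, D, perm = ldl(L)                      # symmetric factorisation L = lower @ D @ lower.T
print(np.allclose(lower @ D @ lower.T, L))
```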

20 pages, 7344 KB  
Article
Research on a Joint Extraction Method of Track Circuit Entities and Relations Integrating Global Pointer and Tensor Learning
by Yanrui Chen, Guangwu Chen and Peng Li
Sensors 2024, 24(22), 7128; https://doi.org/10.3390/s24227128 - 6 Nov 2024
Cited by 1 | Viewed by 1502
Abstract
To address the issue of efficiently reusing the massive amount of unstructured knowledge generated during the handling of track circuit equipment faults and to automate the construction of knowledge graphs in the railway maintenance domain, it is crucial to leverage knowledge extraction techniques to efficiently extract relational triplets from fault maintenance text data. Given the current lag in joint extraction technology within the railway domain and the inefficiency in resource utilization, this paper proposes a joint extraction model for track circuit entities and relations, integrating Global Pointer and tensor learning. Taking into account the associative characteristics of semantic relations, the nesting of domain-specific terms in the railway sector, and semantic diversity, this research views the relation extraction task as a tensor learning process and the entity recognition task as a span-based Global Pointer search process. First, a multi-layer dilate gated convolutional neural network with residual connections is used to extract key features and fuse the weighted information from the 12 different semantic layers of the RoBERTa-wwm-ext model, fully exploiting the performance of each encoding layer. Next, the Tucker decomposition method is utilized to capture the semantic correlations between relations, and an Efficient Global Pointer is employed to globally predict the start and end positions of subject and object entities, incorporating relative position information through rotary position embedding (RoPE). Finally, comparative experiments with existing mainstream joint extraction models were conducted, and the proposed model’s excellent performance was validated on the English public datasets NYT and WebNLG, the Chinese public dataset DuIE, and a private track circuit dataset. The F1 scores on the NYT, WebNLG, and DuIE public datasets reached 92.1%, 92.7%, and 78.2%, respectively. Full article
(This article belongs to the Section Sensor Networks)
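
A minimal sketch of the Tucker decomposition mentioned above, assuming TensorLy: a small 3-way tensor is factored into a core tensor and factor matrices. Shapes and ranks are illustrative and unrelated to the model's actual relation tensor.

```python
# Tucker decomposition of a toy 3-way tensor.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

rng = np.random.default_rng(0)
T = tl.tensor(rng.normal(size=(8, 6, 8)))    # e.g. (subject dim, relation, object dim)

core, factors = tucker(T, rank=[4, 3, 4])
print("core:", core.shape, "factors:", [f.shape for f in factors])
```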

19 pages, 4574 KB  
Article
Multi-Objective Combinatorial Optimization Algorithm Based on Asynchronous Advantage Actor–Critic and Graph Transformer Networks
by Dongbao Jia, Ming Cao, Wenbin Hu, Jing Sun, Hui Li, Yichen Wang, Weijie Zhou, Tiancheng Yin and Ran Qian
Electronics 2024, 13(19), 3842; https://doi.org/10.3390/electronics13193842 - 28 Sep 2024
Cited by 3 | Viewed by 2583
Abstract
Multi-objective combinatorial optimization problems (MOCOPs) are designed to identify solution sets that optimally balance multiple competing objectives. Addressing the challenges inherent in applying deep reinforcement learning (DRL) to solve MOCOPs, such as model non-convergence, lengthy training periods, and insufficient diversity of solutions, this study introduces a novel multi-objective combinatorial optimization algorithm based on DRL. The proposed algorithm employs a uniform weight decomposition method to simplify complex multi-objective scenarios into single-objective problems and uses asynchronous advantage actor–critic (A3C) instead of conventional REINFORCE methods for model training. This approach effectively reduces variance and prevents the entrapment in local optima. Furthermore, the algorithm incorporates an architecture based on graph transformer networks (GTNs), which extends to edge feature representations, thus accurately capturing the topological features of graph structures and the latent inter-node relationships. By integrating a weight vector layer at the encoding stage, the algorithm can flexibly manage issues involving arbitrary weights. Experimental evaluations on the bi-objective traveling salesman problem demonstrate that this algorithm significantly outperforms recent similar efforts in terms of training efficiency and solution diversity. Full article
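
A toy sketch of the uniform weight-decomposition step described above: a bi-objective cost pair is scalarised with evenly spaced weight vectors, reducing the multi-objective problem to a family of single-objective ones. The tour costs are placeholders; the A3C training and graph transformer encoder are not reproduced.

```python
# Weighted-sum scalarisation of a bi-objective problem under uniform weights.
import numpy as np

objectives = np.array([[10.0, 4.0],        # (length_1, length_2) of candidate tours (toy)
                       [7.0, 9.0],
                       [6.0, 6.0]])

n_weights = 5
weights = np.stack([np.linspace(0, 1, n_weights),
                    1 - np.linspace(0, 1, n_weights)], axis=1)

for w in weights:
    scalar = objectives @ w                 # single-objective value under this weight vector
    print(w.round(2), "-> best tour:", int(scalar.argmin()))
```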

13 pages, 6786 KB  
Article
Investigation of a Method for Identifying Unbalanced States in Multi-Disk Rotor Systems: Analysis of Axis Motion Trajectory Features
by Jianjun Peng, En Dong, Fang Yang, Yuxiang Sun and Zhidan Zhong
Appl. Sci. 2024, 14(16), 6884; https://doi.org/10.3390/app14166884 - 6 Aug 2024
Viewed by 1161
Abstract
The operational state of a rotor system directly affects its working efficiency, and the axis trajectory can accurately characterize this state. Therefore, a method for extracting axis motion trajectory characteristics based on distance sequence representation is established. First, the axis trajectory sample signal is constructed from the original vibration displacement signal. Singular value decomposition (SVD) is performed on the sample signal to obtain effective components, resulting in a purified and denoised axis motion trajectory signal. Next, the axis motion trajectory signal is centralized and normalized. Feature extraction is then performed on the axis motion trajectory signal. Based on the different curvatures of various regions in the axis motion trajectory graph, data points are adaptively selected. The distances between the selected data points and a unique fixed point are calculated in the two-dimensional plane, resulting in a feature signal that characterizes the axis motion trajectory graph. This completes the extraction of the axis motion trajectory characteristics. Different rotational speeds, additional weights, and changes in rotor arrangement types are applied to a multi-disk rotor test rig to obtain measured data for various unbalanced states, validating this method. The results show that this method effectively characterizes the axis motion trajectory with strong generality. Full article
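
A minimal sketch of the SVD purification step described above: several noisy revolutions of a toy elliptical axis trajectory are stacked as rows and reconstructed from the dominant singular components. Signal, noise level, and retained rank are illustrative.

```python
# SVD-based denoising of repeated axis-trajectory samples.
import numpy as np

rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
clean = np.vstack([np.cos(theta), 0.6 * np.sin(theta)]).T.ravel()   # elliptical orbit, flattened
samples = clean + rng.normal(0, 0.05, size=(20, clean.size))        # 20 noisy revolutions

U, s, Vt = np.linalg.svd(samples, full_matrices=False)
rank = 2                                                            # keep the effective components
denoised = (U[:, :rank] * s[:rank]) @ Vt[:rank]                     # low-rank reconstruction
print(denoised.shape)
```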

23 pages, 40689 KB  
Article
Multiscale Feature Search-Based Graph Convolutional Network for Hyperspectral Image Classification
by Ke Wu, Yanting Zhan, Ying An and Suyi Li
Remote Sens. 2024, 16(13), 2328; https://doi.org/10.3390/rs16132328 - 26 Jun 2024
Cited by 9 | Viewed by 2249
Abstract
With the development of hyperspectral sensors, the availability of hyperspectral images (HSIs) has increased significantly, prompting advancements in deep learning-based hyperspectral image classification (HSIC) methods. Recently, graph convolutional networks (GCNs) have been proposed to process graph-structured data in non-Euclidean domains, and have been used for HSIC. Superpixel segmentation must be performed first in GCN-based methods; however, it is difficult to manually select the optimal superpixel segmentation sizes to obtain useful information for classification. To solve this problem, we constructed an HSIC model based on a multiscale feature search-based graph convolutional network (MFSGCN) in this study. Firstly, pixel-level features of HSIs are extracted sequentially using 3D asymmetric decomposition convolution and 2D convolution. Then, superpixel-level features at different scales are extracted using multilayer GCNs. Finally, the neural architecture search (NAS) method is used to automatically assign different weights to different scales of superpixel features. Thus, a more discriminative feature map is obtained for classification. Compared with other GCN-based networks, the MFSGCN network can automatically capture features and obtain higher classification accuracy. The proposed MFSGCN model was implemented on three commonly used HSI datasets and compared to some state-of-the-art methods. The results confirm that MFSGCN effectively improves accuracy. Full article
(This article belongs to the Special Issue Deep Learning for the Analysis of Multi-/Hyperspectral Images II)
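
A hedged sketch of the superpixel step that GCN-based HSIC pipelines run first, assuming scikit-image's SLIC (skimage ≥ 0.19 for the channel_axis argument): a toy cube is reduced to its first principal component and segmented at several scales, which is exactly the choice MFSGCN automates by weighting scales. All sizes are illustrative.

```python
# SLIC superpixels at several scales on the first principal component of a toy HSI cube.
import numpy as np
from sklearn.decomposition import PCA
from skimage.segmentation import slic

rng = np.random.default_rng(0)
cube = rng.normal(size=(64, 64, 30))                       # toy cube: 64x64 pixels, 30 bands
pc1 = PCA(n_components=1).fit_transform(cube.reshape(-1, 30)).reshape(64, 64)
pc1 = (pc1 - pc1.min()) / (pc1.max() - pc1.min())          # scale to [0, 1] for SLIC

for n_segments in (50, 100, 200):                          # the scales a user must choose between
    labels = slic(pc1, n_segments=n_segments, compactness=0.1, channel_axis=None)
    print(n_segments, "->", labels.max() + 1, "superpixels")
```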

22 pages, 670 KB  
Article
Traffic Classification and Packet Scheduling Strategy with Deadline Constraints for Input-Queued Switches in Time-Sensitive Networking
by Ling Zheng, Guodong Wei, Keyao Zhang and Hongyun Chu
Electronics 2024, 13(3), 629; https://doi.org/10.3390/electronics13030629 - 2 Feb 2024
Cited by 4 | Viewed by 2298
Abstract
Deterministic transmission technology is a core key technology that supports deterministic real-time transmission requirements for industrial control in Time-Sensitive Networking (TSN). It requires each network node to have a deterministic forwarding delay to ensure the real-time end-to-end transmission of critical traffic streams. Therefore, when forwarding data frames, the switch nodes must consider the time-limited requirements of the traffic. In the input-queued switch system, an algorithm for clock-synchronized deterministic network traffic classification scheduling (CSDN-TCS) is proposed to address the issue of whether a higher-quality-of-service (QoS) performance can be provided under packet deadline constraints. First, the scheduling problem of the switch is transformed into a decomposition problem of the traffic matrix. Secondly, the maximum weight-matching algorithm in graph theory is used to solve the matching results slot by slot. By fully utilizing the slot resources, as many packets as possible can be scheduled to be completed before the deadline arrives. For two types of packet scheduling problems, this paper uses the maximum flow algorithm with upper- and lower-bound constraints to move packets from a larger deadline set to idle slots in a smaller deadline set, enabling early transmission, reducing the average packet delay, and increasing system throughput. When there are three or more types of deadlines in the scheduling set, this scheduling problem is an NP-hard problem. We solve this problem by polling the two types of scheduling algorithms. In this paper, simulation experiments based on the switching size and line load are designed, and the Earliest Deadline First (EDF) algorithm and the Flow-Based Iterative Packet Scheduling (FIPS) algorithm are compared with the CSDN-TCS algorithm. The simulation results show that under the same conditions, the CSDN-TCS algorithm proposed in this paper outperforms the other two algorithms in terms of success rate, packet loss rate, average delay and throughput rate. Compared with the FIPS algorithm, the CSDN-TCS algorithm has lower time complexity under the same QoS performance. Full article
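
A hedged sketch of the slot-by-slot matching idea described above, assuming SciPy: in each slot, inputs are matched to outputs by a maximum-weight assignment over the residual traffic matrix, and one packet is served per matched pair. The deadline bookkeeping and the max-flow refinement of CSDN-TCS are not reproduced.

```python
# Slot-by-slot maximum-weight input/output matching over a toy traffic matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment

traffic = np.array([[3, 0, 2, 1],        # packets queued from input i to output j (toy)
                    [1, 4, 0, 0],
                    [0, 2, 2, 3],
                    [2, 1, 1, 0]])

for slot in range(3):
    rows, cols = linear_sum_assignment(traffic, maximize=True)
    served = [(i, j) for i, j in zip(rows, cols) if traffic[i, j] > 0]
    for i, j in served:
        traffic[i, j] -= 1               # transmit one packet per matched input/output pair
    print("slot", slot, "served:", served)
```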
