Search Results (49)

Search Parameters:
Keywords = issue with channel assignment

20 pages, 4558 KB  
Article
Remaining Useful Life Prediction of Rolling Bearings Based on an Improved U-Net and a Multi-Dimensional Hybrid Gated Attention Mechanism
by Hengdi Wang and Aodi Shi
Appl. Sci. 2025, 15(13), 7166; https://doi.org/10.3390/app15137166 - 25 Jun 2025
Viewed by 592
Abstract
In practical scenarios, rolling bearing vibration signals suffer from detail loss, and information loss occurs during feature dimensionality reduction and fusion, leading to inaccurate life prediction results. To address these issues, this paper first proposes a method for predicting the remaining useful life (RUL) of bearings, which combines an improved U-Net for enhancing vibration signals and a multi-dimensional hybrid gated attention mechanism (MHGAM) for dynamic feature fusion. The enhanced U-Net effectively suppresses the loss of signal details, while the MHGAM adaptively constructs health indices through multi-dimensional weighting, significantly improving prediction accuracy. Initially, the improved U-Net is utilized for signal preprocessing. By comprehensively considering both channel and spatial dimensions, the MHGAM dynamically assigns fusion weights across different dimensions to construct a health index. Subsequently, the health index is used as input for the Bi-GRU network model to obtain the remaining life prediction results. Finally, comparative analyses between the proposed method and other RUL prediction methods are conducted using the IEEE PHM 2012 bearing dataset (Condition 1: rotational speed 1800 r/min with radial load 4000 N; Condition 2: rotational speed 1650 r/min with radial load 4200 N) and engineering test data (rotational speed 1800 r/min with radial load 4000 N). Experimental results from the IEEE PHM 2012 bearing dataset indicate that this method achieves a low mean root mean square error (RMSE = 0.0504) and mean absolute error (MAE = 0.0239). The engineering test verification results demonstrate that the mean values of RMSE and MAE for this method are 7.8% lower than those of the CNN-BiGRU benchmark model and 14.6% lower than those of the TCN-BiGRU model, respectively. In terms of comprehensive prediction performance scores, the average scores improve by 7.8% and 9.3 percentage points compared with the two benchmark models, respectively. Under various test conditions, the prediction results of this method exhibit commendable comprehensive performance, significantly enhancing the prediction accuracy of bearing remaining useful life. Full article
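
To make the fusion step above concrete, here is a minimal PyTorch sketch of channel-plus-time gated weighting for degradation features, in the spirit of the described multi-dimensional hybrid gating. The module name, tensor shapes, and the final averaging into a scalar health index are illustrative assumptions, not the paper's MHGAM implementation.

```python
import torch
import torch.nn as nn

class HybridGatedFusion(nn.Module):
    """Toy channel + temporal gating for fusing degradation features
    (illustrative stand-in, not the paper's MHGAM)."""
    def __init__(self, channels: int):
        super().__init__()
        # Channel gate: squeeze the time axis, weight each feature channel.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),
            nn.Conv1d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Temporal gate: weight each time step across channels.
        self.temporal_gate = nn.Sequential(
            nn.Conv1d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) degradation features
        x = x * self.channel_gate(x)    # emphasize informative channels
        x = x * self.temporal_gate(x)   # emphasize informative time steps
        return x.mean(dim=(1, 2))       # collapse to one health index per sample

health_index = HybridGatedFusion(channels=8)(torch.randn(4, 8, 128))
print(health_index.shape)  # torch.Size([4])
```

In the paper the resulting health index feeds a Bi-GRU predictor; here it is only computed and printed.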

14 pages, 2210 KB  
Article
AMFFNet: Adaptive Multi-Scale Feature Fusion Network for Urban Image Semantic Segmentation
by Shuting Huang and Haiyan Huang
Electronics 2025, 14(12), 2344; https://doi.org/10.3390/electronics14122344 - 8 Jun 2025
Cited by 2 | Viewed by 793
Abstract
Urban image semantic segmentation faces challenges including the coexistence of multi-scale objects, blurred semantic relationships between complex structures, and dynamic occlusion interference. Existing methods often struggle to balance global contextual understanding of large scenes and fine-grained details of small objects due to insufficient granularity in multi-scale feature extraction and rigid fusion strategies. To address these issues, this paper proposes an Adaptive Multi-scale Feature Fusion Network (AMFFNet). The network primarily consists of four modules: a Multi-scale Feature Extraction Module (MFEM), an Adaptive Fusion Module (AFM), an Efficient Channel Attention (ECA) module, and an auxiliary supervision head. Firstly, the MFEM utilizes multiple depthwise strip convolutions to capture features at various scales, effectively leveraging contextual information. Then, the AFM employs a dynamic weight assignment strategy to harmonize multi-level features, enhancing the network’s ability to model complex urban scene structures. Additionally, the ECA attention mechanism introduces cross-channel interactions and nonlinear transformations to mitigate the issue of small-object segmentation omissions. Finally, the auxiliary supervision head enables shallow features to directly affect the final segmentation results. Experimental evaluations on the CamVid and Cityscapes datasets demonstrate that the proposed network achieves superior mean Intersection over Union (mIoU) scores of 77.8% and 81.9%, respectively, outperforming existing methods. The results confirm that AMFFNet has a stronger ability to understand complex urban scenes. Full article
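
The ECA module mentioned above is a published, lightweight attention design; the sketch below shows the usual recipe (global average pooling followed by a 1-D convolution across channels), with the kernel size fixed at 3 as an assumption rather than computed adaptively as in the original ECA paper.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention sketch: global average pooling followed by
    a 1-D convolution across channels (kernel size fixed here for brevity)."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        y = x.mean(dim=(2, 3))                     # per-channel descriptor (B, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)   # local cross-channel interaction
        return x * torch.sigmoid(y)[:, :, None, None]

out = ECA()(torch.randn(2, 64, 32, 32))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```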
(This article belongs to the Topic Intelligent Image Processing Technology)

27 pages, 4279 KB  
Article
A Dynamic Assessment Model of Distributed Photovoltaic Carrying Capacity Based on Improved DeepLabv3+ and Game-Theoretic Combination Weighting
by Jie Ma, Shiwen Yan, Yang Zhao, Youwen Zhang, Xichao Du, Cuiping Li and Junhui Li
Processes 2025, 13(6), 1804; https://doi.org/10.3390/pr13061804 - 6 Jun 2025
Viewed by 554
Abstract
The traditional carrying capacity assessment method fails to effectively quantify the difference in spatial distribution of rooftop photovoltaic (PV) resources and ignores the temporal fluctuation of PV output and load demand, as well as the temporal and spatial matching characteristics of sources and loads. This leads to problems such as a disconnect between the assessment and the actual grid acceptance capacity and insufficient dynamic adaptability. In response to the above issues, this paper proposes a dynamic assessment model for distributed photovoltaic carrying capacity based on the combination of improved DeepLabv3+ and game-theoretic weighted assignment. First, the DeepLabv3+ model was improved by integrating the Efficient Channel Attention (ECA) mechanism and the strip pooling (SP) module to enhance roof recognition accuracy. Ablation experiments showed that the mIoU increased to 77.53%, 6.29% higher than the original model. Next, a photovoltaic carrying capacity evaluation system was established, considering the source, grid, and load perspectives, with dynamic evaluation using a game-theory-based weighting method. Finally, a comprehensive carrying coefficient was introduced, accounting for the spatiotemporal match between photovoltaic output and load, leading to the development of a distributed photovoltaic carrying capacity model. The case study results show that, in summer, due to the optimal coordination of STMF and scene scoring, the comprehensive carrying coefficient reaches 0.73. With a total PV access capacity of 6.48 MW, all node voltages remain within limits, verifying the model’s effectiveness in grid adaptability. Full article
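
For readers unfamiliar with game-theoretic combination weighting, the sketch below shows one common formulation: the combination coefficients come from the first-order optimality condition of minimizing the deviation between the combined weight vector and each individual weight set. The example weight vectors and the normalization step are assumptions; the paper's actual indicator system is not reproduced here.

```python
import numpy as np

def combine_weights(weight_sets: list) -> np.ndarray:
    """Game-theoretic combination weighting sketch: solve A @ alpha = b from
    the first-order condition of minimizing the deviation between the combined
    weight vector and every individual weight vector, then normalize."""
    W = np.vstack(weight_sets)      # (m, n): m weighting methods, n indicators
    A = W @ W.T                     # Gram matrix of the weight vectors
    b = np.diag(A).copy()           # w_k . w_k terms
    alpha = np.linalg.solve(A, b)
    alpha = np.abs(alpha) / np.abs(alpha).sum()   # normalized combination coefficients
    combined = alpha @ W
    return combined / combined.sum()

# Hypothetical subjective (expert) vs. objective (entropy-style) weights
subjective = np.array([0.40, 0.35, 0.25])
objective = np.array([0.30, 0.30, 0.40])
print(combine_weights([subjective, objective]))
```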

25 pages, 6196 KB  
Article
A Semi-Distributed Scheme for Mode Selection and Resource Allocation in Device-to-Device-Enabled Cellular Networks Using Matching Game and Reinforcement Learning
by Ibrahim Sami Attar, Nor Muzlifah Mahyuddin and M. H. D. Nour Hindia
Telecom 2025, 6(1), 12; https://doi.org/10.3390/telecom6010012 - 13 Feb 2025
Cited by 1 | Viewed by 914
Abstract
Device-to-Device (D2D) communication is a promising technological innovation that is widely expected to have a substantial impact on the next generation of wireless communication systems. Modern wireless networks of the fifth generation (5G) and beyond (B5G) handle an increasing number of connected devices that require greater data rates while utilizing relatively low power consumption. In this study, we address the joint mode selection, channel assignment, and power allocation problem in a semi-distributed D2D scheme (SD-scheme) that underlays cellular networks. The objective of this study is to enhance the data rate, Spectrum Efficiency (SE), and Energy Efficiency (EE) of the network while maintaining the performance of cellular users (CUs) by enforcing a data-rate threshold for each CU in the network. Practically, we propose a centralized approach to address the mode selection and channel assignment problems, employing greedy and matching algorithms, respectively. Moreover, we employ a State-Action-Reward-State-Action (SARSA)-based reinforcement learning (RL) algorithm for a distributed power allocation scheme. Furthermore, we allow the sub-channel of each CU to be shared among several D2D pairs, and the optimum power is determined for each D2D pair sharing the same sub-channel, taking into consideration all types of interference in the network. The simulation findings illustrate the enhancement in the performance of the proposed scheme in comparison to the benchmark schemes in terms of data rate, SE, and EE. Full article
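
As a reminder of how a SARSA-based power allocation loop works, here is a toy tabular sketch; the state and action dimensions (e.g. quantized interference levels and discrete power levels), the reward, and the hyperparameters are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 10, 5      # e.g. quantized interference levels x power levels
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def choose(state: int) -> int:
    """Epsilon-greedy selection over discrete power levels."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(Q[state].argmax())

def sarsa_step(s: int, a: int, reward: float, s_next: int) -> int:
    """On-policy SARSA update: bootstraps on the action actually taken next."""
    a_next = choose(s_next)
    Q[s, a] += alpha * (reward + gamma * Q[s_next, a_next] - Q[s, a])
    return a_next

# One toy interaction: the reward would come from the achieved SE/EE in the real scheme.
a = choose(0)
a = sarsa_step(0, a, reward=1.0, s_next=3)
```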
(This article belongs to the Special Issue Advances in Wireless Communication: Applications and Developments)

20 pages, 5437 KB  
Article
Dynamic Calibration Method of Multichannel Amplitude and Phase Consistency in Meteor Radar
by Yujian Jin, Xiaolong Chen, Songtao Huang, Zhuo Chen, Jing Li and Wenhui Hao
Remote Sens. 2025, 17(2), 331; https://doi.org/10.3390/rs17020331 - 18 Jan 2025
Cited by 1 | Viewed by 1372
Abstract
Meteor radar is a widely used technique for measuring wind in the mesosphere and lower thermosphere, with the key advantage of being unaffected by terrestrial weather conditions, thus enabling continuous operation. In all-sky interferometric meteor radar systems, amplitude and phase consistencies between multiple channels exhibit dynamic variations over time, which can significantly degrade the accuracy of wind measurements. Despite the inherently dynamic nature of these inconsistencies, the majority of existing research predominantly employs static calibration methods to address these issues. In this study, we propose a dynamic adaptive calibration method that combines normalized least mean square and correlation algorithms, integrated with hardware design. We further assess the effectiveness of this method through numerical simulations and practical implementation on an independently developed meteor radar system with a five-channel receiver. The receiver facilitates the practical application of the proposed method by incorporating variable gain control circuits and high-precision synchronization analog-to-digital acquisition units, ensuring initial amplitude and phase consistency accuracy. In our dynamic calibration, initial coefficients are determined using a sliding correlation algorithm to assign preliminary weights, which are then refined through the proposed method. This method maximizes cross-channel consistencies, resulting in amplitude inconsistency of <0.0173 dB and phase inconsistency of <0.2064°. Repeated calibration experiments and their comparison with conventional static calibration methods demonstrate significant improvements in amplitude and phase consistency. These results validate the potential of the proposed method to enhance both the detection accuracy and wind inversion precision of meteor radar systems. Full article
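
The normalized least mean square (NLMS) part of the calibration can be illustrated with a small sketch: an adaptive filter driven by a reference signal tracks the gain and phase of an observed channel. The tap count, step size, and the synthetic calibration tone in the usage lines are assumptions, not the radar system's actual parameters.

```python
import numpy as np

def nlms(reference: np.ndarray, observed: np.ndarray,
         taps: int = 8, mu: float = 0.5, eps: float = 1e-8) -> np.ndarray:
    """Normalized LMS sketch: adapt filter taps so the filtered reference
    tracks the observed channel's gain and phase (single channel, complex)."""
    w = np.zeros(taps, dtype=complex)
    out = np.zeros(len(observed), dtype=complex)
    for n in range(taps, len(observed)):
        x = reference[n - taps:n][::-1]                  # most recent samples first
        out[n] = np.conj(w) @ x
        err = observed[n] - out[n]
        w = w + mu * np.conj(err) * x / (eps + np.vdot(x, x).real)  # normalized step
    return out

t = np.arange(2048)
ref = np.exp(1j * 0.2 * t)                   # injected calibration tone (assumed)
obs = 0.8 * np.exp(1j * (0.2 * t + 0.3))     # channel with gain and phase offset
aligned = nlms(ref, obs)
```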

16 pages, 3776 KB  
Article
MDA-DETR: Enhancing Offending Animal Detection with Multi-Channel Attention and Multi-Scale Feature Aggregation
by Haiyan Zhang, Huiqi Li, Guodong Sun and Feng Yang
Animals 2025, 15(2), 259; https://doi.org/10.3390/ani15020259 - 17 Jan 2025
Cited by 3 | Viewed by 1509
Abstract
Conflicts between humans and animals in agricultural and settlement areas have recently increased, resulting in significant resource loss and risks to human and animal lives. This growing issue presents a global challenge. This paper addresses the detection and identification of offending animals, particularly in obscured or blurry nighttime images. This article introduces MDA-DETR, a detection model built on Multi-Channel Coordinated Attention and Multi-Dimension Feature Aggregation. It integrates multi-scale features for enhanced detection accuracy, employing a Multi-Channel Coordinated Attention (MCCA) mechanism to incorporate location, semantic, and long-range dependency information and a Multi-Dimension Feature Aggregation Module (DFAM) for cross-scale feature aggregation. Additionally, the VariFocal Loss function is utilized to assign pixel weights, enhancing detail focus and maintaining accuracy. For evaluation, this article uses a dataset from the Northeast China Tiger and Leopard National Park, which includes images of six common offending animal species. In the comprehensive experiments on the dataset, the mAP50 index of MDA-DETR was 1.3%, 0.6%, 0.3%, 3%, 1.1%, and 0.5% higher than RT-DETR-r18, yolov8n, yolov9-C, DETR, Deformable-detr, and DCA-yolov8, respectively, indicating that MDA-DETR is superior to other advanced methods. Full article
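
The VariFocal Loss referenced above is an existing IoU-aware classification loss; a rough sketch of its usual form follows, where positives are weighted by their target quality score and negatives are down-weighted by alpha * p**gamma. The alpha and gamma defaults and the random tensors in the usage lines are placeholders.

```python
import torch
import torch.nn.functional as F

def varifocal_loss(pred_logits: torch.Tensor, target_score: torch.Tensor,
                   alpha: float = 0.75, gamma: float = 2.0) -> torch.Tensor:
    """Varifocal-style loss: positives are weighted by their target (quality)
    score, negatives are down-weighted by alpha * p**gamma."""
    p = pred_logits.sigmoid()
    weight = torch.where(target_score > 0, target_score, alpha * p.detach() ** gamma)
    bce = F.binary_cross_entropy_with_logits(pred_logits, target_score, reduction="none")
    return (weight * bce).mean()

scores = torch.rand(8, 80) * (torch.rand(8, 80) > 0.9)   # sparse positive targets
loss = varifocal_loss(torch.randn(8, 80), scores)
```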
(This article belongs to the Special Issue Animal–Computer Interaction: Advances and Opportunities)

20 pages, 3563 KB  
Article
EDANet: Efficient Dynamic Alignment of Small Target Detection Algorithm
by Gaofeng Zhu, Fenghua Zhu, Zhixue Wang, Shengli Yang and Zheng Li
Electronics 2025, 14(2), 242; https://doi.org/10.3390/electronics14020242 - 8 Jan 2025
Cited by 1 | Viewed by 1094
Abstract
Unmanned aerial vehicles (UAVs) integrated with computer vision technology have emerged as an effective method for information acquisition in various applications. However, due to the small proportion of target pixels and susceptibility to background interference in multi-angle UAV imaging, missed and false detections frequently occur. To address this issue, a small-target detection algorithm, EDANet, is proposed based on YOLOv8. First, the backbone network is replaced by EfficientNet, which can dynamically explore the network size and the image resolution using a scaling factor. Second, the EC2f feature extraction module is designed to achieve unique coding in different directions through parallel branches. The position information is effectively embedded in the channel attention to enhance the spatial representation ability of features. To mitigate the low utilization of small target pixels, we introduce the DTADH detection module, which facilitates feature fusion via a feature-sharing interactive network. Simultaneously, a task alignment predictor assigns classification and localization tasks. In this way, not only is feature utilization optimized, but also the number of parameters is reduced. Finally, leveraging logit and feature knowledge distillation, we employ binary probability mapping of soft labels and a soft label weighting strategy to enhance the algorithm’s learning capabilities in target classification and localization. Experimental validation on the UAV aerial dataset VisDrone2019 demonstrates that EDANet outperforms existing methods, reducing GFLOPs by 39.3% and improving mAP by 4.6%. Full article
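
The distillation step can be pictured with the standard temperature-softened logit recipe; the sketch below is generic Hinton-style knowledge distillation, not EDANet's specific binary probability mapping or soft-label weighting strategy, and the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor,
                 temperature: float = 4.0) -> torch.Tensor:
    """Generic logit distillation: KL divergence between temperature-softened
    teacher and student distributions, rescaled by T**2."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * t * t

loss = distill_loss(torch.randn(4, 10), torch.randn(4, 10))
```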

18 pages, 2220 KB  
Article
AFN-Net: Adaptive Fusion Nucleus Segmentation Network Based on Multi-Level U-Net
by Ming Zhao, Yimin Yang, Bingxue Zhou, Quan Wang and Fu Li
Sensors 2025, 25(2), 300; https://doi.org/10.3390/s25020300 - 7 Jan 2025
Cited by 1 | Viewed by 995
Abstract
The task of nucleus segmentation plays an important role in medical image analysis. However, due to the challenge of detecting small targets and complex boundaries in datasets, traditional methods often fail to achieve satisfactory results. Therefore, a novel nucleus segmentation method based on the U-Net architecture is proposed to overcome this issue. Firstly, we introduce a Weighted Feature Enhancement Unit (WFEU) in the encoder–decoder fusion stage of U-Net. By assigning learnable weights to different feature maps, the network can adaptively enhance key features and suppress irrelevant or secondary features, thus maintaining high-precision segmentation performance in complex backgrounds. In addition, to further improve the performance of the network under different resolution features, we design a Double-Stage Channel Optimization Module (DSCOM) in the first two layers of the model. This DSCOM effectively preserves high-resolution information and improves the segmentation accuracy of small targets and boundary regions through multi-level convolution operations and channel optimization. Finally, we propose an Adaptive Fusion Loss Module (AFLM) that effectively balances different loss terms by dynamically adjusting weights, thereby further improving the model’s performance in segmentation region consistency and boundary accuracy while maintaining classification accuracy. The experimental results on the 2018 Data Science Bowl dataset demonstrate that, compared to state-of-the-art segmentation models, our method shows significant advantages in multiple key metrics. Specifically, our model achieved an IoU score of 0.8660 and a Dice score of 0.9216, with a model parameter size of only 7.81 M. These results illustrate that the method proposed in this paper not only excels in the segmentation of complex shapes and small targets but also significantly enhances overall performance at lower computational costs. This research offers new insights and references for model design in future medical image segmentation tasks. Full article
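
The idea behind the WFEU, assigning learnable weights to feature maps before fusing them, can be reduced to a few lines; the sketch below uses a two-branch softmax-normalized weighting as an illustrative stand-in rather than the paper's actual unit.

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Toy weighted skip fusion: two learnable scalars decide how much of the
    encoder and decoder feature maps enter the fused output."""
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.ones(2))     # one weight per branch

    def forward(self, encoder_feat: torch.Tensor, decoder_feat: torch.Tensor):
        w = torch.softmax(self.w, dim=0)         # keep weights positive, summing to 1
        return w[0] * encoder_feat + w[1] * decoder_feat

fused = WeightedFusion()(torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64))
```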
(This article belongs to the Special Issue Medical Imaging and Sensing Technologies)

19 pages, 9256 KB  
Article
Application of Hybrid Attention Mechanisms in Lithological Classification with Multisource Data: A Case Study from the Altay Orogenic Belt
by Dong Li, Jinlin Wang, Kefa Zhou, Jiantao Bi, Qing Zhang, Wei Wang, Guangjun Qu, Chao Li, Heshun Qiu, Tao Liao, Chong Zhao and Yingpeng Lu
Remote Sens. 2024, 16(21), 3958; https://doi.org/10.3390/rs16213958 - 24 Oct 2024
Viewed by 1053
Abstract
Multisource data fusion technology integrates the strengths of various data sources, addressing the limitations of relying on a single source. Therefore, it has been widely applied in fields such as lithological classification and mineral exploration. However, traditional deep learning algorithms fail to distinguish the importance of different features effectively during fusion, leading to insufficient focus in the model. To address this issue, this paper introduces a ResHA network based on a hybrid attention mechanism to fuse features from ASTER remote sensing images, geochemical data, and DEM data. A case study was conducted in the Altay Orogenic Belt to demonstrate the lithological classification process. This study explored the impact of the submodule order on the hybrid attention mechanism and compared the results with those of MLP, KNN, RF, and SVM models. The experimental results show that (1) the ResHA network with hybrid attention mechanisms assigned reasonable weights to the feature sets, allowing the model to focus on key features closely related to the task. This resulted in a 7.99% improvement in classification accuracy compared with that of traditional models, significantly increasing the precision of lithological classification. (2) The combination of channel attention followed by spatial attention achieved the highest overall accuracy, 98.06%. Full article
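
The channel-then-spatial ordering reported as most accurate follows the familiar CBAM pattern; the sketch below is that generic pattern applied to a 4-D feature map, with the reduction ratio and 7x7 spatial kernel chosen as assumptions.

```python
import torch
import torch.nn as nn

class ChannelThenSpatial(nn.Module):
    """CBAM-style ordering: channel attention first, spatial attention second,
    mirroring the submodule order reported as most accurate."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel(x)                                  # reweight channels
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)     # spatial descriptors
        return x * self.spatial(pooled)                          # reweight locations

y = ChannelThenSpatial(16)(torch.randn(2, 16, 32, 32))
```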

18 pages, 3297 KB  
Article
Computation Offloading Strategy for Detection Task in Railway IoT with Integrated Sensing, Storage, and Computing
by Qichang Guo, Zhanyue Xu, Jiabin Yuan and Yifei Wei
Electronics 2024, 13(15), 2982; https://doi.org/10.3390/electronics13152982 - 29 Jul 2024
Cited by 2 | Viewed by 1301
Abstract
Online detection devices, powered by artificial intelligence technologies, enable the comprehensive and continuous detection of high-speed railways (HSRs). However, the computation-intensive and latency-sensitive nature of these detection tasks often exceeds local processing capabilities. Mobile Edge Computing (MEC) emerges as a key solution in the railway Internet of Things (IoT) scenario to address these challenges. Nevertheless, the rapidly varying channel conditions in HSR scenarios pose significant challenges for efficient resource allocation. In this paper, a computation offloading system model for detection tasks in the railway IoT scenario is proposed. This system includes direct and relay transmission models, incorporating Non-Orthogonal Multiple Access (NOMA) technology. This paper focuses on the offloading strategy for subcarrier assignment, mode selection, relay power allocation, and computing resource management within this system to minimize the average delay ratio (the ratio of delay to the maximum tolerable delay). However, this optimization problem is a complex Mixed-Integer Non-Linear Programming (MINLP) problem. To address this, we present a low-complexity subcarrier allocation algorithm to reduce the dimensionality of decision-making actions. Furthermore, we propose an improved Deep Deterministic Policy Gradient (DDPG) algorithm that represents discrete variables using selection probabilities to handle the hybrid action space problem. Our results indicate that the proposed system model adapts well to the offloading issues of detection tasks in HSR scenarios, and the improved DDPG algorithm efficiently identifies optimal computation offloading strategies. Full article
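
The hybrid action space trick, representing discrete choices through selection probabilities inside a continuous actor output, can be sketched as a simple post-processing step; the split of the actor vector, the greedy selection, and the sigmoid squashing below are illustrative assumptions rather than the paper's improved DDPG.

```python
import numpy as np

def split_hybrid_action(actor_output: np.ndarray, n_choices: int):
    """Map one continuous actor output to a hybrid action: the first n_choices
    entries act as selection probabilities for a discrete decision (e.g. mode or
    subcarrier), the remaining entries become continuous power fractions in (0, 1)."""
    logits, power_raw = actor_output[:n_choices], actor_output[n_choices:]
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    choice = int(probs.argmax())                  # greedy discrete selection
    power = 1.0 / (1.0 + np.exp(-power_raw))      # squash to (0, 1)
    return choice, power

mode, power = split_hybrid_action(np.random.randn(6), n_choices=3)
```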
(This article belongs to the Special Issue Control Systems Design for Connected and Autonomous Vehicles)

17 pages, 287 KB  
Article
Priority-Based Capacity Allocation for Hierarchical Distributors with Limited Production Capacity
by Jun Tong, Xiaotao Zhou and Lei Lei
Mathematics 2024, 12(14), 2237; https://doi.org/10.3390/math12142237 - 18 Jul 2024
Viewed by 1229
Abstract
This paper studies the issue of capacity allocation in multi-rank distribution channel management, a topic that has been largely overlooked in the existing literature. Departing from conventional approaches, hierarchical priority rules are introduced as constraints, and an innovative assignment integer programming model focusing on capacity selection is formulated. This model goes beyond merely optimizing profit or cost, aiming instead to enhance the overall business orientation of the firm. We propose a greedy algorithm and a priority-based binary particle swarm optimization (PB-BPSO) algorithm. Our numerical results indicate that both algorithms exhibit strong optimization capabilities and rapid solution speeds, especially in large-scale scenarios. Moreover, the model is validated through empirical tests using real-world data. The results demonstrate that the proposed approaches can provide actionable strategies to operators in practice. Full article
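
A greedy allocation under hierarchical priority rules can be sketched in a few lines: distributors are served strictly in rank order until the shared capacity is exhausted. The request fields and the single shared capacity pool are assumptions; the paper's integer programming model is considerably richer.

```python
def allocate_capacity(capacity: float, requests: list) -> dict:
    """Greedy sketch: serve distributors in rank order (lower rank value means
    higher priority) until the shared production capacity runs out."""
    allocation, remaining = {}, capacity
    for req in sorted(requests, key=lambda r: r["rank"]):
        granted = min(req["demand"], remaining)
        allocation[req["name"]] = granted
        remaining -= granted
        if remaining <= 0:
            break
    return allocation

print(allocate_capacity(100, [
    {"name": "tier1_a", "rank": 1, "demand": 60},
    {"name": "tier2_b", "rank": 2, "demand": 50},
    {"name": "tier2_c", "rank": 2, "demand": 30},
]))
```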
(This article belongs to the Special Issue Machine Learning Methods and Mathematical Modeling with Applications)

15 pages, 2850 KB  
Article
Residual Spatiotemporal Convolutional Neural Network Based on Multisource Fusion Data for Approaching Precipitation Forecasting
by Tianpeng Zhang, Donghai Wang, Lindong Huang, Yihao Chen and Enguang Li
Atmosphere 2024, 15(6), 628; https://doi.org/10.3390/atmos15060628 - 24 May 2024
Viewed by 1308
Abstract
Approaching precipitation forecast refers to the prediction of precipitation within a short time scale, which is usually regarded as a spatiotemporal sequence prediction problem based on radar echo maps. However, due to its reliance on single-image prediction, it lacks good capture of sudden severe convective events and physical constraints, which may lead to prediction ambiguities and issues such as false alarms and missed alarms. Therefore, this study dynamically combines meteorological elements from surface observations with upper-air reanalysis data to establish complex nonlinear relationships among meteorological variables based on multisource data. We design a Residual Spatiotemporal Convolutional Network (ResSTConvNet) specifically for this purpose. In this model, data fusion is achieved through the channel attention mechanism, which assigns weights to different channels. Feature extraction is conducted through simultaneous three-dimensional and two-dimensional convolution operations using a pure convolutional structure, allowing the learning of spatiotemporal feature information. Finally, feature fitting is accomplished through residual connections, enhancing the model’s predictive capability. Furthermore, we evaluate the performance of our model in 0–3 h forecasting. The results show that compared with baseline methods, this network exhibits significantly better performance in predicting heavy rainfall. Moreover, as the forecast lead time increases, the spatial features of the forecast results from our network are richer than those of other baseline models, leading to more accurate predictions of precipitation intensity and coverage area. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)

14 pages, 3327 KB  
Article
Load Prediction in Double-Channel Residual Self-Attention Temporal Convolutional Network with Weight Adaptive Updating in Cloud Computing
by Jiang Lin and Yepeng Guan
Sensors 2024, 24(10), 3181; https://doi.org/10.3390/s24103181 - 17 May 2024
Viewed by 1215
Abstract
When resource demand increases and decreases rapidly, container clusters in the cloud environment need to adjust the number of containers in a timely manner to ensure service quality. Resource load prediction is a prominent challenge with the widespread adoption of cloud computing. A novel cloud computing load prediction method has been proposed, the Double-channel residual Self-attention Temporal convolutional Network with Weight adaptive updating (DSTNW), in order to make the response of the container cluster more rapid and accurate. A Double-channel Temporal Convolution Network model (DTN) has been developed to capture long-term sequence dependencies and enhance feature extraction capabilities when the model handles long load sequences. Double-channel dilated causal convolution has been adopted to replace the single-channel dilated causal convolution in the DTN. A residual temporal self-attention mechanism (SM) has been proposed to improve the performance of the network and focus on features with significant contributions from the DTN. The DTN and SM jointly constitute a dual-channel residual self-attention temporal convolutional network (DSTN). In addition, by evaluating the accuracy aspects of single and stacked DSTNs, an adaptive weight strategy has been proposed to assign corresponding weights to the single and stacked DSTNs, respectively. The experimental results highlight that the developed method has outstanding prediction performance for cloud computing in comparison with some state-of-the-art methods. The proposed method achieved an average improvement of 24.16% and 30.48% on the Container dataset and Google dataset, respectively. Full article
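
The dilated causal convolution that the double-channel design builds on can be sketched directly; the block below left-pads the sequence so no output depends on future samples and sums two parallel branches to suggest the dual-channel idea. The kernel size, dilation, and residual connection are illustrative choices, not the DSTNW architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedCausalBlock(nn.Module):
    """Dilated causal 1-D convolution: pad on the left only, so the output at
    time t never sees future samples; two parallel branches hint at the
    double-channel idea, and a residual connection closes the block."""
    def __init__(self, channels: int, dilation: int, kernel_size: int = 3):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.branch_a = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.branch_b = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_pad = F.pad(x, (self.pad, 0))                     # causal left padding
        return torch.relu(self.branch_a(x_pad) + self.branch_b(x_pad)) + x

y = DilatedCausalBlock(channels=16, dilation=2)(torch.randn(1, 16, 100))
```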
(This article belongs to the Section Sensor Networks)

24 pages, 6115 KB  
Article
An Intelligent Diagnostic Method for Wear Depth of Sliding Bearings Based on MGCNN
by Jingzhou Dai, Ling Tian and Haotian Chang
Machines 2024, 12(4), 266; https://doi.org/10.3390/machines12040266 - 16 Apr 2024
Cited by 8 | Viewed by 2051
Abstract
Sliding bearings are vital components in modern industry, exerting a crucial influence on equipment performance, with wear being one of their primary failure modes. In addressing the issue of wear diagnosis in sliding bearings, this paper proposes an intelligent diagnostic method based on a multiscale gated convolutional neural network (MGCNN). The proposed method allows for the quantitative inference of the maximum wear depth (MWD) of sliding bearings based on online vibration signals. The constructed model adopts a dual-path parallel structure in both the time and frequency domains to process bearing vibration signals, ensuring the integrity of information transmission through residual network connections. In particular, a multiscale gated convolution (MGC) module is constructed, which utilizes convolutional network layers to extract features from sample sequences. This module incorporates multiple scale channels, including long-term, medium-term, and short-term cycles, to fully extract information from vibration signals. Furthermore, gated units are employed to adaptively assign weights to feature vectors, enabling control of information flow direction. Experimental results demonstrate that the proposed method outperforms the traditional CNN model and shallow machine learning model, offering promising support for equipment condition monitoring and predictive maintenance. Full article

23 pages, 2569 KB  
Article
Explainable Learning-Based Timeout Optimization for Accurate and Efficient Elephant Flow Prediction in SDNs
by Ling Xia Liao, Changqing Zhao, Roy Xiaorong Lai and Han-Chieh Chao
Sensors 2024, 24(3), 963; https://doi.org/10.3390/s24030963 - 1 Feb 2024
Viewed by 1599
Abstract
Accurately and efficiently predicting elephant flows (elephants) is crucial for optimizing network performance and resource utilization. Current prediction approaches for software-defined networks (SDNs) typically rely on complete traffic and statistics moving from switches to controllers. This leads to extra control-channel bandwidth occupation and network delay. To address this issue, this paper proposes a prediction strategy based on incomplete traffic that is sampled by the timeouts for the installation or reactivation of flow entries. The strategy involves assigning a very short hard timeout (Tinitial) to flow entries and then increasing it at a rate of r until flows are identified as elephants or reach the end of their lifespans. Predicted elephants are switched to an idle timeout of 5 s. Logistic regression is used to model elephants based on a complete dataset. Bayesian optimization is then used to tune the trained model’s Tinitial and r over the incomplete dataset. The process of feature selection, model learning, and optimization is explained. An extensive evaluation shows that the proposed approach can achieve over 90% generalization accuracy over 7 different datasets, including campus, backbone, and the Internet of Things (IoT). Elephants can be correctly predicted for about half of their lifetime. The proposed approach can significantly reduce the controller–switch interaction in campus and IoT networks, although packet completion approaches may need to be applied in networks with a short mean packet inter-arrival time. Full article
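
The escalating-timeout sampling logic can be expressed as a tiny decision function: grow the hard timeout geometrically until a flow is classified as an elephant, then pin it to the 5 s idle timeout. The byte-count threshold used as the elephant test here is a placeholder; the paper classifies flows with a tuned logistic regression model instead.

```python
def next_hard_timeout(current: float, r: float, bytes_seen: int,
                      elephant_threshold: int = 10_000_000,
                      idle_timeout: float = 5.0):
    """Escalating-timeout sampling sketch: grow the hard timeout by a factor r
    each round until the flow looks like an elephant, then pin it to the idle
    timeout. The byte threshold stands in for the paper's logistic model."""
    if bytes_seen >= elephant_threshold:
        return ("idle", idle_timeout)        # predicted elephant
    return ("hard", current * r)             # keep sampling with a longer hard timeout

print(next_hard_timeout(current=0.1, r=2.0, bytes_seen=120_000))
```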
