
Search Results (42)

Search Parameters:
Keywords = end-to-cloud coordinated

31 pages, 4220 KB  
Article
A Novel Multi-Server Federated Learning Framework in Vehicular Edge Computing
by Fateme Mazloomi, Shahram Shah Heydari and Khalil El-Khatib
Future Internet 2025, 17(7), 315; https://doi.org/10.3390/fi17070315 - 19 Jul 2025
Viewed by 477
Abstract
Federated learning (FL) has emerged as a powerful approach for privacy-preserving model training in autonomous vehicle networks, where real-world deployments rely on multiple roadside units (RSUs) serving heterogeneous clients with intermittent connectivity. While most research focuses on single-server or hierarchical cloud-based FL, multi-server FL can alleviate the communication bottlenecks of traditional setups. To this end, we propose an edge-based, multi-server FL (MS-FL) framework that combines performance-driven aggregation at each server—including statistical weighting of peer updates and outlier mitigation—with an application layer handover protocol that preserves model updates when vehicles move between RSU coverage areas. We evaluate MS-FL on both MNIST and GTSRB benchmarks under shard- and Dirichlet-based non-IID splits, comparing it against single-server FL and a two-layer edge-plus-cloud baseline. Over multiple communication rounds, MS-FL with the Statistical Performance-Aware Aggregation method and Dynamic Weighted Averaging Aggregation achieved up to a 20-percentage-point improvement in accuracy and consistent gains in precision, recall, and F1-score (95% confidence), while matching the low latency of edge-only schemes and avoiding the extra model transfer delays of cloud-based aggregation. These results demonstrate that coordinated cooperation among servers based on model quality and seamless handovers can accelerate convergence, mitigate data heterogeneity, and deliver robust, privacy-aware learning in connected vehicle environments. Full article
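
The server-side aggregation idea described above can be sketched in miniature. The rule below (quality-weighted averaging of peer updates plus a median-distance outlier filter) is an illustrative assumption, not the paper's exact Statistical Performance-Aware Aggregation method; all names are hypothetical:

```python
from statistics import median

def aggregate_updates(updates, scores, z=2.0):
    """Quality-weighted averaging of client updates with outlier mitigation.

    updates: one model-parameter vector (list of floats) per client
    scores:  per-client quality scores, e.g. validation accuracy
    z:       cutoff -- drop updates far from the coordinate-wise median
    """
    dim = len(updates[0])
    med = [median(u[i] for u in updates) for i in range(dim)]
    # Distance of each update from the median update (outlier screening)
    dists = [sum((u[i] - med[i]) ** 2 for i in range(dim)) ** 0.5 for u in updates]
    cutoff = z * (median(dists) or 1.0)
    kept = [(u, s) for u, s, d in zip(updates, scores, dists) if d <= cutoff]
    total = sum(s for _, s in kept)
    # Performance-weighted average over the surviving updates
    return [sum(u[i] * s for u, s in kept) / total for i in range(dim)]

clients = [[1.0, 2.0], [1.2, 1.8], [9.0, -5.0]]  # third update is corrupted
print(aggregate_updates(clients, [0.9, 0.8, 0.1]))
```

Here the corrupted third update is screened out before the accuracy-weighted average is taken, which is the intuition behind aggregating "based on model quality".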

25 pages, 2908 KB  
Article
Secure and Scalable File Encryption for Cloud Systems via Distributed Integration of Quantum and Classical Cryptography
by Changjong Kim, Seunghwan Kim, Kiwook Sohn, Yongseok Son, Manish Kumar and Sunggon Kim
Appl. Sci. 2025, 15(14), 7782; https://doi.org/10.3390/app15147782 - 11 Jul 2025
Viewed by 649
Abstract
We propose a secure and scalable file-encryption scheme for cloud systems by integrating Post-Quantum Cryptography (PQC), Quantum Key Distribution (QKD), and Advanced Encryption Standard (AES) within a distributed architecture. While prior studies have primarily focused on secure key exchange or authentication protocols (e.g., layered PQC-QKD key distribution), our scheme extends beyond key management by implementing a distributed encryption architecture that protects large-scale files through integrated PQC, QKD, and AES. To support high-throughput encryption, our proposed scheme partitions the target file into fixed-size subsets and distributes them across slave nodes, each performing parallel AES encryption using a locally reconstructed key from a PQC ciphertext. Each slave node receives a PQC ciphertext that encapsulates the AES key, along with a PQC secret key masked using QKD based on the BB84 protocol, both of which are centrally generated and managed by the master node for secure coordination. In addition, an encryption and transmission pipeline is designed to overlap I/O, encryption, and communication, thereby reducing idle time and improving resource utilization. The master node performs centralized decryption by collecting encrypted subsets, recovering the AES key, and executing decryption in parallel. Our evaluation using a real-world medical dataset shows that the proposed scheme achieves up to 2.37× speedup in end-to-end runtime and up to 8.11× speedup in encryption time over AES (Original). In addition to performance gains, our proposed scheme maintains low communication cost, stable CPU utilization across distributed nodes, and negligible overhead from quantum key management. Full article
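
The shape of the partition-and-fan-out pipeline can be sketched as follows. Python's standard library has no AES, so a SHA-256 XOR keystream stands in for the cipher, and the "QKD pad" is an ordinary byte string; this is a demonstration of the data flow only (master masks the key, slave nodes encrypt fixed-size subsets in parallel), not of the paper's PQC/QKD machinery:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def xor_stream(chunk: bytes, key: bytes) -> bytes:
    """Stand-in for AES: XOR with a SHA-256 counter keystream (demo only;
    a real deployment would use AES with per-subset IVs/counters)."""
    stream = b""
    ctr = 0
    while len(stream) < len(chunk):
        stream += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(c ^ s for c, s in zip(chunk, stream))

def mask_key(secret: bytes, qkd_pad: bytes) -> bytes:
    """QKD-style masking: XOR the secret key with the shared quantum pad."""
    return bytes(a ^ b for a, b in zip(secret, qkd_pad))

def encrypt_distributed(data: bytes, key: bytes, subset_size: int = 8):
    """Partition into fixed-size subsets and encrypt them on parallel 'slave nodes'."""
    subsets = [data[i:i + subset_size] for i in range(0, len(data), subset_size)]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda s: xor_stream(s, key), subsets))

# Master node: mask the data key with the (pretend) QKD pad, then fan work out
pad = hashlib.sha256(b"bb84-session").digest()[:16]
data_key = mask_key(b"master-data-key!", pad)
blocks = encrypt_distributed(b"patient-record-0001: glucose 5.4 mmol/L", data_key)
plaintext = b"".join(xor_stream(b, data_key) for b in blocks)  # XOR is self-inverse
```

The master-side decryption mirrors encryption because the XOR stand-in is its own inverse; with real AES the master would decrypt each collected subset with the recovered key.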
(This article belongs to the Special Issue AI-Enabled Next-Generation Computing and Its Applications)

26 pages, 5672 KB  
Review
Development Status and Trend of Mine Intelligent Mining Technology
by Zhuo Wang, Lin Bi, Jinbo Li, Zhaohao Wu and Ziyu Zhao
Mathematics 2025, 13(13), 2217; https://doi.org/10.3390/math13132217 - 7 Jul 2025
Cited by 1 | Viewed by 1355
Abstract
Intelligent mining technology, as the core driving force for the digital transformation of the mining industry, integrates cyber-physical systems, artificial intelligence, and industrial internet technologies to establish a “cloud–edge–end” collaborative system. This paper systematically reviews the development trajectory of intelligent mining technology, which has progressed through four stages: stand-alone automation, integrated automation and informatization, early digitalization and intelligentization, and comprehensive intelligence. The current development status of “cloud–edge–end” technologies is then reviewed: (i) the end layer achieves environmental state monitoring and precise control through a multi-source sensing network and intelligent equipment; (ii) the edge layer leverages 5G and edge computing to accomplish real-time data processing, 3D dynamic modeling, and safety early warning; (iii) the cloud layer realizes digital planning and intelligent decision-making based on the industrial internet platform. The three layers cooperate to form a closed “perception–analysis–decision–execution” loop. Significant challenges remain, including the lack of a standardization system, bottlenecks in multi-source heterogeneous data fusion, weak cross-process coordination of equipment, and a shortage of interdisciplinary talent. Accordingly, this paper discusses future development trends from four aspects, providing systematic solutions for safe, efficient, and sustainable mining operations. Technological evolution will accelerate the formation of an intelligent ecosystem characterized by standards-driven development, data empowerment, equipment autonomy, and human–machine collaboration. Full article
(This article belongs to the Special Issue Mathematical Modeling and Analysis in Mining Engineering)

15 pages, 2944 KB  
Article
Fruit Orchard Canopy Recognition and Extraction of Characteristics Based on Millimeter-Wave Radar
by Yinlong Jiang, Jieli Duan, Yang Li, Jiaxiang Yu, Zhou Yang and Xing Xu
Agriculture 2025, 15(13), 1342; https://doi.org/10.3390/agriculture15131342 - 22 Jun 2025
Viewed by 492
Abstract
Fruit orchard canopy recognition and characteristic extraction are key problems in orchard precision production. To this end, we built a fruit tree canopy detection platform based on millimeter-wave radar, verified the feasibility of millimeter-wave radar from the two perspectives of canopy recognition and canopy characteristic extraction, and explored its detection accuracy under spray conditions. For canopy recognition, an adaptive ellipsoid-model clustering algorithm with variable axes (E-DBSCAN), based on the DBSCAN algorithm, was proposed, and its feasibility was verified in a real orchard operation scene. The results show that the proposed algorithm achieved an F1 score of 96.7%, a precision of 93.5%, and a recall of 95.1%, effectively improving the recognition accuracy of the classical DBSCAN algorithm in multi-density point cloud clustering. For the extraction of canopy characteristics, the RANSAC algorithm and the coordinate method were used to extract crown width and plant height, respectively, and a point-cloud-density-adaptive Alpha_shape algorithm was proposed to extract volume. The number of point clouds and the crown width, plant height, and volume values under spray and normal conditions were compared and analyzed. The average relative errors of crown width, plant height, and volume were 2.1%, 2.3%, and 4.2%, respectively, indicating that spraying has little effect on the extraction of canopy characteristics by millimeter-wave radar, which could inform spray-related decisions for precision applications. Full article
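
The core idea of replacing DBSCAN's spherical eps-neighborhood with an ellipsoid (so that vertically elongated canopies cluster correctly) can be sketched as below. The neighborhood test and parameter names are illustrative assumptions, not the E-DBSCAN algorithm's adaptive axis rule:

```python
def ellipsoid_neighbors(pts, i, axes):
    """Indices of points inside an axis-aligned ellipsoid centred at pts[i]."""
    cx, cy, cz = pts[i]
    ax, ay, az = axes  # semi-axis lengths; a tall az follows elongated canopies
    return [j for j, (x, y, z) in enumerate(pts)
            if ((x - cx) / ax) ** 2 + ((y - cy) / ay) ** 2 + ((z - cz) / az) ** 2 <= 1.0]

def dbscan_ellipsoid(pts, axes, min_pts=3):
    """Classic DBSCAN flow, with the spherical eps-ball replaced by an ellipsoid."""
    UNSEEN, NOISE = -2, -1
    labels = [UNSEEN] * len(pts)
    cluster = 0
    for i in range(len(pts)):
        if labels[i] != UNSEEN:
            continue
        seeds = ellipsoid_neighbors(pts, i, axes)
        if len(seeds) < min_pts:
            labels[i] = NOISE
            continue
        labels[i] = cluster
        queue = list(seeds)
        while queue:
            j = queue.pop()
            if labels[j] == NOISE:
                labels[j] = cluster          # noise reached from a core point -> border
            if labels[j] != UNSEEN:
                continue
            labels[j] = cluster
            nbrs = ellipsoid_neighbors(pts, j, axes)
            if len(nbrs) >= min_pts:         # j is itself a core point: keep expanding
                queue.extend(nbrs)
        cluster += 1
    return labels

# Two vertically elongated "canopies" five metres apart
pts = [(0, 0, 0), (0.2, 0, 1), (0, 0.1, 2), (5, 0, 0), (5.1, 0, 1), (5, 0, 2)]
labels = dbscan_ellipsoid(pts, axes=(1.0, 1.0, 3.0))
```

With a spherical neighborhood of radius 1 each column of points would fragment; the stretched vertical axis keeps each canopy in one cluster.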
(This article belongs to the Special Issue Agricultural Machinery and Technology for Fruit Orchard Management)

44 pages, 823 KB  
Review
A Systematic Literature Review of DDS Middleware in Robotic Systems
by Muhammad Liman Gambo, Abubakar Danasabe, Basem Almadani, Farouq Aliyu, Abdulrahman Aliyu and Esam Al-Nahari
Robotics 2025, 14(5), 63; https://doi.org/10.3390/robotics14050063 - 14 May 2025
Cited by 1 | Viewed by 3560
Abstract
The increasing demand for automation has led to the complexity of the design and operation of robotic systems. This paper presents a systematic literature review (SLR) focused on the applications and challenges of Data Distribution Service (DDS)-based middleware in robotics from 2006 to 2024. We explore the pivotal role of DDS in facilitating efficient communication across heterogeneous robotic systems, enabling seamless integration of actuators, sensors, and computational elements. Our review identifies key applications of DDS in various robotic domains, including multi-robot coordination, real-time data processing, and cloud–edge–end fusion architectures, which collectively enhance the performance and scalability of robotic operations. Furthermore, we identify several challenges associated with implementing DDS in robotic systems, such as security vulnerabilities, performance and scalability requirements, and the complexities of real-time data transmission. By analyzing recent advancements and case studies, we provide insights into the potential of DDS to overcome these challenges while ensuring robust and reliable communication in dynamic environments. This paper aims to contribute to the transformative impact of DDS-based middleware in robotics, offering a comprehensive overview of its benefits, applications, and security implications. Our findings underscore the necessity for continued research and development in this area, paving the way for more resilient and intelligent robotic systems that operate effectively in real-world scenarios. This review not only fills existing gaps in the literature but also serves as a foundational resource for researchers and practitioners seeking to leverage DDS in the design and implementation of next-generation robotic solutions. Full article
(This article belongs to the Special Issue Innovations in the Internet of Robotic Things (IoRT))

21 pages, 2923 KB  
Article
Multi-Scale Classification and Contrastive Regularization: Weakly Supervised Large-Scale 3D Point Cloud Semantic Segmentation
by Jingyi Wang, Jingyang He, Yu Liu, Chen Chen, Maojun Zhang and Hanlin Tan
Remote Sens. 2024, 16(17), 3319; https://doi.org/10.3390/rs16173319 - 7 Sep 2024
Viewed by 1837
Abstract
With the proliferation of large-scale 3D point cloud datasets, the high cost of per-point annotation has spurred the development of weakly supervised semantic segmentation methods. Current research mainly focuses on single-scale classification, which fails to address the significant differences in feature scale between background and objects in large scenes. We therefore propose MCCR (Multi-scale Classification and Contrastive Regularization), an end-to-end semantic segmentation framework for large-scale 3D scenes under weak supervision. MCCR first aggregates features and applies random downsampling to the input data. It then captures the local features of a random point from multi-layer features and the input coordinates, feeds these features into the network to obtain initial and final predictions, and iteratively trains the model with strategies such as contrastive learning. Notably, MCCR combines multi-scale classification with contrastive regularization to fully exploit multi-scale features and weakly labeled information. We investigate both point-level and local contrastive regularization to leverage point cloud augmentation and local semantic information, and introduce a Decoupling Layer to guide loss optimization in different spaces. Results on three popular large-scale datasets, S3DIS, SemanticKITTI, and SensatUrban, demonstrate that our model achieves state-of-the-art (SOTA) performance on large-scale outdoor datasets with only 0.1% of points labeled for supervision, while maintaining strong performance on indoor datasets. Full article
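
Point-level contrastive regularization of the kind described (pull a point's feature toward its augmented counterpart, push it away from other points) is commonly written as an InfoNCE-style loss. The sketch below is a generic illustration of that loss, not MCCR's specific formulation; the temperature value and vectors are arbitrary:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def info_nce(anchor, positive, negatives, tau=0.1):
    """Contrastive loss: reward similarity between the anchor and its positive
    (e.g. the same point after augmentation) relative to the negatives."""
    pos = math.exp(cosine(anchor, positive) / tau)
    neg = sum(math.exp(cosine(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))

# A well-aligned positive yields a much smaller loss than a mismatched one
good = info_nce([1.0, 0.0], [0.95, 0.05], [[0.0, 1.0], [-1.0, 0.0]])
bad = info_nce([1.0, 0.0], [0.0, 1.0], [[0.9, 0.1], [-1.0, 0.0]])
```

Minimizing this loss over many anchors is what makes augmented views of the same point agree while unrelated points separate, which is the regularization signal weak labels alone cannot provide.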

25 pages, 3120 KB  
Article
A New Efficient Ship Detection Method Based on Remote Sensing Images by Device–Cloud Collaboration
by Tao Liu, Yun Ye, Zhengling Lei, Yuchi Huo, Xiaocai Zhang, Fang Wang, Mei Sha and Huafeng Wu
J. Mar. Sci. Eng. 2024, 12(8), 1422; https://doi.org/10.3390/jmse12081422 - 17 Aug 2024
Cited by 1 | Viewed by 1539
Abstract
Fast and accurate detection of ship objects in remote sensing images must overcome two critical problems: the complex content of remote sensing images and the large number of small objects, both of which reduce ship detection efficiency. In addition, most existing deep learning-based object detection models require vast amounts of computation for training and prediction, making them difficult to deploy on mobile devices. This paper focuses on an efficient and lightweight ship detection model. A new efficient ship detection model based on device–cloud collaboration is proposed, which achieves joint optimization by fusing a semantic segmentation module and an object detection module. We migrate model training, image storage, and semantic segmentation, which require substantial computational power, to the cloud. For the front end, we design a mask-based detection module that skips computation over nonwater regions and reduces the generation and postprocessing time of candidate bounding boxes. In addition, a coordinate attention module and the confluence algorithm are introduced to better adapt to environments with dense small objects and substantial occlusion. Experimental results show that our device–cloud collaborative approach reduces computational effort while improving detection speed by 42.6%, and it also outperforms other methods in detection accuracy and parameter count. Full article
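
The front-end masking step can be illustrated with a minimal sketch: given a cloud-produced binary water mask, the device only runs detection on tiles with enough water coverage. Tile size, threshold, and function names are illustrative assumptions, not the paper's implementation:

```python
def tiles_to_process(water_mask, tile=2, min_water=0.5):
    """Tile the scene and keep only tiles whose water fraction reaches min_water;
    detection (and candidate-box postprocessing) is skipped everywhere else."""
    h, w = len(water_mask), len(water_mask[0])
    keep = []
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            cells = [water_mask[i][j]
                     for i in range(r, min(r + tile, h))
                     for j in range(c, min(c + tile, w))]
            if sum(cells) / len(cells) >= min_water:
                keep.append((r, c))
    return keep

# 4x4 scene: left half water (1), right half land (0) -> half the tiles survive
mask = [[1, 1, 0, 0] for _ in range(4)]
kept = tiles_to_process(mask)
```

The saving comes directly from the ratio of nonwater area skipped, which is why masking helps most in coastal scenes dominated by land.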

13 pages, 1571 KB  
Article
R-PointNet: Robust 3D Object Recognition Network for Real-World Point Clouds Corruption
by Zhongyuan Zhang, Lichen Lin and Xiaoli Zhi
Appl. Sci. 2024, 14(9), 3649; https://doi.org/10.3390/app14093649 - 25 Apr 2024
Cited by 2 | Viewed by 2198
Abstract
Point clouds obtained with 3D scanners in realistic scenes inevitably contain corruption, including noise and outliers. Traditional algorithms for cleaning point cloud corruption require the selection of appropriate parameters based on the characteristics of the scene, data, and algorithm, so their performance depends heavily on operator experience and on how well the algorithm fits the application. Three-dimensional object recognition networks for real-world recognition tasks can take the raw point cloud as input and output recognition results directly. Current 3D object recognition networks generally acquire uniform sampling points by farthest point sampling (FPS) to extract features. However, defective points sampled by FPS lower recognition accuracy by degrading the aggregated global feature. To address this issue, we design a compensation module, named offset-adjustment (OA), which adaptively adjusts the coordinates of sampled defective points based on their neighbors and improves local feature extraction to enhance network robustness. Furthermore, we employ the OA module to build an end-to-end network based on the PointNet++ framework for robust point cloud recognition, named R-PointNet. Experiments show that R-PointNet reaches state-of-the-art performance with 92.5% recognition accuracy on ModelNet40 and significantly outperforms previous networks by 3–7.7% on the corruption dataset ModelNet40-C, a robustness benchmark. Full article
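
Farthest point sampling, the step whose defective samples the OA module compensates for, is itself a standard greedy procedure and can be sketched directly (the OA neighbor-based adjustment is not reproduced here):

```python
def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def farthest_point_sampling(pts, k, start=0):
    """Greedy FPS: repeatedly add the point farthest from the set chosen so far."""
    chosen = [start]
    d2 = [dist2(p, pts[start]) for p in pts]   # distance to nearest chosen point
    for _ in range(k - 1):
        nxt = max(range(len(pts)), key=d2.__getitem__)
        chosen.append(nxt)
        d2 = [min(d, dist2(p, pts[nxt])) for d, p in zip(d2, pts)]
    return chosen

pts = [(0, 0), (0.1, 0), (10, 0), (10.1, 0), (5, 5)]
sample = farthest_point_sampling(pts, 3)
```

Note that if an outlier sits far from the object, FPS will greedily select it, which is exactly how corrupted points end up among the sampled set and degrade the global feature.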
(This article belongs to the Special Issue Advanced 2D/3D Computer Vision Technology and Applications)

15 pages, 9853 KB  
Article
Trajectory Planning of Shape-Following Laser Cleaning Robot for the Aircraft Radar Radome Coating
by Zhen Zeng, Chengzhao Jiang, Shanting Ding, Qinyang Li, Zhongsheng Zhai and Daizhe Chen
Appl. Sci. 2024, 14(3), 1163; https://doi.org/10.3390/app14031163 - 30 Jan 2024
Cited by 2 | Viewed by 1807
Abstract
At present, aircraft radome coating cleaning mainly relies on manual and chemical methods. In view of this situation, this study presents a trajectory planning method based on a three-dimensional (3D) surface point cloud for a laser-enabled coating cleaning robot. An automated trajectory planning scheme is proposed that uses 3D laser scanning to acquire point cloud data, avoiding dependence on the traditional teaching–playback paradigm. A principal component analysis (PCA) algorithm incorporating additional principal-direction determination for point cloud alignment is introduced to facilitate subsequent point cloud segmentation; it can efficiently and conveniently adjust the coordinate system to align with the desired segmentation direction. After preprocessing and coordinate system adjustment of the point cloud, a projection-based point cloud segmentation technique is proposed, enabling slicing of the point cloud model and extraction of cleaning target positions from each slice. Subsequently, the normal vectors of the cleaning positions are estimated, and trajectory points are offset along these vectors to determine the end effector's orientation. Finally, B-spline curve fitting and layered smooth connection methods are employed to generate the cleaning path. Experimental results demonstrate that the proposed method offers efficient and precise trajectory planning for laser cleaning of aircraft radar radome coatings and avoids the need for a prior teaching process, thereby enhancing the automation level of coating cleaning tasks. Full article
(This article belongs to the Special Issue Advances in Robot Path Planning, Volume II)

23 pages, 15746 KB  
Article
IMUC: Edge–End–Cloud Integrated Multi-Unmanned System Payload Management and Computing Platform
by Jie Tang, Ruofei Zhong, Ruizhuo Zhang and Yan Zhang
Drones 2024, 8(1), 19; https://doi.org/10.3390/drones8010019 - 12 Jan 2024
Cited by 1 | Viewed by 2727
Abstract
Multi-unmanned systems are primarily composed of unmanned vehicles, drones, and multi-legged robots, among other unmanned robotic devices. By integrating and coordinating the operation of these robotic devices, it is possible to achieve collaborative multitasking and autonomous operations in various environments. In the field of surveying and mapping, the traditional single-type unmanned device data collection mode is no longer sufficient to meet the data acquisition tasks in complex spatial scenarios (such as low-altitude, surface, indoor, underground, etc.). Faced with the data collection requirements in complex spaces, employing different types of robots for collaborative operations is an important means to improve operational efficiency. Additionally, the limited computational and storage capabilities of unmanned systems themselves pose significant challenges to multi-unmanned systems. Therefore, this paper designs an edge–end–cloud integrated multi-unmanned system payload management and computing platform (IMUC) that combines edge, end, and cloud computing. By utilizing the immense computational power and storage resources of the cloud, the platform enables cloud-based online task management and data acquisition visualization for multi-unmanned systems. The platform addresses the high complexity of task execution in various scenarios by considering factors such as space, time, and task completion. It performs data collection tasks at the end terminal, optimizes processing at the edge, and finally transmits the data to the cloud for visualization. The platform seamlessly integrates edge computing, terminal devices, and cloud resources, achieving efficient resource utilization and distributed execution of computing tasks. Test results demonstrate that the platform can successfully complete the entire process of payload management and computation for multi-unmanned systems in complex scenarios. The platform exhibits low response time and produces normal routing results, greatly enhancing operational efficiency in the field. These test results validate the practicality and reliability of the platform, providing a new approach for efficient operations of multi-unmanned systems in surveying and mapping requirements, combining cloud computing with the construction of smart cities. Full article
(This article belongs to the Special Issue Resilient UAV Autonomy and Remote Sensing)

16 pages, 3803 KB  
Article
Robust Point Cloud Registration Network for Complex Conditions
by Ruidong Hao, Zhongwei Wei, Xu He, Kaifeng Zhu, Jiawei He, Jun Wang, Muyu Li, Lei Zhang, Zhuang Lv, Xin Zhang and Qiwen Zhang
Sensors 2023, 23(24), 9837; https://doi.org/10.3390/s23249837 - 15 Dec 2023
Cited by 5 | Viewed by 2737
Abstract
Point cloud registration is widely used in autonomous driving, SLAM, and 3D reconstruction; it aims to align point clouds from different viewpoints or poses under the same coordinate system. However, point cloud registration is challenging in complex situations, such as a large initial pose difference, high noise, or incomplete overlap, which can cause registration failure or mismatching. To address the shortcomings of existing registration algorithms, this paper designs CCRNet, a new two-stage coarse-to-fine point cloud registration network that performs the registration task end to end. The multi-scale feature extraction module, coarse registration prediction module, and fine registration prediction module designed in this paper can robustly and accurately register two point clouds without iteration. CCRNet links the feature information between two point clouds and handles high noise and incomplete overlap by using a soft correspondence matrix. On the standard ModelNet40 dataset, in cases of large initial pose difference, high noise, and incomplete overlap, the accuracy of our method improved on the MAE over the second-best popular registration algorithm by 7.0%, 7.8%, and 22.7%, respectively. Experiments showed that our CCRNet method has advantages in registration results under a variety of complex conditions. Full article
(This article belongs to the Section Sensing and Imaging)

19 pages, 6118 KB  
Article
Precise Short-Term Small-Area Sunshine Forecasting for Optimal Seedbed Scheduling in Plant Factories
by Liang Gong, Fei Huang, Wei Zhang, Yanming Li and Chengliang Liu
Agriculture 2023, 13(9), 1790; https://doi.org/10.3390/agriculture13091790 - 9 Sep 2023
Viewed by 1746
Abstract
Photosynthesis is one of the key issues for vertical cultivation in plant factories, and efficient natural sunlight utilization requires predicting the light falling on each seedbed in a real-time manner. However, public weather services neither provide sunshine data nor meet the spatial resolution requirement. Facing these short-term and small-area weather forecasting challenges, we propose a cross-scale approach to infer seedbed-sized areas of sunshine from city-level public weather services, and then design a seedbed rotation scheduling system for optimal natural sunlight utilization. First, an end-edge-cloud coordinated computing architecture was employed to concurrently aggregate multi-scale data, from weather satellites down to sunshine sensors in the plant factory. Second, because the sunshine over a small area deterministically depends on the meteorological data in a fixed environment, this correlation was described by a hybrid mapping model that combines the long short-term memory (LSTM) and gradient boosting decision tree (GBDT) algorithms to form the LSTM-GBDT hybrid prediction algorithm (LGHPA). By training the LGHPA on historical local sunshine measurements and city-scale meteorological data, the hourly sunshine on a seedbed can be predicted from the public weather forecasting service. Finally, a dynamic seedbed scheduling scheme was constructed to provide uniform solar energy absorption according to the one-hour-ahead radiation estimate. Experimental results show that the hourly sunshine prediction error was less than 18.44% over a seasonal period and that the deviation in solar absorption across seedbeds with rotation capability was less than 7.1%. Consequently, it was demonstrated that short-term, small-area sunshine forecasting improves the performance of seedbed rotation for uniformly absorbed solar radiation. The proposed method verifies the feasibility of precisely predicting small-area sunshine down to the seedbed scale by leveraging a model-based approach and a merged cloud-edge-end computing paradigm. Full article
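
The scheduling idea (rotate seedbeds so cumulative solar absorption stays uniform, given an hour-ahead forecast per position) can be sketched with a simple greedy rule. This rule, and all names in it, are illustrative assumptions rather than the paper's dynamic scheduling scheme:

```python
def schedule_rotation(hourly_forecast):
    """Each hour, assign the most light-deprived seedbed to the sunniest position."""
    n = len(hourly_forecast[0])     # number of positions == number of seedbeds
    absorbed = [0.0] * n            # cumulative radiation received by each seedbed
    plan = []                       # per-hour mapping: seedbed -> position
    for light in hourly_forecast:   # light[p] = forecast radiation at position p
        pos_order = sorted(range(n), key=lambda p: -light[p])     # sunniest first
        bed_order = sorted(range(n), key=lambda b: absorbed[b])   # most deprived first
        hour_plan = {}
        for pos, bed in zip(pos_order, bed_order):
            hour_plan[bed] = pos
            absorbed[bed] += light[pos]
        plan.append(hour_plan)
    return plan, absorbed

# Two positions where position 0 is always sunnier; rotation evens things out
plan, absorbed = schedule_rotation([[3.0, 1.0], [3.0, 1.0]])
```

After two hours both seedbeds have absorbed the same total, which is the uniformity objective the rotation mechanism serves.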

20 pages, 13848 KB  
Article
Learning Contours for Point Cloud Completion
by Jiabo Xu, Zeyun Wan and Jingbo Wei
Remote Sens. 2023, 15(17), 4338; https://doi.org/10.3390/rs15174338 - 3 Sep 2023
Cited by 2 | Viewed by 2448
Abstract
The integrity of a point cloud frequently suffers from discontinuous material surfaces or coarse sensor resolutions. Existing methods focus on reconstructing the overall structure, but salient points and small irregular surfaces are difficult to predict. To address this issue, we propose a new end-to-end neural network for point cloud completion. To avoid non-uniform point density, regular voxel centers are selected as reference points. The encoder and decoder are designed with Patchify, transformers, and multilayer perceptrons. An implicit classifier is incorporated in the decoder to mark the valid voxels that are allowed to diffuse after vacant grids are removed from completion. With a newly designed loss function, the classifier is trained to learn the contours, which helps to identify grids that are difficult to judge for diffusion. The effectiveness of the proposed model is validated in experiments on the indoor ShapeNet dataset, the outdoor KITTI dataset, and an airborne laser dataset against state-of-the-art methods, which show that our method predicts more accurate point coordinates with rich details and uniform point distributions. Full article
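
The "regular voxel centers as reference points" step can be illustrated in a few lines: snapping points to the centers of the voxels they occupy gives reference points at a fixed spacing regardless of how densely each region was scanned. The voxel size and function name are illustrative assumptions:

```python
import math

def voxel_centers(points, voxel=1.0):
    """Snap points to the centers of occupied voxels, so densely and sparsely
    scanned regions both contribute reference points at the same spacing."""
    occupied = {(math.floor(x / voxel), math.floor(y / voxel), math.floor(z / voxel))
                for x, y, z in points}
    return sorted(((i + 0.5) * voxel, (j + 0.5) * voxel, (k + 0.5) * voxel)
                  for i, j, k in occupied)

dense = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (0.4, 0.3, 0.1)]  # three hits, one voxel
sparse = [(1.5, 0.5, 0.5)]                                   # one hit, one voxel
centers = voxel_centers(dense + sparse)
```

Three clustered samples and one isolated sample each yield exactly one reference point, which is the density equalization the network relies on.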

20 pages, 6683 KB  
Article
Survey of Point Cloud Registration Methods and New Statistical Approach
by Jaroslav Marek and Pavel Chmelař
Mathematics 2023, 11(16), 3564; https://doi.org/10.3390/math11163564 - 17 Aug 2023
Cited by 5 | Viewed by 2891
Abstract
The use of a 3D range scanning device for autonomous object description or unknown environment mapping leads to the necessity of improving computer methods based on identical point pairs from different point clouds (the so-called registration problem). The registration problem and the three-dimensional transformation of coordinates still require further research. The paper attempts to guide readers through the vast field of existing registration methods so that they can choose the appropriate approach for their particular problem. Furthermore, the article contains a regression method that enables the estimation of the covariance matrix of the transformation parameters and the calculation of the uncertainty of the estimated points. This makes it possible to extend existing registration methods with uncertainty estimates and to improve knowledge about the performed registration. The paper’s primary purpose is to present a survey of known methods and basic estimation theory concepts for the point cloud registration problem. The focus is on the following approaches: the ICP algorithm; the Normal Distribution Transform; feature-based registration; iterative dual correspondences; the probabilistic iterative correspondence method; point-based registration; quadratic patches; likelihood-field matching; conditional random fields; branch-and-bound registration; and PointReg. The secondary purpose of this article is to present an innovative statistical model for this transformation problem. The new theory requires known covariance matrices of the identical point coordinates. The unknown rotation matrix and shift vector are estimated using a nonlinear regression model with nonlinear constraints. The paper ends with a relevant numerical example. Full article
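The core estimation problem the survey covers, recovering a rotation and shift vector from identical point pairs, has a classic closed-form least-squares solution. The sketch below shows it in 2D for brevity (the 3D analogue is the SVD-based Kabsch/Umeyama solution); it is a generic illustration, not the paper's constrained nonlinear regression model, and the function name is an assumption.

```python
import math

def register_2d(src, dst):
    """Least-squares estimate of the rotation angle and shift vector
    aligning paired 2D points: dst_i ≈ R(theta) @ src_i + t."""
    n = len(src)
    cx_s = sum(p[0] for p in src) / n; cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n; cy_d = sum(p[1] for p in dst) / n
    # Cross- and dot-products of the centered pairs give the optimal angle.
    num = den = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs -= cx_s; ys -= cy_s; xd -= cx_d; yd -= cy_d
        num += xs * yd - ys * xd
        den += xs * xd + ys * yd
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    # Shift vector: destination centroid minus the rotated source centroid.
    t = (cx_d - (c * cx_s - s * cy_s), cy_d - (s * cx_s + c * cy_s))
    return theta, t
```

The statistical extension the paper proposes would additionally propagate known covariance matrices of the point coordinates into a covariance estimate for `theta` and `t`, which this closed form does not do.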
20 pages, 6339 KB  
Article
Jointly Optimize Partial Computation Offloading and Resource Allocation in Cloud-Fog Cooperative Networks
by Wenle Bai and Ying Wang
Electronics 2023, 12(15), 3224; https://doi.org/10.3390/electronics12153224 - 26 Jul 2023
Cited by 8 | Viewed by 1849
Abstract
Fog computing has become a hot topic in recent years, as it provides cloud computing resources at the network edge in a distributed manner that can respond quickly to intensive tasks from different user equipment (UE) applications. However, since fog resources are also limited, given the number of Internet of Things (IoT) applications and their traffic demands, designing an effective offloading strategy and resource allocation scheme to reduce the offloading cost of UE systems is still an important challenge. To this end, this paper investigates the problem of partial offloading and resource allocation under a cloud-fog coordination network architecture, which is formulated as a mixed-integer nonlinear program (MINLP). A new weighting metric, the cloud resource rental cost, is introduced, and the offloading cost objective is defined as a weighted sum of latency, energy consumption, and cloud rental cost. Under a fixed offloading decision, two sub-problems, fog computing resource allocation and user transmission power allocation, are formulated and solved using convex optimization techniques and Karush-Kuhn-Tucker (KKT) conditions, respectively. The sampling process of the inner loop of the simulated annealing (SA) algorithm is improved, and a memory function is added, yielding a novel simulated annealing (N-SA) algorithm that solves the offloading decision problem under the optimal resource allocation. Extensive simulation experiments show that the N-SA algorithm obtains the optimal solution quickly and saves 17% of the system cost compared to the greedy offloading and joint resource allocation (GO-JRA) algorithm. Full article
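The overall scheme, simulated annealing over binary offloading decisions with a memory of the best solution seen, can be sketched generically as below. This is plain SA with a best-solution memory echoing the memory function N-SA adds, not the paper's N-SA; the cost weights, latencies, and function names are illustrative assumptions.

```python
import math
import random

def anneal_offloading(cost, n_tasks, t0=1.0, t_min=1e-3, alpha=0.95,
                      inner=50, seed=0):
    """Simulated annealing over binary offloading decisions
    (0 = execute in fog, 1 = offload to cloud), remembering the
    best decision vector seen across the whole run."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_tasks)]
    cur_c = cost(x)
    best_x, best_c = list(x), cur_c        # memory of the best solution
    t = t0
    while t > t_min:
        for _ in range(inner):             # inner-loop sampling at fixed t
            y = list(x)
            y[rng.randrange(n_tasks)] ^= 1  # flip one offloading decision
            yc = cost(y)
            if yc < cur_c or rng.random() < math.exp(-(yc - cur_c) / t):
                x, cur_c = y, yc           # Metropolis acceptance
            if cur_c < best_c:
                best_x, best_c = list(x), cur_c
        t *= alpha                          # geometric cooling
    return best_x, best_c

# Illustrative weighted-sum objective: offloading to the cloud cuts latency
# and energy per task but incurs a rental cost (all coefficients made up).
lat = [5.0, 2.0, 8.0, 1.0]                  # local latency per task
def cost(x):
    latency = sum(l * (0.3 if xi else 1.0) for l, xi in zip(lat, x))
    energy = sum(0.5 * (0.2 if xi else 1.0) for xi in x)
    rental = sum(1.5 * xi for xi in x)
    return 0.5 * latency + 0.3 * energy + 0.2 * rental

x, c = anneal_offloading(cost, n_tasks=4)
```

With these toy coefficients, offloading every task is cheapest, and the memory guarantees that the best vector encountered is returned even if the chain later wanders away from it.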