Search Results (2,543)

Search Parameters:
Keywords = cloud computing systems

21 pages, 15860 KB  
Article
Robot Object Detection and Tracking Based on Image–Point Cloud Instance Matching
by Hongxing Wang, Rui Zhu, Zelin Ye and Yaxin Li
Sensors 2026, 26(2), 718; https://doi.org/10.3390/s26020718 - 21 Jan 2026
Abstract
Effectively fusing the rich semantic information from camera images with the high-precision geometric measurements provided by LiDAR point clouds is a key challenge in mobile robot environmental perception. To address this problem, this paper proposes a highly extensible instance-aware fusion framework designed to achieve efficient alignment and unified modeling of heterogeneous sensory data. The proposed approach adopts a modular processing pipeline. First, semantic instance masks are extracted from RGB images using an instance segmentation network, and a projection mechanism is employed to establish spatial correspondences between image pixels and LiDAR point cloud measurements. Subsequently, three-dimensional bounding boxes are reconstructed through point cloud clustering and geometric fitting, and a reprojection-based validation mechanism is introduced to ensure consistency across modalities. Building upon this representation, the system integrates a data association module with a Kalman filter-based state estimator to form a closed-loop multi-object tracking framework. Experimental results on the KITTI dataset demonstrate that the proposed system achieves strong 2D and 3D detection performance across different difficulty levels. In multi-object tracking evaluation, the method attains a MOTA score of 47.8 and an IDF1 score of 71.93, validating the stability of the association strategy and the continuity of object trajectories in complex scenes. Furthermore, real-world experiments on a mobile computing platform show an average end-to-end latency of only 173.9 ms, while ablation studies further confirm the effectiveness of individual system components. Overall, the proposed framework exhibits strong performance in terms of geometric reconstruction accuracy and tracking robustness, and its lightweight design and low latency satisfy the stringent requirements of practical robotic deployment.
(This article belongs to the Section Sensors and Robotics)
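
The tracking stage described above couples a data association module with a Kalman filter state estimator. The sketch below illustrates that generic combination, IoU-based Hungarian matching plus a constant-velocity Kalman update; it is not the authors' implementation, and the thresholds, noise values, and 2D state layout are assumptions.

```python
# Illustrative sketch only: IoU-based data association and a constant-velocity
# Kalman update, standing in for the generic tracking components named above.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two axis-aligned 2D boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(track_boxes, det_boxes, iou_min=0.3):
    """Hungarian matching of predicted track boxes to detected boxes."""
    if not track_boxes or not det_boxes:
        return []
    cost = np.array([[1.0 - iou(t, d) for d in det_boxes] for t in track_boxes])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_min]

class ConstantVelocityKF:
    """Minimal Kalman filter over state [x, y, vx, vy] with position measurements."""
    def __init__(self, x, y, dt=0.1):
        self.x = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4)
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        self.Q, self.R = 0.01 * np.eye(4), 0.1 * np.eye(2)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```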

28 pages, 685 KB  
Article
Data-Centric Serverless Computing with LambdaStore
by Kai Mast, Suyan Qu, Aditya Jain, Andrea Arpaci-Dusseau and Remzi Arpaci-Dusseau
Software 2026, 5(1), 5; https://doi.org/10.3390/software5010005 - 21 Jan 2026
Abstract
LambdaStore is a data-centric serverless platform that breaks the split between stateless functions and external storage in classic cloud computing platforms. By scheduling serverless invocations near data instead of pulling data to compute, LambdaStore substantially reduces the state access cost that dominates today’s serverless workloads. Leveraging its transactional storage engine, LambdaStore delivers serializable guarantees and exactly-once semantics across chains of lambda invocations—a capability missing in current Function-as-a-Service offerings. We make three key contributions: (1) an object-oriented programming model that ties function invocations to their data; (2) a transaction layer with adaptive lock granularity and an optimistic concurrency control protocol designed for serverless workloads to keep contention low while preserving serializability; and (3) an elastic storage system that preserves the elasticity of the serverless paradigm while lambda functions run close to their data. Under read-heavy workloads, LambdaStore lifts throughput by orders of magnitude over existing serverless platforms while holding end-to-end latency below 20 ms.
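
The transaction layer above relies on optimistic concurrency control with validation at commit time. The following minimal sketch illustrates that general validate-then-commit idea over versioned objects; the ObjectStore and Txn names are hypothetical and not LambdaStore's actual API, and a real engine would also need to make the validate-and-apply step atomic under concurrency.

```python
# A minimal sketch of optimistic concurrency control over versioned objects.
# Hypothetical names; not LambdaStore's API. Single-threaded for clarity.
class ObjectStore:
    def __init__(self):
        self.data = {}      # key -> value
        self.version = {}   # key -> monotonically increasing version

    def begin(self):
        return Txn(self)

class Txn:
    def __init__(self, store):
        self.store, self.reads, self.writes = store, {}, {}

    def read(self, key):
        if key in self.writes:
            return self.writes[key]
        self.reads[key] = self.store.version.get(key, 0)  # remember version read
        return self.store.data.get(key)

    def write(self, key, value):
        self.writes[key] = value

    def commit(self):
        # Validate: abort if any object read was modified since it was read.
        for key, ver in self.reads.items():
            if self.store.version.get(key, 0) != ver:
                return False
        # Apply writes and bump versions only after validation succeeds.
        for key, value in self.writes.items():
            self.store.data[key] = value
            self.store.version[key] = self.store.version.get(key, 0) + 1
        return True
```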

18 pages, 3461 KB  
Article
Real Time IoT Low-Cost Air Quality Monitoring System
by Silvian-Marian Petrică, Ioana Făgărășan, Nicoleta Arghira and Iulian Munteanu
Sustainability 2026, 18(2), 1074; https://doi.org/10.3390/su18021074 - 21 Jan 2026
Abstract
This paper proposes a complete solution, implementing a low-cost, energy-independent, network-connected, and scalable environmental air parameter monitoring system. It features a remote sensing module, which provides environmental data to a cloud-based server, and a software application for real-time and historical data processing, standardized air quality index computation, and comprehensive visualization of environmental parameter evolution. A fully operational prototype was built around a low-cost microcontroller connected to low-cost air parameter sensors and a GSM modem, powered by a stand-alone renewable energy-based power supply. The associated software platform has been developed using Microsoft Power Platform technologies. The collected data is transmitted from sensors to a remote server via the GSM modem using custom-built JSON structures. From there, data is extracted and forwarded to a database accessible to users through a dedicated application. The overall accuracy of the air quality monitoring system has been thoroughly validated both in a controlled indoor environment and against a trusted outdoor air quality reference station. The proposed air parameter monitoring solution paves the way for future research actions, such as the classification of polluted sites or prediction of air parameter variations at the site of interest.
(This article belongs to the Section Air, Climate Change and Sustainability)
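
The abstract mentions custom-built JSON structures carried over the GSM modem. As a purely hypothetical illustration of what such a payload could look like (the field names are not the paper's schema), a single reading might be serialized as follows.

```python
# Hypothetical telemetry payload for a low-cost air quality node; field names
# and units are illustrative assumptions, not the paper's actual JSON schema.
import json, time

reading = {
    "station_id": "node-01",        # illustrative node identifier
    "timestamp": int(time.time()),  # Unix epoch seconds
    "pm2_5_ug_m3": 12.4,            # particulate matter concentrations
    "pm10_ug_m3": 20.1,
    "co2_ppm": 415,
    "temperature_c": 21.7,
    "humidity_pct": 48.0,
    "battery_v": 3.92,              # health of the stand-alone power supply
}

payload = json.dumps(reading)       # serialized body sent via the GSM modem
print(payload)
```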

25 pages, 7167 KB  
Article
Edge-Enhanced YOLOV8 for Spacecraft Instance Segmentation in Cloud-Edge IoT Environments
by Ming Chen, Wenjie Chen, Yanfei Niu, Ping Qi and Fucheng Wang
Future Internet 2026, 18(1), 59; https://doi.org/10.3390/fi18010059 - 20 Jan 2026
Abstract
The proliferation of smart devices and the Internet of Things (IoT) has led to massive data generation, particularly in complex domains such as aerospace. Cloud computing provides essential scalability and advanced analytics for processing these vast datasets. However, relying solely on the cloud introduces significant challenges, including high latency, network congestion, and substantial bandwidth costs, which are critical for real-time on-orbit spacecraft services. Cloud-edge Internet of Things (cloud-edge IoT) computing emerges as a promising architecture to mitigate these issues by pushing computation closer to the data source. This paper proposes an improved YOLOV8-based model specifically designed for edge computing scenarios within a cloud-edge IoT framework. By integrating the Cross Stage Partial Spatial Pyramid Pooling Fast (CSPPF) module and the WDIOU loss function, the model achieves enhanced feature extraction and localization accuracy without significantly increasing computational cost, making it suitable for deployment on resource-constrained edge devices. Meanwhile, by processing image data locally at the edge and transmitting only the compact segmentation results to the cloud, the system effectively reduces bandwidth usage and supports efficient cloud-edge collaboration in IoT-based spacecraft monitoring systems. Experimental results show that, compared to the original YOLOV8 and other mainstream models, the proposed model demonstrates superior accuracy and instance segmentation performance at the edge, validating its practicality in cloud-edge IoT environments.
(This article belongs to the Special Issue Convergence of IoT, Edge and Cloud Systems)
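
One bandwidth-saving idea above is transmitting compact segmentation results rather than raw images. A standard, generic way to make an instance mask compact is run-length encoding; the sketch below shows that technique on a toy mask and is not the paper's actual wire format.

```python
# Sketch of shrinking a binary instance mask into run-length pairs before upload,
# illustrating the bandwidth saving of sending results instead of images.
import numpy as np

def rle_encode(mask):
    """Run-length encode a binary mask (H x W) into (start, length) pairs."""
    flat = np.asarray(mask, dtype=np.uint8).ravel()
    padded = np.concatenate(([0], flat, [0]))
    changes = np.flatnonzero(padded[1:] != padded[:-1])  # run boundaries
    starts, ends = changes[::2], changes[1::2]
    return list(zip(starts.tolist(), (ends - starts).tolist()))

mask = np.zeros((480, 640), dtype=np.uint8)
mask[100:200, 150:300] = 1                    # toy spacecraft instance
runs = rle_encode(mask)
print(len(runs), "runs instead of", mask.size, "pixels")
```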

42 pages, 8865 KB  
Article
Vertically Constrained LiDAR-Inertial SLAM in Dynamic Environments
by Shuangfeng Wei, Junfeng Qiu, Anpeng Shen, Keming Qu and Tong Yang
Appl. Sci. 2026, 16(2), 1046; https://doi.org/10.3390/app16021046 - 20 Jan 2026
Abstract
With the advancement of Light Detection and Ranging (LiDAR) technology and computer science, LiDAR–Inertial Simultaneous Localization and Mapping (SLAM) has become essential in autonomous driving, robotic navigation, and 3D reconstruction. However, dynamic objects such as pedestrians and vehicles, together with complex terrain conditions, pose serious challenges to existing SLAM systems. These factors introduce artifacts into the acquired point clouds and result in significant vertical drift in SLAM trajectories. To address these challenges, this study focuses on controlling vertical drift errors in LiDAR–Inertial SLAM systems operating in dynamic environments. The work addresses three key aspects: ground point segmentation, dynamic artifact removal, and vertical drift optimization. To improve the robustness of ground point segmentation, this study proposes a method based on a concentric sector model. This method divides point clouds into concentric regions and fits flat surfaces within each region to accurately extract ground points. To mitigate the impact of dynamic objects on map quality, this study proposes a removal algorithm that combines multi-frame residual analysis with curvature-based filtering. Specifically, the algorithm tracks residual changes in non-ground points across consecutive frames to detect inconsistencies caused by motion, while curvature features are used to further distinguish moving objects from static structures. This combined approach enables effective identification and removal of dynamic artifacts, resulting in a reduction in vertical drift.
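
The idea of dividing the cloud into concentric regions and fitting flat surfaces per region can be sketched as follows. This simplified version bins points by horizontal range only and fits an unweighted least-squares plane per ring, so the ring width and distance threshold are illustrative assumptions rather than the paper's parameters.

```python
# Simplified sketch of sector-wise ground extraction: bin points by range,
# fit a plane per ring, keep points close to that plane as ground.
import numpy as np

def ground_mask(points, ring_width=5.0, dist_thresh=0.2):
    """points: (N, 3) array of x, y, z. Returns a boolean ground mask."""
    r = np.hypot(points[:, 0], points[:, 1])          # horizontal range
    rings = (r // ring_width).astype(int)
    ground = np.zeros(len(points), dtype=bool)
    for ring in np.unique(rings):
        idx = np.flatnonzero(rings == ring)
        if len(idx) < 10:
            continue
        # Fit z = a*x + b*y + c by least squares over the ring's points.
        A = np.c_[points[idx, 0], points[idx, 1], np.ones(len(idx))]
        coeff, *_ = np.linalg.lstsq(A, points[idx, 2], rcond=None)
        residual = np.abs(A @ coeff - points[idx, 2])
        ground[idx[residual < dist_thresh]] = True
    return ground
```
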
48 pages, 8070 KB  
Article
ResQConnect: An AI-Powered Multi-Agentic Platform for Human-Centered and Resilient Disaster Response
by Savinu Aththanayake, Chemini Mallikarachchi, Janeesha Wickramasinghe, Sajeev Kugarajah, Dulani Meedeniya and Biswajeet Pradhan
Sustainability 2026, 18(2), 1014; https://doi.org/10.3390/su18021014 - 19 Jan 2026
Viewed by 47
Abstract
Effective disaster management is critical for safeguarding lives, infrastructure and economies in an era of escalating natural hazards like floods and landslides. Despite advanced early-warning systems and coordination frameworks, a persistent “last-mile” challenge undermines response effectiveness: transforming fragmented and unstructured multimodal data into timely and accountable field actions. This paper introduces ResQConnect, a human-centered, AI-powered multimodal multi-agent platform that bridges this gap by directly linking incident intake to coordinated disaster response operations in hazard-prone regions. ResQConnect integrates three key components. It uses an agentic Retrieval-Augmented Generation (RAG) workflow in which specialized language-model agents extract metadata, refine queries, check contextual adequacy and generate actionable task plans using a curated, hazard-specific knowledge base. The contribution lies in structuring the RAG for correctness, safety and procedural grounding in high-risk settings. The platform introduces an Adaptive Event-Triggered (AET) multi-commodity routing algorithm that decides when to re-optimize routes, balancing responsiveness, computational cost and route stability under dynamic disaster conditions. Finally, ResQConnect deploys a compressed, domain-specific language model on mobile devices to provide policy-aligned guidance when cloud connectivity is limited or unavailable. Across realistic flood and landslide scenarios, ResQConnect improved overall task-quality scores from 61.4 to 82.9 (+21.5 points) over a standard RAG baseline, reduced solver calls by up to 85% compared to continuous re-optimization while remaining within 7–12% of optimal response time, and delivered fully offline mobile guidance with sub-500 ms response latency and 54 tokens/s throughput on commodity smartphones. Overall, ResQConnect demonstrates a practical and resilient approach to AI-augmented disaster response. From a sustainability perspective, the proposed system contributes to Sustainable Development Goal (SDG) 11 by improving the speed and coordination of disaster response. It also supports SDG 13 by strengthening adaptation and readiness for climate-driven hazards. ResQConnect is validated using real-world flood and landslide disaster datasets, ensuring realistic incidents, constraints and operational conditions.
(This article belongs to the Section Environmental Sustainability and Applications)
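
The Adaptive Event-Triggered routing component decides when re-optimization is worth its solver cost. A minimal guard of that general kind, with trigger rules and thresholds that are assumptions rather than ResQConnect's algorithm, might look like this.

```python
# Illustrative event-triggered re-planning guard: re-optimize only when enough
# of the network has changed or enough time has passed. Thresholds are assumed.
import time

class ReplanTrigger:
    def __init__(self, change_thresh=0.25, min_interval_s=60, max_interval_s=600):
        self.change_thresh = change_thresh    # fraction of edges whose cost changed
        self.min_interval_s = min_interval_s  # suppress re-plans that are too frequent
        self.max_interval_s = max_interval_s  # force a re-plan after long quiet periods
        self.last_replan = time.monotonic()

    def should_replan(self, changed_edges, total_edges):
        elapsed = time.monotonic() - self.last_replan
        if elapsed < self.min_interval_s:
            return False                      # keep current routes stable
        change_ratio = changed_edges / max(total_edges, 1)
        if change_ratio >= self.change_thresh or elapsed >= self.max_interval_s:
            self.last_replan = time.monotonic()
            return True                       # pay the solver cost only when needed
        return False
```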

42 pages, 3816 KB  
Article
Dynamic Decision-Making for Resource Collaboration in Complex Computing Networks: A Differential Game and Intelligent Optimization Approach
by Cai Qi and Zibin Zhang
Mathematics 2026, 14(2), 320; https://doi.org/10.3390/math14020320 - 17 Jan 2026
Viewed by 161
Abstract
End–edge–cloud collaboration enables significant improvements in system resource utilization by integrating heterogeneous resources while ensuring application-level quality of service (QoS). However, achieving efficient collaborative decision-making in such architectures poses critical challenges within dynamic and complex computing network environments, including dynamic resource allocation, incentive alignment between cloud and edge entities, and multi-objective optimization. To address these issues, this paper proposes a dynamic resource optimization framework for complex cloud–edge collaborative networks, decomposing the problem into two hierarchical decision schemes: cloud-level coordination and edge-side coordination, thereby achieving adaptive resource orchestration across the end–edge–cloud continuum. Furthermore, leveraging differential game theory, we model the dynamic resource allocation and cooperation incentives between cloud and edge nodes, and derive a feedback Nash equilibrium to maximize the overall system utility, effectively resolving the inherent conflicts of interest in cloud–edge collaboration. Additionally, we formulate a joint optimization model for energy consumption and latency, and propose an Improved Discrete Artificial Hummingbird Algorithm (IDAHA) to achieve an optimal trade-off between these competing objectives, addressing the challenge of multi-objective coordination from the user perspective. Extensive simulation results demonstrate that the proposed methods exhibit superior performance in multi-objective optimization, incentive alignment, and dynamic resource decision-making, significantly enhancing the adaptability and collaborative efficiency of complex cloud–edge networks.
(This article belongs to the Special Issue Dynamic Analysis and Decision-Making in Complex Networks)
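
The joint energy-latency objective can be made concrete with a toy weighted cost used to compare end, edge, and cloud placements of a single task. The linear weighting and the numbers below are assumptions for illustration only and stand in for the paper's differential-game and IDAHA machinery.

```python
# Toy weighted cost for comparing candidate placements of one task; lower is better.
def placement_cost(latency_s, energy_j, alpha=0.6, beta=0.4):
    """alpha and beta weight latency versus energy in an assumed linear trade-off."""
    return alpha * latency_s + beta * energy_j

candidates = {
    "local": {"latency_s": 0.90, "energy_j": 4.0},   # compute on the end device
    "edge":  {"latency_s": 0.25, "energy_j": 1.5},   # offload to a nearby edge node
    "cloud": {"latency_s": 0.40, "energy_j": 1.0},   # offload to the remote cloud
}
best = min(candidates, key=lambda k: placement_cost(**candidates[k]))
print("chosen placement:", best)
```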

36 pages, 10413 KB  
Article
An Open-Source CAD Framework Based on Point-Cloud Modeling and Script-Based Rendering: Development and Application
by Angkush Kumar Ghosh
Machines 2026, 14(1), 107; https://doi.org/10.3390/machines14010107 - 16 Jan 2026
Viewed by 124
Abstract
Script-based computer-aided design tools offer accessible and customizable environments, but their broader adoption is limited by the cognitive and computational difficulty of describing curved, irregular, or free-form geometries through code. This study addresses this challenge by contributing a unified, open-source framework that enables concept-to-model transformation through 2D point-based representations. Unlike previous ad hoc methods, this framework systematically integrates an interactive point-cloud modeling layer with modular systems for curve construction, point generation, transformation, sequencing, and formatting, together with script-based rendering functions. This framework allows users to generate geometrically valid models without navigating the heavy geometric calculations, strict syntax requirements, and debugging demands typical of script-based workflows. Structured case studies demonstrate the underlying workflow across mechanical, artistic, and handcrafted forms, contributing empirical evidence of its applicability to diverse tasks ranging from mechanical component modeling to cultural heritage digitization and reverse engineering. Comparative analysis demonstrates that the framework reduces user-facing code volume by over 97% compared to traditional scripting and provides a lightweight, noise-free alternative to traditional hardware-based reverse engineering by allowing users to define clean geometry from the outset. The findings confirm that the framework generates fabrication-ready outputs—including volumetric models and vector representations—suitable for various manufacturing contexts. All systems and rendering functions are made publicly available, enabling the entire pipeline to be performed using free tools. By establishing a practical and reproducible basis for point-based modeling, this study contributes to the advancement of computational design practice and supports the wider adoption of script-based design workflows.
(This article belongs to the Special Issue Advances in Computer-Aided Technology, 3rd Edition)
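
A minimal example of point-based modeling in this spirit: sample a 2D profile as an ordered point list, transform it, and hand it off to a script-based renderer. The CSV hand-off, file name, and function names are assumptions, not the framework's actual API.

```python
# Sketch of curve construction and point generation for a 2D concept profile,
# exported as an ordered point list a rendering script could consume.
import math

def sample_ellipse(a, b, n=72):
    """Ordered (x, y) samples of an ellipse with semi-axes a and b."""
    return [(a * math.cos(2 * math.pi * k / n), b * math.sin(2 * math.pi * k / n))
            for k in range(n)]

def rotate(points, angle_deg):
    t = math.radians(angle_deg)
    c, s = math.cos(t), math.sin(t)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

profile = rotate(sample_ellipse(40.0, 25.0), 30.0)   # a concept profile in mm
with open("profile_points.csv", "w") as f:           # hand off to the rendering script
    f.writelines(f"{x:.3f},{y:.3f}\n" for x, y in profile)
```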

23 pages, 16288 KB  
Article
End-Edge-Cloud Collaborative Monitoring System with an Intelligent Multi-Parameter Sensor for Impact Anomaly Detection in GIL Pipelines
by Qi Li, Kun Zeng, Yaojun Zhou, Xiongyao Xie and Genji Tang
Sensors 2026, 26(2), 606; https://doi.org/10.3390/s26020606 - 16 Jan 2026
Viewed by 93
Abstract
Gas-insulated transmission lines (GILs) are increasingly deployed in dense urban power networks, where complex construction activities may introduce external mechanical impacts and pose risks to pipeline structural integrity. However, existing GIL monitoring approaches mainly emphasize electrical and gas-state parameters, while lightweight solutions capable of rapidly detecting and localizing impact-induced structural anomalies remain limited. To address this gap, this paper proposes an intelligent end-edge-cloud monitoring system for impact anomaly detection in GIL pipelines. Numerical simulations are first conducted to analyze the dynamic response characteristics of the pipeline under impacts of varying magnitudes, orientations, and locations, revealing the relationship between impact scenarios and vibration mode evolution. An end-tier multi-parameter intelligent sensor is then developed, integrating triaxial acceleration and angular velocity measurement with embedded lightweight computing. Laboratory impact experiments are performed to acquire sensor data, which are used to train and validate a multi-class extreme gradient boosting (XGBoost) model deployed at the edge tier for accurate impact-location identification. Results show that, even with a single sensor positioned at the pipeline midpoint, fusing acceleration and angular velocity features enables reliable discrimination of impact regions. Finally, a lightweight cloud platform is implemented for visualizing structural responses and environmental parameters with downsampled edge-side data. The proposed system achieves rapid sensor-level anomaly detection, precise edge-level localization, and unified cloud-level monitoring, offering a low-cost and easily deployable solution for GIL structural health assessment.
(This article belongs to the Section Industrial Sensors)
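
The edge-tier classifier is a multi-class XGBoost model over fused acceleration and angular-velocity features. The sketch below shows that model class on synthetic data; the feature layout and the four impact regions are assumptions, not the paper's dataset or configuration.

```python
# Illustrative multi-class XGBoost classifier on synthetic stand-in features,
# mirroring the kind of edge-tier impact-location model described above.
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
# Toy features: e.g. per-window statistics of triaxial acceleration and gyro signals.
X = rng.normal(size=(600, 12))
y = rng.integers(0, 4, size=600)          # 4 hypothetical impact regions along the pipe

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", (clf.predict(X_te) == y_te).mean())
```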

50 pages, 3712 KB  
Article
Explainable AI and Multi-Agent Systems for Energy Management in IoT-Edge Environments: A State of the Art Review
by Carlos Álvarez-López, Alfonso González-Briones and Tiancheng Li
Electronics 2026, 15(2), 385; https://doi.org/10.3390/electronics15020385 - 15 Jan 2026
Viewed by 126
Abstract
This paper reviews Artificial Intelligence techniques for distributed energy management, focusing on integrating machine learning, reinforcement learning, and multi-agent systems within IoT-Edge-Cloud architectures. As energy infrastructures become increasingly decentralized and heterogeneous, AI must operate under strict latency, privacy, and resource constraints while remaining transparent and auditable. The study examines predictive models ranging from statistical time series approaches to machine learning regressors and deep neural architectures, assessing their suitability for embedded deployment and federated learning. Optimization methods—including heuristic strategies, metaheuristics, model predictive control, and reinforcement learning—are analyzed in terms of computational feasibility and real-time responsiveness. Explainability is treated as a fundamental requirement, supported by model-agnostic techniques that enable trust, regulatory compliance, and interpretable coordination in multi-agent environments. The review synthesizes advances in MARL for decentralized control, communication protocols enabling interoperability, and hardware-aware design for low-power edge devices. Benchmarking guidelines and key performance indicators are introduced to evaluate accuracy, latency, robustness, and transparency across distributed deployments. Key challenges remain in stabilizing explanations for RL policies, balancing model complexity with latency budgets, and ensuring scalable, privacy-preserving learning under non-stationary conditions. The paper concludes by outlining a conceptual framework for explainable, distributed energy intelligence and identifying research opportunities to build resilient, transparent smart energy ecosystems.

29 pages, 2558 KB  
Article
IDN-MOTSCC: Integration of Deep Neural Network with Hybrid Meta-Heuristic Model for Multi-Objective Task Scheduling in Cloud Computing
by Mohit Kumar, Rama Kant, Brijesh Kumar Gupta, Azhar Shadab, Ashwani Kumar and Krishna Kant
Computers 2026, 15(1), 57; https://doi.org/10.3390/computers15010057 - 14 Jan 2026
Viewed by 297
Abstract
Cloud computing covers a wide range of practical applications and diverse domains, yet resource scheduling and task scheduling remain significant challenges. To address this, different task scheduling algorithms are implemented across various computing systems to allocate tasks to machines, thereby enhancing performance through data mapping. To meet these challenges, a novel task scheduling model is proposed that integrates a hybrid meta-heuristic with a deep learning approach. This model employs an optimized Deep Neural Network (DNN), fine-tuned using improved grey wolf–horse herd optimization, to optimize cloud-based task allocation and overcome makespan constraints. Initially, a user initiates a task or request within the cloud environment. Then, these tasks are assigned to Virtual Machines (VMs). Since the scheduling algorithm is constrained by the makespan objective, an optimized DNN model is developed to perform optimal task scheduling. Random solutions are provided to the optimized DNN, where the hidden neuron count is tuned optimally by the proposed Improved Grey Wolf–Horse Herd Optimization (IGW-HHO) algorithm. The proposed IGW-HHO algorithm is derived from both conventional Grey Wolf Optimization (GWO) and Horse Herd Optimization (HHO). The optimal solutions are acquired from the optimized DNN and processed by the proposed algorithm to efficiently allocate tasks to VMs. The experimental results are validated using various error measures and convergence analysis. The proposed DNN-IGW-HHO model achieved a lower cost function compared to other optimization methods, with a reduction of 1% compared to PSO, 3.5% compared to WOA, 2.7% compared to GWO, and 0.7% compared to HHO. The proposed task scheduling model achieved the minimal Mean Absolute Error (MAE), with performance improvements of 31% over PSO, 20.16% over WOA, 41.72% over GWO, and 9.11% over HHO.
(This article belongs to the Special Issue Operations Research: Trends and Applications)
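
The scheduling objective can be made concrete with a makespan function over a task-to-VM assignment. In the sketch below a plain random search stands in, purely for illustration, for the DNN plus IGW-HHO pipeline described in the abstract; the task lengths and VM speeds are made up.

```python
# Makespan of a task-to-VM assignment, with a random-search placeholder for the
# actual DNN + meta-heuristic scheduler; all workload numbers are assumptions.
import numpy as np

rng = np.random.default_rng(1)
task_len = rng.uniform(50, 500, size=40)       # task lengths (e.g. millions of instructions)
vm_speed = np.array([250.0, 500.0, 1000.0])    # VM capacities (e.g. MIPS)

def makespan(assignment):
    """assignment[i] = VM index of task i; makespan = busiest VM's finish time."""
    finish = np.zeros(len(vm_speed))
    for t, vm in enumerate(assignment):
        finish[vm] += task_len[t] / vm_speed[vm]
    return finish.max()

best, best_ms = None, float("inf")
for _ in range(2000):                          # placeholder search over schedules
    cand = rng.integers(0, len(vm_speed), size=len(task_len))
    ms = makespan(cand)
    if ms < best_ms:
        best, best_ms = cand, ms
print("best makespan found:", round(best_ms, 2))
```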

25 pages, 4540 KB  
Article
Vision-Guided Grasp Planning for Prosthetic Hands with AABB-Based Object Representation
by Shifa Sulaiman, Akash Bachhar, Ming Shen and Simon Bøgh
Robotics 2026, 15(1), 22; https://doi.org/10.3390/robotics15010022 - 14 Jan 2026
Viewed by 128
Abstract
Recent advancements in prosthetic technology have increasingly focused on enhancing dexterity and autonomy through intelligent control systems. Vision-based approaches offer promising results for enabling prosthetic hands to interact more naturally with diverse objects in dynamic environments. Building on this foundation, the paper presents a vision-guided grasping algorithm for a prosthetic hand, integrating perception, planning, and control for dexterous manipulation. A camera mounted on the setup captures the scene, and a Bounding Volume Hierarchy (BVH)-based vision algorithm is employed to segment an object for grasping and define its bounding box. Grasp contact points are then computed by generating candidate trajectories using the Rapidly-exploring Random Tree Star (RRT*) algorithm and selecting fingertip end poses based on the minimum Euclidean distance between these trajectories and the object’s point cloud. Each finger’s grasp pose is determined independently, enabling adaptive, object-specific configurations. A Damped Least Squares (DLS)-based inverse kinematics solver is used to compute the corresponding joint angles, which are subsequently transmitted to the finger actuators for execution. Our intention in this work was to present a proof-of-concept pipeline demonstrating that fingertip poses derived from a simple, computationally lightweight geometric representation, specifically an AABB-based segmentation, can be successfully propagated through per-finger planning and executed in real time on the Linker Hand O7 platform. The proposed method is validated in simulation and through experimental integration on the Linker Hand O7 platform.
(This article belongs to the Section Sensors and Control in Robotics)
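
Two of the geometric steps above, computing an axis-aligned bounding box (AABB) from a segmented object's points and choosing a fingertip end pose as the trajectory point nearest the object's cloud, can be sketched with synthetic data as follows; this illustrates the stated criteria rather than the authors' pipeline.

```python
# Minimal sketch: AABB of an object's points and the minimum-Euclidean-distance
# selection of a fingertip end pose along a candidate trajectory. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
object_points = rng.uniform([-0.03, -0.02, 0.0], [0.03, 0.02, 0.12], size=(500, 3))

aabb_min, aabb_max = object_points.min(axis=0), object_points.max(axis=0)
print("AABB extents (m):", aabb_max - aabb_min)

def closest_to_cloud(trajectory, cloud):
    """Return the trajectory waypoint with minimum Euclidean distance to the cloud."""
    d = np.linalg.norm(trajectory[:, None, :] - cloud[None, :, :], axis=2)
    return trajectory[d.min(axis=1).argmin()]

trajectory = rng.uniform(-0.1, 0.2, size=(50, 3))   # stand-in for an RRT* path
print("fingertip end pose candidate:", closest_to_cloud(trajectory, object_points))
```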

22 pages, 2001 KB  
Article
A Hybrid CNN-LSTM Architecture for Seismic Event Detection Using High-Rate GNSS Velocity Time Series
by Deniz Başar and Rahmi Nurhan Çelik
Sensors 2026, 26(2), 519; https://doi.org/10.3390/s26020519 - 13 Jan 2026
Viewed by 133
Abstract
Global Navigation Satellite Systems (GNSS) have become essential tools in geomatics engineering for precise positioning, cadastral surveys, topographic mapping, and deformation monitoring. Recent advances integrate GNSS with emerging technologies such as artificial intelligence (AI), machine learning (ML), cloud computing, and unmanned aerial systems (UAS), which have greatly improved accuracy, efficiency, and analytical capabilities in managing geospatial big data. In this study, we propose a hybrid Convolutional Neural Network–Long Short-Term Memory (CNN-LSTM) architecture for seismic detection using high-rate (5 Hz) GNSS velocity time series. The model is trained on a large synthetic dataset combined with real high-rate GNSS non-event data. Model performance was evaluated using real event and non-event data through an event-based approach. The results demonstrate that a hybrid deep-learning architecture can provide a reliable framework for seismic detection with high-rate GNSS velocity time series.
(This article belongs to the Section Navigation and Positioning)
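
A generic CNN-LSTM layout for windowed velocity series gives a feel for the hybrid architecture class named above; the layer sizes, 30 s window, and three-component input below are assumptions for illustration, not the paper's configuration.

```python
# Generic CNN-LSTM sketch for binary event detection on windowed GNSS velocities.
import tensorflow as tf
from tensorflow.keras import layers, models

window_len, channels = 150, 3      # e.g. 30 s of 5 Hz E/N/U velocity components (assumed)
model = models.Sequential([
    layers.Input(shape=(window_len, channels)),
    layers.Conv1D(32, kernel_size=5, activation="relu", padding="same"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=5, activation="relu", padding="same"),
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),                                   # temporal context over local features
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),             # event vs. non-event
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```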

14 pages, 617 KB  
Article
Integrating ESP32-Based IoT Architectures and Cloud Visualization to Foster Data Literacy in Early Engineering Education
by Jael Zambrano-Mieles, Miguel Tupac-Yupanqui, Salutar Mari-Loardo and Cristian Vidal-Silva
Computers 2026, 15(1), 51; https://doi.org/10.3390/computers15010051 - 13 Jan 2026
Viewed by 143
Abstract
This study presents the design and implementation of a full-stack IoT ecosystem based on ESP32 microcontrollers and web-based visualization dashboards to support scientific reasoning in first-year engineering students. The proposed architecture integrates a four-layer model—perception, network, service, and application—enabling students to deploy real-time environmental monitoring systems for agriculture and beekeeping. Through a sixteen-week Project-Based Learning (PBL) intervention with 91 participants, we evaluated how this technological stack influences technical proficiency. Results indicate that the transition from local code execution to cloud-based telemetry increased perceived learning confidence from μ=3.9 (Challenge phase) to μ=4.6 (Reflection phase) on a 5-point scale. Furthermore, 96% of students identified the visualization dashboards as essential Human–Computer Interfaces (HCI) for debugging, effectively bridging the gap between raw sensor data and evidence-based argumentation. These findings demonstrate that integrating open-source IoT architectures provides a scalable mechanism to cultivate data literacy in early engineering education.

27 pages, 1127 KB  
Review
Evolution and Emerging Frontiers in Point Cloud Technology
by Wenjuan Wang, Haleema Ehsan, Shi Qiu, Tariq Ur Rahman, Jin Wang and Qasim Zaheer
Electronics 2026, 15(2), 341; https://doi.org/10.3390/electronics15020341 - 13 Jan 2026
Viewed by 183
Abstract
Point cloud intelligence integrates advanced technologies such as Light Detection and Ranging (LiDAR), photogrammetry, and Artificial Intelligence (AI) to transform transportation infrastructure management. This review highlights state-of-the-art advancements in denoising, registration, segmentation, and surface reconstruction. A detailed case study on three-dimensional (3D) mesh generation for railway fastener monitoring showcases how these techniques address challenges like noise and computational complexity while enabling precise and efficient infrastructure maintenance. By demonstrating practical applications and identifying future research directions, this work underscores the transformative potential of point cloud intelligence in supporting predictive maintenance, digital twins, and sustainable transportation systems.