Search Results (1,208)

Search Parameters:
Keywords = artificial clouds

22 pages, 1014 KB  
Review
Advances in IoT, AI, and Sensor-Based Technologies for Disease Treatment, Health Promotion, Successful Ageing, and Ageing Well
by Yuzhou Qian and Keng Leng Siau
Sensors 2025, 25(19), 6207; https://doi.org/10.3390/s25196207 - 7 Oct 2025
Viewed by 399
Abstract
Recent advancements in the Internet of Things (IoT) and artificial intelligence (AI) are unlocking transformative opportunities across society. One of the most critical challenges addressed by these technologies is the ageing population, which presents mounting concerns for healthcare systems and quality of life worldwide. By supporting continuous monitoring, personal care, and data-driven decision-making, IoT and AI are shifting healthcare delivery from a reactive approach to a proactive one. This paper presents a comprehensive overview of IoT-based systems with a particular focus on the Internet of Healthcare Things (IoHT) and their integration with AI, referred to as the Artificial Intelligence of Things (AIoT). We illustrate the operating procedures of IoHT systems in detail. We highlight their applications in disease management, health promotion, and active ageing. Key enabling technologies, including cloud computing, edge computing architectures, machine learning, and smart sensors, are examined in relation to continuous health monitoring, personalized interventions, and predictive decision support. This paper also identifies potential challenges that IoHT systems face, including data privacy, ethical concerns, and technology transition and aversion, and it reviews corresponding defense mechanisms at the perception, policy, and technology levels. Future research directions are discussed, including explainable AI, digital twins, metaverse applications, and multimodal sensor fusion. By integrating IoT and AI, these systems offer the potential to support more adaptive and human-centered healthcare delivery, ultimately improving treatment outcomes and supporting healthy ageing. Full article
(This article belongs to the Section Internet of Things)

12 pages, 284 KB  
Article
AI-Enabled Secure and Scalable Distributed Web Architecture for Medical Informatics
by Marian Ileana, Pavel Petrov and Vassil Milev
Appl. Sci. 2025, 15(19), 10710; https://doi.org/10.3390/app151910710 - 4 Oct 2025
Viewed by 325
Abstract
Current medical informatics systems face critical challenges, including limited scalability across distributed institutions, insufficient real-time AI-driven decision support, and lack of standardized interoperability for heterogeneous medical data exchange. To address these challenges, this paper proposes a novel distributed web system architecture for medical informatics, integrating artificial intelligence techniques and cloud-based services. The system ensures interoperability via HL7 FHIR standards and preserves data privacy and fault tolerance across interconnected medical institutions. A hybrid AI pipeline combining principal component analysis (PCA), K-Means clustering, and convolutional neural networks (CNNs) is applied to diffusion tensor imaging (DTI) data for early detection of neurological anomalies. The architecture leverages containerized microservices orchestrated with Docker Swarm, enabling adaptive resource management and high availability. Experimental validation confirms reduced latency, improved system reliability, and enhanced compliance with medical data exchange protocols. Results demonstrate superior performance with an average latency of 94 ms, a diagnostic accuracy of 91.3%, and enhanced clinical workflow efficiency compared to traditional monolithic architectures. The proposed solution successfully addresses scalability limitations while maintaining data security and regulatory compliance across multi-institutional deployments. This work contributes to the advancement of intelligent, interoperable, and scalable e-health infrastructures aligned with the evolution of digital healthcare ecosystems. Full article
(This article belongs to the Special Issue Data Science and Medical Informatics)
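The hybrid pipeline this abstract describes applies PCA for dimensionality reduction and K-Means clustering ahead of a CNN stage. A minimal numpy-only sketch of those first two steps (the synthetic feature matrix, function names, and parameter choices are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def pca_reduce(X, n_components=2):
    """Project rows of X onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt are principal axes
    return Xc @ Vt[:n_components].T

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: assign points to the nearest centroid, then update."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # Keep the old centroid if a cluster goes empty.
        centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return labels

rng = np.random.default_rng(1)
# Synthetic stand-in for DTI-derived feature vectors: three separated groups.
X = np.vstack([rng.normal(c, 0.1, size=(20, 16)) for c in (0.0, 5.0, 10.0)])
Z = pca_reduce(X, 2)
labels = kmeans(Z, 3)
print(Z.shape)  # (60, 2)
```

In the paper's architecture a CNN classifier would follow; here the sketch only demonstrates the data flow between the two preprocessing stages.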

25 pages, 737 KB  
Systematic Review
A Systematic Literature Review on the Implementation and Challenges of Zero Trust Architecture Across Domains
by Sadaf Mushtaq, Muhammad Mohsin and Muhammad Mujahid Mushtaq
Sensors 2025, 25(19), 6118; https://doi.org/10.3390/s25196118 - 3 Oct 2025
Viewed by 515
Abstract
The Zero Trust Architecture (ZTA) model has emerged as a foundational cybersecurity paradigm that eliminates implicit trust and enforces continuous verification across users, devices, and networks. This study presents a systematic literature review of 74 peer-reviewed articles published between 2016 and 2025, spanning domains such as cloud computing (24 studies), Internet of Things (11), healthcare (7), enterprise and remote work systems (6), industrial and supply chain networks (5), mobile networks (5), artificial intelligence and machine learning (5), blockchain (4), big data and edge computing (3), and other emerging contexts (4). The analysis shows that authentication, authorization, and access control are the most consistently implemented ZTA components, whereas auditing, orchestration, and environmental perception remain underexplored. Across domains, the main challenges include scalability limitations, insufficient lightweight cryptographic solutions for resource-constrained systems, weak orchestration mechanisms, and limited alignment with regulatory frameworks such as GDPR and HIPAA. Cross-domain comparisons reveal that cloud and enterprise systems demonstrate relatively mature implementations, while IoT, blockchain, and big data deployments face persistent performance and compliance barriers. Overall, the findings highlight both the progress and the gaps in ZTA adoption, underscoring the need for lightweight cryptography, context-aware trust engines, automated orchestration, and regulatory integration. This review provides a roadmap for advancing ZTA research and practice, offering implications for researchers, industry practitioners, and policymakers seeking to enhance cybersecurity resilience. Full article

30 pages, 4602 KB  
Article
Intelligent Fault Diagnosis of Ball Bearing Induction Motors for Predictive Maintenance Industrial Applications
by Vasileios I. Vlachou, Theoklitos S. Karakatsanis, Stavros D. Vologiannidis, Dimitrios E. Efstathiou, Elisavet L. Karapalidou, Efstathios N. Antoniou, Agisilaos E. Efraimidis, Vasiliki E. Balaska and Eftychios I. Vlachou
Machines 2025, 13(10), 902; https://doi.org/10.3390/machines13100902 - 2 Oct 2025
Viewed by 348
Abstract
Induction motors (IMs) are crucial in many industrial applications, offering a cost-effective and reliable source of power transmission and generation. However, their continuous operation imposes considerable stress on electrical and mechanical parts, leading to progressive wear that can cause unexpected system shutdowns. Bearings, which enable shaft motion and reduce friction under varying loads, are the most failure-prone components, with bearing ball defects representing the most severe mechanical failures. Early and accurate fault diagnosis is therefore essential to prevent damage and ensure operational continuity. Recent advances in the Internet of Things (IoT) and machine learning (ML) have enabled timely and effective predictive maintenance strategies. Among various diagnostic parameters, vibration analysis has proven particularly effective for detecting bearing faults. This study proposes a hybrid diagnostic framework for induction motor bearings, combining vibration signal analysis with Support Vector Machines (SVMs) and Artificial Neural Networks (ANNs) in an IoT-enabled Industry 4.0 architecture. Statistical and frequency-domain features were extracted, reduced using Principal Component Analysis (PCA), and classified with SVMs and ANNs, achieving over 95% accuracy. The novelty of this work lies in the hybrid integration of interpretable and non-linear ML models within an IoT-based edge–cloud framework. Its main contribution is a scalable and accurate real-time predictive maintenance solution, ensuring high diagnostic reliability and seamless integration in Industry 4.0 environments. Full article
(This article belongs to the Special Issue Vibration Detection of Induction and PM Motors)
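The feature-extraction stage described above (statistical and frequency-domain features computed from vibration signals, before PCA and SVM/ANN classification) can be sketched as follows; the synthetic signals, the impulse amplitude, and the particular feature set are assumptions for illustration, not the study's actual data or code:

```python
import numpy as np

def vibration_features(signal, fs):
    """A few time- and frequency-domain features commonly used in bearing diagnosis."""
    rms = np.sqrt(np.mean(signal ** 2))
    kurt = np.mean((signal - signal.mean()) ** 4) / np.var(signal) ** 2
    crest = np.max(np.abs(signal)) / rms
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    dominant = freqs[spectrum[1:].argmax() + 1]  # skip the DC bin
    return np.array([rms, kurt, crest, dominant])

fs = 10_000                      # Hz (assumed sampling rate)
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(0)
healthy = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.normal(size=t.size)
faulty = healthy.copy()
faulty[::500] += 3.0             # periodic impacts standing in for a ball defect
f_h = vibration_features(healthy, fs)
f_f = vibration_features(faulty, fs)
print(f_h[3])                    # dominant frequency of the healthy signal: 50.0 Hz
```

Feature vectors like these would then be reduced with PCA and passed to the classifiers named in the abstract; impulsive defects show up most clearly in the kurtosis term.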

15 pages, 1081 KB  
Article
Digital Tools for Decision Support in Social Rehabilitation
by Valeriya Gribova and Elena Shalfeeva
J. Pers. Med. 2025, 15(10), 468; https://doi.org/10.3390/jpm15100468 - 1 Oct 2025
Viewed by 164
Abstract
Objectives: The process of social rehabilitation involves several stages, from assessing an individual’s condition and determining their potential for rehabilitation to implementing a personalized plan with continuous monitoring of progress. Advances in information technology, including artificial intelligence, enable the use of software-assisted solutions for objective assessments and personalized rehabilitation strategies. The research aims to present interconnected semantic models that represent expandable knowledge in the field of rehabilitation, as well as an integrated framework and methodology for constructing virtual assistants and personalized decision support systems based on these models. Materials and Methods: The knowledge and data accumulated in these areas require special tools for their representation, access, and use. To develop a set of models that form the basis of decision support systems in rehabilitation, it is necessary to (1) analyze the domain, identify concepts and group them by type, and establish a set of resources that should contain knowledge for intellectual support; (2) create a set of semantic models to represent knowledge for the rehabilitation of patients. The ontological approach, combined with the cloud services of the IACPaaS platform, is proposed. Results: This paper presents a suite of semantic models and a methodology for implementing decision support systems capable of expanding rehabilitation knowledge through updated regulatory frameworks and empirical data. Conclusions: The potential advantage of such systems is the combination of the most relevant knowledge with a high degree of personalization in rehabilitation planning. Full article
(This article belongs to the Section Personalized Medical Care)

24 pages, 4942 KB  
Article
ConvNet-Generated Adversarial Perturbations for Evaluating 3D Object Detection Robustness
by Temesgen Mikael Abraha, John Brandon Graham-Knight, Patricia Lasserre, Homayoun Najjaran and Yves Lucet
Sensors 2025, 25(19), 6026; https://doi.org/10.3390/s25196026 - 1 Oct 2025
Viewed by 235
Abstract
This paper presents a novel adversarial Convolutional Neural Network (ConvNet) method for generating adversarial perturbations in 3D point clouds, enabling gradient-free robustness evaluation of object detection systems at inference time. Unlike existing iterative gradient methods, our approach embeds the ConvNet directly into the detection pipeline at the voxel feature level. The ConvNet is trained to maximize detection loss while maintaining perturbations within sensor error bounds through multi-component loss constraints (intensity, bias, and imbalance terms). Evaluation on a Sparsely Embedded Convolutional Detection (SECOND) detector with the KITTI dataset shows 8% overall mean Average Precision (mAP) degradation, while CenterPoint on NuScenes exhibits 24% weighted mAP reduction across 10 object classes. Analysis reveals an inverse relationship between object size and adversarial vulnerability: smaller objects (pedestrians: 13%, cyclists: 14%) show higher vulnerability compared to larger vehicles (cars: 0.2%) on KITTI, with similar patterns on NuScenes, where barriers (68%) and pedestrians (32%) are most affected. Despite perturbations remaining within typical sensor error margins (mean L2 norm of 0.09% for KITTI, 0.05% for NuScenes, corresponding to 0.9–2.6 cm at typical urban distances), substantial detection failures occur. The key novelty is training a ConvNet to learn effective adversarial perturbations during a one-time training phase and then using the trained network for gradient-free robustness evaluation during inference, requiring only a forward pass through the ConvNet (1.2–2.0 ms overhead) instead of iterative gradient computation, making continuous vulnerability monitoring practical for autonomous driving safety assessment. Full article
(This article belongs to the Section Sensing and Imaging)
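The abstract above keeps perturbations within sensor error bounds (mean L2 norms of 0.05–0.09% of the point cloud). A minimal sketch of one way to enforce such a norm budget, with a hypothetical 0.1% budget and random data standing in for the trained ConvNet's output (the paper's actual loss terms and network are not reproduced):

```python
import numpy as np

def bound_perturbation(points, delta, max_ratio=0.001):
    """Scale a raw perturbation so its L2 norm stays within max_ratio of the cloud's norm."""
    budget = max_ratio * np.linalg.norm(points)
    norm = np.linalg.norm(delta)
    if norm > budget:
        delta = delta * (budget / norm)
    return points + delta, np.linalg.norm(delta) / np.linalg.norm(points)

rng = np.random.default_rng(0)
cloud = rng.uniform(-40, 40, size=(2048, 3))   # synthetic LiDAR-like points
raw = rng.normal(scale=0.5, size=cloud.shape)  # stand-in for a network's raw output
perturbed, ratio = bound_perturbation(cloud, raw)
print(round(ratio * 100, 3))                   # perturbation size as a percentage
```

In the paper this projection role is played by the multi-component loss constraints; the sketch only shows why the reported L2 ratios stay so small.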

27 pages, 4067 KB  
Article
Opportunities for Adapting Data Write Latency in Geo-Distributed Replicas of Multicloud Systems
by Olha Kozina, José Machado, Maksym Volk, Hennadii Heiko, Volodymyr Panchenko, Mykyta Kozin and Maryna Ivanova
Future Internet 2025, 17(10), 442; https://doi.org/10.3390/fi17100442 - 28 Sep 2025
Viewed by 217
Abstract
This paper proposes an AI-based approach to adapting the data write latency in multicloud systems (MCSs) that supports data consistency across geo-distributed replicas of cloud service providers (CSPs). The proposed approach allows for dynamically forming adaptation scenarios based on the proposed model of multi-criteria optimization of data write latency. The generated adaptation scenarios are aimed at maintaining the required data write latency under changes in the intensity of the incoming request flow and network transmission time between replicas in CSPs. To generate adaptation scenarios, the features of the algorithmic Latord method of data consistency are used. To determine the threshold values and predict the external parameters affecting the data write latency, we propose using trainable AI models. An artificial neural network is used to form rules for changing the parameters of the Latord method when the external operating conditions of MCSs change. The features of the Latord method that influence data write latency are demonstrated by the results of simulation experiments on three MCSs with different configurations. To confirm the effectiveness of the developed approach, an adaptation scenario was considered that allows reducing the data write latency by 13% when the standard deviation of the network transmission time between the data centers (DCs) of the MCS changes. Full article
(This article belongs to the Special Issue Artificial Intelligence and Control Systems for Industry 4.0 and 5.0)

20 pages, 1367 KB  
Review
AI-Integrated QSAR Modeling for Enhanced Drug Discovery: From Classical Approaches to Deep Learning and Structural Insight
by Mahesh Koirala, Lindy Yan, Zoser Mohamed and Mario DiPaola
Int. J. Mol. Sci. 2025, 26(19), 9384; https://doi.org/10.3390/ijms26199384 - 25 Sep 2025
Viewed by 694
Abstract
Integrating artificial intelligence (AI) with the Quantitative Structure-Activity Relationship (QSAR) has transformed modern drug discovery by empowering faster, more accurate, and scalable identification of therapeutic compounds. This review outlines the evolution from classical QSAR methods, such as multiple linear regression and partial least squares, to advanced machine learning and deep learning approaches, including graph neural networks and SMILES-based transformers. Molecular docking and molecular dynamics simulations are presented as complementary tools that strengthen mechanistic understanding of, and structural insight into, ligand-target interactions. Particular attention is given to PROTACs and targeted protein degradation, ADMET prediction, and the public databases and cloud-based platforms that are democratizing access to computational modeling. Challenges related to validation, interpretability, regulatory standards, and ethical concerns are examined, along with emerging patterns in AI-driven drug development. This review serves as a guide to using computational models and databases in explainable, data-rich drug discovery pipelines. Full article

22 pages, 4173 KB  
Article
A Novel Nighttime Sea Fog Detection Method Based on Generative Adversarial Networks
by Wuyi Qiu, Xiaoqun Cao and Shuo Ma
Remote Sens. 2025, 17(19), 3285; https://doi.org/10.3390/rs17193285 - 24 Sep 2025
Viewed by 315
Abstract
Nighttime sea fog exhibits high frequency and prolonged duration, posing significant risks to maritime navigation safety. Current detection methods primarily rely on the dual-infrared channel brightness temperature difference technique, which faces challenges such as threshold selection difficulties and a tendency toward overestimation. In contrast, the VIIRS Day/Night Band (DNB) offers exceptional nighttime visible-like cloud imaging capabilities, providing a new way to alleviate the overestimation issues inherent in infrared detection algorithms. Recent advances in artificial intelligence have further addressed the threshold selection problem in traditional detection methods. Leveraging these developments, this study proposes a novel generative adversarial network model incorporating attention mechanisms (SEGAN) to achieve accurate nighttime sea fog detection using DNB data. Experimental results demonstrate that SEGAN achieves satisfactory performance, with probability of detection, false alarm rate, and critical success index reaching 0.8708, 0.0266, and 0.7395, respectively. Compared with the operational infrared detection algorithm, these metrics show improvements of 0.0632, 0.0287, and 0.1587. Notably, SEGAN excels at detecting sea fog obscured by thin cloud cover, a scenario where conventional infrared detection algorithms typically fail. SEGAN emphasizes semantic consistency in its output, endowing it with enhanced robustness across varying sea fog concentrations. Full article
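The three scores quoted in this abstract have standard contingency-table definitions (one common convention is shown below; the paper's exact denominators may differ). With hits H, misses M, and false alarms F:

```python
def detection_scores(hits, misses, false_alarms):
    """Categorical verification scores for a binary detector."""
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    csi = hits / (hits + misses + false_alarms)  # critical success index
    return pod, far, csi

# Illustrative counts only; not the paper's validation data.
pod, far, csi = detection_scores(hits=871, misses=129, false_alarms=24)
print(round(pod, 3), round(far, 3), round(csi, 3))  # 0.871 0.027 0.851
```

Note how CSI penalizes both misses and false alarms at once, which is why it sits below POD even when the false-alarm count is small.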

17 pages, 2191 KB  
Article
Integration of Industry 5.0 Technologies in the Concrete Industry: An Analysis of the Impact of AI-Based Virtual Assistants
by Carlos Torregrosa Bonet, Francisco Antonio Lloret Abrisqueta and Antonio Guerrero González
Appl. Sci. 2025, 15(18), 10147; https://doi.org/10.3390/app151810147 - 17 Sep 2025
Viewed by 374
Abstract
The construction industry, traditionally lagging behind in terms of digitalization, faces significant challenges in its transition to Industry 4.0, which is characterized by the use of advanced technologies such as artificial intelligence (AI), the Industrial Internet of Things (IIoT), and cloud computing. This article presents the development and implementation of an AI-based virtual assistant, designed to optimize the operation and maintenance of concrete production plants. The assistant helps reduce the margin of human error, improve operational efficiency, and facilitate continuous training for operators. These advancements foster a more collaborative and digitalized environment, while also generating environmental, economic, and social benefits: reduced material and energy waste, lower carbon footprint, increased workplace safety, and strengthened organizational resilience. The results show high accuracy in voice transcription (96%) and a 100% success rate in responding to technical queries, demonstrating its effectiveness as a support tool in industrial settings. Based on these findings, it is concluded that the incorporation of AI-based virtual assistants promotes a more sustainable and responsible production model, aligned with the Sustainable Development Goals of the 2030 Agenda, and anticipates the principles of Industry 5.0 by promoting symbiotic collaboration between humans and technology. This innovation represents a key advancement in transforming the concrete industry, contributing to productivity, environmental sustainability, and workplace well-being in the sector. Full article

28 pages, 1206 KB  
Review
How Is Artificial Intelligence Transforming the Intersection of Pediatric and Special Care Dentistry? A Scoping Review of Current Applications and Ethical Considerations
by Ali A. Assiry, Rawan S. Alrehaili, Abdulaziz Mahnashi, Hadia Alkam, Roaa Mahdi, Razan Hakami, Reem Alshammakhy, Walaa Almallahi, Yomna Alhawsah and Ahmed S. Khalil
Prosthesis 2025, 7(5), 119; https://doi.org/10.3390/prosthesis7050119 - 17 Sep 2025
Viewed by 771
Abstract
Background: Artificial intelligence (AI) is influencing pediatric dentistry by supporting diagnostic accuracy, optimizing treatment planning, and improving patient care, especially for children with special needs. Previous studies explored various aspects of AI in pediatric dentistry and special care dentistry, predominantly focusing on clinical implementation or technical advancements. However, no prior review has specifically addressed its application at the intersection of pediatric dentistry and special care dentistry, particularly with respect to ethical and environmental perspectives. Objective: This scoping review provides a comprehensive synthesis of AI technologies in pediatric dentistry with a dedicated focus on children with special health care needs. It aims to critically evaluate current applications and examine the clinical, ethical, and environmental implementation challenges unique to these populations. Methods: A structured literature search was conducted in PubMed, Scopus, and Web of Science from inception to August 2025, using predefined inclusion and exclusion criteria. Eligible studies investigated AI applications in pediatric dental care or special needs contexts. Studies were synthesized narratively according to thematic domains. Results: Sixty-five studies met the inclusion criteria. Thematic synthesis identified nine domains of AI application: (1) diagnostic imaging and caries detection, (2) three-dimensional imaging, (3) interceptive and preventive orthodontics, (4) chatbots and teledentistry, (5) decision support, patient engagement and predictive analytics, (6) pain assessment and discomfort monitoring, (7) behavior management, (8) behavior modeling, and (9) ethical considerations and challenges. The majority of studies were conducted in general pediatric populations, with relatively few specifically addressing children with special health care needs. 
Conclusions: AI in pediatric dentistry is most developed in diagnostic imaging and caries detection, while applications in teledentistry and predictive analytics remain emerging, and areas such as pain assessment, behavior management, and behavior modeling are still exploratory. Evidence for children with special health care needs is limited and seldom validated, highlighting the need for focused research in this group. Ethical deployment of AI in pediatric dentistry requires safeguarding data privacy, minimizing algorithmic bias, preventing overtreatment, and reducing the carbon footprint of cloud-based technologies. Full article

22 pages, 2692 KB  
Article
Low-Cost AI-Enabled Optoelectronic Wearable for Gait and Breathing Monitoring: Design, Validation, and Applications
by Samilly Morau, Leandro Macedo, Eliton Morais, Rafael Menegardo, Jan Nedoma, Radek Martinek and Arnaldo Leal-Junior
Biosensors 2025, 15(9), 612; https://doi.org/10.3390/bios15090612 - 16 Sep 2025
Viewed by 556
Abstract
This paper presents the development of an optoelectronic wearable sensor system for portable monitoring of the movement and physiological parameters of patients. The sensor system is based on a low-cost inertial measurement unit (IMU) and an optical fiber-integrated chest belt for breathing rate monitoring, with a wireless connection to a gateway connected to the cloud. The sensors also use artificial intelligence algorithms for clustering, classification, and regression of the data. Results show a root mean squared error (RMSE) between the reference data and the proposed breathing rate sensor of 0.6 BPM, whereas RMSEs of 0.037 m/s² and 0.27 °/s are obtained for the acceleration and angular velocity analysis, respectively. For sensor validation under different movement analysis protocols, balance and Timed Up and Go (TUG) tests performed with 12 subjects demonstrate the feasibility of the proposed device for the automation and assessment of biomechanical and physical therapy protocols. The balance tests were performed in two different conditions, with a wider and a narrower base, whereas the TUG tests were performed in combination with cognitive and motor tasks. The results demonstrate the influence of the change of base on the balance analysis, as well as the dual-task effect on the scores during the TUG testing, where the combination of motor and cognitive tasks leads to lower scores on the TUG tests due to the increased complexity of the task. Therefore, the proposed approach results in a low-cost and fully automated sensor system that can be used in different protocols for physical rehabilitation. Full article
(This article belongs to the Special Issue Wearable Biosensors and Health Monitoring)
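The RMSE figures quoted in this abstract compare each sensor stream against a reference device; the computation itself is straightforward (the breathing-rate values below are invented for illustration, not the study's data):

```python
import numpy as np

def rmse(reference, measured):
    """Root mean squared error between reference-device and wearable readings."""
    reference = np.asarray(reference, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return float(np.sqrt(np.mean((reference - measured) ** 2)))

# Hypothetical breathing-rate samples in breaths per minute (BPM).
ref = [15.0, 15.5, 16.0, 17.0, 16.5]
sen = [15.4, 15.2, 16.8, 16.6, 16.4]
print(round(rmse(ref, sen), 2))  # 0.46
```

The same formula applies unchanged to the acceleration (m/s²) and angular velocity (°/s) channels.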

21 pages, 5960 KB  
Article
Improving the Quality of LiDAR Point Cloud Data in Greenhouse Environments
by Gaoshoutong Si, Peter Ling, Sami Khanal and Heping Zhu
Agronomy 2025, 15(9), 2200; https://doi.org/10.3390/agronomy15092200 - 16 Sep 2025
Viewed by 402
Abstract
Automated crop monitoring in controlled environments is imperative for enhancing crop productivity. The availability of small unmanned aerial systems (sUAS) and cost-effective LiDAR sensors presents an opportunity to conveniently gather high-quality data for crop monitoring. The LiDAR-collected point cloud data, however, often encounter challenges such as occlusions and low point density, which can be addressed by acquiring additional data from multiple flight paths. This study evaluated the performance of an Iterative Closest Point (ICP)-based algorithm for registering sUAS-based LiDAR point clouds collected in a greenhouse environment. To address the issue of objects that may cause ICP or local feature-based registration to mismatch correspondences, this study developed a robust registration pipeline. First, the geometric centroid of the ground floor boundary was leveraged to improve the initial alignment, and then piecewise ICP was implemented to achieve fine registration. The evaluation of point cloud registration performance included visualization, root mean square error (RMSE), volume estimation of reference objects, and the distribution of point cloud density. The best RMSE dropped from 20.4 cm to 2.4 cm, point cloud density improved after registration, and the volume-estimation error for reference objects dropped from 72% (single view) to 6% (post-registration). This study presents a promising approach to point cloud registration that outperforms conventional ICP in greenhouse layouts while eliminating the need for artificial reference objects. Full article
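The fine-registration core of the pipeline described in this abstract is point-to-point ICP. A numpy-only sketch of one common formulation (nearest-neighbor matching plus an SVD-based rigid solve); the synthetic clouds and parameters below are assumptions, and the paper's centroid-based coarse alignment and piecewise strategy are not reproduced:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation and translation mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=50):
    """Plain point-to-point ICP: match nearest neighbors, solve, apply, repeat."""
    cur = src.copy()
    for _ in range(iters):
        dists = np.linalg.norm(cur[:, None] - dst[None], axis=2)
        matched = dst[dists.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    rmse = float(np.sqrt(np.mean(np.sum((cur - matched) ** 2, axis=1))))
    return cur, rmse

rng = np.random.default_rng(0)
dst = rng.uniform(-0.5, 0.5, size=(200, 3))        # target cloud
theta = np.deg2rad(3)                              # small residual misalignment
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
src = dst @ Rz.T + np.array([0.02, -0.01, 0.015])  # rotated + shifted copy
aligned, rmse = icp(src, dst)
print(rmse < 1e-3)
```

With an exact rotated copy, the residual collapses to numerical precision; real greenhouse scans with occlusions are exactly where a coarse initial alignment of the kind the paper describes becomes necessary.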
23 pages, 5510 KB  
Article
Research on Intelligent Generation of Line Drawings from Point Clouds for Ancient Architectural Heritage
by Shuzhuang Dong, Dan Wu, Weiliang Kong, Wenhu Liu and Na Xia
Buildings 2025, 15(18), 3341; https://doi.org/10.3390/buildings15183341 - 15 Sep 2025
Viewed by 305
Abstract
Addressing the inefficiency, subjective errors, and limited adaptability of existing methods for surveying complex ancient structures, this study presents an intelligent hierarchical algorithm for generating line drawings guided by structured architectural features. Leveraging point cloud data, our approach integrates prior semantic and structural knowledge of ancient buildings to establish a multi-granularity feature extraction framework encompassing local geometric features (normal vectors, curvature, Simplified Point Feature Histograms (SPFH)), component-level semantic features (using enhanced PointNet++ segmentation and geometric graph matching for specialized elements), and structural relationships (adjacency analysis, hierarchical support inference). This framework autonomously performs intelligent layer assignment, line type/width selection based on component semantics, vectorization optimization via orthogonal and hierarchical topological constraints, and the intelligent generation of sectional views and symbolic annotations. We implemented an algorithmic toolchain using the AutoCAD Python API (pyautocad version 0.5.0) within the AutoCAD 2023 environment. Validation on point cloud datasets from two representative ancient structures—Guanchang No. 11 (Luoyuan County, Fujian) and Li Tianda's Residence (Langxi County, Anhui)—demonstrates that the method accurately identifies key components (e.g., columns, beams, Dougong brackets), generates engineering-standard line drawings with significantly greater efficiency than traditional approaches, and robustly handles complex architectural geometries. This research delivers an efficient, reliable, and intelligent solution for digital preservation, restoration design, and information archiving of ancient architectural heritage. Full article
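The local geometric features the framework starts from — per-point normal vectors and a curvature measure — are conventionally computed by PCA over each point's k nearest neighbours: the eigenvector of the smallest covariance eigenvalue approximates the surface normal, and the ratio of that eigenvalue to the eigenvalue sum is a standard curvature proxy. A minimal sketch under that common assumption (not the paper's code):

```python
import numpy as np

def local_geometry(points, k=8):
    """PCA over each point's k nearest neighbours.
    eigh returns eigenvalues in ascending order, so column 0 of the
    eigenvectors is the plane normal and vals[0]/vals.sum() is the
    'surface variation' curvature proxy."""
    n = len(points)
    normals = np.zeros((n, 3))
    curvature = np.zeros(n)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        cov = np.cov(nbrs.T)
        vals, vecs = np.linalg.eigh(cov)   # ascending eigenvalues
        normals[i] = vecs[:, 0]
        curvature[i] = vals[0] / vals.sum()
    return normals, curvature

# sanity check on a flat patch in the z = 0 plane:
# normals should point along ±z and curvature should vanish
rng = np.random.default_rng(1)
xy = rng.random((40, 2))
flat = np.c_[xy, np.zeros(40)]
normals, curvature = local_geometry(flat)
print(np.allclose(np.abs(normals[:, 2]), 1.0), curvature.max() < 1e-9)
```

On real scans the sign of each normal is ambiguous and is usually oriented toward the sensor; a KD-tree would replace the brute-force neighbour search at scale.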
22 pages, 1572 KB  
Article
Collaborative Optimization of Cloud–Edge–Terminal Distribution Networks Combined with Intelligent Integration Under the New Energy Situation
by Fei Zhou, Chunpeng Wu, Yue Wang, Qinghe Ye, Zhenying Tai, Haoyi Zhou and Qingyun Sun
Mathematics 2025, 13(18), 2924; https://doi.org/10.3390/math13182924 - 10 Sep 2025
Viewed by 463
Abstract
The complex electricity consumption situation on the customer side and large-scale wind and solar power generation have gradually shifted the power system from the traditional "source-follows-load" model towards a "source-load interaction" model. Current voltage regulation methods require excessive computing resources to accurately predict fluctuating loads under the new energy structure. However, with the development of artificial intelligence and cloud computing, more methods for processing big data have emerged. This paper proposes a new method for electricity consumption analysis that combines traditional mathematical statistics with machine learning to overcome the limitations of non-intrusive load detection methods, and develops a distributed optimization of cloud–edge–device distribution networks based on electricity consumption. To address overfitting and the demand for accurate short-term renewable power generation prediction, a long short-term memory (LSTM) network is used to process the time series data, and an improved algorithm incorporating error feedback correction is developed. The coupled algorithm achieves an R2 of 0.991, with RMSE, MAPE, and MAE values of 1347.2, 5.36, and 199.4, respectively. Because power prediction cannot completely eliminate errors, a consistency (consensus) algorithm is combined with the prediction to construct the regulation strategy; under this strategy, stability is reached after 25 iterations and the optimal regulation is obtained. Finally, a cloud–edge–device distributed co-evolution model of the power grid is obtained, achieving economical grid voltage control. Full article
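Consensus-style coordination among distributed controllers, of the kind the abstract's regulation strategy relies on, is usually built on the textbook average-consensus iteration: each node repeatedly moves toward the mean of its neighbours until all nodes agree on the network-wide average. The following is a generic sketch with made-up feeder data, not the paper's algorithm:

```python
import numpy as np

def average_consensus(values, adjacency, eps=0.25, tol=1e-6, max_iters=100):
    """Discrete-time average consensus: x <- x - eps * L x, where L is
    the graph Laplacian (written here via neighbour sums and degrees).
    The update preserves the mean, so all nodes converge to the
    network average of the initial values."""
    x = np.asarray(values, dtype=float)
    for it in range(1, max_iters + 1):
        nxt = x + eps * (adjacency @ x - adjacency.sum(axis=1) * x)
        if np.abs(nxt - x).max() < tol:
            return nxt, it
        x = nxt
    return x, max_iters

# four feeders in a ring, each starting from its local load estimate (kW)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
loads = [120.0, 80.0, 100.0, 60.0]
x, iters = average_consensus(loads, A)
print(round(x.mean(), 3), iters)   # mean of the loads (90.0) is preserved
```

The step size `eps` must satisfy eps < 1/deg_max for convergence; in a real dispatch scheme the consensus variable would be an incremental cost or a power mismatch rather than the raw load.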