Search Results (2,318)

Search Parameters:
Keywords = Cloud Platforms

21 pages, 5895 KB  
Article
Intelligent 3D Potato Cutting Simulation System Based on Multi-View Images and Point Cloud Fusion
by Ruize Xu, Chen Chen, Fanyi Liu and Shouyong Xie
Agriculture 2025, 15(19), 2088; https://doi.org/10.3390/agriculture15192088 - 7 Oct 2025
Abstract
The quality of seed pieces is crucial for potato planting. Each seed piece should contain viable potato eyes and maintain a uniform size for mechanized planting. However, existing intelligent methods are limited by a single view, making it difficult to satisfy both requirements simultaneously. To address this problem, we present an intelligent 3D potato cutting simulation system. A sparse 3D point cloud of the potato is reconstructed from multi-perspective images, which are acquired with a single-camera rotating platform. Subsequently, the 2D positions of potato eyes in each image are detected using deep learning, from which their 3D positions are mapped via back-projection and a clustering algorithm. Finally, the cutting paths are optimized by a Bayesian optimizer, which incorporates both the potato’s volume and the locations of its eyes, and generates cutting schemes suitable for different potato size categories. Experimental results showed that the system achieved a mean absolute percentage error of 2.16% (95% CI: 1.60–2.73%) for potato volume estimation, a potato eye detection precision of 98%, and a recall of 94%. The optimized cutting plans showed a volume coefficient of variation below 0.10 and avoided damage to the detected potato eyes, producing seed pieces that each contained potato eyes. This work demonstrates that the system can effectively utilize the detected potato eye information to obtain seed pieces containing potato eyes and having uniform size. The proposed system provides a feasible pathway for high-precision automated seed potato cutting. Full article
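The two headline metrics above, mean absolute percentage error for volume estimation and the coefficient of variation for seed-piece uniformity, have standard definitions. A minimal sketch (not the authors' code; all numbers below are illustrative, not the paper's data):

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(a - p) / abs(a) for a, p in zip(actual, predicted)) / len(actual)

def coefficient_of_variation(values):
    """Population standard deviation divided by the mean (dimensionless)."""
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return std / mean

# Illustrative numbers only: estimated vs. true potato volumes (cm^3)
true_vol = [310.0, 295.0, 402.0]
est_vol = [303.8, 301.0, 410.0]
print(f"MAPE = {mape(true_vol, est_vol):.2f}%")
print(f"CV of seed-piece volumes = {coefficient_of_variation([52.0, 48.0, 50.0]):.3f}")
```

A CV below 0.10, the threshold the abstract reports, means the seed-piece volumes stay within roughly 10% of their mean on average, which is what "uniform size for mechanized planting" is quantifying.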
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

18 pages, 3402 KB  
Article
Monocular Modeling of Non-Cooperative Space Targets Under Adverse Lighting Conditions
by Hao Chi, Ken Chen and Jiwen Zhang
Aerospace 2025, 12(10), 901; https://doi.org/10.3390/aerospace12100901 - 7 Oct 2025
Abstract
Accurate modeling of non-cooperative space targets remains a significant challenge, particularly under complex illumination conditions. A hybrid virtual–real framework is proposed that integrates photometric compensation, 3D reconstruction, and visibility determination to enhance the robustness and accuracy of monocular-based modeling systems. To overcome the breakdown of the classical photometric constancy assumption under varying illumination, a compensation-based photometric model is formulated and implemented. A point cloud–driven virtual space is constructed and refined through Poisson surface reconstruction, enabling per-pixel depth, normal, and visibility information to be efficiently extracted via GPU-accelerated rendering. An illumination-aware visibility model further distinguishes self-occluded and shadowed regions, allowing for selective pixel usage during photometric optimization, while motion parameter estimation is stabilized by analyzing angular velocity precession. Experiments conducted on both Unity3D-based simulations and a semi-physical platform with robotic hardware and a sunlight simulator demonstrate that the proposed method consistently outperforms conventional feature-based and direct SLAM approaches in trajectory accuracy and 3D reconstruction quality. These results highlight the effectiveness and practical significance of incorporating virtual space feedback for non-cooperative space target modeling. Full article
(This article belongs to the Section Astronautics & Space Science)

28 pages, 11737 KB  
Article
Comparative Evaluation of SNO and Double Difference Calibration Methods for FY-3D MERSI TIR Bands Using MODIS/Aqua as Reference
by Shufeng An, Fuzhong Weng, Xiuzhen Han and Chengzhi Ye
Remote Sens. 2025, 17(19), 3353; https://doi.org/10.3390/rs17193353 - 2 Oct 2025
Abstract
Radiometric consistency across satellite platforms is fundamental to producing high-quality Climate Data Records (CDRs). Because different cross-calibration methods have distinct advantages and limitations, comparative evaluation is necessary to ensure record accuracy. This study presents a comparative assessment of two widely applied calibration approaches—Simultaneous Nadir Overpass (SNO) and Double Difference (DD)—for the thermal infrared (TIR) bands of FY-3D MERSI. MODIS/Aqua serves as the reference sensor, while radiative transfer simulations driven by ERA5 inputs are generated with the Advanced Radiative Transfer Modeling System (ARMS) to support the analysis. The results show that SNO performs effectively when matchup samples are sufficiently large and globally representative but is less applicable under sparse temporal sampling or orbital drift. In contrast, the DD method consistently achieves higher calibration accuracy for MERSI Bands 24 and 25 under clear-sky conditions. It reduces mean biases from ~−0.5 K to within ±0.1 K and lowers RMSE from ~0.6 K to 0.3–0.4 K during 2021–2022. Under cloudy conditions, DD tends to overcorrect because coefficients derived from clear-sky simulations are not directly transferable to cloud-covered scenes, whereas SNO remains more stable though less precise. Overall, the results suggest that the two methods exhibit complementary strengths, with DD being preferable for high-accuracy calibration in clear-sky scenarios and SNO offering greater stability across variable atmospheric conditions. Future work will validate both methods under varied surface and atmospheric conditions and extend their use to additional sensors and spectral bands. Full article
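The double-difference comparison works by subtracting each sensor's own observation-minus-simulation residual, so the radiative-transfer simulations (here ARMS driven by ERA5) serve as a transfer standard and shared background errors cancel. A minimal sketch of that arithmetic, with made-up brightness temperatures rather than the study's data:

```python
def mean_o_minus_b(obs, sim):
    """Mean observation-minus-simulation brightness-temperature residual (K)."""
    return sum(o - s for o, s in zip(obs, sim)) / len(obs)

def double_difference(target_obs, target_sim, ref_obs, ref_sim):
    """DD = (O - B)_target - (O - B)_reference.
    Subtracting the reference sensor's residual cancels errors common to
    both simulations, leaving the target sensor's bias relative to the
    reference."""
    return mean_o_minus_b(target_obs, target_sim) - mean_o_minus_b(ref_obs, ref_sim)

# Made-up MERSI (target) and MODIS (reference) values over matched scenes (K)
mersi_obs, mersi_sim = [249.5, 260.2], [250.0, 260.5]
modis_obs, modis_sim = [249.9, 260.4], [250.0, 260.5]
print(f"DD bias = {double_difference(mersi_obs, mersi_sim, modis_obs, modis_sim):+.2f} K")
```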

26 pages, 7079 KB  
Article
Hydrological Response Analysis Using Remote Sensing and Cloud Computing: Insights from the Chalakudy River Basin, Kerala
by Gudihalli Munivenkatappa Rajesh, Sajeena Shaharudeen, Fahdah Falah Ben Hasher and Mohamed Zhran
Water 2025, 17(19), 2869; https://doi.org/10.3390/w17192869 - 1 Oct 2025
Abstract
Hydrological modeling is critical for assessing water availability and guiding sustainable resource management, particularly in monsoon-dependent, data-scarce basins such as the Chalakudy River Basin (CRB) in Kerala, India. This study integrated the Soil Conservation Service Curve Number (SCS-CN) method within the Google Earth Engine (GEE) platform, making novel use of multi-source, open access datasets (CHIRPS precipitation, MODIS land cover and evapotranspiration, and OpenLand soil data) to estimate spatially distributed long-term runoff (2001–2023). Model calibration against observed runoff showed strong performance (NSE = 0.86, KGE = 0.81, R2 = 0.83, RMSE = 29.37 mm and ME = 13.48 mm), validating the approach. Over 75% of annual runoff occurs during the southwest monsoon (June–September), with July alone contributing 220.7 mm. Seasonal assessments highlighted monsoonal excesses and dry-season deficits, while water balance correlated strongly with rainfall (r = 0.93) and runoff (r = 0.94) but negatively with evapotranspiration (r = –0.87). Time-series analysis indicated a slight rise in rainfall, a decline in evapotranspiration, and a marginal improvement in water balance, implying gradual enhancement of regional water availability. Spatial analysis revealed a west–east gradient in precipitation, evapotranspiration, and water balance, producing surpluses in lowlands and deficits in highlands. These findings underscore the potential of cloud-based hydrological modeling to capture spatiotemporal dynamics of hydrological variables and support climate-resilient water management in monsoon-driven and data-scarce river basins. Full article
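The SCS-CN method at the core of this workflow is a closed-form rainfall–runoff relation: with potential retention S = 25400/CN − 254 (mm) and initial abstraction Ia = 0.2S, event runoff is Q = (P − Ia)² / (P − Ia + S) when P > Ia, and zero otherwise. A minimal per-pixel sketch (the curve numbers below are illustrative, not the study's calibrated values):

```python
def scs_cn_runoff(p_mm, cn, lambda_ia=0.2):
    """SCS Curve Number direct runoff (mm) for a storm of depth p_mm."""
    s = 25400.0 / cn - 254.0   # potential maximum retention (mm)
    ia = lambda_ia * s         # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Illustrative: a 100 mm monsoon storm over two land-cover classes
print(f"Forest (CN=60): Q = {scs_cn_runoff(100.0, 60):.1f} mm")
print(f"Urban  (CN=90): Q = {scs_cn_runoff(100.0, 90):.1f} mm")
```

In the GEE implementation described above, the same relation is applied per pixel, with CN derived from the MODIS land-cover and soil datasets; the sketch only shows the scalar arithmetic.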
(This article belongs to the Section Hydrology)

21 pages, 2975 KB  
Article
ARGUS: An Autonomous Robotic Guard System for Uncovering Security Threats in Cyber-Physical Environments
by Edi Marian Timofte, Mihai Dimian, Alin Dan Potorac, Doru Balan, Daniel-Florin Hrițcan, Marcel Pușcașu and Ovidiu Chiraș
J. Cybersecur. Priv. 2025, 5(4), 78; https://doi.org/10.3390/jcp5040078 - 1 Oct 2025
Abstract
Cyber-physical infrastructures such as hospitals and smart campuses face hybrid threats that target both digital and physical domains. Traditional security solutions separate surveillance from network monitoring, leaving blind spots when attackers combine these vectors. This paper introduces ARGUS, an autonomous robotic platform designed to close this gap by correlating cyber and physical anomalies in real time. ARGUS integrates computer vision for facial and weapon detection with intrusion detection systems (Snort, Suricata) for monitoring malicious network activity. Operating through an edge-first microservice architecture, it ensures low latency and resilience without reliance on cloud services. Our evaluation covered five scenarios—access control, unauthorized entry, weapon detection, port scanning, and denial-of-service attacks—with each repeated ten times under varied conditions such as low light, occlusion, and crowding. Results show face recognition accuracy of 92.7% (500 samples), weapon detection accuracy of 89.3% (450 samples), and intrusion detection latency below one second, with minimal false positives. Audio analysis of high-risk sounds further enhanced situational awareness. Beyond performance, ARGUS addresses GDPR and ISO 27001 compliance and anticipates adversarial robustness. By unifying cyber and physical detection, ARGUS advances beyond state-of-the-art patrol robots, delivering comprehensive situational awareness and a practical path toward resilient, ethical robotic security. Full article
(This article belongs to the Special Issue Cybersecurity Risk Prediction, Assessment and Management)

33 pages, 7835 KB  
Article
PyGEE-ST-MEDALUS: AI Spatiotemporal Framework Integrating MODIS and Sentinel-1/-2 Data for Desertification Risk Assessment in Northeastern Algeria
by Zakaria Khaldi, Jingnong Weng, Franz Pablo Antezana Lopez, Guanhua Zhou, Ilyes Ghedjatti and Aamir Ali
Remote Sens. 2025, 17(19), 3350; https://doi.org/10.3390/rs17193350 - 1 Oct 2025
Abstract
Desertification threatens the sustainability of dryland ecosystems, yet many existing monitoring frameworks rely on static maps, coarse spatial resolution, or lack temporal forecasting capacity. To address these limitations, this study introduces PyGEE-ST-MEDALUS, a novel spatiotemporal framework combining the full MEDALUS desertification model with deep learning (CNN, LSTM, DeepMLP) and machine learning (RF, XGBoost, SVM) techniques on the Google Earth Engine (GEE) platform. Applied across Tebessa Province, Algeria (2001–2028), the framework integrates MODIS and Sentinel-1/-2 data to compute four core indices—climatic, soil, vegetation, and land management quality—and create the Desertification Sensitivity Index (DSI). Unlike prior studies that focus on static or spatial-only MEDALUS implementations, PyGEE-ST-MEDALUS introduces scalable, time-series forecasting, yielding superior predictive performance (R2 ≈ 0.96; RMSE < 0.03). Over 71% of the region was classified as having high to very high sensitivity, driven by declining vegetation and thermal stress. Comparative analysis confirms that this study advances the state-of-the-art by integrating interpretable AI, near-real-time satellite analytics, and full MEDALUS indicators into one cloud-based pipeline. These contributions make PyGEE-ST-MEDALUS a transferable, efficient decision-support tool for identifying degradation hotspots, supporting early warning systems, and enabling evidence-based land management in dryland regions. Full article

15 pages, 1081 KB  
Article
Digital Tools for Decision Support in Social Rehabilitation
by Valeriya Gribova and Elena Shalfeeva
J. Pers. Med. 2025, 15(10), 468; https://doi.org/10.3390/jpm15100468 - 1 Oct 2025
Abstract
Objectives: The process of social rehabilitation involves several stages, from assessing an individual’s condition and determining their potential for rehabilitation to implementing a personalized plan with continuous monitoring of progress. Advances in information technology, including artificial intelligence, enable the use of software-assisted solutions for objective assessments and personalized rehabilitation strategies. The research aims to present interconnected semantic models that represent expandable knowledge in the field of rehabilitation, as well as an integrated framework and methodology for constructing virtual assistants and personalized decision support systems based on these models. Materials and Methods: The knowledge and data accumulated in these areas require special tools for their representation, access, and use. To develop a set of models that form the basis of decision support systems in rehabilitation, it is necessary to (1) analyze the domain, identify concepts and group them by type, and establish a set of resources that should contain knowledge for intellectual support; (2) create a set of semantic models to represent knowledge for the rehabilitation of patients. The ontological approach, combined with the cloud cover of the IACPaaS platform, has been proposed. Results: This paper presents a suite of semantic models and a methodology for implementing decision support systems capable of expanding rehabilitation knowledge through updated regulatory frameworks and empirical data. Conclusions: The potential advantage of such systems is the combination of the most relevant knowledge with a high degree of personalization in rehabilitation planning. Full article
(This article belongs to the Section Personalized Medical Care)

19 pages, 1182 KB  
Article
HGAA: A Heterogeneous Graph Adaptive Augmentation Method for Asymmetric Datasets
by Hongbo Zhao, Wei Liu, Congming Gao, Weining Shi, Zhihong Zhang and Jianfei Chen
Symmetry 2025, 17(10), 1623; https://doi.org/10.3390/sym17101623 - 1 Oct 2025
Abstract
Edge intelligence plays an increasingly vital role in ensuring the reliability of distributed microservice-based applications, which are widely used in domains such as e-commerce, industrial IoT, and cloud-edge collaborative platforms. However, anomaly detection in these systems encounters a critical challenge: labeled anomaly data are scarce. This scarcity leads to severe class asymmetry and compromised detection performance, particularly under the resource constraints of edge environments. Recent approaches based on Graph Neural Networks (GNNs)—often integrated with DeepSVDD and regularization techniques—have shown potential, but they rarely address this asymmetry in an adaptive, scenario-specific way. This work proposes Heterogeneous Graph Adaptive Augmentation (HGAA), a framework tailored for edge intelligence scenarios. HGAA dynamically optimizes graph data augmentation by leveraging feedback from online anomaly detection. To enhance detection accuracy while adhering to resource constraints, the framework incorporates a selective bias toward underrepresented anomaly types. It uses knowledge distillation to model dataset-dependent distributions and adaptively adjusts augmentation probabilities, thus avoiding excessive computational overhead in edge environments. Additionally, a dynamic adjustment mechanism evaluates augmentation success rates in real time, refining the selection processes to maintain model robustness. Experiments were conducted on two real-world datasets (TraceLog and FlowGraph) under simulated edge scenarios. Results show that HGAA consistently outperforms competitive baseline methods. Specifically, compared with the best non-adaptive augmentation strategies, HGAA achieves an average improvement of 4.5% in AUC and 4.6% in AP. Even larger gains are observed in challenging cases: for example, when using the HGT model on the TraceLog dataset, AUC improves by 14.6% and AP by 18.1%. 
Beyond accuracy, HGAA also significantly enhances efficiency: compared with filter-based methods, training time is reduced by up to 71% on TraceLog and 8.6% on FlowGraph, confirming its suitability for resource-constrained edge environments. These results highlight the potential of adaptive, edge-aware augmentation techniques in improving microservice anomaly detection within heterogeneous, resource-limited environments. Full article
(This article belongs to the Special Issue Symmetry and Asymmetry in Embedded Systems)

18 pages, 654 KB  
Article
Trustworthy Face Recognition as a Service: A Multi-Layered Approach for Mitigating Spoofing and Ensuring System Integrity
by Mostafa Kira, Zeyad Alajamy, Ahmed Soliman, Yusuf Mesbah and Manuel Mazzara
Future Internet 2025, 17(10), 450; https://doi.org/10.3390/fi17100450 - 30 Sep 2025
Abstract
Facial recognition systems are increasingly used for authentication across domains such as finance, e-commerce, and public services, but their growing adoption raises significant concerns about spoofing attacks enabled by printed photos, replayed videos, or AI-generated deepfakes. To address this gap, we introduce a multi-layered Face Recognition-as-a-Service (FRaaS) platform that integrates passive liveness detection with active challenge–response mechanisms, thereby defending against both low-effort and sophisticated presentation attacks. The platform is designed as a scalable cloud-based solution, complemented by an open-source SDK for seamless third-party integration, and guided by ethical AI principles of fairness, transparency, and privacy. A comprehensive evaluation validates the system’s logic and implementation: (i) Frontend audits using Lighthouse consistently scored above 96% in performance, accessibility, and best practices; (ii) SDK testing achieved over 91% code coverage with reliable OAuth flow and error resilience; (iii) Passive liveness layer employed the DeepPixBiS model, which achieves an Average Classification Error Rate (ACER) of 0.4 on the OULU–NPU benchmark, outperforming prior state-of-the-art methods; and (iv) Load simulations confirmed high throughput (276 req/s), low latency (95th percentile at 1.51 ms), and zero error rates. Together, these results demonstrate that the proposed platform is robust, scalable, and trustworthy for security-critical applications. Full article
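The ACER figure cited for the passive liveness layer is the standard ISO/IEC 30107-3 presentation-attack-detection summary: the mean of APCER (attack presentations accepted as bona fide) and BPCER (bona fide presentations rejected as attacks). A minimal sketch with toy labels, not the OULU-NPU evaluation itself:

```python
def pad_error_rates(labels, preds):
    """labels/preds entries are 'attack' or 'bonafide'.
    Returns (APCER, BPCER, ACER) as fractions in [0, 1]."""
    attack_preds = [p for y, p in zip(labels, preds) if y == "attack"]
    bonafide_preds = [p for y, p in zip(labels, preds) if y == "bonafide"]
    apcer = sum(p == "bonafide" for p in attack_preds) / len(attack_preds)
    bpcer = sum(p == "attack" for p in bonafide_preds) / len(bonafide_preds)
    return apcer, bpcer, (apcer + bpcer) / 2.0

# Toy data: one replay attack slips through, all genuine users accepted
labels = ["attack", "attack", "bonafide", "bonafide"]
preds  = ["bonafide", "attack", "bonafide", "bonafide"]
apcer, bpcer, acer = pad_error_rates(labels, preds)
print(f"APCER={apcer:.2f}  BPCER={bpcer:.2f}  ACER={acer:.2f}")
```

Because ACER averages the two error types, a low score requires the detector to be good at both rejecting spoofs and not inconveniencing genuine users.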

20 pages, 1991 KB  
Article
EcoWild: Reinforcement Learning for Energy-Aware Wildfire Detection in Remote Environments
by Nuriye Yildirim, Mingcong Cao, Minwoo Yun, Jaehyun Park and Umit Y. Ogras
Sensors 2025, 25(19), 6011; https://doi.org/10.3390/s25196011 - 30 Sep 2025
Abstract
Early wildfire detection in remote areas remains a critical challenge due to limited connectivity, intermittent solar energy, and the need for autonomous, long-term operation. Existing systems often rely on fixed sensing schedules or cloud connectivity, making them impractical for energy-constrained deployments. We introduce EcoWild, a reinforcement learning-driven cyber-physical system for energy-adaptive wildfire detection on solar-powered edge devices. EcoWild combines a decision tree-based fire risk estimator, lightweight on-device smoke detection, and a reinforcement learning agent that dynamically adjusts sensing and communication strategies based on battery levels, solar input, and estimated fire risk. The system models realistic solar harvesting, battery dynamics, and communication costs to ensure sustainable operation on embedded platforms. We evaluate EcoWild using real-world solar, weather, and fire image datasets in a high-fidelity simulation environment. Results show that EcoWild consistently maintains responsiveness while avoiding battery depletion under diverse conditions. Compared to static baselines, it achieves 2.4× to 7.7× faster detection, maintains moderate energy consumption, and avoids system failure due to battery depletion across 125 deployment scenarios. Full article
(This article belongs to the Section Intelligent Sensors)

22 pages, 365 KB  
Article
Development of a Fully Autonomous Offline Assistive System for Visually Impaired Individuals: A Privacy-First Approach
by Fitsum Yebeka Mekonnen, Mohammad F. Al Bataineh, Dana Abu Abdoun, Ahmed Serag, Kena Teshale Tamiru, Winner Abula and Simon Darota
Sensors 2025, 25(19), 6006; https://doi.org/10.3390/s25196006 - 29 Sep 2025
Abstract
Visual impairment affects millions worldwide, creating significant barriers to environmental interaction and independence. Existing assistive technologies often rely on cloud-based processing, raising privacy concerns and limiting accessibility in resource-constrained environments. This paper explores the integration and potential of open-source AI models in developing a fully offline assistive system that can be locally set up and operated to support visually impaired individuals. Built on a Raspberry Pi 5, the system combines real-time object detection (YOLOv8), optical character recognition (Tesseract), face recognition with voice-guided registration, and offline voice command control (VOSK), delivering hands-free multimodal interaction without dependence on cloud infrastructure. Audio feedback is generated using Piper for real-time environmental awareness. Designed to prioritize user privacy, low latency, and affordability, the platform demonstrates that effective assistive functionality can be achieved using only open-source tools on low-power edge hardware. Evaluation results in controlled conditions show 75–90% detection and recognition accuracies, with sub-second response times, confirming the feasibility of deploying such systems in privacy-sensitive or resource-constrained environments. Full article
(This article belongs to the Section Biomedical Sensors)

24 pages, 1641 KB  
Article
Intellectual Property Protection Through Blockchain: Introducing the Novel SmartRegistry-IP for Secure Digital Ownership
by Abeer S. Al-Humaimeedy
Future Internet 2025, 17(10), 444; https://doi.org/10.3390/fi17100444 - 29 Sep 2025
Abstract
The rise of digital content has made the need for reliable and practical intellectual property (IP) management systems more critical than ever. Most traditional IP systems are prone to issues such as delays, inefficiency, and data security breaches. This paper introduces SmartRegistry-IP, a system developed to simplify the registration, licensing, and transfer of intellectual property assets in a secure and scalable decentralized environment. By utilizing the InterPlanetary File System (IPFS) for decentralized storage, SmartRegistry-IP achieves a low storage latency of 300 milliseconds, outperforming both cloud storage (500 ms) and local storage (700 ms). The system also supports a high transaction throughput of 120 transactions per second. Through the use of smart contracts, licensing agreements are automatically and securely enforced, reducing the need for intermediaries and lowering operational costs. Additionally, the proof-of-work process verifies all transactions, ensuring higher security and maintaining data consistency. The platform integrates an intuitive graphical user interface that enables seamless asset uploads, license management, and analytics visualization in real time. SmartRegistry-IP demonstrates superior efficiency compared to traditional systems, achieving a blockchain delay of 300 ms, which is half the latency of standard systems, averaging 600 ms. According to this study, adopting SmartRegistry-IP provides IP organizations with enhanced security and transparent management, ensuring they can overcome operational challenges regardless of their size. As a result, the use of blockchain for intellectual property management is expected to increase, helping maintain precise records and reducing time spent on online copyright registration. Full article

19 pages, 2248 KB  
Article
A Platform for Machine Learning Operations for Network Constrained Far-Edge Devices
by Calum McCormack and Imene Mitiche
Appl. Syst. Innov. 2025, 8(5), 141; https://doi.org/10.3390/asi8050141 - 28 Sep 2025
Abstract
Machine Learning (ML) models developed for the Edge have seen a massive uptake in recent years, with many types of predictive analytics, condition monitoring and pre-emptive fault detection developed and in-use on Internet of Things (IoT) systems serving industrial power generators, environmental monitoring systems and more. At scale, these systems can be difficult to manage and keep upgraded, especially those devices that are deployed in far-Edge networks with unreliable networking. This paper presents a simple and novel platform architecture for deployment and management of ML at the Edge for increasing model and device reliability by reducing downtime and access to new model versions via the ability to manage models from both Cloud and Edge. This platform provides an Edge ML Operations “Mirror” that replicates and minimises cloud MLOps systems to provide reliable delivery and retraining of models at the network Edge, solving many problems associated with both Cloud-first and Edge networks. The paper explores and explains the architecture and components of the system, offering a prototype system that was evaluated by measuring time to deploy models with regard to differing network instabilities in a simulated environment to highlight the necessity for local management and federated training of models as a secondary function to Cloud model management. This architecture could be utilised by researchers to improve the deployment, recording and management of ML experiments on the Edge. Full article

20 pages, 1367 KB  
Review
AI-Integrated QSAR Modeling for Enhanced Drug Discovery: From Classical Approaches to Deep Learning and Structural Insight
by Mahesh Koirala, Lindy Yan, Zoser Mohamed and Mario DiPaola
Int. J. Mol. Sci. 2025, 26(19), 9384; https://doi.org/10.3390/ijms26199384 - 25 Sep 2025
Abstract
Integrating artificial intelligence (AI) with Quantitative Structure-Activity Relationship (QSAR) modeling has transformed modern drug discovery by enabling faster, more accurate, and scalable identification of therapeutic compounds. This review outlines the evolution from classical QSAR methods, such as multiple linear regression and partial least squares, to advanced machine learning and deep learning approaches, including graph neural networks and SMILES-based transformers. Molecular docking and molecular dynamics simulations are presented as complementary tools that strengthen mechanistic understanding and structural insight into ligand-target interactions. The review also discusses PROTACs and targeted protein degradation, ADMET prediction, and the role of public databases and cloud-based platforms in democratizing access to computational modeling. Challenges related to validation, interpretability, regulatory standards, and ethical concerns are examined, along with emerging trends in AI-driven drug development. The review serves as a guideline for using computational models and databases in explainable, data-rich drug discovery pipelines. Full article
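A classical QSAR model of the kind the abstract starts from (multiple linear regression on molecular descriptors) can be sketched in a few lines. The descriptor values and activities below are made up for illustration; a real study would compute descriptors from structures and validate on held-out compounds.

```python
import numpy as np

# Toy descriptor matrix: rows = compounds, columns = molecular descriptors
# (e.g. logP, molecular weight, H-bond donor count -- values are invented).
X = np.array([
    [1.2, 180.2, 1],
    [2.5, 250.3, 0],
    [0.8, 150.1, 2],
    [3.1, 310.4, 1],
    [1.9, 210.2, 0],
])
y = np.array([5.1, 6.3, 4.2, 7.0, 5.8])  # hypothetical pIC50 activities

# Fit a classical multiple-linear-regression QSAR model by least squares.
X_design = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend intercept column
coef, *_ = np.linalg.lstsq(X_design, y, rcond=None)

def predict(descriptors):
    # Predicted activity for a new compound's descriptor vector.
    return coef[0] + descriptors @ coef[1:]

print(predict(np.array([2.0, 220.0, 1])))
```

The deep-learning approaches the review surveys (graph neural networks, SMILES transformers) replace this hand-built descriptor step with learned representations, but the regression framing is the same.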
28 pages, 3237 KB  
Article
CodeDive: A Web-Based IDE with Real-Time Code Activity Monitoring for Programming Education
by Hyunchan Park, Youngpil Kim, Kyungwoon Lee, Soonheon Jin, Jinseok Kim, Yan Heo, Gyuho Kim and Eunhye Kim
Appl. Sci. 2025, 15(19), 10403; https://doi.org/10.3390/app151910403 - 25 Sep 2025
Abstract
This paper introduces CodeDive, a web-based programming environment with real-time behavioral tracking designed to enhance student progress assessment and provide timely support for learners, while also addressing the academic integrity challenges posed by Large Language Models (LLMs). Visibility into the student's learning process has become essential for effective pedagogical analysis and personalized feedback, particularly in an era when LLMs can generate complete solutions, making it difficult to assess student learning and ensure academic integrity from the final outcome alone. CodeDive provides this process-level transparency by capturing fine-grained events, such as code edits, executions, and pauses, enabling instructors to gain actionable insights for timely student support, analyze learning trajectories, and effectively uphold academic integrity. It operates on a scalable Kubernetes-based cloud architecture, ensuring security and user isolation via containerization and SSO authentication. As a browser-accessible platform, it requires no local installation, simplifying deployment. The system produces a rich data stream of all interaction events for pedagogical analysis. In a Spring 2025 deployment in an Operating Systems course with approximately 100 students, CodeDive captured nearly 25,000 code snapshots and over 4000 execution events with low overhead. The collected data powered an interactive dashboard visualizing each learner's coding timeline, offering actionable insights for timely student support and a deeper understanding of their problem-solving strategies. By shifting evaluation from the final artifact to the developmental process, CodeDive offers a practical solution for comprehensively assessing student progress and verifying authentic learning in the LLM era. The successful deployment confirms that CodeDive is a stable and valuable tool for maintaining pedagogical transparency and integrity in modern classrooms. Full article
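The fine-grained event capture the abstract describes (edits, executions, pauses, each timestamped into a stream that feeds a timeline dashboard) can be sketched minimally. The class and method names here are illustrative, not CodeDive's real API.

```python
import time
from dataclasses import dataclass


@dataclass
class CodeEvent:
    # One fine-grained interaction event, as process-level trackers record.
    kind: str        # e.g. "edit", "execute", "pause"
    payload: str     # snapshot or command associated with the event
    timestamp: float


class ActivityTracker:
    """Minimal sketch of process-level event capture for a web IDE.
    Names are illustrative, not CodeDive's actual implementation."""

    def __init__(self):
        self.events = []

    def record(self, kind, payload=""):
        self.events.append(CodeEvent(kind, payload, time.time()))

    def timeline(self):
        # Event kinds in chronological order, for dashboard visualization.
        return [e.kind for e in sorted(self.events, key=lambda e: e.timestamp)]


tracker = ActivityTracker()
tracker.record("edit", "def fib(n): ...")
tracker.record("execute", "fib(10)")
tracker.record("edit", "added memoization")
print(tracker.timeline())  # → ['edit', 'execute', 'edit']
```

A real deployment would stream these events from the browser to a backend store rather than keep them in memory, but the snapshot-plus-timestamp structure is the core of timeline reconstruction.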
(This article belongs to the Special Issue ICT in Education, 2nd Edition)