Search Results (594)

Search Parameters:
Keywords = cloud provider comparison

15 pages, 2074 KB  
Article
Research on Encryption and Decryption Technology of Microservice Communication Based on Block Cipher
by Shijie Zhang, Xiaolan Xie, Ting Fan and Yu Wang
Electronics 2026, 15(2), 431; https://doi.org/10.3390/electronics15020431 - 19 Jan 2026
Abstract
The efficiency optimization of encryption and decryption algorithms in cloud environments is addressed in this study, where the processing speed of encryption and decryption is enhanced through the application of multi-threaded parallel technology. In view of the high-concurrency and distributed storage characteristics of cloud platforms, a multi-threaded concurrency mechanism is adopted for the direct processing of data streams. Compared with the traditional serial processing mode, four distinct encryption algorithms, namely AES, DES, SM4 and Ascon, are employed, and different data units are processed concurrently by means of multithreaded technology. Based on multi-dimensional performance evaluation indicators (including throughput, memory footprint and security level), comparative analyses are carried out to optimize the design scheme; accordingly, multi-threaded collaborative encryption is realized to improve the overall operation efficiency. Experimental results indicate that, in comparison with the traditional serial encryption method, the encryption and decryption latency of the algorithm is reduced by around 50%, which significantly lowers the time overhead associated with encryption and decryption processes. Simultaneously, the throughput of AES and DES algorithms is observed to be doubled, which leads to a remarkable improvement in communication efficiency. Moreover, under the premise that the original secure communication capability is guaranteed, system resource overhead is effectively reduced by SM4 and Ascon algorithms. On this basis, a quantitative reference basis is provided for cloud platforms to develop targeted encryption strategies tailored to diverse business demands. In conclusion, the proposed approach is of profound significance for advancing the synergistic optimization of security and performance in cloud-native data communication scenarios. Full article
(This article belongs to the Special Issue AI for Wireless Communications and Security)
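The multi-threaded, per-data-unit processing the abstract describes can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: a toy XOR routine stands in for AES/DES/SM4/Ascon (none of which are in the Python standard library), and the chunk size and function names are mine.

```python
from concurrent.futures import ThreadPoolExecutor

BLOCK = 16  # bytes per data unit, mirroring a 128-bit cipher block

def toy_encrypt(block: bytes, key: bytes) -> bytes:
    # Hypothetical stand-in for AES/SM4: XOR with a repeating key (self-inverse).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(block))

def parallel_encrypt(data: bytes, key: bytes, workers: int = 4) -> bytes:
    # Split the stream into independent data units and process them concurrently,
    # as in the paper's multi-threaded collaborative scheme.
    chunks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        out = pool.map(lambda c: toy_encrypt(c, key), chunks)
    return b"".join(out)
```

Note that in CPython, threads only accelerate a CPU-bound cipher if the cipher implementation releases the GIL (as C-backed crypto libraries typically do); otherwise a process pool is the usual substitute.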

23 pages, 5292 KB  
Article
Research on Rapid 3D Model Reconstruction Based on 3D Gaussian Splatting for Power Scenarios
by Huanruo Qi, Yi Zhou, Chen Chen, Lu Zhang, Peipei He, Xiangyang Yan and Mengqi Zhai
Sustainability 2026, 18(2), 726; https://doi.org/10.3390/su18020726 - 10 Jan 2026
Abstract
As core infrastructure of power transmission networks, power towers require high-precision 3D models, which are critical for intelligent inspection and digital twin applications of power transmission lines. Traditional reconstruction methods, such as LiDAR scanning and oblique photogrammetry, suffer from issues including high operational risks, low modeling efficiency, and loss of fine details. To address these limitations, this paper proposes a 3D Gaussian Splatting (3DGS)-based method for power tower 3D reconstruction to enhance reconstruction efficiency and detail preservation capability. First, a multi-view data acquisition scheme combining “unmanned aerial vehicle + oblique photogrammetry” was designed to capture RGB images acquired by Unmanned Aerial Vehicle (UAV) platforms, which are used as the primary input for 3D reconstruction. Second, a sparse point cloud was generated via Structure from Motion. Finally, based on 3DGS, Gaussian model initialization, differentiable rendering, and adaptive density control were performed to produce high-precision 3D models of power towers. Taking two typical power tower types as experimental subjects, comparisons were made with the oblique photogrammetry + ContextCapture method. Experimental results demonstrate that 3DGS not only achieves high model completeness (with the reconstructed model nearly indistinguishable from the original images) but also excels in preserving fine details such as angle steels and cables. Additionally, the final modeling time is reduced by over 70% compared to traditional oblique photogrammetry. 3DGS enables efficient and high-precision reconstruction of power tower 3D models, providing a reliable technical foundation for digital twin applications in power transmission lines. By significantly improving reconstruction efficiency and reducing operational costs, the proposed method supports sustainable power infrastructure inspection, asset lifecycle management, and energy-efficient digital twin applications. Full article

19 pages, 2610 KB  
Article
Open HTML5 Widgets for Smart Learning: Enriching Educational 360° Virtual Tours and a Comparative Evaluation vs. H5P
by Félix Fariña-Rodriguez, Jose Luis Saorín, Dámari Melian Díaz, Jose Luis Saorín-Ferrer and Cecile Meier
Appl. Sci. 2026, 16(1), 338; https://doi.org/10.3390/app16010338 - 29 Dec 2025
Abstract
In educational smart learning contexts, 360° virtual tours deliver authentic, cross-device experiences, but uptake is limited by subscription-based authoring tools and free options that restrict in-tour rich media embedding. To address this, we present a library of eight open-source HTML5 widgets (image gallery, PDF viewer, quiz, 3D model viewer, webpage viewer, audio player, YouTube viewer, and image comparison) that can be embedded directly in the viewer as HTML pop-ups (e.g., CloudPano) or run standalone, with dual packaging (single self-contained HTML or server-hosted assets referenced by URL). Evaluation is limited to technical efficiency (resource size, load performance, and cross-device/browser compatibility), with pedagogical outcomes and learner performance beyond the scope. The architecture minimizes dependencies and enables reuse in virtual classrooms via iframes. We provide a unified web interface and a repository to promote adoption, auditability, and community contributions. The results show that standalone widgets are between 20 and 100 times smaller than H5P equivalents produced with Lumi Education and exhibit shorter measured load times (0.1–0.5 ms). Seamless integration is demonstrated for CloudPano and Moodle. By lowering costs, simplifying deployment, and broadening in-tour media capabilities, the proposed widgets offer a pragmatic pathway to enrich educational 360° tours. Full article
(This article belongs to the Special Issue Application of Smart Learning in Education)

26 pages, 3290 KB  
Article
Empirical Evaluation of Big Data Stacks: Performance and Design Analysis of Hadoop, Modern, and Cloud Architectures
by Widad Elouataoui and Youssef Gahi
Big Data Cogn. Comput. 2026, 10(1), 7; https://doi.org/10.3390/bdcc10010007 - 24 Dec 2025
Abstract
The proliferation of big data applications across various industries has led to a paradigm shift in data architecture, with traditional approaches giving way to more agile and scalable frameworks. The evolution of big data architecture began with the emergence of the Hadoop-based data stack, leveraging technologies like Hadoop Distributed File System (HDFS) and Apache Spark for efficient data processing. However, recent years have seen a shift towards modern data stacks, offering flexibility and diverse toolsets tailored to specific use cases. Concurrently, cloud computing has revolutionized big data management, providing unparalleled scalability and integration capabilities. Despite their benefits, navigating these data stack paradigms can be challenging. While existing literature offers valuable insights into individual data stack paradigms, there remains a dearth of studies that offer practical, in-depth comparisons of these paradigms across the entire big data value chain. To address this gap, this paper examines three main big data stack paradigms: the Hadoop data stack, the modern data stack, and the cloud-based data stack. In this study, we conduct an exhaustive architectural comparison of these stacks covering the entire big data value chain from data acquisition to exposition. Moreover, this study extends beyond architectural considerations to include end-to-end use case implementations for a comprehensive evaluation of each stack. Using a large dataset of Amazon reviews, different data stack scenarios are implemented and compared. Furthermore, the paper explores critical factors such as data integration, implementation costs, and ease of deployment to provide researchers and practitioners with a relevant and up-to-date reference for navigating the complex landscape of big data technologies and making informed decisions about data strategies. Full article
(This article belongs to the Topic Big Data and Artificial Intelligence, 3rd Edition)

25 pages, 3798 KB  
Article
Soil Moisture Retrieval from TM-1 GNSS-R Reflections with Auxiliary Geophysical Variables: A Multi-Cluster and Seasonal Evaluation
by Yu Jin, Min Ji, Naiquan Zheng, Zhihua Zhang, Penghui Ding and Qian Zhao
Land 2026, 15(1), 36; https://doi.org/10.3390/land15010036 - 24 Dec 2025
Abstract
Current passive microwave satellites like SMAP still face limitations in observational frequency and responsiveness in regions with frequent cloud cover, dense vegetation, or complex terrain, making it difficult to achieve continuous global monitoring with high spatio-temporal resolution. To enhance global high-frequency monitoring capabilities, this study utilizes global reflectivity data provided by the Tianmu-1 (TM-1) constellation since 2023, combined with multiple auxiliary variables, including NDVI, VWC, precipitation, and elevation, to develop a 9 km resolution soil moisture retrieval model. Several spatial clustering and temporal partitioning strategies are incorporated for systematic evaluation. Additionally, since the publicly available TM-1 L1 reflectivity data does not provide separable polarization channels, this study uses DDM/specular point reflectivity as the primary observable quantity for modeling and mitigates non-soil factor interference by introducing multi-source priors such as NDVI, VWC, precipitation, terrain, and roughness. Unlike SMAP’s “single orbit daily fixed local time” observation mode, TM-1, leveraging multi-constellation and multi-orbit reflection geometry, offers more balanced temporal sampling and availability in cloudy, rainy, and mid-to-high latitude regions. This enables temporal gap filling and rapid event response (such as moisture transitions within hours after precipitation events) during periods of SMAP’s quality masking or intermittent data loss. Results indicate that the model combining LC-cluster with seasonal partitioning delivers the best performance at the cluster level, achieving a correlation coefficient (R) of 0.8155 and an unbiased RMSE (ubRMSE) of 0.0689 cm3/cm3, with a particularly strong performance in barren and shrub ecosystems. Comparisons with SMAP and ISMN datasets show that TM-1 is consistent with mainstream products in trend tracking and systematic error control, providing valuable support for global and high-latitude studies of dynamic hydrothermal processes due to its more balanced mid- and high-latitude orbital coverage. Full article
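The abstract reports skill as a correlation coefficient and an unbiased RMSE (ubRMSE). The ubRMSE is the standard soil-moisture validation metric that removes the mean bias before computing the RMSE; a minimal sketch (function name mine, not from the paper):

```python
def ubrmse(estimates, references):
    """Unbiased RMSE: RMSE of the errors after subtracting their mean bias."""
    errors = [e - r for e, r in zip(estimates, references)]
    bias = sum(errors) / len(errors)
    return (sum((d - bias) ** 2 for d in errors) / len(errors)) ** 0.5
```

A retrieval with a constant offset from the in situ reference therefore scores a ubRMSE of zero, which is why ubRMSE is usually reported alongside, not instead of, the bias.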

14 pages, 939 KB  
Article
Effective Height of Mountaintop Towers Revisited: Simulation-Based Assessment for Self-Initiated Upward Lightning
by André Tiso Lobato, Liliana Arevalo and Vernon Cooray
Atmosphere 2026, 17(1), 16; https://doi.org/10.3390/atmos17010016 - 23 Dec 2025
Abstract
Mountaintop towers are highly exposed to self-initiated upward lightning flashes. Accurate estimation of their effective height—the equivalent flat-ground height yielding the same lightning exposure—is essential for reliable exposure assessment, for interpreting and calibrating measurement data at instrumented mountaintop towers, and for comparison with established protection guidelines. This study applies a two-step numerical framework that couples finite-element electrostatic simulations with a leader-inception and propagation model for representative tower–terrain configurations reflecting reference instrumented mountaintop sites in lightning research. For each configuration, the stabilization field, the minimum background electric field enabling continuous upward leader propagation to the cloud base, is determined, from which effective heights are obtained. The simulated results agree with the analytical formulation of Zhou et al. (within ~10%), while simplified or empirical approaches by Shindo, Eriksson, and Pierce exhibit larger deviations, especially for broader mountains. A normalized analysis demonstrates that the tower-to-mountain slenderness ratio (h/a) governs the scaling of effective height, following a power-law dependence with exponent −0.17 (R2 = 0.94). This compact relation enables direct estimation of effective height from geometric parameters alone, complementing detailed leader-inception modeling. The findings validate the proposed physics-based framework, quantify the geometric dependence of effective height for mountaintop towers, and provide a foundation for improving lightning-exposure assessments, measurement calibration and design standards for elevated structures. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
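The compact scaling relation the abstract describes can be written down directly: the normalized effective height is taken to vary as a power law in the tower-to-mountain slenderness ratio h/a with exponent −0.17. The prefactor is not given in the abstract, so `C` below is a placeholder, and the function name is mine; this is a sketch of the stated dependence, not the authors' fitted formula.

```python
def normalized_effective_height(h, a, C=1.0, exponent=-0.17):
    """Power-law scaling with the slenderness ratio h/a (exponent from the
    abstract, R^2 = 0.94); C is a placeholder fit constant."""
    return C * (h / a) ** exponent
```

Because the exponent is negative, a tower that is slender relative to its mountain (large h/a) gains proportionally less effective height than a stubby one, which is consistent with broader mountains dominating the field enhancement.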

29 pages, 613 KB  
Article
Design and Comparison of Hardware Architectures for FIPS 140-Certified Cryptographic Applications
by Peter Kolok, Michal Hodon, Michal Kubascik and Jan Kapitulik
Electronics 2026, 15(1), 44; https://doi.org/10.3390/electronics15010044 - 23 Dec 2025
Abstract
Modern cryptographic systems increasingly depend on certified hardware modules to guarantee trustworthy key management, tamper resistance, and secure execution across Internet of Things (IoT), embedded, and cloud infrastructures. Although numerous FIPS 140-certified platforms exist, prior studies typically evaluate these solutions in isolation, offering limited insight into their cross-domain suitability and practical deployment trade-offs. This work addresses this gap by proposing a unified, multi-criteria evaluation framework aligned with the FIPS 140 standard family (including both FIPS 140-2 and FIPS 140-3), replacing the earlier formulation that assumed an exclusive FIPS 140-3 evaluation model. The framework systematically compares secure elements (SEs), Trusted Platform Modules (TPMs), embedded Systems-on-Chip (SoCs) with dedicated security coprocessors, enterprise-grade Hardware Security Modules (HSMs), and cloud-based trusted execution environments. It integrates certification analysis, performance normalization, physical-security assessment, integration complexity, and total cost of ownership. Validation is performed using verified CMVP certification records and harmonized performance benchmarks derived from publicly available FIPS datasets. The results reveal pronounced architectural trade-offs: lightweight SEs offer cost-efficient protection for large-scale IoT deployments, while enterprise HSMs and cloud enclaves provide high throughput and Level 3 assurance at the expense of increased operational and integration complexity. Quantitative comparison further shows that secure elements reduce active power consumption by approximately 80–85% compared to TPM 2.0 modules (<20 mW vs. 100–150 mW) but typically require 2–3× higher firmware-integration effort due to middleware dependencies. Likewise, SE050-based architectures deliver roughly 5× higher cryptographic throughput than TPMs (∼500 ops/s vs. ∼100 ops/s), whereas enterprise HSMs outperform all embedded platforms by two orders of magnitude (>10,000 ops/s). Because the evaluated platforms span both FIPS 140-2 and FIPS 140-3 certifications, the comparative analysis interprets their security guarantees in terms of requirements shared across the FIPS 140 standard family, rather than attributing all properties to FIPS 140-3 alone. No single architecture emerges as universally optimal; rather, platform suitability depends on the desired balance between assurance level, scalability, performance, and deployment constraints. The findings offer actionable guidance for engineers and system architects selecting FIPS-validated hardware for secure and compliant digital infrastructures. Full article
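The throughput and power figures quoted in the abstract lend themselves to a simple derived metric such as operations per milliwatt, which is one way the SE-versus-TPM trade-off can be made concrete. The sketch below uses only the rough numbers from the abstract (SE ∼500 ops/s at <20 mW, TPM 2.0 ∼100 ops/s at 100–150 mW); the dictionary layout and metric choice are mine, not the paper's framework.

```python
# Rough per-platform figures as quoted in the abstract (midpoint for TPM power).
platforms = {
    "secure_element": {"ops_per_s": 500, "power_mw": 20},
    "tpm_2_0":        {"ops_per_s": 100, "power_mw": 125},
}

def ops_per_mw(p):
    # Energy efficiency proxy: cryptographic operations per milliwatt.
    return p["ops_per_s"] / p["power_mw"]

ranking = sorted(platforms, key=lambda k: ops_per_mw(platforms[k]), reverse=True)
```

On these numbers the secure element is roughly 30× more energy-efficient per operation, which is consistent with the abstract's recommendation of SEs for large-scale IoT deployments despite their higher integration effort.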

25 pages, 33156 KB  
Article
Combining Ground Penetrating Radar and a Terrestrial Laser Scanner to Constrain EM Velocity: A Novel Approach for Masonry Wall Characterization in Cultural Heritage Applications
by Giorgio Alaia, Maurizio Ercoli, Raffaella Brigante, Laura Marconi, Nicola Cavalagli and Fabio Radicioni
Remote Sens. 2026, 18(1), 15; https://doi.org/10.3390/rs18010015 - 20 Dec 2025
Abstract
In this paper, the combined use of Ground Penetrating Radar (GPR) and a Terrestrial Laser Scanner (TLS) is illustrated to highlight multiple advantages arising from the integration of these two distinct Non-Destructive Testing (NDT) techniques in the investigation of a historical wall. In particular, thanks to the TLS point cloud, a precise evaluation of the medium’s thickness, as well as its irregularities, was carried out. Based on this accurate geometrical constraint, a first-order velocity model, to be used for a time-to-depth conversion and for a post-stack GPR data migration, was computed. Moreover, a joint visualization of both datasets (GPR and TLS) was achieved in a novel tridimensional workspace. This solution provided a more straightforward and efficient way of testing the reliability of the combined results, proving the efficiency of the proposed method in the estimation of a velocity model, especially in comparison to conventional GPR methods. This demonstrates how the integration of different remote sensing methodologies can yield a more solid interpretation, taking into account the uncertainties related to the geometrical irregularities of the external wall’s surface and the inner structure generating complex GPR signatures. Full article
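The velocity constraint at the core of this integration can be illustrated with the standard GPR relations: if the wall thickness d is fixed by the TLS point cloud and the two-way travel time t of the back-wall reflection is picked from the radargram, the EM velocity is v = 2d/t, and a bulk relative permittivity follows as εr = (c/v)². The sketch below shows these textbook relations only; the function names are mine, and it ignores the surface irregularities the paper explicitly accounts for.

```python
C_VACUUM = 0.2998  # speed of light in m/ns

def em_velocity(thickness_m, two_way_time_ns):
    """First-order EM velocity from a known thickness (e.g., TLS-derived)
    and the picked two-way travel time of the back-wall reflection."""
    return 2.0 * thickness_m / two_way_time_ns  # m/ns

def relative_permittivity(v_m_ns):
    """Bulk relative permittivity implied by the EM velocity."""
    return (C_VACUUM / v_m_ns) ** 2
```

For example, a 0.5 m wall with a 10 ns back-wall reflection gives v = 0.1 m/ns, a value typical of masonry, and this velocity is what enables the time-to-depth conversion and post-stack migration described in the abstract.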

22 pages, 12312 KB  
Article
ES-YOLO: Multi-Scale Port Ship Detection Combined with Attention Mechanism in Complex Scenes
by Lixiang Cao, Jia Xi, Zixuan Xie, Teng Feng and Xiaomin Tian
Sensors 2025, 25(24), 7630; https://doi.org/10.3390/s25247630 - 16 Dec 2025
Abstract
With the rapid development of remote sensing technology and deep learning, port ship detection based on single-stage algorithms has achieved remarkable results in optical imagery. However, most existing methods are designed and verified in specific scenes, such as a fixed viewing angle, a uniform background, or the open sea, which makes it difficult to handle ship detection in complex environments involving cloud occlusion, wave fluctuation, complex harbor buildings, and multi-ship aggregation. To this end, the ES-YOLO framework is proposed to address these limitations. A novel edge-perception channel and spatial attention mechanism (EACSA) is proposed to enhance the extraction of edge information and improve the ability to capture feature details. A lightweight spatial–channel decoupled down-sampling module (LSCD) is designed to replace the down-sampling structure of the original network and reduce the complexity of the down-sampling stage. A new hierarchical scale structure is designed to balance detection across targets of different scales. A remote sensing ship dataset, TJShip, is constructed based on Gaofen-2 images, covering multi-scale targets from small fishing boats to large cargo ships. Using the TJShip dataset as the data source, ablation and comparison experiments were conducted with the ES-YOLO model. The results show that the EACSA attention mechanism, LSCD, and the multi-scale structure improve the mAP of ship detection by 0.83%, 0.54%, and 1.06%, respectively, over the baseline model, with strong precision, recall, and F1 as well. Compared with Faster R-CNN, RetinaNet, YOLOv5, YOLOv7, and YOLOv8 under the same experimental conditions, ES-YOLO improves mAP by 46.87%, 8.14%, 1.85%, 1.75%, and 0.86%, respectively, offering a useful reference for ship detection research. Full article
(This article belongs to the Section Remote Sensors)

20 pages, 1116 KB  
Article
Edge-Enabled Hybrid Encryption Framework for Secure Health Information Exchange in IoT-Based Smart Healthcare Systems
by Norjihan Abdul Ghani, Bintang Annisa Bagustari, Muneer Ahmad, Herman Tolle and Diva Kurnianingtyas
Sensors 2025, 25(24), 7583; https://doi.org/10.3390/s25247583 - 14 Dec 2025
Abstract
The integration of the Internet of Things (IoT) and edge computing is transforming healthcare by enabling real-time acquisition, processing, and exchange of sensitive patient data close to the data source. However, the distributed nature of IoT-enabled smart healthcare systems exposes them to severe security and privacy risks during health information exchange (HIE). This study proposes an edge-enabled hybrid encryption framework that combines elliptic curve cryptography (ECC), HMAC-SHA256, and the Advanced Encryption Standard (AES) to ensure data confidentiality, integrity, and efficient computation in healthcare communication networks. The proposed model minimizes latency and reduces cloud dependency by executing encryption and verification at the network edge. It provides the first systematic comparison of hybrid encryption configurations for edge-based HIE, evaluating CPU usage, memory consumption, and scalability across varying data volumes. Experimental results demonstrate that the ECC + HMAC-SHA256 + AES configuration achieves high encryption efficiency and strong resistance to attacks while maintaining lightweight processing suitable for edge devices. This approach provides a scalable and secure solution for protecting sensitive health data in next-generation IoT-enabled smart healthcare systems. Full article
(This article belongs to the Special Issue Edge Artificial Intelligence and Data Science for IoT-Enabled Systems)
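The hybrid pattern the abstract describes (a key-agreement secret feeding symmetric encryption plus HMAC-SHA256 integrity) can be sketched with the standard library alone. This is an illustrative assumption-laden stand-in, not the paper's system: ECC key exchange and AES are not in the stdlib, so `shared_secret` is assumed to come from an ECDH exchange performed elsewhere, and a SHA-256 counter keystream substitutes for AES-CTR; only the HMAC-SHA256 encrypt-then-MAC part uses the real primitive.

```python
import hmac
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Hypothetical stand-in for AES-CTR: SHA-256 over a counter.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(shared_secret: bytes, plaintext: bytes) -> bytes:
    # Derive separate encryption and MAC keys from the (assumed) ECDH secret.
    enc_key = hashlib.sha256(shared_secret + b"enc").digest()
    mac_key = hashlib.sha256(shared_secret + b"mac").digest()
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(enc_key, len(plaintext))))
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()  # encrypt-then-MAC
    return ct + tag

def open_sealed(shared_secret: bytes, blob: bytes) -> bytes:
    enc_key = hashlib.sha256(shared_secret + b"enc").digest()
    mac_key = hashlib.sha256(shared_secret + b"mac").digest()
    ct, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, ct, hashlib.sha256).digest()):
        raise ValueError("integrity check failed")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, len(ct))))
```

Executing the verify-then-decrypt path at the edge node, as the paper proposes, means a tampered record is rejected before any plaintext leaves the gateway or reaches the cloud.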

22 pages, 6834 KB  
Article
Comparison of Broadband Surface Albedo from MODIS and Ground-Based Measurements at the Thule High Arctic Atmospheric Observatory in Pituffik, Greenland, During 2016–2024
by Monica Tosco, Filippo Calì Quaglia, Virginia Ciardini, Tatiana Di Iorio, Antonio Iaccarino, Daniela Meloni, Giovanni Muscari, Giandomenico Pace, Claudio Scarchilli and Alcide Giorgio di Sarra
Remote Sens. 2025, 17(24), 3952; https://doi.org/10.3390/rs17243952 - 6 Dec 2025
Abstract
The surface albedo, α, is one of the key climate parameters since it regulates the shortwave radiation absorbed by the Earth’s surface. An accurate determination of the albedo is crucial in the polar regions due to its variations associated with climate change and its role in the strong feedback mechanisms. In this work, satellite and in situ measurements of broadband surface albedo at the Thule High Arctic Atmospheric Observatory (THAAO) on the northwestern coast of Greenland (76.5°N, 68.8°W) are compared. Measurements of surface albedo were started at THAAO in 2016. They show a large variability mainly in the transition seasons, suggesting that THAAO is a very interesting site for verifying the satellite capabilities in challenging conditions. The comparison of daily ground-based and MODIS-derived albedo covers the period July 2016–October 2024. The analysis has been conducted for all-sky and cloud-free conditions. The mean bias and mean squared difference between the two datasets are −0.02 and 0.09, respectively, for all sky conditions and −0.03 and 0.06 for cloud-free conditions. Very good agreement is found in summer in snow-free conditions, when the mean albedo is 0.17 in both datasets under cloud-free conditions. On the contrary, the capability to determine the surface albedo from space is largely reduced in the transition seasons, when significant differences between ground- and satellite-based albedo estimates are found. Differences for all-sky conditions may be as large as 0.3 in spring and autumn. These maximum differences are significantly reduced for cloud-free conditions, although a negative bias of MODIS data with respect to measurements at THAAO is generally found in spring. The combined analysis of the albedo, cloudiness, air temperature, and precipitation characteristics during two periods in 2023 and 2024 shows that, although satellite observations provide a reasonable picture of the long-term albedo evolution, they are not capable of following fast changes in albedo values induced by precipitation of snow/rain or temperature variations. Moreover, as expected, cloudiness plays a large role in affecting the satellite capabilities. The use of MODIS albedo data with the best value of the quality assurance flag (equal to 0) is recommended for studies aimed at determining the daily evolution of the surface radiation and energy budget. Full article
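The satellite-versus-ground comparison statistics reported in the abstract (a mean bias and a squared-difference measure) are straightforward to compute; a minimal sketch follows. The function names are mine, the bias sign convention (satellite minus ground) is an assumption, and whether the paper's "mean squared difference" is the root-mean-square form shown here is not stated in the abstract.

```python
def mean_bias(satellite, ground):
    # Average signed difference, satellite minus ground (assumed convention).
    return sum(s - g for s, g in zip(satellite, ground)) / len(satellite)

def rms_difference(satellite, ground):
    # Root-mean-square of the pairwise differences.
    return (sum((s - g) ** 2 for s, g in zip(satellite, ground))
            / len(satellite)) ** 0.5
```

Reporting both together is what lets the abstract distinguish a systematic offset (the −0.02 bias) from scatter driven by transition-season conditions.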

18 pages, 1443 KB  
Review
Empathy by Design: Reframing the Empathy Gap Between AI and Humans in Mental Health Chatbots
by Alastair Howcroft and Holly Blake
Information 2025, 16(12), 1074; https://doi.org/10.3390/info16121074 - 4 Dec 2025
Abstract
Artificial intelligence (AI) chatbots are now embedded across therapeutic contexts, from the United Kingdom’s National Health Service (NHS) Talking Therapies to widely used platforms like ChatGPT. Whether welcomed or not, these systems are increasingly used for both patient care and everyday support, sometimes even replacing human contact. Their capacity to convey empathy strongly influences how people experience and benefit from them. However, current systems often create an “AI empathy gap”, where interactions feel impersonal and superficial compared to those with human practitioners. This paper, presented as a critical narrative review, cautiously challenges the prevailing narrative that empathy is a uniquely human skill that AI cannot replicate. We argue this belief can stem from an unfair comparison: evaluating generic AIs against an idealised human practitioner. We reframe capabilities seen as exclusively human, such as building bonds through long-term memory and personalisation, not as insurmountable barriers but as concrete design targets. We also discuss the critical architectural and privacy trade-offs between cloud and on-device (edge) solutions. Accordingly, we propose a conceptual framework to meet these targets. It integrates three key technologies: Retrieval-Augmented Generation (RAG) for long-term memory; feedback-driven adaptation for real-time emotional tuning; and lightweight adapter modules for personalised conversational styles. This framework provides a path toward systems that users perceive as genuinely empathic, rather than ones that merely mimic supportive language. While AI cannot experience emotional empathy, it can model cognitive empathy and simulate affective and compassionate responses in coordinated ways at the behavioural level. However, because these systems lack conscious, autonomous ‘helping’ intentions, these design advancements must be considered alongside careful ethical and regulatory safeguards. Full article
(This article belongs to the Special Issue Internet of Things (IoT) and Cloud/Edge Computing)
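The long-term-memory capability the abstract above frames as a design target can be sketched with a minimal retrieval-augmented pattern: store past user disclosures, retrieve the most relevant ones, and prepend them to the generation prompt. Everything below is a hypothetical illustration, not the paper's framework; the `MemoryStore` class is invented, and the bag-of-words cosine similarity stands in for a real embedding model.

```python
import math
from collections import Counter

def _vec(text: str) -> Counter:
    # Bag-of-words term-frequency vector (toy stand-in for an embedding model).
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    """Long-term memory: store past user disclosures, retrieve the most relevant."""
    def __init__(self) -> None:
        self.memories: list[str] = []

    def add(self, text: str) -> None:
        self.memories.append(text)

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        q = _vec(query)
        ranked = sorted(self.memories, key=lambda m: _cosine(q, _vec(m)), reverse=True)
        return ranked[:k]

def build_prompt(store: MemoryStore, user_message: str) -> str:
    # The "RAG" step: augment the prompt with retrieved user context,
    # so replies can reference earlier sessions (personalisation).
    context = "\n".join(f"- {m}" for m in store.retrieve(user_message))
    return f"Known about this user:\n{context}\n\nUser says: {user_message}"

store = MemoryStore()
store.add("User mentioned their dog Rex died last spring")
store.add("User enjoys hiking on weekends")
store.add("User is anxious about an upcoming job interview")
prompt = build_prompt(store, "I keep thinking about my job interview tomorrow")
```

In a deployed system the retrieval index would live either in the cloud or on-device, which is exactly the privacy trade-off the abstract raises.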

26 pages, 892 KB  
Article
A Comparative Study of Partially, Somewhat, and Fully Homomorphic Encryption in Modern Cryptographic Libraries
by Eva Kupcova, Matúš Pleva, Vladyslav Khavan and Milos Drutarovsky
Electronics 2025, 14(23), 4753; https://doi.org/10.3390/electronics14234753 - 3 Dec 2025
Abstract
Homomorphic encryption enables computations to be performed directly on encrypted data, ensuring data confidentiality even in untrusted or distributed environments. Although this approach provides strong theoretical security, its practical adoption remains limited due to high computational and memory requirements. This study presents a comparative evaluation of three representative homomorphic encryption paradigms: partially, somewhat, and fully homomorphic encryption. The implementations are based on the GMP library, Microsoft SEAL, and OpenFHE. The analysis examines encryption and decryption time, ciphertext expansion, and memory usage under various parameter configurations, including different polynomial modulus degrees. The goal is to provide a transparent and reproducible comparison that illustrates the practical differences among these approaches. The results highlight the trade-offs between security, efficiency, and numerical precision, identifying cases where lightweight schemes can achieve acceptable performance for latency-sensitive or resource-constrained applications. These findings offer practical guidance for deploying homomorphic encryption in secure cloud-based computation and other privacy-preserving environments. Full article
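As a concrete illustration of the "partially homomorphic" end of the spectrum compared above, the sketch below implements a toy Paillier cryptosystem, in which multiplying two ciphertexts yields an encryption of the plaintext sum. The tiny primes and the helper names (`encrypt`, `decrypt`, `L`) are illustrative assumptions only; they are not drawn from the paper or from GMP, SEAL, or OpenFHE, and a real deployment would use keys of 2048+ bits.

```python
import math
import random

# Toy Paillier cryptosystem (partially homomorphic: addition on ciphertexts).
# Tiny primes for illustration only; this is not secure.
p, q = 61, 53
n = p * q                      # public modulus
n2 = n * n
g = n + 1                      # standard generator choice g = n + 1
lam = math.lcm(p - 1, q - 1)   # private key component lambda

def L(x: int) -> int:
    # The L-function from Paillier's decryption equation.
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # private key component mu

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:       # r must be invertible mod n
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c: int) -> int:
    return L(pow(c, lam, n2)) * mu % n

# Homomorphic property: multiplying ciphertexts adds plaintexts.
c1, c2 = encrypt(12), encrypt(30)
total = decrypt(c1 * c2 % n2)        # = 12 + 30 = 42
```

The scheme supports only this one operation, which is precisely why the paper weighs such lightweight schemes against somewhat and fully homomorphic ones for latency-sensitive workloads.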

17 pages, 3230 KB  
Article
Evaluating the Reliability of Remote Sensing Techniques for Detecting the Strip Road Network in Boom-Corridor Systems
by Rachele Venanzi, Rodolfo Picchio, Aurora Bonaudo, Leonardo Assettati, Luca Cozzolino, Eugenia Pauselli, Massimo Cecchini, Angela Lo Monaco and Francesco Latterini
Forests 2025, 16(12), 1768; https://doi.org/10.3390/f16121768 - 24 Nov 2025
Abstract
Accurate detection of machinery-induced strip roads after forest operations is fundamental for assessing soil disturbance and supporting sustainable forest management. However, in Mediterranean pine forests where canopy openings after boom-corridor thinning are moderate, the effectiveness of different remote sensing techniques remains uncertain. Previous studies have shown that LiDAR-based methods can reliably detect logging trails in different forest stands, but their direct transfer to structurally simpler, even-aged Mediterranean stands has not been validated. This study addresses this gap by testing whether UAV-derived RGB imagery can achieve comparable accuracy to LiDAR-based methods under the canopy conditions of boom-corridor thinning. We compared four approaches for detecting strip roads in a black pine (Pinus nigra Arn.) plantation on Mount Amiata (Tuscany, Italy): one based on high-resolution UAV RGB imagery and three based on LiDAR data, namely Hillshading (Hill), Local Relief Model (LRM), and Relative Density Model (RDM). The RDM method was specifically adapted to Mediterranean conditions by redefining its return-density height interval (1–30 cm) to better capture areas of bare soil typical of recently trafficked strip roads. Accuracy was evaluated against a GNSS-derived control map using nine performance metrics and a balanced subsampling framework with bootstrapped confidence intervals and ANOVA-based statistical comparisons. Results confirmed that UAV-RGB imagery provides reliable detection of strip roads under moderate canopy openings (accuracy = 0.64, Kappa = 0.27), while the parameter-tuned RDM achieved the highest accuracy and recall (accuracy = 0.75, Kappa = 0.49). This study demonstrates that RGB-based mapping can serve as a cost-effective solution for operational monitoring, while a properly tuned RDM provides the most robust performance when computational resources are sufficient to work on large point clouds. By adapting the RDM to Mediterranean forest conditions and validating the effectiveness of low-cost UAV-RGB surveys, this study bridges a key methodological gap in post-harvest disturbance mapping, offering forest managers practical, scalable tools to monitor soil impacts and support sustainable mechanized harvesting. Full article
(This article belongs to the Special Issue Research Advances in Management and Design of Forest Operations)
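Accuracy and Cohen's Kappa, the two headline metrics the abstract reports, can be computed from a binary confusion matrix as sketched below. The counts here are invented for illustration and are not the study's data; they merely happen to yield accuracy = 0.75 and Kappa ≈ 0.49, the same order as the tuned RDM's reported scores.

```python
# Scoring a binary strip-road map against ground truth (e.g. GNSS tracks).
# Counts are illustrative, not the study's data.
tp, fp = 30, 10   # road pixels: correctly detected / false alarms
fn, tn = 15, 45   # road pixels missed / background correctly kept
n = tp + fp + fn + tn

# Overall agreement.
accuracy = (tp + tn) / n

# Expected chance agreement from the row/column marginals.
p_road = ((tp + fp) * (tp + fn)) / (n * n)
p_back = ((fn + tn) * (fp + tn)) / (n * n)
p_e = p_road + p_back

# Cohen's kappa: agreement corrected for chance.
kappa = (accuracy - p_e) / (1 - p_e)
```

Kappa penalises classifiers that look accurate only because one class (here, background) dominates, which is why the study reports it alongside plain accuracy.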

29 pages, 1003 KB  
Article
A Secure and Efficient KA-PRE Scheme for Data Transmission in Remote Data Management Environments
by JaeJeong Shin, Deok Gyu Lee, Daehee Seo, Wonbin Kim and Su-Hyun Kim
Electronics 2025, 14(21), 4339; https://doi.org/10.3390/electronics14214339 - 5 Nov 2025
Abstract
In recent years, remote data management environments have been increasingly deployed across diverse infrastructures, accompanied by a rapid surge in the demand for sharing and collaborative processing of sensitive data. Consequently, ensuring data security and privacy protection remains a fundamental challenge. A representative example of such an environment is the cloud, where efficient mechanisms for secure data sharing and access control are essential. In domains such as finance, healthcare, and public administration, where large volumes of sensitive information are processed by multiple participants, traditional access-control techniques often fail to satisfy the stringent security requirements. To address these limitations, Key-Aggregate Proxy Re-Encryption (KA-PRE) has emerged as a promising cryptographic primitive that simultaneously provides efficient key management and flexible authorization. However, existing KA-PRE constructions still suffer from several inherent security weaknesses, including aggregate-key leakage, ciphertext insertion and regeneration attacks, metadata exposure, and the lack of participant anonymity within the data-management framework. To overcome these limitations, this study systematically analyzes potential attack models in the KA-PRE setting and introduces a novel KA-PRE scheme designed to mitigate the identified vulnerabilities. Furthermore, through theoretical comparison with existing approaches and an evaluation of computational efficiency, the proposed scheme is shown to enhance security while maintaining practical performance and scalability. Full article
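To ground the proxy re-encryption primitive that KA-PRE builds on, here is a toy ElGamal-style sketch in the spirit of the classic BBS98 scheme: a proxy holding a re-encryption key transforms Alice's ciphertext into one Bob can decrypt, without ever seeing the plaintext. The group parameters and function names are illustrative assumptions only; the paper's KA-PRE construction additionally aggregates keys and targets the attacks the abstract lists, none of which this sketch addresses.

```python
import random

# Toy proxy re-encryption over a tiny prime-order subgroup of Z_p*.
# Illustrative only; real schemes use large groups or pairings.
p, q, g = 23, 11, 2          # g generates the order-q subgroup of Z_23*

def keygen() -> tuple[int, int]:
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)                       # (secret key, public key)

def encrypt(pk: int, m: int) -> tuple[int, int]:
    r = random.randrange(1, q)
    return (m * pow(g, r, p) % p, pow(pk, r, p))   # (m * g^r, g^(sk*r))

def decrypt(sk: int, ct: tuple[int, int]) -> int:
    c1, c2 = ct
    t = pow(c2, pow(sk, -1, q), p)                 # recover g^r via 1/sk mod q
    return c1 * pow(t, -1, p) % p

def rekey(sk_a: int, sk_b: int) -> int:
    # Re-encryption key rk = b/a mod q; the proxy learns neither a nor b alone.
    return sk_b * pow(sk_a, -1, q) % q

def reencrypt(rk: int, ct: tuple[int, int]) -> tuple[int, int]:
    c1, c2 = ct
    return (c1, pow(c2, rk, p))                    # g^(a*r) -> g^(b*r)

a, pk_a = keygen()
b, pk_b = keygen()
m = 13                                             # message in Z_23*
ct = encrypt(pk_a, m)                              # encrypted for Alice
ct_b = reencrypt(rekey(a, b), ct)                  # proxy re-targets to Bob
```

Note that `rekey` as written needs both secret keys, one of the classic limitations that motivates stronger constructions such as the KA-PRE variant proposed here.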
