Search Results (189)

Search Parameters:
Keywords = memory footprint

25 pages, 19197 KiB  
Article
Empirical Evaluation of TLS-Enhanced MQTT on IoT Devices for V2X Use Cases
by Nikolaos Orestis Gavriilidis, Spyros T. Halkidis and Sophia Petridou
Appl. Sci. 2025, 15(15), 8398; https://doi.org/10.3390/app15158398 - 29 Jul 2025
Viewed by 29
Abstract
The rapid growth of Internet of Things (IoT) deployment has led to an unprecedented volume of interconnected, resource-constrained devices. Securing their communication is essential, especially in vehicular environments, where sensitive data exchange requires robust authentication, integrity, and confidentiality guarantees. In this paper, we present an empirical evaluation of TLS (Transport Layer Security)-enhanced MQTT (Message Queuing Telemetry Transport) on low-cost, quad-core Cortex-A72 ARMv8 boards, specifically the Raspberry Pi 4B, commonly used as prototyping platforms for On-Board Units (OBUs) and Road-Side Units (RSUs). Three MQTT entities, namely, the broker, the publisher, and the subscriber, are deployed, utilizing Elliptic Curve Cryptography (ECC) for key exchange and authentication and employing the AES_256_GCM and ChaCha20_Poly1305 ciphers for confidentiality via appropriately selected libraries. We quantify resource consumption in terms of CPU utilization, execution time, energy usage, memory footprint, and goodput across TLS phases, cipher suites, message packaging strategies, and both Ethernet and WiFi interfaces. Our results show that (i) TLS 1.3-enhanced MQTT is feasible on Raspberry Pi 4B devices, though it introduces non-negligible resource overheads; (ii) batching messages into fewer, larger packets reduces transmission cost and latency; and (iii) ChaCha20_Poly1305 outperforms AES_256_GCM, particularly in wireless scenarios, making it the preferred choice for resource- and latency-sensitive V2X applications. These findings provide actionable recommendations for deploying secure MQTT communication on an IoT platform. Full article
(This article belongs to the Special Issue Cryptography in Data Protection and Privacy-Enhancing Technologies)
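As a minimal illustration of the TLS 1.3 configuration evaluated above (a sketch, not the authors' code), a client-side context restricted to TLS 1.3 can be built with Python's standard `ssl` module; under TLS 1.3 the AES_256_GCM and ChaCha20_Poly1305 suites are negotiated during the handshake rather than pinned by the client:

```python
import ssl

def make_tls13_client_context(ca_file=None):
    """Build a client-side SSL context restricted to TLS 1.3.

    In TLS 1.3, cipher suites such as TLS_AES_256_GCM_SHA384 and
    TLS_CHACHA20_POLY1305_SHA256 are selected during the handshake.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    ctx.maximum_version = ssl.TLSVersion.TLSv1_3
    if ca_file:
        # Trust the broker's CA certificate (path is caller-supplied)
        ctx.load_verify_locations(cafile=ca_file)
    return ctx
```

Such a context could then be handed to an MQTT client library before connecting to the broker.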

36 pages, 7426 KiB  
Article
PowerLine-MTYOLO: A Multitask YOLO Model for Simultaneous Cable Segmentation and Broken Strand Detection
by Badr-Eddine Benelmostafa and Hicham Medromi
Drones 2025, 9(7), 505; https://doi.org/10.3390/drones9070505 - 18 Jul 2025
Viewed by 489
Abstract
Power transmission infrastructure requires continuous inspection to prevent failures and ensure grid stability. UAV-based systems, enhanced with deep learning, have emerged as an efficient alternative to traditional, labor-intensive inspection methods. However, most existing approaches rely on separate models for cable segmentation and anomaly detection, leading to increased computational overhead and reduced reliability in real-time applications. To address these limitations, we propose PowerLine-MTYOLO, a lightweight, one-stage, multitask model designed for simultaneous power cable segmentation and broken strand detection from UAV imagery. Built upon the A-YOLOM architecture, and leveraging the YOLOv8 foundation, our model introduces four novel specialized modules—SDPM, HAD, EFR, and the Shape-Aware Wise IoU loss—that improve geometric understanding, structural consistency, and bounding-box precision. We also present the Merged Public Power Cable Dataset (MPCD), a diverse, open-source dataset tailored for multitask training and evaluation. The experimental results show that our model achieves up to +10.68% mAP@50 and +1.7% IoU compared to A-YOLOM, while also outperforming recent YOLO-based detectors in both accuracy and efficiency. These gains are achieved with a smaller model memory footprint and a similar inference speed compared to A-YOLOM. By unifying detection and segmentation into a single framework, PowerLine-MTYOLO offers a promising solution for autonomous aerial inspection and lays the groundwork for future advances in fine-structure monitoring tasks. Full article
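The Shape-Aware Wise IoU loss above builds on plain intersection-over-union. As a sketch of the underlying quantity only (not the paper's loss), the IoU of two axis-aligned boxes in `(x1, y1, x2, y2)` form is:

```python
def box_iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

The mAP@50 figure quoted above counts a detection as correct when this ratio with a ground-truth box is at least 0.5.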

14 pages, 1277 KiB  
Article
Experimentally Constrained Mechanistic and Data-Driven Models for Simulating NMDA Receptor Dynamics
by Duy-Tan J. Pham and Jean-Marie C. Bouteiller
Biomedicines 2025, 13(7), 1674; https://doi.org/10.3390/biomedicines13071674 - 8 Jul 2025
Viewed by 296
Abstract
Background: The N-methyl-d-aspartate receptor (NMDA-R) is a glutamate ionotropic receptor in the brain that is crucial for synaptic plasticity, which underlies learning and memory formation. Dysfunction of NMDA receptors is implicated in various neurological diseases due to their roles in both normal cognition and excitotoxicity. However, their dynamics are challenging to capture accurately due to their high complexity and non-linear behavior. Methods: This article presents the elaboration and calibration of experimentally constrained computational models of GluN1/GluN2A NMDA-R dynamics: (1) a nine-state kinetic model optimized to replicate experimental data and (2) a computationally efficient look-up table model capable of replicating the dynamics of the nine-state kinetic model with a highly reduced footprint. Determination of the kinetic model’s parameter values was performed using the particle swarm optimization algorithm. The optimized kinetic model was then used to generate a rich input–output dataset to train the look-up table synapse model and estimate its coefficients. Results: Optimization produced a kinetic model capable of accurately reproducing experimentally found results such as frequency-dependent potentiation and the temporal response due to synaptic release of glutamate. Furthermore, the look-up table synapse model was able to closely mimic the dynamics of the optimized kinetic model. Conclusions: The results obtained with both models indicate that they constitute accurate alternatives for faithfully reproducing the dynamics of NMDA-Rs. High computational efficiency is also achieved with the use of the look-up table synapse model, making this implementation an ideal option for inclusion in large-scale neuronal models. Full article
(This article belongs to the Special Issue Synaptic Function and Modulation in Health and Disease)
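The paper's nine-state kinetic model is far richer than can be reproduced here, but the general shape of such models can be suggested with a deliberately simplified two-state (closed/open) receptor integrated by Euler's method; the rate constants below are arbitrary illustration, not GluN1/GluN2A values:

```python
def simulate_two_state(alpha, beta, dt=1e-4, t_end=1.0):
    """Euler integration of a toy two-state (closed <-> open) receptor.

    dO/dt = alpha * (1 - O) - beta * O, starting from all-closed.
    Steady-state open fraction is alpha / (alpha + beta).
    """
    o = 0.0
    steps = int(t_end / dt)
    for _ in range(steps):
        o += dt * (alpha * (1.0 - o) - beta * o)
    return o
```

A look-up table model like the paper's would replace this per-step integration with precomputed input-output responses, trading memory for computation.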

20 pages, 19840 KiB  
Article
A Comparison of Segmentation Methods for Semantic OctoMap Generation
by Marcin Czajka, Maciej Krupka, Daria Kubacka, Michał Remigiusz Janiszewski and Dominik Belter
Appl. Sci. 2025, 15(13), 7285; https://doi.org/10.3390/app15137285 - 27 Jun 2025
Viewed by 475
Abstract
Semantic mapping plays a critical role in enabling autonomous vehicles to understand and navigate complex environments. Instead of computationally demanding 3D segmentation of point clouds, we propose efficient segmentation on RGB images and projection of the corresponding LiDAR measurements onto the semantic OctoMap. This study presents a comparative evaluation of different semantic segmentation methods and examines the impact of input image resolution on the accuracy of 3D semantic environment reconstruction, inference time, and computational resource usage. The experiments were conducted using a ROS 2-based pipeline that combines RGB images and LiDAR point clouds. Semantic segmentation is performed using ONNX-exported deep neural networks, with class predictions projected onto corresponding 3D LiDAR data using calibrated extrinsic parameters. The resulting semantically annotated point clouds are fused into a probabilistic 3D representation using an OctoMap, where each voxel stores both occupancy and semantic class information. Multiple encoder–decoder architectures with various backbone configurations are evaluated in terms of segmentation quality, latency, memory footprint, and GPU utilization. Furthermore, a comparison between high and low image resolutions is conducted to assess trade-offs between model accuracy and real-time applicability. Full article
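The per-voxel fusion of class labels described above can be suggested with a small sketch (plain dictionaries standing in for the OctoMap library, and majority voting standing in for its probabilistic update):

```python
from collections import defaultdict, Counter

def fuse_semantic_points(points, voxel_size=0.2):
    """Fuse labeled 3D points into voxels, keeping a class histogram per voxel.

    points: iterable of (x, y, z, label) tuples, coordinates in meters.
    Returns a dict mapping voxel index -> majority class label.
    """
    grid = defaultdict(Counter)
    for (x, y, z, label) in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        grid[key][label] += 1
    # Per-voxel class = majority vote over all fused observations
    return {k: c.most_common(1)[0][0] for k, c in grid.items()}
```

A real OctoMap would additionally track per-voxel occupancy probability; this sketch keeps only the semantic vote.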

56 pages, 3118 KiB  
Article
Semantic Reasoning Using Standard Attention-Based Models: An Application to Chronic Disease Literature
by Yalbi Itzel Balderas-Martínez, José Armando Sánchez-Rojas, Arturo Téllez-Velázquez, Flavio Juárez Martínez, Raúl Cruz-Barbosa, Enrique Guzmán-Ramírez, Iván García-Pacheco and Ignacio Arroyo-Fernández
Big Data Cogn. Comput. 2025, 9(6), 162; https://doi.org/10.3390/bdcc9060162 - 19 Jun 2025
Viewed by 683
Abstract
Large-language-model (LLM) APIs demonstrate impressive reasoning capabilities, but their size, cost, and closed weights limit the deployment of knowledge-aware AI within biomedical research groups. At the other extreme, standard attention-based neural language models (SANLMs)—including encoder–decoder architectures such as Transformers, Gated Recurrent Units (GRUs), and Long Short-Term Memory (LSTM) networks—are computationally inexpensive. However, their capacity for semantic reasoning in noisy, open-vocabulary knowledge bases (KBs) remains unquantified. Therefore, we investigate whether compact SANLMs can (i) reason over hybrid OpenIE-derived KBs that integrate commonsense, general-purpose, and non-communicable-disease (NCD) literature; (ii) operate effectively on commodity GPUs; and (iii) exhibit semantic coherence as assessed through manual linguistic inspection. To this end, we constructed four training KBs by integrating ConceptNet (600k triples), a 39k-triple general-purpose OpenIE set, and an 18.6k-triple OpenNCDKB extracted from 1200 PubMed abstracts. Encoder–decoder GRU, LSTM, and Transformer models (1–2 blocks) were trained to predict the object phrase given the subject + predicate. Beyond token-level cross-entropy, we introduced the Meaning-based Selectional-Preference Test (MSPT): for each withheld triple, we masked the object, generated a candidate, and measured its surplus cosine similarity over a random baseline using word embeddings, with significance assessed via a one-sided t-test. Hyperparameter sensitivity (311 GRU/168 LSTM runs) was analyzed, and qualitative frame–role diagnostics completed the evaluation. Our results showed that all SANLMs learned effectively from the point of view of the cross-entropy loss. In addition, our MSPT provided meaningful semantic insights: for the GRUs (256-dim, 2048-unit, 1-layer): mean similarity (μ_sts) of 0.641 to the ground truth vs. 0.542 to the random baseline (gap 12.1%; p < 10⁻¹⁸⁰). For the 1-block Transformer: μ_sts = 0.551 vs. 0.511 (gap 4%; p < 10⁻²⁵). While Transformers minimized loss and accuracy variance, GRUs captured finer selectional preferences. Both architectures trained within <24 GB GPU VRAM and produced linguistically acceptable, albeit over-generalized, biomedical assertions. Due to their observed performance, LSTM results were designated as baseline models for comparison. Therefore, properly tuned SANLMs can achieve statistically robust semantic reasoning over noisy, domain-specific KBs without reliance on massive LLMs. Their interpretability, minimal hardware footprint, and open weights promote equitable AI research, opening new avenues for automated NCD knowledge synthesis, surveillance, and decision support. Full article
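The MSPT's core score — the surplus of a prediction's cosine similarity to the ground truth over its similarity to a random baseline — can be sketched as follows (toy vectors stand in for word embeddings, and the one-sided t-test over many triples is omitted):

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def mspt_surplus(pred_vec, truth_vec, baseline_vec):
    """Surplus similarity of the prediction to ground truth over a random baseline.

    Positive values indicate the generated object phrase is semantically
    closer to the withheld object than chance would suggest.
    """
    return cosine(pred_vec, truth_vec) - cosine(pred_vec, baseline_vec)
```

Across a test set, the paper assesses whether the mean surplus is significantly above zero; here only the per-triple score is shown.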

15 pages, 36663 KiB  
Article
Self-Sensing of Piezoelectric Micropumps: Gas Bubble Detection by Artificial Intelligence Methods on Limited Embedded Systems
by Kristjan Axelsson, Mohammadhossien Sheikhsarraf, Christoph Kutter and Martin Richter
Sensors 2025, 25(12), 3784; https://doi.org/10.3390/s25123784 - 17 Jun 2025
Viewed by 381
Abstract
Gas bubbles are one of the main disturbances encountered when dispensing drugs of microliter volumes using portable miniaturized systems based on piezoelectric diaphragm micropumps. The presence of a gas bubble in the pump chamber leads to the inaccurate administration of the required dose due to its impact on the flow rate. This is particularly important for highly concentrated drugs such as insulin. Different types of sensors are used to detect gas bubbles: inline on the fluidic channels or inside the pump chamber itself. These solutions increase the complexity, size, and cost of the microdosing system. To address these problems, a radically new approach is taken by utilizing the sensing capability of the piezoelectric diaphragm during micropump actuation. This work demonstrates the workflow to build a self-sensing micropump based on artificial intelligence methods on an embedded system. This is accomplished by implementing an electronic circuit that amplifies and samples the loading current of the piezoelectric ceramic with an STM32G491RE microcontroller. Training datasets of 11 micropumps are generated at an automated testbench for gas bubble injections. The training and hyper-parameter optimization of artificial intelligence algorithms from the TensorFlow and scikit-learn libraries are conducted using a grid search approach. The classification accuracy is determined by a cross-training routine, and model deployment on the STM32G491RE is conducted utilizing the STM32Cube.AI framework. The final model deployed on the embedded system has a memory footprint of 15.23 kB, a runtime of 182 µs, and detects gas bubbles with an accuracy of 99.41%. Full article
(This article belongs to the Section Physical Sensors)
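The grid-search workflow mentioned above (the paper uses TensorFlow and scikit-learn; this stdlib sketch uses a placeholder scoring function standing in for cross-trained classification accuracy) amounts to exhaustively scoring every hyperparameter combination:

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Exhaustively score every parameter combination; return the best one.

    param_grid: dict mapping parameter name -> list of candidate values.
    score_fn:   callable taking a {name: value} dict, returning a score
                (higher is better), e.g. cross-validated accuracy.
    """
    names = sorted(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

scikit-learn's `GridSearchCV` wraps this same idea with cross-validation and parallelism built in.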

26 pages, 9618 KiB  
Article
Predicting Energy Consumption and Time of Use of Home Appliances in an HEMS Using LSTM Networks and Smart Meters: A Case Study in Sincelejo, Colombia
by Zurisaddai Severiche-Maury, Carlos Uc-Ríos, Javier E. Sierra and Alejandro Guerrero
Sustainability 2025, 17(11), 4749; https://doi.org/10.3390/su17114749 - 22 May 2025
Cited by 1 | Viewed by 590
Abstract
Rising household electricity consumption, driven by technological advances and increased indoor activity, has led to higher energy costs and an increased reliance on non-renewable sources, exacerbating the carbon footprint. Home energy management systems (HEMS) are positioning themselves as an efficient alternative by integrating artificial intelligence to improve their accuracy. Predictive algorithms that provide accurate data on the future behavior of energy consumption and appliance usage time are required in these HEMS to achieve this goal. This study presents a predictive model based on recurrent neural networks with long short-term memory (LSTM), known to capture nonlinear relationships and long-term dependencies in time series data. The model predicts individual and total household energy consumption and appliance usage time. Training data were collected for 12 months from an HEMS installed in a typical Colombian house, using smart meters developed in this research. The model’s performance is evaluated using the mean squared error (MSE), reaching a value of 0.0168 kWh2. The results confirm the effectiveness of HEMS and demonstrate that the integration of LSTM-based predictive models can significantly improve energy efficiency and optimize household energy consumption. Full article
(This article belongs to the Section Energy Sustainability)
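The evaluation metric quoted above (MSE of 0.0168 kWh², i.e. squared-kWh units for kWh inputs) is the standard mean squared error, sketched here for reference:

```python
def mean_squared_error(y_true, y_pred):
    """MSE between observed and predicted consumption.

    For inputs in kWh the result is in kWh^2, matching the paper's units.
    """
    assert len(y_true) == len(y_pred) and len(y_true) > 0
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
```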

20 pages, 5649 KiB  
Article
Edge-Deployed Band-Split Rotary Position Encoding Transformer for Ultra-Low-Signal-to-Noise-Ratio Unmanned Aerial Vehicle Speech Enhancement
by Feifan Liu, Muying Li, Luming Guo, Hao Guo, Jie Cao, Wei Zhao and Jun Wang
Drones 2025, 9(6), 386; https://doi.org/10.3390/drones9060386 - 22 May 2025
Cited by 1 | Viewed by 796
Abstract
Addressing the significant challenge of speech enhancement in ultra-low-Signal-to-Noise-Ratio (SNR) scenarios for Unmanned Aerial Vehicle (UAV) voice communication, particularly under edge deployment constraints, this study proposes the Edge-Deployed Band-Split Rotary Position Encoding Transformer (Edge-BS-RoFormer), a novel, lightweight band-split rotary position encoding transformer. While existing deep learning methods face limitations in dynamic UAV noise suppression under such constraints, including insufficient harmonic modeling and high computational complexity, the proposed Edge-BS-RoFormer distinctively synergizes a band-split strategy for fine-grained spectral processing, a dual-dimension Rotary Position Encoding (RoPE) mechanism for superior joint time–frequency modeling, and FlashAttention to optimize computational efficiency, pivotal for its lightweight nature and robust ultra-low-SNR performance. Experiments on our self-constructed DroneNoise-LibriMix (DN-LM) dataset demonstrate Edge-BS-RoFormer’s superiority. Under a −15 dB SNR, it achieves Scale-Invariant Signal-to-Distortion Ratio (SI-SDR) improvements of 2.2 dB over Deep Complex U-Net (DCUNet), 25.0 dB over the Dual-Path Transformer Network (DPTNet), and 2.3 dB over HTDemucs. Correspondingly, the Perceptual Evaluation of Speech Quality (PESQ) is enhanced by 0.11, 0.18, and 0.15, respectively. Crucially, its efficacy for edge deployment is substantiated by a minimal model storage of 8.534 MB, 11.617 GFLOPs (an 89.6% reduction vs. DCUNet), a runtime memory footprint of under 500MB, a Real-Time Factor (RTF) of 0.325 (latency: 330.830 ms), and a power consumption of 6.536 W on an NVIDIA Jetson AGX Xavier, fulfilling real-time processing demands. This study delivers a validated lightweight solution, exemplified by its minimal computational overhead and real-time edge inference capability, for effective speech enhancement in complex UAV acoustic scenarios, including dynamic noise conditions. 
Furthermore, the open-sourced dataset and model contribute to advancing research and establishing standardized evaluation frameworks in this domain. Full article
(This article belongs to the Section Drone Communications)
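The SI-SDR improvements reported above use the standard scale-invariant definition: the estimate is compared against an optimally scaled copy of the target, so a uniform gain change does not affect the score. A reference sketch:

```python
import math

def si_sdr(estimate, target):
    """Scale-Invariant Signal-to-Distortion Ratio in dB.

    Projects the estimate onto the target to find the optimal scaling,
    then measures the energy ratio of the scaled target to the residual.
    """
    dot = sum(e * t for e, t in zip(estimate, target))
    t_energy = sum(t * t for t in target)
    alpha = dot / t_energy                      # optimal scaling of the target
    s_target = [alpha * t for t in target]
    e_noise = [e - s for e, s in zip(estimate, s_target)]
    num = sum(s * s for s in s_target)
    den = sum(n * n for n in e_noise)
    return 10.0 * math.log10(num / den)
```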

19 pages, 2902 KiB  
Article
Prediction of the Marine Dynamic Environment for Arctic Ice-Based Buoys Using Historical Profile Data
by Jingzi Zhu, Yu Luo, Tao Li, Yanhai Gan and Junyu Dong
J. Mar. Sci. Eng. 2025, 13(6), 1003; https://doi.org/10.3390/jmse13061003 - 22 May 2025
Viewed by 377
Abstract
In this paper, the time-series model is used to predict whether an ocean buoy is about to be inside a vortex. Marine buoys are an important tool for collecting ocean data and studying ocean dynamics, climate change, and ecosystem health. A vortex is an important ocean dynamic process. If we can predict that a buoy is about to enter a vortex, we can automatically adjust the buoy’s sampling frequency to better observe the vortex’s structure and development. To address this requirement, based on the profile data, including latitude and longitude, temperature, and salinity, collected by 56 buoys in the Arctic Ocean from 2014 to 2023, this paper uses the TSMixer time-series model to predict whether an ocean buoy is about to be inside a vortex. The TSMixer model effectively captures the spatio-temporal characteristics of multivariate time series through time-mixing and feature-mixing mechanisms, and the accuracy of the model reaches 84.6%. The proposed model is computationally efficient and has a low memory footprint, which is suitable for real-time applications and provides accurate prediction support for marine monitoring. Full article
(This article belongs to the Section Physical Oceanography)

21 pages, 4686 KiB  
Article
Low-Memory-Footprint CNN-Based Biomedical Signal Processing for Wearable Devices
by Zahra Kokhazad, Dimitrios Gkountelos, Milad Kokhazadeh, Charalampos Bournas, Georgios Keramidas and Vasilios Kelefouras
IoT 2025, 6(2), 29; https://doi.org/10.3390/iot6020029 - 8 May 2025
Viewed by 623
Abstract
The rise of wearable devices has enabled real-time processing of sensor data for critical health monitoring applications, such as human activity recognition (HAR) and cardiac disorder classification (CDC). However, the limited computational and memory resources of wearables necessitate lightweight yet accurate classification models. While deep neural networks (DNNs), including convolutional neural networks (CNNs) and long short-term memory networks, have shown high accuracy for HAR and CDC, their large parameter sizes hinder deployment on edge devices. On the other hand, various DNN compression techniques have been proposed, but exploiting the combination of various compression techniques with the aim of achieving memory efficient DNN models for HAR and CDC tasks remains under-investigated. This work studies the impact of CNN architecture parameters, focusing on the convolutional and dense layers, to identify configurations that balance accuracy and efficiency. We derive two versions of each model—lean and fat—based on their memory characteristics. Subsequently, we apply three complementary compression techniques: filter-based pruning, low-rank factorization, and dynamic range quantization. Experiments across three diverse DNNs demonstrate that this multi-faceted compression approach can significantly reduce memory and computational requirements while maintaining validation accuracy, leading to DNN models suitable for intelligent health monitoring on resource-constrained wearable devices. Full article
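Of the three compression techniques named above, dynamic range quantization is the simplest to illustrate. A symmetric int8 scheme is sketched below; TensorFlow Lite's actual implementation differs in detail (per-tensor vs. per-channel scales, zero points), so treat this as illustrative only:

```python
def quantize_int8(weights):
    """Symmetric dynamic-range quantization of float weights to int8.

    Maps the largest-magnitude weight to +/-127; returns (int8 values, scale).
    Storage drops from 4 bytes to 1 byte per weight.
    """
    scale = max(abs(w) for w in weights) / 127.0
    if scale == 0.0:
        return [0] * len(weights), 0.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and a scale."""
    return [v * scale for v in q]
```

Round-trip error is bounded by half the scale per weight, which is why accuracy is largely preserved.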

18 pages, 1897 KiB  
Article
Multi-Path Convolutional Architecture with Channel-Wise Attention for Multiclass Brain Tumor Detection in Magnetic Resonance Imaging Scans
by Muneeb A. Khan, Tsagaanchuluun Sugir, Byambaa Dorj, Ganchimeg Uuganchimeg, Seonuck Paek, Khurelbaatar Zagarzusem and Heemin Park
Electronics 2025, 14(9), 1741; https://doi.org/10.3390/electronics14091741 - 24 Apr 2025
Viewed by 680
Abstract
Accurately detecting and classifying brain tumors in magnetic resonance imaging (MRI) scans poses formidable challenges, stemming from the heterogeneous presentation of tumors and the need for reliable, real-time diagnostic outputs. In this paper, we propose a novel multi-path convolutional architecture enhanced with channel-wise attention mechanisms, evaluated on a comprehensive four-class brain tumor dataset. Specifically: (i) we design a parallel feature extraction strategy that captures nuanced tumor morphologies, while channel-wise attention refines salient characteristics; (ii) we employ systematic data augmentation, yielding a balanced dataset of 6380 MRI scans to bolster model generalization; (iii) we compare the proposed architecture against state-of-the-art models, demonstrating superior diagnostic performance with 97.52% accuracy, 97.63% precision, 97.18% recall, 98.32% specificity, and an F1-score of 97.36%; and (iv) we report an inference speed of 5.13 ms per scan, alongside a higher memory footprint of approximately 26 GB, underscoring both the feasibility for real-time clinical application and the importance of resource considerations. These findings collectively highlight the proposed framework’s potential for improving automated brain tumor detection workflows and prompt further optimization for broader clinical deployment. Full article

22 pages, 8440 KiB  
Article
Comparison and Prediction of the Ecological Footprint of Water Resources—Taking Guizhou Province as an Example
by Yongtao Wang, Wenfeng Yang, Jian Liu, Enhui Lu, Ye Li and Ning Chen
Hydrology 2025, 12(5), 99; https://doi.org/10.3390/hydrology12050099 - 22 Apr 2025
Viewed by 1192
Abstract
Water resources are considered to be of paramount importance to the natural world on a global scale, being critical for the sustenance of ecosystems, the support of life, and the achievement of sustainable development. However, these resources are under threat from climate change, population growth, urbanization and pollution. This necessitates the development of robust and effective assessment methods to ensure their sustainable use. Although assessing the ecological footprint (EF) of urban water systems plays a critical role in advancing sustainable cities and managing water assets, existing research has largely overlooked the application of geospatial visualization techniques in evaluating resource allocation strategies within karst mountain watersheds, an oversight this study aims to correct through innovative methodological integration. This research establishes an evaluation framework for predicting water resource availability in Guizhou through the synergistic application of three methodologies: (1) the water-based ecological accounting framework (WEF), (2) ecosystem service thresholds defined by the water ecological carrying capacity of water resources (WECC) thresholds, and (3) composite sustainability metrics, all correlated with contemporary hydrological utilization profiles. Spatiotemporal patterns were quantified across the province’s nine administrative divisions during the 2013–2022 period through time-series analysis, with subsequent WEF projections for 2023–2027 generated via Long Short-Term Memory (LSTM) temporal forecasting techniques. Full article

28 pages, 881 KiB  
Article
Towards Sustainable Energy: Predictive Models for Space Heating Consumption at the European Central Bank
by Fernando Almeida, Mauro Castelli and Nadine Côrte-Real
Environments 2025, 12(4), 131; https://doi.org/10.3390/environments12040131 - 21 Apr 2025
Viewed by 374
Abstract
Space heating consumption prediction is critical for energy management and efficiency, directly impacting sustainability and efforts to reduce greenhouse gas emissions. Accurate models enable better demand forecasting, promote the use of green energy, and support decarbonization goals. However, existing models often lack precision due to limited feature sets, suboptimal algorithm choices, and limited access to weather data, which reduces generalizability. This study addresses these gaps by evaluating various Machine Learning and Deep Learning models, including K-Nearest Neighbors, Support Vector Regression, Decision Trees, Linear Regression, XGBoost, Random Forest, Gradient Boosting, AdaBoost, Long Short-Term Memory, and Gated Recurrent Units. We utilized space heating consumption data from the European Central Bank Headquarters office as a case study. We employed a methodology that involved splitting the features into three categories based on the correlation and evaluating model performance using Mean Squared Error, Mean Absolute Error, Root Mean Squared Error, and R-squared metrics. Results indicate that XGBoost consistently outperformed other models, particularly when utilizing all available features, achieving an R2 value of 0.966 using the weather data from the building weather station. This model’s superior performance underscores the importance of comprehensive feature sets for accurate predictions. The significance of this study lies in its contribution to sustainable energy management practices. By improving the accuracy of space heating consumption forecasts, our approach supports the efficient use of green energy resources, aiding in the global efforts towards decarbonization and reducing carbon footprints in urban environments. Full article
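The headline result above (XGBoost reaching R² = 0.966) uses the coefficient of determination, sketched here for reference:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.

    1.0 means perfect prediction; 0.0 means no better than the mean.
    """
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    return 1.0 - ss_res / ss_tot
```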

16 pages, 1530 KiB  
Article
TSC-SIG: A Novel Method Based on the CPU Timestamp Counter with POSIX Signals for Efficient Memory Reclamation
by Chen Zhang, Zhengming Yi and Xinghui Zhu
Electronics 2025, 14(7), 1371; https://doi.org/10.3390/electronics14071371 - 29 Mar 2025
Viewed by 295
Abstract
In dynamic concurrent data structures, memory management poses a significant challenge due to the diverse types of memory access and operations. Timestamps are widely used in concurrent algorithms, but existing safe memory reclamation algorithms that utilize timestamps often fail to achieve a balance among performance, applicability, and robustness. With the development of the CPU timestamp counter, using it as the timestamp has proven to be efficient. Based on this, we introduce TSC-SIG in this paper to guarantee safe memory reclamation and successfully avoid use-after-free errors. TSC-SIG effectively reduces synchronization overhead, thereby improving the performance of concurrent operations. It leverages the POSIX signal mechanism to restrict the memory footprint. Furthermore, TSC-SIG can be integrated into various data structures. We conducted extensive experiments on diverse data structures and workloads, and the results clearly demonstrate the excellence of TSC-SIG in terms of performance, applicability, and robustness. TSC-SIG shows remarkable performance in read-dominated workloads. As related techniques continue to evolve, TSC-SIG exhibits significant development and application potential. Full article
(This article belongs to the Special Issue Computer Architecture & Parallel and Distributed Computing)
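The timestamp rule at the heart of such schemes can be suggested with a single-threaded toy model (this is not TSC-SIG itself — the CPU timestamp counter, concurrency, and the POSIX-signal mechanics are all elided): a retired node may be freed only once every reader active at its retirement has finished.

```python
class TimestampReclaimer:
    """Toy model of timestamp-based safe memory reclamation.

    A retired node is safe to free once the minimum timestamp published
    by still-active readers exceeds the node's retire timestamp.
    """
    def __init__(self):
        self.clock = 0              # stands in for the CPU timestamp counter
        self.active = {}            # reader id -> timestamp at entry
        self.retired = []           # (retire timestamp, node) pairs

    def tick(self):
        self.clock += 1
        return self.clock

    def enter(self, reader):
        """Reader publishes its entry timestamp before accessing the structure."""
        self.active[reader] = self.tick()

    def leave(self, reader):
        self.active.pop(reader)

    def retire(self, node):
        """Unlink a node and record when it was retired."""
        self.retired.append((self.tick(), node))

    def reclaim(self):
        """Free every node retired before all currently active readers entered."""
        horizon = min(self.active.values(), default=self.clock + 1)
        freed = [n for ts, n in self.retired if ts < horizon]
        self.retired = [(ts, n) for ts, n in self.retired if ts >= horizon]
        return freed
```

This captures why a node retired while a reader is active cannot be freed until that reader leaves, which is the use-after-free guarantee the paper provides.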

16 pages, 3892 KiB  
Review
2D Spintronics for Neuromorphic Computing with Scalability and Energy Efficiency
by Douglas Z. Plummer, Emily D’Alessandro, Aidan Burrowes, Joshua Fleischer, Alexander M. Heard and Yingying Wu
J. Low Power Electron. Appl. 2025, 15(2), 16; https://doi.org/10.3390/jlpea15020016 - 24 Mar 2025
Cited by 2 | Viewed by 3065
Abstract
The demand for computing power has been growing exponentially with the rise of artificial intelligence (AI), machine learning, and the Internet of Things (IoT). This growth requires unconventional computing primitives that prioritize energy efficiency, while also addressing the critical need for scalability. Neuromorphic computing, inspired by the biological brain, offers a transformative paradigm for addressing these challenges. This review paper provides an overview of advancements in 2D spintronics and device architectures designed for neuromorphic applications, with a focus on techniques such as spin-orbit torque, magnetic tunnel junctions, and skyrmions. Emerging van der Waals materials like CrI₃, Fe₃GaTe₂, and graphene-based heterostructures have demonstrated unparalleled potential for integrating memory and logic at the atomic scale. This work highlights technologies with ultra-low energy consumption (0.14 fJ/operation), high switching speeds (sub-nanosecond), and scalability to sub-20 nm footprints. It covers key material innovations and the role of spintronic effects in enabling compact, energy-efficient neuromorphic systems, providing a foundation for advancing scalable, next-generation computing architectures. Full article
