Search Results (462)

Search Parameters:
Keywords = assistive communication devices

23 pages, 1029 KiB  
Article
Lattice-Based Certificateless Proxy Re-Signature for IoT: A Computation-and-Storage Optimized Post-Quantum Scheme
by Zhanzhen Wei, Gongjian Lan, Hong Zhao, Zhaobin Li and Zheng Ju
Sensors 2025, 25(15), 4848; https://doi.org/10.3390/s25154848 - 6 Aug 2025
Abstract
Proxy re-signature enables transitive authentication of digital identities across different domains and has significant application value in areas such as digital rights management, cross-domain certificate validation, and distributed system access control. However, most existing proxy re-signature schemes, which are predominantly based on traditional public-key cryptosystems, face security vulnerabilities and certificate management bottlenecks. While identity-based schemes alleviate some issues, they introduce key escrow concerns. Certificateless schemes effectively resolve both certificate management and key escrow problems but remain vulnerable to quantum computing threats. To address these limitations, this paper constructs an efficient post-quantum certificateless proxy re-signature scheme based on algebraic lattices. Building upon algebraic lattice theory and leveraging the Dilithium algorithm, our scheme innovatively employs a lattice basis reduction-assisted parameter selection strategy to mitigate the potential algebraic attack vectors inherent in the NTRU lattice structure. This ensures the security and integrity of multi-party communication in quantum-threat environments. Furthermore, the scheme significantly reduces computational overhead and optimizes signature storage complexity through structured compression techniques, facilitating deployment on resource-constrained devices like Internet of Things (IoT) terminals. We formally prove the unforgeability of the scheme under the adaptive chosen-message attack model, with its security reducible to the hardness of the corresponding underlying lattice problems.
(This article belongs to the Special Issue IoT Network Security (Second Edition))
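For orientation, the core functionality any proxy re-signature scheme (including this one) must satisfy is the standard correctness condition below; the notation (key pairs, re-signature key) is the generic one from the proxy re-signature literature, not this paper's:

```latex
% Correctness of proxy re-signature: a signature by A on message m,
% once translated with the re-signature key rk_{A->B}, verifies under B's key.
\sigma_A \leftarrow \mathsf{Sign}(sk_A, m), \qquad
\sigma_B \leftarrow \mathsf{ReSign}(rk_{A\to B}, \sigma_A), \qquad
\mathsf{Verify}(pk_B, m, \sigma_B) = 1 .
```

The proxy holding the re-signature key can translate signatures without learning either party's secret key, which is what enables the cross-domain authentication use cases listed above.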
22 pages, 4895 KiB  
Article
Machine Learning-Assisted Secure Random Communication System
by Areeb Ahmed and Zoran Bosnić
Entropy 2025, 27(8), 815; https://doi.org/10.3390/e27080815 - 29 Jul 2025
Viewed by 213
Abstract
Machine learning techniques have revolutionized physical layer security (PLS) and provided opportunities for optimizing the performance and security of modern communication systems. In this study, we propose the first machine learning-assisted random communication system (ML-RCS). It comprises a pretrained decision tree (DT)-based receiver that extracts binary information from the transmitted random noise carrier signals. The ML-RCS employs skewed alpha-stable (α-stable) noise as a random carrier to encode the incoming binary bits securely. The DT model is pretrained on an extensively developed dataset encompassing all the selected parameter combinations to generate and detect the α-stable noise signals. The legitimate receiver leverages the pretrained DT and a predetermined key, specifically the pulse length of a single binary information bit, to securely decode the hidden binary bits. The performance evaluations included single-bit transmission, confusion matrices, and a bit error rate (BER) analysis via Monte Carlo simulations. A BER of 10⁻³ confirms the ability of the proposed system to establish successful secure communication between a transmitter and a legitimate receiver. Additionally, the ML-RCS provides an increased data rate compared to previous random communication systems. From the perspective of security, the confusion matrices and a computed false negative rate of 50.2% demonstrate the failure of an eavesdropper to decode the binary bits without access to the predetermined key and the private dataset. These findings highlight the potential of unconventional ML-RCSs to promote the development of secure next-generation communication devices with built-in PLS.
(This article belongs to the Special Issue Wireless Communications: Signal Processing Perspectives, 2nd Edition)
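As a rough illustration of the idea (not the authors' code), the sketch below hides bits in the skewness of α-stable noise bursts, recovers them with a pretrained decision tree, and estimates the BER by Monte Carlo simulation. All parameter values (α = 1.5, β = ±0.9, a 256-sample pulse length acting as the shared key) and the feature set are assumptions for illustration; the heavy tails of α-stable noise make raw moments unstable, hence the quantile-based features.

```python
# Hypothetical sketch of an ML-RCS-style link: bits are encoded in the
# skewness sign of alpha-stable noise bursts and decoded by a decision tree.
import numpy as np
from scipy.stats import levy_stable
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
ALPHA, PULSE_LEN = 1.5, 256  # assumed stability index and per-bit pulse length (the "key")

def burst(bit: int) -> np.ndarray:
    """Emit one noise burst whose skewness sign encodes the bit."""
    beta = 0.9 if bit else -0.9
    return levy_stable.rvs(ALPHA, beta, size=PULSE_LEN, random_state=rng)

def features(x: np.ndarray) -> np.ndarray:
    # Robust asymmetry statistics; raw moments diverge for heavy-tailed noise.
    return np.array([np.median(x),
                     np.mean(np.sign(x)),
                     np.percentile(x, 75) + np.percentile(x, 25)])

# Pretrain the decision-tree receiver on a labeled dataset of bursts.
train_bits = rng.integers(0, 2, 2000)
X = np.stack([features(burst(b)) for b in train_bits])
clf = DecisionTreeClassifier(max_depth=6).fit(X, train_bits)

# Monte Carlo BER estimate on fresh bursts.
test_bits = rng.integers(0, 2, 5000)
pred = clf.predict(np.stack([features(burst(b)) for b in test_bits]))
print("estimated BER:", np.mean(pred != test_bits))
```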
24 pages, 1530 KiB  
Article
A Lightweight Robust Training Method for Defending Model Poisoning Attacks in Federated Learning Assisted UAV Networks
by Lucheng Chen, Weiwei Zhai, Xiangfeng Bu, Ming Sun and Chenglin Zhu
Drones 2025, 9(8), 528; https://doi.org/10.3390/drones9080528 - 28 Jul 2025
Viewed by 397
Abstract
The integration of unmanned aerial vehicles (UAVs) into next-generation wireless networks greatly enhances the flexibility and efficiency of communication and distributed computation for ground mobile devices. Federated learning (FL) provides a privacy-preserving paradigm for device collaboration but remains highly vulnerable to poisoning attacks and is further challenged by the resource constraints and heterogeneous data common to UAV-assisted systems. Existing robust aggregation and anomaly detection methods often degrade in efficiency and reliability under these realistic adversarial and non-IID settings. To bridge these gaps, we propose FedULite, a lightweight and robust federated learning framework specifically designed for UAV-assisted environments. FedULite features unsupervised local representation learning optimized for unlabeled, non-IID data. Moreover, FedULite leverages a robust, adaptive server-side aggregation strategy that uses cosine similarity-based update filtering and dimension-wise adaptive learning rates to neutralize sophisticated data and model poisoning attacks. Extensive experiments across diverse datasets and adversarial scenarios demonstrate that FedULite reduces the attack success rate (ASR) from over 90% in undefended scenarios to below 5%, while maintaining the main task accuracy loss within 2%. Moreover, it introduces negligible computational overhead compared to standard FedAvg, with approximately 7% additional training time.
(This article belongs to the Special Issue IoT-Enabled UAV Networks for Secure Communication)
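A minimal sketch of the server-side aggregation pattern the abstract describes, cosine-similarity filtering plus dimension-wise adaptive rates; the function name, default threshold, and the choice of the coordinate-wise median as the reference direction are assumptions for illustration, not details from the paper:

```python
# Sketch of cosine-filtered, dimension-wise adaptive aggregation
# in the spirit of FedULite (names and thresholds assumed).
import numpy as np

def robust_aggregate(updates: list[np.ndarray], ref: np.ndarray,
                     sim_threshold: float = 0.0, base_lr: float = 1.0) -> np.ndarray:
    """Drop client updates that point away from `ref`, then scale each
    coordinate of the mean update by the clients' sign agreement."""
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
    kept = [u for u in updates if cos(u, ref) > sim_threshold]
    if not kept:                      # everything filtered: take no step
        return np.zeros_like(ref)
    stacked = np.stack(kept)
    mean_update = stacked.mean(axis=0)
    # Dimension-wise rate: shrink coordinates where clients disagree in sign.
    agreement = np.abs(np.mean(np.sign(stacked), axis=0))   # in [0, 1]
    return base_lr * agreement * mean_update

# Usage: honest updates share a direction; a sign-flipping update is filtered.
rng = np.random.default_rng(0)
base = 0.5 * np.ones(10)                        # the "true" update direction
updates = [base + 0.3 * rng.standard_normal(10) for _ in range(8)]
updates.append(-10.0 * base)                    # crude sign-flipping poison
ref = np.median(np.stack(updates), axis=0)      # robust reference direction
step = robust_aggregate(updates, ref)
```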
13 pages, 940 KiB  
Review
Management of Dysarthria in Amyotrophic Lateral Sclerosis
by Elena Pasqualucci, Diletta Angeletti, Pamela Rosso, Elena Fico, Federica Zoccali, Paola Tirassa, Armando De Virgilio, Marco de Vincentiis and Cinzia Severini
Cells 2025, 14(14), 1048; https://doi.org/10.3390/cells14141048 - 9 Jul 2025
Viewed by 569
Abstract
Amyotrophic lateral sclerosis (ALS) stands as the leading neurodegenerative disorder affecting the motor system. One of the hallmarks of ALS, especially its bulbar form, is dysarthria, which significantly impairs the quality of life of ALS patients. This review provides a comprehensive overview of the current knowledge on the clinical manifestations, diagnostic differentiation, underlying mechanisms, diagnostic tools, and therapeutic strategies for the treatment of dysarthria in ALS. We also review the most promising digital speech biomarkers of ALS, which are critical for early and differential diagnosis. Advances in artificial intelligence and digital speech processing have transformed the analysis of speech patterns and offer the opportunity to start therapy early to improve vocal function, as speech rate appears to decline significantly before the diagnosis of ALS is confirmed. In addition, we discuss the impact of interventions that can improve vocal function and quality of life for patients, such as compensatory speech techniques; surgical options; measures to improve lung function and respiratory muscle strength; percutaneous dilated tracheostomy, possibly with adjunctive therapies to treat respiratory insufficiency; and, finally, assistive devices for alternative communication.
(This article belongs to the Special Issue Pathology and Treatments of Amyotrophic Lateral Sclerosis (ALS))
19 pages, 1514 KiB  
Article
A UAV Trajectory Optimization and Task Offloading Strategy Based on Hybrid Metaheuristic Algorithm in Mobile Edge Computing
by Yeqiang Zheng, An Li, Yihu Wen and Gaocai Wang
Future Internet 2025, 17(7), 300; https://doi.org/10.3390/fi17070300 - 3 Jul 2025
Viewed by 377
Abstract
In a UAV-assisted mobile edge computing (MEC) communication system, the UAV acts as an aerial base station that receives the data offloaded by multiple ground user devices. Because a UAV's battery storage is limited, energy saving is a key issue in UAV-assisted MEC systems, and for a low-altitude flying UAV, successful obstacle avoidance is also essential. This paper aims to maximize the system energy efficiency (defined as the ratio of the total amount of offloaded data to the energy consumption of the UAV) while meeting the UAV's maneuverability and three-dimensional obstacle avoidance constraints. A joint optimization strategy that maximizes energy efficiency over the UAV flight trajectory and the user devices' task offloading rates is proposed. To solve this problem, hybrid alternating metaheuristics for energy optimization are given: owing to the non-convexity and fractional structure of the optimization problem, it is transformed into an equivalent parametric problem using the Dinkelbach method and then divided into two sub-problems that are alternately optimized with metaheuristic algorithms. The experimental results show that the proposed strategy enables a UAV to avoid obstacles during flight by detouring or crossing, with a trajectory that does not overlap with obstacles, effectively achieving two-dimensional and three-dimensional obstacle avoidance. In addition, the proposed solving method has a significantly higher success rate than traditional algorithms, and compared with related optimization strategies, it effectively reduces the overall energy consumption of the UAV.
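The Dinkelbach transform mentioned above replaces a fractional objective max N(x)/D(x) (here, offloaded data over UAV energy) with a sequence of parametric subproblems max N(x) − qD(x). A toy sketch follows, with a brute-force grid search standing in for the paper's metaheuristic inner solvers:

```python
# Dinkelbach iteration for max_x N(x)/D(x) with D(x) > 0:
# repeatedly solve max_x N(x) - q*D(x) and update q to the achieved ratio;
# the optimum is reached when the subproblem's value F(q) hits zero.
import numpy as np

def dinkelbach(N, D, candidates, tol=1e-6, max_iter=100):
    q = 0.0
    x_best = candidates[0]
    for _ in range(max_iter):
        vals = [N(x) - q * D(x) for x in candidates]   # parametric subproblem
        x_best = candidates[int(np.argmax(vals))]
        F = N(x_best) - q * D(x_best)
        if abs(F) < tol:                               # F(q*) = 0 at the optimum
            return x_best, q
        q = N(x_best) / D(x_best)                      # Dinkelbach update
    return x_best, q

# Toy instance: maximize (1 + 4x - x^2) / (1 + x) over a grid on [0, 4].
xs = list(np.linspace(0.0, 4.0, 4001))
x_star, q_star = dinkelbach(lambda x: 1 + 4*x - x*x, lambda x: 1 + x, xs)
print("x* =", round(x_star, 3), " ratio =", round(q_star, 4))
```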
21 pages, 482 KiB  
Review
Assistive Technologies for Individuals with a Disability from a Neurological Condition: A Narrative Review on the Multimodal Integration
by Mirjam Bonanno, Beatrice Saracino, Irene Ciancarelli, Giuseppe Panza, Alfredo Manuli, Giovanni Morone and Rocco Salvatore Calabrò
Healthcare 2025, 13(13), 1580; https://doi.org/10.3390/healthcare13131580 - 1 Jul 2025
Viewed by 862
Abstract
Background/Objectives: Neurological disorders often result in a broad spectrum of disabilities that impact mobility, communication, cognition, and sensory processing, leading to significant limitations in independence and quality of life. Assistive technologies (ATs) offer tools to compensate for these impairments, support daily living, and improve quality of life. The World Health Organization encourages the adoption and diffusion of effective assistive technology (AT). This narrative review aims to explore the integration, benefits, and challenges of assistive technologies in individuals with neurological disabilities, focusing on their role across mobility, communication, cognitive, and sensory domains. Methods: A narrative approach was adopted by reviewing relevant studies published between 2014 and 2024. Literature was sourced from PubMed and Scopus using specific keyword combinations related to assistive technology and neurological disorders. Results: Findings highlight the potential of ATs, ranging from traditional aids to intelligent systems like brain–computer interfaces and AI-driven devices, to enhance autonomy, communication, and quality of life. However, significant barriers remain, including usability issues, training requirements, accessibility disparities, limited user involvement in design, and a low diffusion of a health technology assessment approach. Conclusions: Future directions emphasize the need for multidimensional, user-centered solutions that integrate personalization through machine learning and artificial intelligence to ensure long-term adoption and efficacy. For instance, combining brain–computer interfaces (BCIs) with virtual reality (VR) using machine learning algorithms could help monitor cognitive load in real time. Similarly, ATs driven by artificial intelligence technology could be useful to dynamically respond to users' physiological and behavioral data to optimize support in daily tasks.
21 pages, 1476 KiB  
Article
AI-Driven Handover Management and Load Balancing Optimization in Ultra-Dense 5G/6G Cellular Networks
by Chaima Chabira, Ibraheem Shayea, Gulsaya Nurzhaubayeva, Laura Aldasheva, Didar Yedilkhan and Saule Amanzholova
Technologies 2025, 13(7), 276; https://doi.org/10.3390/technologies13070276 - 1 Jul 2025
Cited by 1 | Viewed by 1176
Abstract
This paper presents a comprehensive review of handover management and load balancing optimization (LBO) in ultra-dense 5G and emerging 6G cellular networks. With the increasing deployment of small cells and the rapid growth of data traffic, these networks face significant challenges in ensuring seamless mobility and efficient resource allocation. Traditional handover and load balancing techniques, primarily designed for 4G systems, are no longer sufficient to address the complexity of heterogeneous network environments that incorporate millimeter-wave communication, Internet of Things (IoT) devices, and unmanned aerial vehicles (UAVs). The review focuses on how recent advances in artificial intelligence (AI), particularly machine learning (ML) and deep learning (DL), are being applied to improve predictive handover decisions and enable real-time, adaptive load distribution. AI-driven solutions can significantly reduce handover failures, latency, and network congestion, while improving overall user experience and quality of service (QoS). This paper surveys state-of-the-art research on these techniques, categorizing them according to their application domains and evaluating their performance benefits and limitations. Furthermore, the paper discusses the integration of intelligent handover and load balancing methods in smart city scenarios, where ultra-dense networks must support diverse services with high reliability and low latency. Key research gaps are also identified, including the need for standardized datasets, energy-efficient AI models, and context-aware mobility strategies. Overall, this review aims to guide future research and development in designing robust, AI-assisted mobility and resource management frameworks for next-generation wireless systems.
12 pages, 438 KiB  
Article
From Hospital to Home: Interdisciplinary Approaches to Optimise Palliative Care Discharge Processes
by Matthias Unseld, Timon Wnendt, Christian Sebesta, Jana van Oers, Jonathan Parizek, Lea Kum, Eva Katharina Masel, Pavol Mikula, Hans Jürgen Heppner and Elisabeth Lucia Zeilinger
Int. J. Environ. Res. Public Health 2025, 22(7), 1023; https://doi.org/10.3390/ijerph22071023 - 27 Jun 2025
Viewed by 295
Abstract
The transition from hospital-based palliative care to home care is a critical phase often marked by logistical, medical, and emotional challenges. Effective discharge planning is essential to ensure continuity of care, yet gaps in communication, interdisciplinary coordination, and access to resources frequently hinder this process. This qualitative study explored key barriers, related support needs, and strategies for optimising palliative care discharge through semi-structured interviews with 28 participants, including healthcare professionals, recently discharged palliative care patients, and primary caregivers. Reflexive thematic analysis revealed five main themes: (1) discharge planning and coordination; (2) symptom management and medication; (3) psychosocial support; (4) communication and information; (5) the role of assistive devices and home care services. Discharge processes were often late or unstructured. Poor interdisciplinary collaboration and a lack of caregiver preparation also contributed to hospital readmissions and emotional distress. By focusing on needs, our analysis identifies not only what was lacking but also what is required to overcome these barriers. Our findings suggest that standardised discharge protocols and checklists, earlier planning, structured communication tools, and improved integration of home care services could enhance patient outcomes and reduce caregiver burden. Addressing psychosocial needs and ensuring timely access to assistive devices are also crucial. Strengthening interdisciplinary collaboration and refining discharge practices can facilitate smoother transitions and improve the quality of palliative care at home.
28 pages, 1791 KiB  
Article
Speech Recognition-Based Wireless Control System for Mobile Robotics: Design, Implementation, and Analysis
by Sandeep Gupta, Udit Mamodiya and Ahmed J. A. Al-Gburi
Automation 2025, 6(3), 25; https://doi.org/10.3390/automation6030025 - 24 Jun 2025
Viewed by 1028
Abstract
This paper describes an innovative wireless mobile robotics control system based on speech recognition, in which an ESP32 microcontroller controls the motors and handles Bluetooth communication while an Android application performs the real-time speech recognition. With speech processed on the Android device and motor commands handled on the ESP32, the system achieves significant performance gains through its distributed architecture while maintaining low latency for feedback control. In experimental tests over a range of 1–10 m, stable command latencies of 110–140 ms with low variation (±15 ms) were observed. The system's voice and manual button modes both yield over 92% accuracy with the aid of natural language processing, keeping training requirements low and delivering strong performance in high-noise environments. The novelty of this work lies in an adaptive keyword spotting algorithm for improved recognition performance in high-noise environments and a gradual latency management system that optimizes processing parameters in the presence of noise. By providing a user-friendly, real-time speech interface, this work enhances human–robot interaction for future assistive devices, educational platforms, and advanced automated navigation research.
(This article belongs to the Section Robotics and Autonomous Systems)
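A hypothetical sketch of the kind of noise-tolerant keyword spotting such a system needs on the recognizer side: the transcript is matched to the nearest known command by similarity ratio and rejected below a threshold, so recognition slips do not become wrong motor commands. The command set, serial byte codes, and threshold are invented for illustration:

```python
# Fuzzy keyword spotter: map a (possibly garbled) transcript to the closest
# known command, or reject it when nothing matches well enough.
from difflib import SequenceMatcher

COMMANDS = {"forward": b"F", "back": b"B", "left": b"L", "right": b"R", "stop": b"S"}

def spot(transcript: str, min_ratio: float = 0.75) -> bytes | None:
    """Return the serial byte for the best-matching command, or None."""
    word = transcript.strip().lower()
    best, score = None, 0.0
    for keyword, code in COMMANDS.items():
        r = SequenceMatcher(None, word, keyword).ratio()
        if r > score:
            best, score = code, r
    return best if score >= min_ratio else None

assert spot("forwad") == b"F"     # tolerates a recognition slip
assert spot("banana") is None     # rejects unrelated words
```

In a noisy environment the threshold could be raised adaptively, trading recall for precision, which is one plausible reading of the "adaptive" behavior the abstract claims.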
22 pages, 2535 KiB  
Article
Research on a Secure and Reliable Runtime Patching Method for Cyber–Physical Systems and Internet of Things Devices
by Zesheng Xi, Bo Zhang, Aniruddha Bhattacharjya, Yunfan Wang and Chuan He
Symmetry 2025, 17(7), 983; https://doi.org/10.3390/sym17070983 - 21 Jun 2025
Viewed by 423
Abstract
Recent advances in technologies such as blockchain, the Internet of Things (IoT), Cyber–Physical Systems (CPSs), and the Industrial Internet of Things (IIoT) have driven the digitalization and intelligent transformation of modern industries. However, embedded control devices within power system communication infrastructures have become increasingly susceptible to cyber threats due to escalating software complexity and extensive network exposure. Conventional patching techniques, both static and dynamic, often fail to satisfy the stringent requirements of real-time responsiveness and computational efficiency in the resource-constrained environments of power grids. To address this limitation, we propose a hardware-assisted runtime patching framework tailored for embedded systems in critical power system networks. Our method integrates binary-level vulnerability modeling, execution-trace-driven fault localization, and lightweight patch synthesis, enabling dynamic, in-place code redirection without disrupting ongoing operations. By constructing a system-level instruction flow model, the framework leverages on-chip debug registers to deploy patches at runtime, ensuring minimal operational impact. Experimental evaluations within a simulated substation communication architecture show that the proposed approach reduces patch latency by 92% compared with static techniques while incurring less than 3% CPU overhead. This work offers a scalable, real-time, model-driven defense strategy that enhances the cyber–physical resilience of embedded systems in modern power systems, contributing new insights into the intersection of runtime security and grid infrastructure reliability.
(This article belongs to the Section Computer)
37 pages, 7361 KiB  
Review
Evolution and Knowledge Structure of Wearable Technologies for Vulnerable Road User Safety: A CiteSpace-Based Bibliometric Analysis (2000–2025)
by Gang Ren, Zhihuang Huang, Tianyang Huang, Gang Wang and Jee Hang Lee
Appl. Sci. 2025, 15(12), 6945; https://doi.org/10.3390/app15126945 - 19 Jun 2025
Viewed by 549
Abstract
This study presents a systematic bibliometric review of wearable technologies aimed at vulnerable road user (VRU) safety, covering publications from 2000 to 2025. Guided by PRISMA procedures and a PICo-based search strategy, 58 records were extracted and analyzed in CiteSpace, yielding visualizations of collaboration networks, publication trajectories, and intellectual structures. The results indicate a clear evolution from single-purpose, stand-alone devices to integrated ecosystem solutions that address the needs of diverse VRU groups. Six dominant knowledge clusters emerged—street-crossing assistance, obstacle avoidance, human–computer interaction, cyclist safety, blind navigation, and smart glasses. Comparative analysis across pedestrians, cyclists and motorcyclists, and persons with disabilities shows three parallel transitions: single- to multisensory interfaces, reactive to predictive systems, and isolated devices to V2X-enabled ecosystems. Contemporary research emphasizes context-adaptive interfaces, seamless V2X integration, and user-centered design, and future work should focus on lightweight communication protocols, adaptive sensory algorithms, and personalized safety profiles. The review provides a consolidated knowledge map to inform researchers, practitioners, and policy-makers striving for inclusive and proactive road safety solutions.
(This article belongs to the Section Computing and Artificial Intelligence)
16 pages, 467 KiB  
Article
A Socially Assistive Robot as Orchestrator of an AAL Environment for Seniors
by Carlos E. Sanchez-Torres, Ernesto A. Lozano, Irvin H. López-Nava, J. Antonio Garcia-Macias and Jesus Favela
Technologies 2025, 13(6), 260; https://doi.org/10.3390/technologies13060260 - 19 Jun 2025
Viewed by 359
Abstract
Social robots in Ambient Assisted Living (AAL) environments offer a promising alternative for enhancing senior care by providing companionship and functional support. These robots can serve as intuitive interfaces to complex smart home systems, allowing seniors and caregivers to easily control their environment and access various assistance services through natural interactions. By combining the emotional engagement capabilities of social robots with the comprehensive monitoring and support features of AAL, this integrated approach can potentially improve the quality of life and independence of elderly individuals while alleviating the burden on human caregivers. This paper explores the integration of social robotics with AAL technologies to enhance elderly care. We propose a novel framework where a social robot is the central orchestrator of an AAL environment, coordinating various smart devices and systems to provide comprehensive support for seniors. Our approach leverages the social robot's ability to engage in natural interactions while managing the complex network of environmental and wearable sensors and actuators. In this paper, we focus on the technical aspects of our framework. A computational P2P notebook is used to customize the environment and run reactive services. Machine learning models can be included for real-time recognition of gestures, poses, and moods to support non-verbal communication. We describe scenarios to illustrate the utility and functionality of the framework and how the robot is used to orchestrate the AAL environment to contribute to the well-being and independence of elderly individuals. We also address the technical challenges and future directions for this integrated approach to elderly care.
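The orchestration pattern described above can be sketched as an event bus that the robot consumes, dispatching one reactive service per event type; all event names and handler behaviors below are invented for illustration and are not the paper's framework:

```python
# Toy event-bus orchestrator: ambient sensors publish events, the robot
# reacts by dispatching the matching service (names are hypothetical).
import queue
from dataclasses import dataclass

@dataclass
class Event:
    source: str    # e.g. "door_sensor", "wearable"
    kind: str      # e.g. "fall_detected", "door_open"
    payload: dict

bus: "queue.Queue[Event]" = queue.Queue()

HANDLERS = {
    "fall_detected": lambda e: print("Robot: navigating to user, alerting caregiver"),
    "door_open":     lambda e: print("Robot: announcing a visitor at the door"),
}

def orchestrate(poll_s: float = 0.1, max_events: int = 2) -> None:
    """Pull events off the bus and run the matching reactive service."""
    handled = 0
    while handled < max_events:
        try:
            ev = bus.get(timeout=poll_s)
        except queue.Empty:
            continue
        HANDLERS.get(ev.kind, lambda e: None)(ev)   # unknown events are ignored
        handled += 1

bus.put(Event("wearable", "fall_detected", {}))
bus.put(Event("door_sensor", "door_open", {}))
orchestrate()
```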
15 pages, 4199 KiB  
Article
A Portable Wave Tank and Wave Energy Converter for Engineering Dissemination and Outreach
by Nicholas Ross, Delaney Heileman, A. Gerrit Motes, Anwi Fomukong, Giorgio Bacelli, Steven J. Spencer, Dominic D. Forbush, Kevin Dullea and Ryan G. Coe
Hardware 2025, 3(2), 5; https://doi.org/10.3390/hardware3020005 - 4 Jun 2025
Viewed by 659
Abstract
Wave energy converters (WECs) are a nascent energy generation technology that harnesses the power in ocean waves. To assist in communicating both fundamental and complex concepts of wave energy, a small-scale portable wave tank and wave energy converter have been developed. The system has been designed using commercial off-the-shelf components, and all design hardware and software are openly available for replication. This project builds on prior research conducted at Sandia National Laboratories, particularly in the areas of WEC device design and control systems. By showcasing the principles of causal feedback control and innovative device design, SIWEED not only serves as a practical demonstration tool but also enhances the educational experience for users. This paper presents the detailed system design of this tool. Furthermore, via testing and analysis, we demonstrate the basic functionality of the system.
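To illustrate the causal feedback control such a demonstrator showcases, here is a minimal sketch (not the project's software) of resistive damping control on a one-degree-of-freedom heave model: the power take-off (PTO) force opposes buoy velocity, so the instantaneous absorbed power is always non-negative. All physical constants are illustrative.

```python
# 1-DOF heave model with a causal resistive PTO:  m*x'' = -k*x - c*x' + F_wave - B_pto*x'
import numpy as np

def simulate_wec(t_end=60.0, dt=1e-3, m=2.0, k=50.0, c=0.5, B_pto=3.0):
    """Integrate the buoy dynamics and accumulate the energy the PTO absorbs."""
    n = int(t_end / dt)
    x = v = 0.0
    energy = 0.0
    for i in range(n):
        t = i * dt
        f_wave = 4.0 * np.sin(2 * np.pi * 0.8 * t)   # regular wave excitation
        f_pto = -B_pto * v                           # causal feedback law
        a = (f_wave + f_pto - k * x - c * v) / m
        v += a * dt
        x += v * dt
        energy += (-f_pto) * v * dt                  # absorbed power = B_pto * v^2 >= 0
    return energy

print("absorbed energy [J]:", round(simulate_wec(), 2))
```

The damping gain trades off force against velocity; tuning it against the wave period is exactly the kind of control concept a hands-on tank makes tangible.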
24 pages, 1964 KiB  
Article
Energy-Efficient Multi-Agent Deep Reinforcement Learning Task Offloading and Resource Allocation for UAV Edge Computing
by Shu Xu, Qingjie Liu, Chengye Gong and Xupeng Wen
Sensors 2025, 25(11), 3403; https://doi.org/10.3390/s25113403 - 28 May 2025
Viewed by 1114
Abstract
The integration of Unmanned Aerial Vehicles (UAVs) into Mobile Edge Computing (MEC) systems has emerged as a transformative solution for latency-sensitive applications, leveraging UAVs' unique advantages in mobility, flexible deployment, and on-demand service provisioning. This paper proposes a novel multi-agent reinforcement learning framework, termed Multi-Agent Twin Delayed Deep Deterministic Policy Gradient for Task Offloading and Resource Allocation (MATD3-TORA), to optimize task offloading and resource allocation in UAV-assisted MEC networks. The framework enables collaborative decision making among multiple UAVs to efficiently serve sparsely distributed ground mobile devices (MDs) and establishes an integrated mobility, communication, and computational offloading model that formulates a joint optimization problem aimed at minimizing the weighted sum of task processing latency and UAV energy consumption. Extensive experiments demonstrate that the algorithm achieves improvements in system latency and energy efficiency compared to conventional approaches. The results highlight MATD3-TORA's effectiveness in addressing UAV-MEC challenges, including mobility–energy tradeoffs, distributed decision making, and real-time resource allocation.
(This article belongs to the Section Remote Sensors)
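In generic form, the weighted-sum objective described above can be written as follows; the symbols are assumed for illustration rather than taken from the paper:

```latex
% Joint offloading/trajectory problem: o_i are offloading decisions,
% q(t) the UAV trajectories, T_i task latency, E_i^UAV the induced UAV energy.
\min_{\{o_i\},\, \mathbf{q}(t)} \; \sum_{i=1}^{N} \left( \omega_1\, T_i + \omega_2\, E_i^{\mathrm{UAV}} \right),
\qquad \omega_1, \omega_2 \ge 0 ,
```

where the weights encode the latency-versus-energy tradeoff that the multi-agent policy learns to navigate.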
17 pages, 1921 KiB  
Article
Streamlining cVEP Paradigms: Effects of a Minimized Electrode Montage on Brain–Computer Interface Performance
by Milán András Fodor, Atilla Cantürk, Gernot Heisenberg and Ivan Volosyak
Brain Sci. 2025, 15(6), 549; https://doi.org/10.3390/brainsci15060549 - 23 May 2025
Viewed by 508
Abstract
(1) Background: Brain–computer interfaces (BCIs) enable direct communication between the brain and external devices using electroencephalography (EEG) signals, offering potential applications in assistive technology and neurorehabilitation. Code-modulated visual evoked potential (cVEP)-based BCIs employ code-pattern-based stimulation to evoke neural responses, which can then be classified to infer user intent. While increasing the number of EEG electrodes across the visual cortex enhances classification accuracy, it simultaneously reduces user comfort and increases setup complexity, duration, and hardware costs. (2) Methods: This online BCI study, involving thirty-eight able-bodied participants, investigated how reducing the electrode count from 16 to 6 affected performance. Three experimental conditions were tested: a baseline 16-electrode configuration, a reduced 6-electrode setup without retraining, and a reduced 6-electrode setup with retraining. (3) Results: Our results indicate that, on average, performance declines with fewer electrodes; nonetheless, retraining restored near-baseline mean Information Transfer Rate (ITR) and accuracy for those participants for whom the system remained functional. The results reveal that for a substantial number of participants, the classification pipeline fails after electrode removal, highlighting individual differences in the cVEP response characteristics or inherent limitations of the classification approach. (4) Conclusions: Ultimately, this suggests that minimal cVEP-BCI electrode setups capable of reliably functioning across all users might only be feasible through other, more flexible classification methods that can account for individual differences. These findings aim to serve as a guideline for what is currently achievable with this common cVEP paradigm and to highlight where future research should focus in order to move closer to a practical and user-friendly system.
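For context, the simplest cVEP classifier correlates an averaged EEG epoch against circularly shifted copies of a template response, one shift per target. Real pipelines, presumably including this study's, use spatial filtering or CCA on multichannel data, so treat this single-channel sketch as a conceptual illustration only; epoch length, target count, and lag are assumed values.

```python
# Template-matching cVEP classification: each target flickers with the same
# base code, circularly shifted; pick the shift whose template best correlates.
import numpy as np

def classify_cvep(trial: np.ndarray, template: np.ndarray,
                  n_targets: int, lag: int) -> int:
    """trial: averaged single-channel EEG epoch; template: reference response
    to the base code; target k uses the code shifted by k*lag samples."""
    scores = []
    for k in range(n_targets):
        shifted = np.roll(template, k * lag)
        scores.append(np.corrcoef(trial, shifted)[0, 1])   # Pearson correlation
    return int(np.argmax(scores))

# Toy usage: 504-sample epochs, 8 targets spaced 63 samples apart.
rng = np.random.default_rng(1)
template = rng.standard_normal(504)
trial = np.roll(template, 3 * 63) + 0.5 * rng.standard_normal(504)
assert classify_cvep(trial, template, n_targets=8, lag=63) == 3
```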