Search Results (399)

Search Parameters:
Keywords = low-power wide-area network

17 pages, 1467 KiB  
Article
Confidence-Based Knowledge Distillation to Reduce Training Costs and Carbon Footprint for Low-Resource Neural Machine Translation
by Maria Zafar, Patrick J. Wall, Souhail Bakkali and Rejwanul Haque
Appl. Sci. 2025, 15(14), 8091; https://doi.org/10.3390/app15148091 - 21 Jul 2025
Viewed by 433
Abstract
The transformer-based deep learning approach represents the current state-of-the-art in machine translation (MT) research. Large-scale pretrained transformer models produce state-of-the-art performance across a wide range of MT tasks for many languages. However, such deep neural network (NN) models are often data-, compute-, space-, power-, and energy-hungry, typically requiring powerful GPUs or large-scale clusters to train and deploy. As a result, they are often regarded as “non-green” and “unsustainable” technologies. Distilling knowledge from large deep NN models (teachers) to smaller NN models (students) is a widely adopted sustainable development approach in MT as well as in broader areas of natural language processing (NLP), including speech and image processing. However, distilling large pretrained models presents several challenges. First, training time and cost scale with the volume of data used to train a student model, which can pose a challenge for translation service providers (TSPs) with limited training budgets. Moreover, the CO2 emissions generated during model training are typically proportional to the amount of data used, contributing to environmental harm. Second, when querying teacher models, including encoder–decoder models such as NLLB, the translations they produce for low-resource languages may be noisy or of low quality. This can undermine sequence-level knowledge distillation (SKD), as student models may inherit and reinforce errors from inaccurate labels. In this study, the teacher model's confidence estimation is employed to filter from the distilled training data those instances for which the teacher exhibits low confidence. We tested our methods on a low-resource Urdu-to-English translation task operating within a constrained training budget in an industrial translation setting. Our findings show that confidence estimation-based filtering can significantly reduce the cost and CO2 emissions associated with training a student model without a drop in translation quality, making it a practical and environmentally sustainable solution for TSPs.
(This article belongs to the Special Issue Deep Learning and Its Applications in Natural Language Processing)
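
To make the filtering step concrete, here is a minimal sketch that assumes the teacher exposes per-token log-probabilities for its own translations; the function names, threshold, and length-normalised confidence measure are illustrative assumptions, not details taken from the paper.

```python
import math

def sentence_confidence(token_logprobs):
    """Teacher confidence as the length-normalised average per-token
    log-probability, mapped back to a probability via exp."""
    return math.exp(sum(token_logprobs) / len(token_logprobs))

def filter_distilled_data(pairs, threshold=0.5):
    """Keep only (source, teacher_translation) pairs for which the
    teacher's confidence exceeds the threshold; low-confidence
    pseudo-labels are dropped from the student's training set."""
    kept = []
    for src, hyp, logprobs in pairs:
        if sentence_confidence(logprobs) >= threshold:
            kept.append((src, hyp))
    return kept

# Toy usage: two pseudo-labelled pairs, only one confidently translated.
pairs = [
    ("jumla ek", "sentence one", [-0.1, -0.2, -0.15]),
    ("jumla do", "sentence two", [-2.3, -1.9, -2.7]),  # noisy teacher output
]
print(filter_distilled_data(pairs))  # keeps only the first pair
```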

33 pages, 2299 KiB  
Review
Edge Intelligence in Urban Landscapes: Reviewing TinyML Applications for Connected and Sustainable Smart Cities
by Athanasios Trigkas, Dimitrios Piromalis and Panagiotis Papageorgas
Electronics 2025, 14(14), 2890; https://doi.org/10.3390/electronics14142890 - 19 Jul 2025
Viewed by 509
Abstract
Tiny Machine Learning (TinyML) extends edge AI capabilities to resource-constrained devices, offering a promising solution for real-time, low-power intelligence in smart cities. This review systematically analyzes 66 peer-reviewed studies from 2019 to 2024, covering applications across urban mobility, environmental monitoring, public safety, waste management, and infrastructure health. We examine hardware platforms and machine learning models, with particular attention to power-efficient deployment and data privacy. We review the approaches employed in published studies for deploying machine learning models on resource-constrained hardware, emphasizing the most commonly used communication technologies—while noting the limited uptake of low-power options such as Low-Power Wide-Area Networks (LPWANs). We also discuss hardware–software co-design strategies that enable sustainable operation. Furthermore, we evaluate the alignment of these deployments with the United Nations Sustainable Development Goals (SDGs), highlighting both their contributions and existing gaps in current practice. This review identifies recurring technical patterns, methodological challenges, and underexplored opportunities, particularly in the areas of hardware provisioning, use of the inherent privacy benefits in relevant applications, communication technologies, and dataset practices, offering a roadmap for future TinyML research and deployment in smart urban systems. Among the 66 studies examined, 29 focused on mobility and transportation, 17 on public safety, 10 on environmental sensing, 6 on waste management, and 4 on infrastructure monitoring. TinyML was deployed on constrained microcontrollers in 32 studies, while 36 used optimized models for resource-limited environments. Energy harvesting, primarily solar, was featured in 6 studies, and low-power communication networks were used in 5. Public datasets were used in 27 studies, custom datasets in 24, and the remainder relied on hybrid or simulated data. Only one study explicitly referenced the SDGs, and 13 studies considered privacy in their system design.
(This article belongs to the Special Issue New Advances in Embedded Software and Applications)

24 pages, 6250 KiB  
Article
A Failure Risk-Aware Multi-Hop Routing Protocol in LPWANs Using Deep Q-Network
by Shaojun Tao, Hongying Tang, Jiang Wang and Baoqing Li
Sensors 2025, 25(14), 4416; https://doi.org/10.3390/s25144416 - 15 Jul 2025
Viewed by 250
Abstract
Multi-hop routing over low-power wide-area networks (LPWANs) has emerged as a promising technology for extending network coverage. However, existing protocols face high transmission disruption risks due to factors such as dynamic topology driven by stochastic events, dynamic link quality, and coverage holes induced by imbalanced energy consumption. To address this issue, we propose a failure risk-aware deep Q-network-based multi-hop routing (FRDR) protocol that aims to reduce the probability of transmission disruption. First, we design a power regulation mechanism (PRM) that works in conjunction with pre-selection rules to optimize end-device node (EN) activations and candidate relay selection. Second, we introduce the concept of a routing failure risk value (RFRV) to quantify the potential failure risk posed by each candidate next-hop EN, which correlates with its neighborhood state characteristics (i.e., the number of neighbors, the residual energy level, and link quality). Third, a deep Q-network (DQN)-based routing decision mechanism is proposed, in which a multi-objective reward function incorporating the RFRV, residual energy, distance to the gateway, and transmission hop count is used to determine the optimal next hop. Simulation results demonstrate that FRDR outperforms existing protocols in terms of packet delivery rate and network lifetime while maintaining comparable transmission delay.
(This article belongs to the Special Issue Security, Privacy and Trust in Wireless Sensor Networks)
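
As a rough illustration of how such a multi-objective reward could be composed (the weights, normalisation, and candidate values below are hypothetical, not the paper's):

```python
def routing_reward(rfrv, residual_energy, dist_to_gw, hops,
                   w=(0.4, 0.3, 0.2, 0.1)):
    """Illustrative multi-objective reward in the spirit of FRDR:
    penalise routing-failure risk, distance to the gateway, and hop
    count; reward residual energy. Inputs are assumed normalised to
    [0, 1]; the weights are hypothetical."""
    w_risk, w_energy, w_dist, w_hops = w
    return (-w_risk * rfrv + w_energy * residual_energy
            - w_dist * dist_to_gw - w_hops * hops)

# A DQN would regress Q(state, candidate) toward reward + gamma * max Q(next);
# here we simply rank two candidate next hops by immediate reward.
candidates = {"EN-3": (0.2, 0.9, 0.5, 0.3), "EN-7": (0.7, 0.4, 0.2, 0.2)}
best = max(candidates, key=lambda c: routing_reward(*candidates[c]))
print("chosen next hop:", best)  # EN-3: lower risk, higher energy
```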

22 pages, 1648 KiB  
Article
Toward High Bit Rate LoRa Transmission via Joint Frequency-Amplitude Modulation
by Gupeng Tang, Zhidan Zhao, Chengxin Zhang, Jiaqi Wu, Nan Jing and Lin Wang
Electronics 2025, 14(13), 2687; https://doi.org/10.3390/electronics14132687 - 2 Jul 2025
Viewed by 357
Abstract
Long Range (LoRa) is one of the promising Low-Power Wide-Area Network technologies, achieving strong noise resilience in low-power, long-distance communications thanks to its chirp spread spectrum modulation. However, LoRa suffers from packet collisions. Hence, we propose QR−LoRa, a novel PHY-layer scheme that can transmit data in both the amplitude and frequency dimensions simultaneously. For the amplitude modulation, we modulate the constant envelope of a LoRa chirp with a cyclic right-shifted ramp signal, where the cyclic right-shift position carries the amplitude-dimension data. We adopt standard LoRa for the frequency modulation. We prototype QR−LoRa on the software-defined radio platform USRP N210 and evaluate its performance via simulations and field experiments. The results show that the bit rate gain of QR−LoRa is up to 2× compared with a standard LoRa device.
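
A conceptual sketch of the joint modulation described above; the envelope bounds and parameter values are assumptions, not the paper's design:

```python
import numpy as np

def lora_chirp(sf, bw, fs, symbol):
    """Standard baseband LoRa up-chirp; the starting frequency encodes
    the frequency-dimension symbol and wraps within the bandwidth."""
    n = int(2**sf * fs / bw)                      # samples per symbol
    k = bw / (n / fs)                             # chirp rate in Hz/s
    t = np.arange(n) / fs
    f_inst = -bw / 2 + symbol * bw / 2**sf + k * t
    f_inst = (f_inst + bw / 2) % bw - bw / 2      # frequency wrap-around
    return np.exp(1j * 2 * np.pi * np.cumsum(f_inst) / fs)

def qr_lora_symbol(sf, bw, fs, freq_sym, amp_shift, lo=0.5, hi=1.0):
    """Joint symbol: a cyclically right-shifted ramp replaces the chirp's
    constant envelope, and the shift position carries the extra amplitude
    bits (lo/hi envelope bounds are assumed values)."""
    chirp = lora_chirp(sf, bw, fs, freq_sym)
    envelope = np.roll(np.linspace(lo, hi, len(chirp)), amp_shift)
    return envelope * chirp

sig = qr_lora_symbol(sf=7, bw=125e3, fs=1e6, freq_sym=42, amp_shift=300)
print(sig.shape, abs(sig).min(), abs(sig).max())  # 1024 samples, 0.5..1.0
```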

41 pages, 8353 KiB  
Article
Optimizing LoRaWAN Gateway Placement in Urban Environments: A Hybrid PSO-DE Algorithm Validated via HTZ Simulations
by Kanar Alaa Al-Sammak, Sama Hussein Al-Gburi, Ion Marghescu, Ana-Maria Claudia Drăgulinescu, Cristina Marghescu, Alexandru Martian, Nayef A. M. Alduais and Nawar Alaa Hussein Al-Sammak
Technologies 2025, 13(6), 256; https://doi.org/10.3390/technologies13060256 - 17 Jun 2025
Viewed by 870
Abstract
With rapid advancements in the Internet of Things (IoT), Low-Power Wide-Area Networks (LPWANs) play a crucial role in expanding IoT's capabilities while using minimal energy. Among the various LPWAN technologies, LoRaWAN (Long-Range Wide-Area Network) is particularly notable for its capacity to enable long-range, low-rate communications with low power needs. This study investigates how to optimize the placement of LoRaWAN gateways by using a combination of Particle Swarm Optimization (PSO) and Differential Evolution (DE). The approach is validated through HTZ-driven simulations that evaluate network performance in urban settings. Centered on the area of the Politehnica University of Bucharest, this research examines how different gateway placements on various floors of a building affect network coverage and packet loss. The experiment employs Adeunis Field Test Devices (FTDs) and Dragino LG308-EC25 gateways, systematically testing two spreading factors, SF7 and SF12, to assess their effectiveness in terms of signal quality and reliability. An innovative optimization algorithm, GateOpt PSODE, is introduced, which combines PSO and DE to optimize gateway placements based on real-time network performance metrics such as the Received Signal Strength Indicator (RSSI), the Signal-to-Noise Ratio (SNR), and packet loss. The findings reveal that strategically positioning gateways, especially on higher floors, significantly improves communication reliability and network efficiency, providing a solid framework for deploying LoRaWAN networks in intricate urban environments.
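
The abstract does not give GateOpt PSODE's exact update rules, but a generic PSO-DE hybrid step conveys the idea: each generation combines a PSO velocity update with a DE/rand/1 mutation and keeps the better proposal. The cost function below is a stand-in.

```python
import numpy as np

def pso_de_step(pos, vel, pbest, gbest, fitness, w=0.7, c1=1.5, c2=1.5,
                F=0.8, CR=0.9, rng=np.random.default_rng(0)):
    """One generation of a generic PSO-DE hybrid. `fitness` maps a
    candidate gateway position to a cost to minimise, e.g. predicted
    packet loss from RSSI/SNR models."""
    n, d = pos.shape
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pso_pos = pos + vel                                   # PSO proposal
    idx = np.array([rng.choice(n, size=3, replace=False) for _ in range(n)])
    mutant = pos[idx[:, 0]] + F * (pos[idx[:, 1]] - pos[idx[:, 2]])
    de_pos = np.where(rng.random((n, d)) < CR, mutant, pso_pos)  # DE proposal
    cost_pso = np.array([fitness(p) for p in pso_pos])
    cost_de = np.array([fitness(p) for p in de_pos])
    new_pos = np.where((cost_de < cost_pso)[:, None], de_pos, pso_pos)
    return new_pos, vel

# Toy cost: squared distance from an assumed best rooftop location (2D).
fitness = lambda p: np.sum((p - np.array([3.0, 7.0]))**2)
pos = np.random.default_rng(1).random((8, 2)) * 10
vel = np.zeros_like(pos)
pbest, gbest = pos.copy(), pos[np.argmin([fitness(p) for p in pos])]
pos, vel = pso_de_step(pos, vel, pbest, gbest, fitness)
```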

26 pages, 2415 KiB  
Article
RL-SCAP SigFox: A Reinforcement Learning Based Scalable Communication Protocol for Low-Power Wide-Area IoT Networks
by Raghad Albalawi, Fatma Bouabdallah, Linda Mohaisen and Shireen Saifuddin
Technologies 2025, 13(6), 255; https://doi.org/10.3390/technologies13060255 - 17 Jun 2025
Viewed by 309
Abstract
The Internet of Things (IoT) aims to wirelessly connect billions of physical things to the IT infrastructure. Although there are several radio access technologies available, few of them meet the needs of Internet of Things applications, such as long range, low cost, and low energy consumption. The low data rate of low-power wide-area network (LPWAN) technologies, particularly SigFox, makes them appropriate for Internet of Things applications, since the longer the radio link's usable distance, the lower the data rate. Network reliability is the primary goal of SigFox technology, which aims to deliver data messages successfully through redundancy. This raises concerns about SigFox's scalability and leads to one of its flaws, namely a high collision rate. In this paper, the goal is to prevent collisions by switching from SigFox's Aloha-based medium access protocol to time division multiple access (TDMA), utilizing only orthogonal channels and eliminating redundancy. Consequently, during a designated time slot, each node transmits a single copy of the data message over a particular orthogonal channel. To achieve this, a multi-agent, off-policy reinforcement learning (RL) Q-learning technique is used on top of SigFox. In other words, the objective is to increase SigFox's scalability through Reinforcement Learning-based time slot allocation (RL-SCAP). The findings show that, especially in situations with high node densities or constrained communication slots, the proposed protocol outperforms the basic SCAP (Slot and Channel Allocation Protocol), achieving a Packet Delivery Ratio (PDR) higher by 60.58% on average, throughput greater by 60.90% on average, and a decrease in collisions of up to 79.37%.
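
A minimal sketch of the slot-and-channel learning idea, using single-state Q-learning with collision feedback; the parameters and reward values are assumptions, not the paper's formulation:

```python
import random

class SlotAgent:
    """Per-node Q-learning agent that learns a collision-free
    (time slot, orthogonal channel) pair."""
    def __init__(self, n_slots, n_channels, alpha=0.1, gamma=0.9, eps=0.2):
        self.q = {(s, c): 0.0
                  for s in range(n_slots) for c in range(n_channels)}
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def choose(self):
        if random.random() < self.eps:              # explore
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)          # exploit best pair

    def update(self, action, collided):
        reward = -1.0 if collided else +1.0         # collision feedback
        best_next = max(self.q.values())
        self.q[action] += self.alpha * (reward + self.gamma * best_next
                                        - self.q[action])

agent = SlotAgent(n_slots=6, n_channels=3)
action = agent.choose()              # (slot, channel) pair to transmit on
agent.update(action, collided=False)
```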

19 pages, 6786 KiB  
Article
Hybrid Radio-Frequency-Energy- and Solar-Energy-Harvesting-Integrated Circuit for Internet of Things and Low-Power Applications
by Guo-Ming Sung, Shih-Hao Chen, Venkatesh Choppa and Chih-Ping Yu
Electronics 2025, 14(11), 2192; https://doi.org/10.3390/electronics14112192 - 28 May 2025
Viewed by 481
Abstract
This paper proposes a hybrid energy-harvesting chip that utilizes both radio-frequency (RF) energy and solar energy for low-power applications and extended service life. The key contributions include a wide input power range, a compact chip area, and a high maximum power conversion efficiency (PCE). Solar energy is a clean and readily available source, and hybrid energy-harvesting systems that combine RF and solar energy have gained popularity for improving overall energy availability and efficiency. The proposed chip comprises a matching network, a rectifier, a charge pump, a DC combiner, an overvoltage protection circuit, and a low-dropout voltage regulator (LDO). The matching network ensures maximum power delivery from the antenna to the rectifier. The rectifier circuit utilizes a cross-coupled differential-drive rectifier to convert RF energy into a DC voltage, incorporating boosting functionality. In addition, a solar harvester provides an additional energy source to extend service time and stabilize the output, combined with the RF source through the DC combiner. The overvoltage protection circuit safeguards against high voltage passing from the DC combiner to the LDO. Finally, the LDO produces a stable output voltage. The entire circuit is simulated using the Taiwan Semiconductor Manufacturing Company 0.18 µm 1P6M complementary metal–oxide–semiconductor standard process developed by the Taiwan Semiconductor Research Institute. The simulation results indicate a rectifier conversion efficiency of approximately 41.6% for the proposed RF-energy-harvesting system. It can operate with input power levels ranging from −1 to 20 dBm, and the rectifier circuit's output voltage is within the range of 1.7–1.8 V. A 0.2 W monocrystalline silicon solar panel (70 × 30 mm²) was used to generate a supply voltage of 1 V. The overvoltage protection circuit limits the output voltage to 3.6 V. Finally, the LDO yields a stable output voltage of 3.3 V.

29 pages, 4136 KiB  
Article
IoT-NTN with VLEO and LEO Satellite Constellations and LPWAN: A Comparative Study of LoRa, NB-IoT, and Mioty
by Changmin Lee, Taekhyun Kim, Chanhee Jung and Zizung Yoon
Electronics 2025, 14(9), 1798; https://doi.org/10.3390/electronics14091798 - 28 Apr 2025
Viewed by 1021
Abstract
This study investigates the optimization of satellite constellations for Low-Power Wide-Area Network (LPWAN)-based Internet of Things (IoT) communications in Very Low Earth Orbit (VLEO) at 200 km and 300 km altitudes and Low Earth Orbit (LEO) at 600 km using a Genetic Algorithm (GA). Focusing on three LPWAN technologies—LoRa, Narrowband IoT (NB-IoT), and Mioty—we evaluate their performance in terms of revisit time, data transmission volume, and economic efficiency. Results indicate that a 300 km VLEO constellation with LoRa achieves the shortest average revisit time and requires the fewest satellites, offering notable cost benefits. NB-IoT provides the highest data transmission volume. Mioty demonstrates strong scalability but necessitates a larger satellite count. These findings highlight the potential of VLEO satellites, particularly at 300 km, combined with LPWAN solutions for efficient and scalable IoT Non-Terrestrial Network (IoT-NTN) applications. Future work will explore multi-altitude simulations and hybrid LPWAN integration for further optimization.
(This article belongs to the Special Issue Future Generation Non-Terrestrial Networks)
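
As a toy illustration of GA-driven constellation sizing (the fitness below is a crude stand-in for the paper's revisit-time and cost metrics, and all constants are hypothetical):

```python
import random

rng = random.Random(0)

def fitness(genome):
    """Toy cost for a (planes, sats_per_plane, altitude_km) genome:
    more satellites and lower altitude shorten revisit time but raise
    cost. A stand-in, not the paper's model."""
    planes, per_plane, alt_km = genome
    total = planes * per_plane
    revisit_proxy = alt_km / total
    return revisit_proxy + 0.05 * total          # minimise both terms

def evolve(pop, generations=50):
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:len(pop) // 2]           # elitist selection
        while len(survivors) < len(pop):
            a, b = rng.sample(survivors[:len(pop) // 2], 2)
            child = [rng.choice(g) for g in zip(a, b)]  # uniform crossover
            if rng.random() < 0.2:                      # mutate counts
                i = rng.randrange(2)
                child[i] = max(1, child[i] + rng.choice([-1, 1]))
            survivors.append(child)
        pop = survivors
    return min(pop, key=fitness)

pop = [[rng.randint(1, 12), rng.randint(1, 20), rng.choice([200, 300, 600])]
       for _ in range(20)]
print(evolve(pop))   # best (planes, sats_per_plane, altitude_km) found
```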

19 pages, 5673 KiB  
Article
LoRa Communications Spectrum Sensing Based on Artificial Intelligence: IoT Sensing
by Partemie-Marian Mutescu, Valentin Popa and Alexandru Lavric
Sensors 2025, 25(9), 2748; https://doi.org/10.3390/s25092748 - 26 Apr 2025
Viewed by 906
Abstract
The backbone of the Internet of Things ecosystem relies heavily on wireless sensor networks and low-power wide-area network technologies, such as LoRa modulation, to provide the long-range, energy-efficient communications essential for applications as diverse as smart homes, healthcare, agriculture, smart grids, and transportation. With the number of IoT devices expected to reach approximately 41 billion by 2034, managing radio spectrum resources becomes a critical issue. As these devices are deployed at an increasing rate, the limited spectral resources will result in increased interference, packet collisions, and degraded quality of service. Current methods for increasing network capacity have limitations and require more advanced solutions. This paper proposes a novel hybrid spectrum sensing framework that combines traditional signal processing with artificial intelligence techniques, specifically designed for LoRa spreading factor detection and communication channel analytics. The proposed framework processes wideband signals directly from IQ samples to identify and classify multiple concurrent LoRa transmissions. The results show that the framework is highly effective, achieving a detection accuracy of 96.2%, a precision of 99.16%, and a recall of 95.4%. The framework's flexible architecture separates the AI processing pipeline from the channel analytics pipeline, ensuring adaptability to various communication protocols beyond LoRa.
(This article belongs to the Special Issue LoRa Communication Technology for IoT Applications)
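
For intuition, a classical dechirp-and-FFT front-end for spreading factor detection is sketched below; the paper's hybrid framework feeds IQ-derived features to an AI classifier rather than using this exact rule, so treat it as illustrative only.

```python
import numpy as np

def sf_scores(iq, bw, fs, candidate_sfs=(7, 8, 9, 10, 11, 12)):
    """Score candidate spreading factors: a matched down-chirp collapses
    a LoRa symbol into a narrow FFT peak, so the peak-to-floor ratio is
    highest at the true SF."""
    scores = {}
    for sf in candidate_sfs:
        n = int(2**sf * fs / bw)          # samples per symbol at this SF
        if len(iq) < n:
            continue
        t = np.arange(n) / fs
        k = bw / (n / fs)                 # chirp rate in Hz/s
        down = np.exp(-1j * 2 * np.pi * (-bw / 2 * t + 0.5 * k * t**2))
        spec = np.abs(np.fft.fft(iq[:n] * down))
        scores[sf] = spec.max() / (np.median(spec) + 1e-12)
    return scores

# Usage sketch, with iq = SDR samples at fs = 1 MS/s and bw = 125 kHz:
#   scores = sf_scores(iq, 125e3, 1e6)
#   print("detected SF:", max(scores, key=scores.get))
```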

24 pages, 7088 KiB  
Article
Ultra-Lightweight and Highly Efficient Pruned Binarised Neural Networks for Intrusion Detection in In-Vehicle Networks
by Auangkun Rangsikunpum, Sam Amiri and Luciano Ost
Electronics 2025, 14(9), 1710; https://doi.org/10.3390/electronics14091710 - 23 Apr 2025
Cited by 1 | Viewed by 718
Abstract
With the rapid evolution toward autonomous vehicles, securing in-vehicle communications is more critical than ever. The widely used Controller Area Network (CAN) protocol lacks built-in security, leaving vehicles vulnerable to cyberattacks. Although machine learning-based Intrusion Detection Systems (IDSs) can achieve high detection accuracy, their heavy computational and power demands often limit real-world deployment. In this paper, we present an optimised IDS based on a Binarised Neural Network (BNN) that employs network pruning to eliminate redundant parameters, achieving up to a 91.07% reduction with only a 0.1% accuracy loss. The proposed approach incorporates a two-stage Coarse-to-Fine (C2F) framework, efficiently filtering normal traffic in the initial stage to minimise unnecessary processing. To assess its practical feasibility, we implement and compare the pruned IDS across CPU, GPU, and FPGA platforms. The experimental results indicate that, with the same model structure, the FPGA-based solution outperforms GPU and CPU implementations by up to 3.7× and 2.4× in speed, while achieving up to 7.4× and 3.8× greater energy efficiency, respectively. Among cutting-edge BNN-based IDSs, our ultra-lightweight FPGA-based C2F approach achieves the fastest average inference speed, showing a 3.3× to 12× improvement, while also outperforming them in accuracy and average F1 score, highlighting its potential for low-power, high-performance vehicle security.
(This article belongs to the Special Issue Recent Advances in Intrusion Detection Systems Using Machine Learning)
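
The pruning idea can be pictured with a simple magnitude-based mask over synthetic weights; this is a generic sketch, not the paper's BNN pipeline, and the binarisation of surviving weights to ±1 is omitted.

```python
import numpy as np

def prune_by_magnitude(weights, target_reduction=0.9107):
    """Zero out the smallest-magnitude weights so that roughly
    `target_reduction` of the parameters are removed (the paper reports
    up to 91.07% with ~0.1% accuracy loss)."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * target_reduction)
    threshold = np.partition(flat, k)[k]      # k-th smallest magnitude
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))             # synthetic layer weights
pruned, mask = prune_by_magnitude(w)
print(f"kept {mask.mean():.2%} of parameters")
```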

14 pages, 432 KiB  
Article
Dual-Mode Data Collection for Periodic and Urgent Data Transmission in Energy Harvesting Wireless Sensor Networks
by Ikjune Yoon
Sensors 2025, 25(8), 2559; https://doi.org/10.3390/s25082559 - 18 Apr 2025
Viewed by 496
Abstract
Wireless Sensor Networks (WSNs) are widely used for environmental data collection; however, their reliance on battery power significantly limits network longevity. While energy harvesting technologies provide a sustainable power solution, conventional approaches often fail to efficiently utilize surplus energy, leading to performance constraints. This paper proposes an energy-efficient dual-mode data collection scheme that integrates Long Range Wide Area Network (LoRaWAN) and Bluetooth Low Energy (BLE) in an energy-harvesting WSN environment. The proposed method dynamically adjusts sensing intervals based on harvested energy predictions and reserves energy for urgent data transmissions. Urgent messages are transmitted via BLE using multi-hop routing with redundant paths to ensure reliability, while periodic environmental data is transmitted over LoRaWAN in a single hop to optimize energy efficiency. Simulation results demonstrate that the proposed scheme significantly enhances data collection efficiency and improves urgent message delivery reliability compared to existing approaches. Future work will focus on optimizing energy consumption for redundant urgent transmissions and integrating error correction mechanisms to further enhance transmission reliability.
(This article belongs to the Special Issue Energy Harvesting Technologies for Wireless Sensors)
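
A minimal sketch of the energy-aware scheduling idea: pick the periodic sensing interval from predicted harvested energy after reserving a budget for urgent BLE transmissions. All constants and names are illustrative assumptions.

```python
def plan_cycle(predicted_harvest_mj, urgent_reserve_mj, cost_per_sense_mj,
               base_interval_s=600, min_interval_s=60):
    """Choose the periodic sensing interval so that expected consumption
    stays within the predicted hourly energy harvest, after setting
    aside a fixed reserve for urgent BLE traffic."""
    budget = max(predicted_harvest_mj - urgent_reserve_mj, 0.0)
    if budget <= 0:
        return base_interval_s            # no surplus: stay conservative
    senses_affordable = budget / cost_per_sense_mj
    # more surplus energy -> shorter interval, floored at min_interval_s
    return max(min_interval_s, 3600 / max(senses_affordable, 1))

print(plan_cycle(predicted_harvest_mj=120, urgent_reserve_mj=30,
                 cost_per_sense_mj=5))    # surplus -> denser sensing (200 s)
```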

19 pages, 1062 KiB  
Article
Reinforcement Learning-Based Time-Slotted Protocol: A Reinforcement Learning Approach for Optimizing Long-Range Network Scalability
by Nuha Alhattab, Fatma Bouabdallah, Enas F. Khairullah and Aishah Aseeri
Sensors 2025, 25(8), 2420; https://doi.org/10.3390/s25082420 - 11 Apr 2025
Viewed by 534
Abstract
The Internet of Things (IoT) is revolutionizing communication by connecting everyday objects to the Internet, enabling data exchange and automation. Low-Power Wide-Area Networks (LPWANs) provide a wireless communication solution optimized for long-range, low-power IoT devices. LoRa is a prominent LPWAN technology; its ability to provide long-range, low-power wireless connectivity makes it ideal for IoT applications that cover large areas or where battery life is critical. Despite its advantages, LoRa uses a random access mode, which makes it susceptible to increased collisions as the network expands. In addition, the scalability of LoRa is affected by the distribution of its transmission parameters. This paper introduces a Reinforcement Learning-based Time-Slotted (RL-TS) LoRa protocol that incorporates a mechanism for distributing transmission parameters. It leverages a reinforcement learning algorithm, enabling nodes to autonomously select their time slots, thereby optimizing the allocation of transmission parameters and TDMA slots. To evaluate the effectiveness of our approach, we conduct simulations to assess the convergence speed of the reinforcement learning algorithm, as well as its impact on throughput and packet delivery ratio (PDR). The results demonstrate significant improvements, with the PDR increasing from 0.45–0.85 in LoRa to 0.88–0.97 in RL-TS, and throughput rising from 80–150 packets to 156–172 packets. Additionally, RL-TS achieves an 82% reduction in collisions compared to LoRa, highlighting its effectiveness in enhancing network performance. Moreover, a detailed comparison with conventional LoRa and other existing protocols is provided, highlighting the advantages of the proposed method.
(This article belongs to the Section Internet of Things)
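
A toy multi-node illustration of the autonomous slot-selection idea (bandit-style Q-values with collision feedback; the parameters are assumptions, not the paper's):

```python
import random

def simulate_rl_ts(n_nodes=5, n_slots=5, rounds=2000, alpha=0.1, eps=0.1,
                   rng=random.Random(1)):
    """Each node keeps a Q-value per time slot and learns from collision
    feedback; with enough slots the nodes typically converge to distinct
    slots, eliminating Aloha-style collisions."""
    q = [[0.0] * n_slots for _ in range(n_nodes)]
    for _ in range(rounds):
        picks = []
        for node in range(n_nodes):
            if rng.random() < eps:                        # explore
                picks.append(rng.randrange(n_slots))
            else:                                         # exploit
                picks.append(max(range(n_slots), key=q[node].__getitem__))
        for node, slot in enumerate(picks):
            reward = 1.0 if picks.count(slot) == 1 else -1.0
            q[node][slot] += alpha * (reward - q[node][slot])
    return [max(range(n_slots), key=qn.__getitem__) for qn in q]

print(simulate_rl_ts())   # ideally a permutation of slots 0..4
```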

16 pages, 630 KiB  
Article
A Study on Performance Improvement of Maritime Wireless Communication Using Dynamic Power Control with Tethered Balloons
by Tao Fang, Jun-han Wang, Jaesang Cha, Incheol Jeong and Chang-Jun Ahn
Electronics 2025, 14(7), 1277; https://doi.org/10.3390/electronics14071277 - 24 Mar 2025
Cited by 2 | Viewed by 457
Abstract
In recent years, the demand for maritime wireless communication has been increasing, particularly in areas such as ship operations management, marine resource utilization, and safety assurance. However, due to the difficulty of deploying base stations (BSs), maritime communication still faces challenges in terms of limited coverage and unreliable communication quality. As the number of users on ships and offshore platforms increases, along with the growing demand for mobile communication at sea, conventional terrestrial base stations struggle to provide stable connectivity. Therefore, existing maritime communication primarily relies on satellite communication and long-range Wi-Fi. However, these solutions still have limitations in terms of cost, stability, and communication efficiency. Satellite communication solutions, such as Starlink and Iridium, provide global coverage and high reliability, making them essential for deep-sea and offshore communication. However, these systems have high operational costs and limited bandwidth per user, making them impractical for cost-sensitive nearshore communication. Additionally, geostationary satellites suffer from high latency, while low Earth orbit (LEO) satellite networks require specialized and expensive terminals, increasing hardware costs and limiting compatibility with existing maritime communication systems. On the other hand, 5G-based maritime communication offers high data rates and low latency, but its infrastructure deployment is demanding, requiring offshore base stations, relay networks, and high-frequency mmWave (millimeter-wave) technology. The high costs of deployment and maintenance restrict the feasibility of 5G networks for large-scale nearshore environments. Furthermore, in dynamic maritime environments, maintaining stable backhaul connections presents a significant challenge. To address these issues, this paper proposes a low-cost nearshore wireless communication solution utilizing tethered balloons as coastal base stations. Unlike satellite communication, which relies on expensive global infrastructure, or 5G networks, which require extensive offshore base station deployment, the proposed method provides a more economical and flexible nearshore communication alternative. The tethered balloon is physically connected to the coast, ensuring a stable power supply and data backhaul while providing wide-area coverage to support communication for ships and offshore platforms. Compared to short-range communication solutions, this method reduces operational costs while significantly improving communication efficiency, making it suitable for scenarios where global satellite coverage is unnecessary and 5G infrastructure is impractical. Additionally, conventional uniform power allocation or channel-gain-based amplification methods often fail to meet the communication demands of dynamic maritime environments. This paper therefore introduces a nonlinear dynamic power allocation method based on channel gain information to maximize downlink communication efficiency. Simulation results demonstrate that, compared to conventional methods, the proposed approach significantly improves downlink communication performance, verifying its feasibility for achieving efficient and stable communication in nearshore environments.
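
The abstract does not specify the nonlinear allocation rule, but the flavour can be sketched as a channel-gain-weighted split of the downlink power budget; the exponent and values below are assumptions:

```python
import numpy as np

def allocate_power(gains, total_power, gamma=0.5):
    """Illustrative nonlinear allocation: weight each user's share by
    channel gain raised to -gamma, so weaker channels receive more
    power, but sub-linearly; then normalise to the downlink budget."""
    weights = np.asarray(gains, dtype=float) ** (-gamma)
    return total_power * weights / weights.sum()

gains = [1.0, 0.25, 0.04]          # near ship, mid-range, far platform
print(allocate_power(gains, total_power=10.0))  # [1.25, 2.5, 6.25] W
```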

24 pages, 586 KiB  
Article
Performance Evaluation of a Mesh-Topology LoRa Network
by Thomas Gerhardus Durand and Marthinus Johannes Booysen
Sensors 2025, 25(5), 1602; https://doi.org/10.3390/s25051602 - 5 Mar 2025
Viewed by 2205
Abstract
Research into, and the usage of, Low-Power Wide-Area Networks (LPWANs) has increased significantly to support the ever-expanding requirements set by IoT applications. Specifically, the usage of Long-Range Wide-Area Networks (LoRaWANs) has increased due to the robustness of the LPWAN's physical-layer Long-Range (LoRa) modulation scheme, which enables scalable, low-power, long-range communication for IoT devices. The LoRaWAN Medium Access Control (MAC) protocol currently supports only single-hop communication. This limits the coverage of a single gateway and increases the power consumption of devices located at the edge of a gateway's coverage range. There is currently no standardised and commercialised multi-hop LoRa-based network, and the field is the subject of ongoing research. In this work, we propose a complementary network to LoRaWAN that integrates mesh networking. An ns-3 simulation model has been developed, and the proposed LoRaMesh network is simulated for a varying number of scenarios. This research focuses on the design decisions needed to build a LoRa-based mesh network that maintains the low-power consumption advantages of LoRaWAN while ensuring that data packets are routed successfully to the gateway. The results highlight a significant increase in the packet delivery ratio of nodes located far from a centralised gateway in a dense network: for nodes located further than 5.8 km from a gateway, the average packet delivery ratio increased from 40.2% to 73.78%. The findings in this article validate the concept of a mesh-type LPWAN network based on the LoRa physical layer and highlight the potential for future optimisation.
(This article belongs to the Special Issue LoRa Communication Technology for IoT Applications)

30 pages, 3278 KiB  
Article
Centralized MPPT Control Architecture for Photovoltaic Systems Using LoRa Technology
by Pablo Fernández-Bustamante, Eneko Artetxe, Isidro Calvo and Oscar Barambones
Appl. Sci. 2025, 15(5), 2456; https://doi.org/10.3390/app15052456 - 25 Feb 2025
Viewed by 646
Abstract
Maximum power point tracking (MPPT) algorithms are necessary to optimize power generation in solar photovoltaic (PV) power plants. Typically, MPPT control systems depend on wired connections among sensors, processing nodes, and DC–DC power converters. However, Low-Power Wide-Area Networks (LPWANs) allow the execution of MPPT algorithms to be centralized wirelessly, achieving more flexibility and reducing costs. In particular, LoRa/LoRaWAN is a low-cost, low-consumption technology with excellent immunity to interference, able to operate over tens of kilometers. This article presents a centralized MPPT control architecture for PV systems based on LoRa/LoRaWAN technology, which provides long-range, low-cost wireless connectivity with PV plants located far away. The presented approach allows different MPPT algorithms for distinct PV subsystems to be executed in parallel on a central computing node. A proof-of-concept prototype was implemented to experimentally validate the architecture. It involved a rooftop PV system and a DC–DC converter connected, by means of a point-to-point LoRa network, to a computer that executes the MPPT algorithms. For validation purposes, two MPPT control techniques were implemented: Perturb and Observe (P&O) and Sliding Mode Control (SMC). However, the presented approach allows for the implementation of more sophisticated MPPT algorithms to optimize energy production. The obtained results prove the validity of the concept and suggest that the proposed low-cost approach can be extrapolated to LoRaWAN networks.
(This article belongs to the Special Issue Energy and Power Systems: Control and Management)
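
Of the two validated techniques, P&O is the simplest to sketch; below is a simplified hill-climbing P&O variant with hypothetical telemetry values, not the authors' implementation:

```python
def perturb_and_observe(power, state, step=0.01):
    """Simplified hill-climbing P&O: keep perturbing the converter duty
    cycle in the same direction while measured PV power rises, and
    reverse when it falls. `state` holds (previous_power, direction)."""
    p_prev, direction = state
    if power < p_prev:
        direction = -direction            # overshot the MPP: back up
    return direction * step, (power, direction)

# Central-node loop: (voltage, current) telemetry arrives over LoRa and
# the duty-cycle command is sent back to the remote DC-DC converter.
state = (0.0, +1)
for v, i in [(17.0, 3.0), (17.5, 3.1), (18.0, 3.0)]:
    delta, state = perturb_and_observe(v * i, state)
    print(f"adjust duty cycle by {delta:+.2f}")
```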
