Search Results (98)

Search Parameters:
Keywords = cell-edge user

23 pages, 3533 KB  
Article
Research on an Automatic Seeding Performance Detection and Intelligent Reseeding Device for Leafy Vegetable Plug Seedlings
by Lei Zhong, Junming Huang, Yijuan Qin, Jie Wang, Shengye He, Yuming Luo, Xu Ma, Xueshen Chen and Suiyan Tan
Agronomy 2026, 16(3), 387; https://doi.org/10.3390/agronomy16030387 - 5 Feb 2026
Viewed by 447
Abstract
To address the issues of a low single-seed qualification index and a high missed-seeding index in the sowing of leafy vegetable plug seedlings, this study proposes a lightweight seeding performance detection model named VS-YOLO, based on YOLO11n. The model is deployed on an edge device, the NVIDIA Jetson Xavier NX. A concise, intuitive graphical user interface (GUI) was developed, and an automated detection system for vegetable seeding performance was constructed. Based on the empty cells identified by the system, a real-time data transmission mechanism between the Jetson device and a PLC-based control unit is established, enabling the intelligent reseeding device to perform precise reseeding at the designated cell locations and achieving row-wise, cell-specific intelligent planting. VS-YOLO incorporates several improvements: a Context Anchor Attention (CAA) module forming the C2PSA_CAA module, the Wise Intersection over Union version 3 (WIoU v3) loss function, and an extra-small object detection head. These enhancements significantly improve the classification and recognition of small vegetable seeds while notably reducing the number of model parameters. Experimental results show that VS-YOLO achieves a mAP@0.5 of 96.5% and an F1 score of 93.45% in detecting the seeding performance of three types of vegetable seeds, outperforming YOLO11n's 91.5% and 85.19% by 5.0 and 8.26 percentage points, respectively. With only 1.61 M parameters, VS-YOLO is 37.6% smaller than YOLO11n (2.58 M). Operating at a productivity rate of 120 trays per hour, the system achieved accuracies of 99.03%, 89.83%, and 92.26% for single-seed, multiple-seeding, and missed-seeding prediction, respectively. The single-seed qualification index and missed-seeding index were 93.43% and 4.68%.
After reseeding, these indices improved to 97.61% and 0.32%, an increase of 4.18 percentage points in the single-seed qualification index and a decrease of 4.36 percentage points in the missed-seeding index. This significant enhancement offers new ideas and technical approaches for seeding performance detection and reseeding systems in vegetable plug seedling production. Full article
(This article belongs to the Section Precision and Digital Agriculture)
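The two tray-quality metrics in this abstract can be made concrete. A minimal sketch, assuming the detector yields a per-cell seed count for each tray; the function and the example tray are illustrative, not the authors' code:

```python
def seeding_indices(seed_counts):
    """Compute plug-tray seeding quality indices from per-cell seed counts.

    Illustrative sketch (not the paper's implementation): the single-seed
    qualification index is the fraction of cells holding exactly one seed,
    and the missed-seeding index is the fraction of empty cells.
    """
    total = len(seed_counts)
    single = sum(1 for c in seed_counts if c == 1) / total
    missed = sum(1 for c in seed_counts if c == 0) / total
    return single, missed

# A hypothetical 10-cell tray row: one empty cell, one double-seeded cell.
single, missed = seeding_indices([1, 1, 0, 1, 2, 1, 1, 1, 1, 1])
```

Reseeding targets exactly the cells counted by the second index, which is why the missed-seeding index drops almost to zero after a reseeding pass.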

21 pages, 1601 KB  
Article
NOMA-Enabled Cooperative Two-Way Communications for Both Primary and Secondary Systems
by Dong-Hua Chen and Kaiwei Ruan
Electronics 2026, 15(2), 389; https://doi.org/10.3390/electronics15020389 - 15 Jan 2026
Viewed by 257
Abstract
With the aid of non-orthogonal multiple access (NOMA), this paper investigates simultaneous two-way communications for cooperative cognitive radio networks, in which a group of secondary access points (APs) scattered over a primary cell not only serve their own users but also cooperatively assist the transmissions of primary cell-edge users. As a reward for this cooperation, the APs are granted full access to the primary frequency spectrum. To coordinate the two-way transmissions of the primary and secondary networks, we propose a spectrum-efficient cooperative scheme involving only two transmission phases; in particular, the two variable-length phases allow the system to adapt to possible downlink (DL) and uplink (UL) traffic asymmetry. For the system design, we formulate a power minimization problem subject to the bidirectional transmission rate constraints of both networks. The problem is shown to be nonlinear and nonconvex, and to solve it efficiently we propose an iterative algorithm based on the successive convex approximation technique. Simulation results show that the proposed algorithm converges quickly and outperforms hybrid orthogonal multiple access and NOMA schemes. Full article

22 pages, 1377 KB  
Article
Energy Management Revolution in Unmanned Aerial Vehicles Using Deep Learning Approach
by Sunisa Kunarak
Appl. Sci. 2026, 16(1), 503; https://doi.org/10.3390/app16010503 - 4 Jan 2026
Viewed by 682
Abstract
Unmanned aerial vehicles (UAVs) are playing increasingly important roles in military operations, disaster relief, agriculture, and communications. However, their performance is limited by energy management problems, especially in hybrid systems such as those combining fuel cells with lithium batteries. This work investigates the potential of deep learning to significantly improve UAV power management through adaptive forecasting and real-time optimization. We develop smart algorithms that automatically balance energy efficiency and communication performance for heterogeneous wireless networks. The simulation results demonstrate energy consumption savings, optimized flight altitudes, and spectral efficiency improvements compared to Fixed Weight and Fuzzy Logic Weight schemes. At saturated user densities, the model enables up to 42% lower energy consumption and 54% higher throughput. Moreover, predictive models based on recurrent and transformer-based deep networks allow UAVs to predict energy requirements across a variety of mission and environmental contexts, shifting from reactive approaches to proactive control. Adopting these methods in UAV-aided beyond-5G (B5G) and future 6G network scenarios can prolong endurance and enhance mission connectivity and reliability in challenging environments. This work lays the foundation for an all-aspect framework to control and manage UAV energy in the 5G era, taking advantage of not only deep learning but also edge computing and hybrid power systems. Deep learning is confirmed to be a keystone of sustainable, autonomous, and energy-aware UAV operation for next-generation networks. Full article

17 pages, 369 KB  
Article
AI-Assisted Dynamic Port and Waveform Switching for Enhancing UL Coverage in 5G NR
by Alejandro Villena-Rodríguez, Francisco J. Martín-Vega, Gerardo Gómez, Mari Carmen Aguayo-Torres, José Outes-Carnero, F. Yak Ng-Molina and Juan Ramiro-Moreno
Sensors 2025, 25(18), 5875; https://doi.org/10.3390/s25185875 - 19 Sep 2025
Viewed by 965
Abstract
The uplink of 5G networks allows the transmit waveform to be selected between cyclic prefix orthogonal frequency division multiplexing (CP-OFDM) and discrete Fourier transform spread OFDM (DFT-S-OFDM) to cope with the diverse operating conditions of the power amplifiers (PAs) in different user equipment (UEs). CP-OFDM yields higher throughput when the PAs operate in their linear region, which is mostly the case for cell-interior users, whereas DFT-S-OFDM is more appealing when the PAs exhibit non-linear behavior, which is associated with cell-edge users. Accordingly, existing waveform selection solutions rely on predefined signal-to-noise ratio (SNR) thresholds computed offline. However, the varying user and channel dynamics, as well as their interactions with power control, call for an adaptable threshold selection mechanism. In this paper, we propose an intelligent waveform-switching mechanism based on deep reinforcement learning (DRL) that learns the optimal switching thresholds for the current operating conditions. In this proposal, a learning agent maximizes a function built from throughput percentiles available in real networks. These percentiles are weighted to improve cell-edge users' service without dramatically reducing the cell average. Aggregated measurements of SNR and timing advance (TA), available in real networks, are used in the procedure. In addition, the solution accounts for the switching cost, which reflects the communication interruption that follows every switch in practical implementations and has not been considered in existing solutions. Results show that our proposed scheme achieves remarkable throughput gains for cell-edge users without degrading the average throughput. Full article
(This article belongs to the Special Issue Future Wireless Communication Networks: 3rd Edition)
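The threshold-based switching and the percentile-weighted objective described in this abstract can be sketched as follows; the specific weights, percentiles, and switching cost below are illustrative assumptions, not values from the paper:

```python
import numpy as np


def choose_waveform(snr_db, threshold_db):
    """Pick DFT-S-OFDM below the (learned) SNR threshold, where PAs tend to
    be non-linear (cell edge), and CP-OFDM above it (cell interior)."""
    return "DFT-S-OFDM" if snr_db < threshold_db else "CP-OFDM"


def switching_reward(throughputs, switched, edge_weight=0.7, switch_cost=0.2):
    """Hedged sketch of a reward of the kind described: a weighted blend of
    cell-edge (5th-percentile) and median throughput, minus a penalty each
    time the waveform is switched. Weights and cost are assumptions."""
    edge = np.percentile(throughputs, 5)
    median = np.percentile(throughputs, 50)
    reward = edge_weight * edge + (1.0 - edge_weight) * median
    return reward - (switch_cost if switched else 0.0)
```

A DRL agent of the sort described would adjust `threshold_db` over time to maximize this kind of reward, rather than using a fixed offline value.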

16 pages, 2576 KB  
Article
Enhancement in Three-Dimensional Depth with Bionic Image Processing
by Yuhe Chen, Chao Ping Chen, Baoen Han and Yunfan Yang
Computers 2025, 14(8), 340; https://doi.org/10.3390/computers14080340 - 20 Aug 2025
Cited by 1 | Viewed by 936
Abstract
This study proposes an image processing framework based on bionic principles to optimize 3D visual perception in virtual reality (VR) systems. By simulating the physiological mechanisms of the human visual system, the framework significantly enhances depth perception and visual fidelity in VR content. The research focuses on three core algorithms: a Gabor texture feature extraction algorithm based on the directional selectivity of neurons in the V1 region of the visual cortex, which enhances edge detection through a fourth-order Gaussian kernel; an improved Retinex model based on the adaptive illumination mechanism of the retina, which achieves brightness balance under complex illumination through horizontal–vertical dual-channel decomposition; and an RGB adaptive adjustment algorithm based on the color response characteristics of the three cone cell types, which integrates color temperature compensation with depth cue optimization to enhance color naturalness and stereoscopic depth. A modular processing system was built on the Unity platform, integrating these algorithms into a collaborative optimization pipeline while keeping per-frame processing time within VR real-time constraints. Experiments used RMSE, AbsRel, and SSIM metrics, combined with subjective evaluation, to verify the effectiveness of the algorithms. The results show that, compared with traditional methods (SSAO, SSR, SH), our approach demonstrates significant advantages in simple scenes and marginal superiority in composite metrics for complex scenes. Collaborative processing by the three algorithms significantly reduces depth-map noise and enhances the user's subjective experience. The results provide a solution combining biological plausibility and engineering practicality for visual optimization in fields such as the implantable metaverse, VR healthcare, and education. Full article
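The orientation-selective Gabor stage this abstract describes can be illustrated with a generic Gabor kernel; this is the standard textbook formulation, not the authors' fourth-order Gaussian variant, and all parameter values are assumptions:

```python
import numpy as np


def gabor_kernel(size=15, theta=0.0, wavelength=6.0, sigma=3.0, gamma=0.5):
    """Orientation-selective Gabor kernel, loosely mimicking V1 simple cells.

    A generic sketch of the idea in the abstract: a Gaussian envelope
    multiplied by a cosine carrier, rotated to orientation `theta`.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates to the preferred orientation theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier


# Convolving an image with a bank of kernels at several orientations yields
# orientation-specific edge responses, the basis of the texture features.
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
```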

18 pages, 821 KB  
Article
Joint Iterative Decoding Design of Cooperative Downlink SCMA Systems
by Hao Cheng, Min Zhang and Ruoyu Su
Entropy 2025, 27(7), 762; https://doi.org/10.3390/e27070762 - 18 Jul 2025
Viewed by 742
Abstract
Sparse code multiple access (SCMA) is a competitive multiple-access candidate for future communication networks owing to its superior spectral efficiency and support for massive connectivity. However, cell-edge users may suffer severe performance degradation due to signal attenuation. Therefore, a cooperative downlink SCMA system is proposed to improve transmission reliability. To the best of our knowledge, multiuser detection remains an open issue for such cooperative downlink SCMA systems. To this end, we propose a joint iterative decoding design for the cooperative downlink SCMA system using the joint factor graph formed from the direct and relay transmissions. A closed-form bit-error rate (BER) expression for the cooperative downlink SCMA system is also derived. Simulation results verify that the proposed cooperative downlink SCMA system outperforms its non-cooperative counterpart. Full article
(This article belongs to the Special Issue Wireless Communications: Signal Processing Perspectives, 2nd Edition)

17 pages, 421 KB  
Article
CNN-Based End-to-End CPU-AP-UE Power Allocation for Spectral Efficiency Enhancement in Cell-Free Massive MIMO Networks
by Yoon-Ju Choi, Ji-Hee Yu, Seung-Hwan Seo, Seong-Gyun Choi, Hye-Yoon Jeong, Ja-Eun Kim, Myung-Sun Baek, Young-Hwan You and Hyoung-Kyu Song
Mathematics 2025, 13(9), 1442; https://doi.org/10.3390/math13091442 - 28 Apr 2025
Viewed by 1156
Abstract
Cell-free massive multiple-input multiple-output (MIMO) networks eliminate cell boundaries and enhance uniform quality of service by enabling cooperative transmission among access points (APs). In conventional cellular networks, user equipment located at the cell edge experiences severe interference and unbalanced resource allocation. However, in cell-free massive MIMO networks, multiple access points cooperatively serve user equipment (UEs), effectively mitigating these issues. Beamforming and cooperative transmission among APs are essential in massive MIMO environments, making efficient power allocation a critical factor in determining overall network performance. In particular, considering power allocation from the central processing unit (CPU) to the APs enables optimal power utilization across the entire network. Traditional power allocation methods such as equal power allocation and max–min power allocation fail to fully exploit the cooperative characteristics of APs, leading to suboptimal network performance. To address this limitation, in this study we propose a convolutional neural network (CNN)-based power allocation model that optimizes both CPU-to-AP power allocation and AP-to-UE power distribution. The proposed model learns the optimal power allocation strategy by utilizing the channel state information, AP-UE distance, interference levels, and signal-to-interference-plus-noise ratio as input features. Simulation results demonstrate that the proposed CNN-based power allocation method significantly improves spectral efficiency compared to conventional power allocation techniques while also enhancing energy efficiency. This confirms that deep learning-based power allocation can effectively enhance network performance in cell-free massive MIMO environments. Full article
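The key constraint in a model like the one described is that the learned outputs must form a feasible two-level power split (CPU to APs, then each AP to its UEs). A minimal sketch of that constraint handling, with softmax normalization as an assumed design choice rather than the paper's architecture:

```python
import numpy as np


def allocate_power(scores_ap, scores_ue, p_total):
    """Map raw network outputs (logits) to feasible power allocations.

    Illustrative of the constraint handling such a model needs: a softmax
    over AP logits splits the CPU budget p_total among APs, and a softmax
    per AP splits each AP's share among its UEs, so every allocation is
    non-negative and the total exactly equals p_total.
    """
    def softmax(z):
        e = np.exp(z - np.max(z))  # subtract max for numerical stability
        return e / e.sum()

    p_ap = p_total * softmax(scores_ap)                 # CPU -> AP split
    p_ue = p_ap[:, None] * np.apply_along_axis(softmax, 1, scores_ue)  # AP -> UE
    return p_ap, p_ue
```

In a CNN-based allocator of this kind, `scores_ap` and `scores_ue` would be the final-layer outputs computed from channel state, distance, and interference features.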

31 pages, 9117 KB  
Article
Intelligent Omni-Surface-Assisted Cooperative Hybrid Non-Orthogonal Multiple Access: Enhancing Spectral Efficiency Under Imperfect Successive Interference Cancellation and Hardware Distortions
by Helen Sheeba John Kennedy and Vinoth Babu Kumaravelu
Sensors 2025, 25(7), 2283; https://doi.org/10.3390/s25072283 - 3 Apr 2025
Cited by 5 | Viewed by 1037
Abstract
Non-orthogonal multiple access (NOMA) has emerged as a key enabler of massive connectivity in next-generation wireless networks. However, conventional NOMA studies predominantly focus on two-user scenarios, limiting their scalability in practical multi-user environments. A critical challenge in these systems is error propagation in successive interference cancellation (SIC), which is further exacerbated by hardware distortions (HWDs). Hybrid NOMA (HNOMA) mitigates SIC errors and reduces system complexity, yet cell-edge users (CEUs) continue to experience degraded sum spectral efficiency (SSE) and throughput. Cooperative NOMA (C-NOMA) enhances CEU performance through retransmissions but incurs higher energy consumption. To address these limitations, this study integrates intelligent omni-surfaces (IOSs) into a cooperative hybrid NOMA (C-HNOMA) framework to enhance retransmission efficiency and extend network coverage. The closed-form expressions for average outage probability and throughput are derived, and a power allocation (PA) optimization framework is proposed to maximize SSE, with validation through Monte Carlo simulations. The introduction of a novel strong–weak strong–weak (SW-SW) user pairing strategy capitalizes on channel diversity, achieving an SSE improvement of ∼0.48% to ∼3.81% over conventional pairing schemes. Moreover, the proposed system demonstrates significant performance gains as the number of IOS elements increases, even under imperfect SIC (iSIC) and HWD conditions. By optimizing PA values, SSE is further enhanced by at least 2.24%, even with an SIC error of 0.01 and an HWD level of 8%. These results underscore the potential of an IOS-assisted C-HNOMA system with SW-SW pairing as a viable solution for improving multi-user connectivity, SSE, and system robustness in future wireless communication networks. Full article
(This article belongs to the Special Issue Performance Analysis of Wireless Communication Systems)
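One plausible reading of the strong–weak (SW-SW) pairing strategy named above is pairing by sorted channel gains; this sketch is an interpretation for illustration only, and the paper's exact rule may differ:

```python
import numpy as np


def sw_sw_pairing(channel_gains):
    """Sketch of a strong-weak (SW-SW) NOMA pairing rule as read from the
    abstract: sort users by channel gain, split them into a strong half and
    a weak half, and pair the k-th strongest with the k-th user of the weak
    half, so every pair mixes one strong and one weak user.
    """
    order = np.argsort(channel_gains)[::-1]   # user indices, strongest first
    n = len(order) // 2
    strong, weak = order[:n], order[n:]
    return list(zip(strong.tolist(), weak.tolist()))
```

Pairing users with dissimilar gains is what gives NOMA its power-domain separation, which is the channel-diversity effect the abstract credits for the SSE gain.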

20 pages, 468 KB  
Article
Toward 6G: Latency-Optimized MEC Systems with UAV and RIS Integration
by Abdullah Alshahrani
Mathematics 2025, 13(5), 871; https://doi.org/10.3390/math13050871 - 5 Mar 2025
Cited by 2 | Viewed by 2005
Abstract
Multi-access edge computing (MEC) has emerged as a cornerstone technology for deploying 6G network services, offering efficient computation and ultra-low-latency communication. The integration of unmanned aerial vehicles (UAVs) and reconfigurable intelligent surfaces (RISs) further enhances wireless propagation, capacity, and coverage, presenting a transformative paradigm for next-generation networks. This paper addresses the critical challenge of task offloading and resource allocation in an MEC-based system, where a massive MIMO base station, serving multiple macro-cells, hosts the MEC server with support from a UAV-equipped RIS. We propose an optimization framework to minimize task execution latency for user equipment (UE) by jointly optimizing task offloading and communication resource allocation within this UAV-assisted, RIS-aided network. By modeling this problem as a Markov decision process (MDP) with a discrete-continuous hybrid action space, we develop a deep reinforcement learning (DRL) algorithm leveraging a hybrid space representation to solve it effectively. Extensive simulations validate the superiority of the proposed method, demonstrating significant latency reductions compared to state-of-the-art approaches, thereby advancing the feasibility of MEC in 6G networks. Full article

34 pages, 2273 KB  
Article
SimulatorOrchestrator: A 6G-Ready Simulator for the Cell-Free/Osmotic Infrastructure
by Rohin Gillgallon, Reham Almutairi, Giacomo Bergami and Graham Morgan
Sensors 2025, 25(5), 1591; https://doi.org/10.3390/s25051591 - 5 Mar 2025
Cited by 4 | Viewed by 2207
Abstract
To the best of our knowledge, we offer the first IoT-Osmotic simulator supporting 6G and Cloud infrastructures, leveraging the similarities in Software-Defined Wide Area Network (SD-WAN) architectures when used in Osmotic architectures and User-Centric Cell-Free mMIMO (massive multiple-input multiple-output) architectures. Our simulator acts as a simulator orchestrator, supporting the interaction with a patient digital twin generating patient healthcare data (vital signs and emergency alerts) and a VANET simulator (SUMO), both leading to IoT data streams towards the cloud through pre-initiated MQTT protocols. This contextualises our approach within the healthcare domain while showcasing the possibility of orchestrating different simulators at the same time. The combined provision of these two aspects, joined with the addition of a ring network connecting all the first-mile edge nodes (i.e., access points), enables the definition of new packet routing algorithms, streamlining previous solutions from SD-WAN architectures, thus showing the benefit of 6G architectures in achieving better network load balancing, as well as showcasing the limitations of previous approaches. The simulated 6G architecture, combined with the optimal routing algorithm and MEL (Microelements software components) allocation policy, was able to reduce the time required to route all communications from IoT devices to the cloud by up to 50.4% compared to analogous routing algorithms used within 5G architectures. Full article
(This article belongs to the Special Issue e-Health Systems and Technologies)

31 pages, 17989 KB  
Article
IoT-Cloud, VPN, and Digital Twin-Based Remote Monitoring and Control of a Multifunctional Robotic Cell in the Context of AI, Industry, and Education 4.0 and 5.0
by Adrian Filipescu, Georgian Simion, Dan Ionescu and Adriana Filipescu
Sensors 2024, 24(23), 7451; https://doi.org/10.3390/s24237451 - 22 Nov 2024
Cited by 14 | Viewed by 3766
Abstract
The monitoring and control of an assembly/disassembly/replacement (A/D/R) multifunctional robotic cell (MRC) with the ABB 120 Industrial Robotic Manipulator (IRM), based on IoT (Internet of Things)-cloud, VPN (Virtual Private Network), and digital twin (DT) technology, are presented in this paper. The approach integrates modern principles of smart manufacturing as outlined in Industry/Education 4.0 (automation, data exchange, smart systems, machine learning, and predictive maintenance) and Industry/Education 5.0 (human–robot collaboration, customization, robustness, and sustainability). Artificial intelligence (AI), based on machine learning (ML), enhances system flexibility, productivity, and user-centered collaboration. Several IoT edge devices are employed, connected to local networks (LAN-Profinet and LAN-Ethernet) and to the Internet via WAN-Ethernet and OPC-UA, for remote and local processing and data acquisition. The system is connected to the Internet via a Wide Area Network (WAN) and allows remote control via the cloud and VPN. IoT dashboards, as human–machine interfaces (HMIs), SCADA (Supervisory Control and Data Acquisition), and OPC-UA (Open Platform Communication-Unified Architecture) facilitate remote monitoring and control of the MRC, as well as the planning and management of A/D/R tasks. The assignment, planning, and execution of A/D/R tasks were carried out using an augmented reality (AR) tool. Synchronized timed Petri nets (STPN) were used as a digital twin, akin to a virtual reality (VR) representation of A/D/R MRC operations. This integration of advanced technology into a laboratory mechatronic system, in which the devices are organized in a decentralized, multilevel architecture, creates a smart, flexible, and scalable environment that caters to both industrial applications and educational frameworks. Full article
(This article belongs to the Special Issue Intelligent Robotics Sensing Control System)

10 pages, 1434 KB  
Article
PhenoMetaboDiff: R Package for Analysis and Visualization of Phenotype Microarray Data
by Rini Pauly, Mehtab Iqbal, Narae Lee, Bridgette Allen Moffitt, Sara Moir Sarasua, Luyi Li, Nina Christine Hubig and Luigi Boccuto
Genes 2024, 15(11), 1362; https://doi.org/10.3390/genes15111362 - 24 Oct 2024
Cited by 2 | Viewed by 2050
Abstract
Background: PhenoMetaboDiff is a novel R package for computational analysis and visualization of data generated by Biolog Phenotype Mammalian Microarrays (PM-Ms). These arrays measure the energy production of mammalian cells in different metabolic environments, assess the metabolic activity of cells exposed to various drugs or energy sources, and compare the metabolic profiles of cells from individuals affected by specific disorders versus healthy controls. Methods: PhenoMetaboDiff has several modules that facilitate statistical analysis through sample comparisons using the non-parametric Mann–Whitney U-test, integration of the OPM package (an R package for analysing OmniLog® phenotype microarray data) for robust file conversion, and calculation of slope and area under the curve (AUC). In addition, the built-in visualization allows specific wells in selected pathways to be visualized for a particular time slice. Results: Compared to the standard OPM package, the features developed in PhenoMetaboDiff assess metabolic profiles by employing statistical tests and visualize the dynamic nature of energy production under several conditions. Examples of how this package can be used are demonstrated for several rare disease conditions. The incorporation of a graphical user interface extends the utility of the program to both expert and novice users of R. Conclusions: PhenoMetaboDiff makes the deployment of the cutting-edge Biolog system available to any researcher. Full article
(This article belongs to the Collection Feature Papers in Bioinformatics)
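The slope and AUC summaries mentioned in the Methods are simple kinetic-curve statistics. A plain-Python sketch of how such summaries are typically computed (PhenoMetaboDiff itself is an R package, and its exact computation may differ):

```python
def kinetic_auc_and_slope(times, values):
    """Area under a phenotype-microarray kinetic curve (trapezoidal rule)
    and its maximum pointwise slope.

    A generic sketch of the two per-well summary statistics described, not
    the package's implementation; `times` and `values` must be equal-length
    sequences with strictly increasing time points.
    """
    segments = list(zip(times, times[1:], values, values[1:]))
    auc = sum((t1 - t0) * (v0 + v1) / 2.0 for t0, t1, v0, v1 in segments)
    slope = max((v1 - v0) / (t1 - t0) for t0, t1, v0, v1 in segments)
    return auc, slope
```

Reducing each well's time series to (AUC, slope) is what makes per-well group comparisons with a Mann–Whitney U-test straightforward.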

14 pages, 6393 KB  
Article
Hybrid Multi-Access Method for Space-Based IoT: Adaptive Bandwidth Allocation and Beam Layout Based on User Distribution
by Qingquan Liu, Lihu Chen, Songting Li and Yiran Xiang
Sensors 2024, 24(18), 6082; https://doi.org/10.3390/s24186082 - 20 Sep 2024
Cited by 4 | Viewed by 1303
Abstract
The development of the space-based Internet of Things is limited by insufficient allocable frequency resources and low spectrum utilization. To meet the demand for massive user access under restricted frequency resources, a multi-dimensional hybrid multiple-access method for space-time-frequency-code division based on user distribution (MHSTFC-UD) is established. It divides the beam cell of a low-orbit satellite into a central area and an edge area and dynamically adjusts the radius of the central area and the allocation of frequency resources according to the distribution of users. An optimization model for the central-area radius and frequency resource allocation is established and solved with a genetic algorithm. The method also uses the edge area as a guard interval to realize full-frequency reuse between beam cells in the time, space, and code domains. The simulation results show that, compared with traditional two- or three-dimensional frequency reuse methods, the multi-dimensional hybrid multiple-access method can improve the maximum user access capacity of a single satellite by one to three orders of magnitude. Moreover, the MHSTFC-UD can serve 11.5% to 33.1% more users than fixed area division and frequency resource allocation. Full article
(This article belongs to the Section Internet of Things)

19 pages, 2755 KB  
Technical Note
Cluster-Based Strategy for Maximizing the Sum-Rate of a Distributed Reconfigurable Intelligent Surface (RIS)-Assisted Coordinated Multi-Point Non-Orthogonal Multiple-Access (CoMP-NOMA) System
by Qingqing Yang, Qiuhua Zhang and Yi Peng
Sensors 2024, 24(11), 3644; https://doi.org/10.3390/s24113644 - 4 Jun 2024
Cited by 1 | Viewed by 1768
Abstract
This article proposes a distributed intelligent Coordinated Multi-Point Non-Orthogonal Multiple-Access (CoMP-NOMA) collaborative transmission model assisted by reconfigurable intelligent surfaces (RISs) to address the poor communication quality, low fairness, and high system power consumption experienced by edge users in multi-cell networks. By analyzing the interaction mechanisms and influencing factors among RIS signal enhancement, NOMA user scheduling, and multi-point collaborative transmission, the model establishes RIS-enhanced edge-user grouping and coordinated NOMA user clusters. In the multi-cell RIS-assisted JT-CoMP NOMA downlink, the joint optimization of power allocation (PA), user clustering (UC), and RIS phase-shift matrix design (PS) poses a challenging Mixed-Integer Non-Linear Programming (MINLP) problem. The original problem is decomposed into sub-problems of PA, UC, and PS, which are solved using an alternating optimization approach. Simulation results demonstrate that the proposed scheme effectively reduces the system's power consumption while significantly improving throughput and rates. Full article
(This article belongs to the Section Communications)
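The alternating-optimization idea in this abstract — fix one block of variables, solve the sub-problem for the other, and iterate — can be illustrated with a toy sketch. Everything below is an assumption for illustration only, not the paper's actual model: `waterfill`, `eff_gain`, and `alternating_opt` are hypothetical names, the effective-channel model is a simplified direct-path-plus-RIS-cascade form, and the phase-shift sub-problem is replaced by a closed-form co-phasing heuristic aimed at the weakest (edge) user.

```python
import numpy as np

def waterfill(g, budget):
    """Water-filling by bisection: maximize sum_k log2(1 + g_k * p_k)
    subject to sum_k p_k = budget, p_k >= 0."""
    lo, hi = 0.0, budget + 1.0 / g.min()
    for _ in range(60):
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - 1.0 / g)
        lo, hi = (lo, mu) if p.sum() > budget else (mu, hi)
    p = np.maximum(0.0, lo - 1.0 / g)
    return p * (budget / max(p.sum(), 1e-12))  # snap exactly onto the budget

def eff_gain(d, C, theta):
    # Effective channel power per user: direct path d[k] plus the RIS
    # cascade C[k] steered by the shared phase-shift vector theta.
    return np.abs(d + C @ np.exp(1j * theta)) ** 2

def alternating_opt(d, C, budget=1.0, iters=10):
    K, M = C.shape
    theta = np.zeros(M)                 # start with zero phase shifts
    for _ in range(iters):
        g = eff_gain(d, C, theta)       # PS fixed -> solve the PA sub-problem
        p = waterfill(g, budget)
        k = int(np.argmin(np.log2(1.0 + g * p)))  # weakest (edge) user
        # PA fixed -> PS sub-problem: co-phase every RIS element with the
        # weakest user's direct path (a simple closed-form heuristic).
        theta = np.angle(d[k]) - np.angle(C[k])
    g = eff_gain(d, C, theta)           # recompute PA for the final phases
    p = waterfill(g, budget)
    return p, theta, float(np.log2(1.0 + g * p).sum())
```

The point of the sketch is the structure, not the numbers: each sub-problem is easy on its own (water-filling has a closed form; co-phasing a single user does too), whereas the joint MINLP is hard.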
21 pages, 472 KB  
Article
Multi-Connectivity for Multicast Video Streaming in Cellular Networks
by Sadaf ul Zuhra, Prasanna Chaporkar, Abhay Karandikar and H. Vincent Poor
Network 2024, 4(2), 175-195; https://doi.org/10.3390/network4020009 - 6 May 2024
Cited by 2 | Viewed by 2402
Abstract
The escalating demand for high-quality video streaming poses a major challenge for communication networks today. Catering to these bandwidth-hungry video streaming services places a huge burden on the limited spectral resources of communication networks, limiting the resources available for other services as well. Large volumes of video traffic can lead to severe network congestion, particularly during live streaming events, which require sending the same content to a large number of users simultaneously. For such applications, multicast transmission can effectively combat network congestion while meeting the demands of all the users by serving groups of users requesting the same content over shared spectral resources. Streaming services can further benefit from multi-connectivity, which allows users to receive content from multiple base stations simultaneously. Integrating multi-connectivity within multicast streaming can improve system resource utilization while also providing seamless connectivity to multicast users. Toward this end, this work studied the impact of using multi-connectivity (MC) alongside wireless multicast to meet the resource requirements of video streaming. Our findings show that MC substantially enhances the performance of multicast streaming, particularly benefiting cell-edge users, who often experience poor channel conditions. We paid particular attention to the number of users that can be simultaneously served by multi-connected multicast systems: about 60% of the users left unserved under single-connectivity multicast were successfully served with the same resources by employing multi-connectivity in multicast transmissions. We prove that the optimal resource allocation problem for MC multicast is NP-hard. As a solution, we present a greedy approximation algorithm with an approximation factor of (1 − 1/e). Furthermore, we establish that no polynomial-time algorithm can offer a superior approximation guarantee.
To generate realistic video traffic patterns in our simulations, we made use of traces from actual videos. Our results clearly demonstrate that multi-connectivity leads to significant enhancements in the performance of multicast streaming. Full article
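The (1 − 1/e) factor cited in this abstract is the classic guarantee for greedily maximizing a monotone submodular objective under a cardinality constraint. A minimal sketch of that greedy pattern, using a toy max-coverage stand-in for the paper's MC-multicast allocation problem (the function name and the "allocation covers a set of users" abstraction are assumptions for illustration, not the authors' formulation):

```python
def greedy_max_coverage(allocations, k):
    """Pick at most k allocations (each a set of user ids) to maximize
    the number of distinct users covered. The covered-user count is
    monotone submodular, so this greedy achieves a (1 - 1/e) factor."""
    covered, chosen = set(), []
    for _ in range(k):
        # Pick the allocation with the largest marginal gain, i.e. the
        # most users not yet covered by earlier choices.
        best = max(allocations, key=lambda s: len(s - covered), default=None)
        if best is None or not (best - covered):
            break  # nothing left adds new users
        chosen.append(best)
        covered |= best
    return chosen, covered
```

For example, with candidate allocations `[{1, 2, 3}, {3, 4}, {4, 5, 6}]` and a budget of two, the greedy first takes `{1, 2, 3}` and then `{4, 5, 6}` (marginal gain 3 versus 1 for `{3, 4}`), covering all six users.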