Search Results (661)

Search Parameters:
Keywords = low throughput network

29 pages, 11868 KB  
Article
An Enhanced Faster R-CNN for High-Throughput Winter Wheat Spike Monitoring to Improve Yield Prediction and Water Use Efficiency
by Donglin Wang, Longfei Shi, Yanbin Li, Binbin Zhang, Guangguang Yang and Serestina Viriri
Agronomy 2025, 15(10), 2388; https://doi.org/10.3390/agronomy15102388 - 14 Oct 2025
Abstract
This study develops an innovative unmanned aerial vehicle (UAV)-based intelligent system for winter wheat yield prediction, addressing the inefficiencies of traditional manual counting methods (approximately 15% error rate) and enabling quantitative analysis of water–fertilizer interactions. By integrating an enhanced Faster Region-Based Convolutional Neural Network (Faster R-CNN) architecture with multi-source data fusion and machine learning, the system significantly improves both spike detection accuracy and yield forecasting performance. Field experiments during the 2022–2023 growing season captured high-resolution multispectral imagery for varied irrigation regimes and fertilization treatments. The optimized detection model incorporates ResNet-50 as the backbone feature extraction network, with residual connections and channel attention mechanisms, achieving a mean average precision (mAP) of 91.2% (calculated at IoU threshold 0.5) and 88.72% recall while reducing computational complexity. The model outperformed YOLOv8 by a statistically significant 2.1% margin (p < 0.05). Using model-generated spike counts as input, the random forest (RF) regressor demonstrated superior yield prediction performance (R2 = 0.82, RMSE = 324.42 kg·ha−1), exceeding the Partial Least Squares Regression (PLSR; R2 +46%, RMSE −44.3%), Least Squares Support Vector Machine (LSSVM; R2 +32.3%, RMSE −32.4%), Support Vector Regression (SVR; R2 +30.2%, RMSE −29.6%), and Backpropagation (BP) Neural Network (R2 +22.4%, RMSE −24.4%) models. Analysis of different water–fertilizer treatments revealed that while organic fertilizer under full irrigation (750 m3 ha−1) achieved the maximum yield benefit (13,679.26 CNY·ha−1), it showed relatively low water productivity (WP = 7.43 kg·m−3). Conversely, under deficit irrigation (450 m3 ha−1), the 3:7 organic/inorganic fertilizer treatment achieved optimal WP (11.65 kg·m−3) and WUE (20.16 kg·ha−1·mm−1) while increasing yield benefit by 25.46% compared to organic fertilizer alone. This research establishes an integrated technical framework for high-throughput spike monitoring and yield estimation, providing actionable insights for synergistic water–fertilizer management strategies in sustainable precision agriculture.
(This article belongs to the Section Water Use and Irrigation)
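
The reported mAP is computed at an IoU threshold of 0.5: a predicted spike box counts as a true positive only if it overlaps a ground-truth box by at least 50%. A minimal sketch of that matching criterion (illustrative only; the helper name and box convention are our assumptions, not the authors' code):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection counts as a true positive at the paper's operating point if:
# iou(predicted_box, ground_truth_box) >= 0.5
```
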
21 pages, 630 KB  
Article
Scalability of Wi-Fi Performance in Virtual Reality Scenarios
by Vyacheslav Loginov, Sergei Tutelian, Ivan Startsev and Evgeny Khorov
Sensors 2025, 25(20), 6338; https://doi.org/10.3390/s25206338 - 14 Oct 2025
Abstract
The adoption of Virtual Reality (VR) applications in Wi-Fi networks intensifies each year. VR applications impose strict Quality of Service (QoS) requirements, necessitating low latency and high throughput. Meeting VR QoS requirements in Wi-Fi networks is especially challenging due to unpredictable channel fading and interference. This paper presents a comprehensive scalability study of Wi-Fi performance in multi-user VR scenarios. We investigate whether simply increasing Access Point (AP) capabilities, specifically through Multi-User MIMO (MU-MIMO), is sufficient to support dense VR deployments. To this end, we developed a high-fidelity simulation framework in ns-3 to estimate the network capacity when serving VR traffic. Our analysis evaluates the impact of critical factors, including the number of antennas at the AP and STAs, MU-MIMO scheduling algorithms, channel sounding period, and different channel conditions. The results reveal a critical finding: scalability is not linear. In particular, doubling AP antennas from 8 to 16 yields only a 35% gain in capacity under typical conditions, not the 100% linear scaling one might expect. We identify and analyze the key bottlenecks that prevent performance from scaling indefinitely with an increased number of AP antennas, providing crucial insights for the design of next-generation Wi-Fi systems aimed at supporting the Metaverse and future immersive VR applications.
(This article belongs to the Special Issue Future Wireless Communication Networks: 3rd Edition)
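
The headline result — doubling AP antennas buys far less than double the capacity — can be reproduced qualitatively with a toy MU-MIMO model in which channel-sounding overhead grows with antenna count. A sketch under loudly stated assumptions (the coefficients are made up; this is not the paper's ns-3 framework):

```python
import math

def effective_capacity(n_antennas, n_users=8, sounding_cost=0.01):
    """Toy model: spatial streams times per-stream rate, minus sounding overhead.

    Streams are capped by the number of served users, per-stream gain from
    extra antennas grows only logarithmically, and sounding (which scales
    with antenna count) eats into useful airtime.
    """
    streams = min(n_antennas, n_users)
    airtime = max(0.0, 1.0 - sounding_cost * n_antennas)
    per_stream = math.log2(1 + 10 * n_antennas / streams)
    return streams * per_stream * airtime

c8, c16 = effective_capacity(8), effective_capacity(16)
print(f"16 vs 8 antennas: +{100 * (c16 / c8 - 1):.0f}%")  # well below +100%
```
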

26 pages, 1646 KB  
Article
Message Passing-Based Assignment for Efficient Handover Management in LEO Networks
by Gilang Raka Rayuda Dewa, Illsoo Sohn and Djati Wibowo Djamari
Telecom 2025, 6(4), 76; https://doi.org/10.3390/telecom6040076 - 10 Oct 2025
Abstract
As part of non-terrestrial networks (NTN), Low Earth Orbit (LEO) satellites play a critical role in supporting high-throughput wireless communication. However, the high-speed mobility of LEO satellites, coupled with the high density of user terminals, makes efficient user assignment crucial to maintaining overall wireless performance. Suboptimal assignment of user terminals to LEO satellites can result in frequent unnecessary handovers, rendering the user terminal unable to receive the entire downlink signal and thereby reducing the user rate and user satisfaction metrics. Finding the optimal user assignment to reduce handovers is a non-linear programming problem with a combinatorial number of possible solutions, resulting in excessive computational complexity. Therefore, this study proposes a distributed user assignment algorithm for LEO networks. By utilizing message-passing frameworks that map the optimization problem onto a graphical representation, the proposed algorithm splits the optimization into local subproblems, thereby significantly reducing computational complexity. By exchanging small messages iteratively, the proposed algorithm autonomously determines a near-optimal solution. Extensive simulation results demonstrate that the proposed algorithm significantly outperforms the conventional algorithm in terms of user rate and user satisfaction metrics under various wireless parameters.
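
Distributed assignment by iterative message exchange has well-known instances; one compact example is the classic auction algorithm, in which users bid for satellites via price updates. This sketch is a stand-in to show the pattern, not the authors' message-passing algorithm:

```python
import numpy as np

def auction_assignment(utility, eps=0.01):
    """Assign each user a satellite via auction-style price messages.

    utility[u, s] = rate user u would get from satellite s (illustrative).
    Each unassigned user bids for its best satellite; prices rise until
    every user holds a satellite.
    """
    n_users, n_sats = utility.shape
    prices = np.zeros(n_sats)
    owner = -np.ones(n_sats, dtype=int)      # which user holds each satellite
    assigned = -np.ones(n_users, dtype=int)  # which satellite each user holds
    unassigned = list(range(n_users))
    while unassigned:
        u = unassigned.pop()
        values = utility[u] - prices
        best, second = np.argsort(values)[::-1][:2]
        prices[best] += values[best] - values[second] + eps  # bid increment
        if owner[best] >= 0:                 # evict the previous holder
            assigned[owner[best]] = -1
            unassigned.append(owner[best])
        owner[best] = u
        assigned[u] = best
    return assigned

rng = np.random.default_rng(0)
rates = rng.uniform(1, 10, size=(4, 6))     # 4 users, 6 visible satellites
print(auction_assignment(rates))
```
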

51 pages, 1512 KB  
Article
CoCoChain: A Concept-Aware Consensus Protocol for Secure Sensor Data Exchange in Vehicular Ad Hoc Networks
by Rubén Juárez, Ruben Nicolas-Sans and José Fernández Tamames
Sensors 2025, 25(19), 6226; https://doi.org/10.3390/s25196226 - 8 Oct 2025
Abstract
Vehicular Ad Hoc Networks (VANETs) support safety-critical and traffic-optimization applications through low-latency, reliable V2X communication. However, securing integrity and auditability with blockchain is challenging because conventional BFT-style consensus incurs high message overhead and latency. We introduce CoCoChain, a concept-aware consensus mechanism tailored to VANETs. Instead of exchanging full payloads, CoCoChain trains a sparse autoencoder (SAE) offline on raw message payloads and encodes each message into a low-dimensional concept vector; only the top-k activations are broadcast during consensus. These compact semantic digests are integrated into a practical BFT workflow with per-phase semantic checks using a cosine-similarity threshold θ=0.85 (calibrated on validation data to balance detection and false positives). We evaluate CoCoChain in OMNeT++/SUMO across urban, highway, and multi-hop broadcast under congestion scenarios, measuring latency, throughput, packet delivery ratio, and Age of Information (AoI), and including adversaries that inject semantically corrupted concepts as well as cross-layer stress (RF jamming and timing jitter). Results show CoCoChain reduces consensus message overhead by up to 25% and confirmation latency by 20% while maintaining integrity with up to 20% Byzantine participants and improving information freshness (AoI) under high channel load. This work focuses on OBU/RSU semantic-aware consensus (not 6G joint sensing or multi-base-station fusion). The code, configs, and an anonymized synthetic replica of the dataset will be released upon acceptance.
(This article belongs to the Special Issue Joint Communication and Sensing in Vehicular Networks)
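
The per-phase semantic check reduces to a plain cosine-similarity test against the θ = 0.85 threshold, applied to top-k-sparsified concept vectors. A minimal sketch (the SAE itself and the vector dimensions are omitted; function names are ours, not the paper's):

```python
import numpy as np

def top_k_sparsify(concept, k=8):
    """Keep only the k largest-magnitude SAE activations (the digest)."""
    out = np.zeros_like(concept)
    idx = np.argsort(np.abs(concept))[-k:]
    out[idx] = concept[idx]
    return out

def semantic_check(digest, local_concept, theta=0.85):
    """Accept a consensus message only if its digest matches the local view."""
    num = float(np.dot(digest, local_concept))
    den = np.linalg.norm(digest) * np.linalg.norm(local_concept)
    return den > 0 and num / den >= theta
```
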

24 pages, 4022 KB  
Article
Dynamic Vision Sensor-Driven Spiking Neural Networks for Low-Power Event-Based Tracking and Recognition
by Boyi Feng, Rui Zhu, Yue Zhu, Yan Jin and Jiaqi Ju
Sensors 2025, 25(19), 6048; https://doi.org/10.3390/s25196048 - 1 Oct 2025
Abstract
Spiking neural networks (SNNs) have emerged as a promising model for energy-efficient, event-driven processing of asynchronous event streams from Dynamic Vision Sensors (DVSs), a class of neuromorphic image sensors with microsecond-level latency and high dynamic range. Nevertheless, challenges persist in optimising training and effectively handling spatio-temporal complexity, which limits their potential for real-time applications on embedded sensing systems such as object tracking and recognition. Targeting this neuromorphic sensing pipeline, this paper proposes the Dynamic Tracking with Event Attention Spiking Network (DTEASN), a novel framework designed to address these challenges by employing a pure SNN architecture, bypassing conventional convolutional neural network (CNN) operations, and reducing GPU resource dependency, while tailoring the processing to DVS signal characteristics (asynchrony, sparsity, and polarity). The model incorporates two innovative, self-developed components: an event-driven multi-scale attention mechanism and a spatio-temporal event convolver, both of which significantly enhance spatio-temporal feature extraction from raw DVS events. An Event-Weighted Spiking Loss (EW-SLoss) is introduced to optimise the learning process by prioritising informative events and improving robustness to sensor noise. Additionally, a lightweight event tracking mechanism and a custom synaptic connection rule are proposed to further improve model efficiency for low-power, edge deployment. The efficacy of DTEASN is demonstrated through empirical results on event-based (DVS) object recognition and tracking benchmarks, where it outperforms conventional methods in accuracy, latency, event throughput (events/s), spike rate (spikes/s), memory footprint, spike efficiency (an energy proxy), and overall computational efficiency under typical DVS settings. By virtue of its event-aligned, sparse computation, the framework is amenable to highly parallel neuromorphic hardware, supporting on- or near-sensor inference for embedded applications.
(This article belongs to the Section Intelligent Sensors)
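
For orientation, the building block of any pure-SNN pipeline like DTEASN is a spiking neuron driven by sparse event input. A minimal leaky integrate-and-fire (LIF) update — the standard SNN primitive, not the paper's specific architecture:

```python
import numpy as np

def lif_step(v, input_current, v_thresh=1.0, decay=0.9, v_reset=0.0):
    """One timestep of a leaky integrate-and-fire layer.

    Membrane potential v decays, integrates input, and emits a binary
    spike wherever the threshold is crossed; spiking neurons are reset.
    """
    v = decay * v + input_current
    spikes = (v >= v_thresh).astype(np.float32)
    v = np.where(spikes > 0, v_reset, v)
    return v, spikes

v = np.zeros(128)                                      # one layer's potentials
events = np.random.default_rng(1).random(128) * 0.3    # toy event-driven input
for _ in range(10):
    v, s = lif_step(v, events)
```
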

17 pages, 2361 KB  
Article
Joint Power Allocation Algorithm Based on Multi-Agent DQN in Cognitive Satellite–Terrestrial Mixed 6G Networks
by Yifan Zhai, Zhongjun Ma, Bo He, Wenhui Xu, Zhenxing Li, Jie Wang, Hongyi Miao, Aobo Gao and Yewen Cao
Mathematics 2025, 13(19), 3133; https://doi.org/10.3390/math13193133 - 1 Oct 2025
Abstract
The Cognitive Satellite–Terrestrial Network (CSTN) is an important infrastructure for the future development of 6G communication networks. This paper focuses on a potential communication scenario in which satellite users (SUs) dominate and are selected as the primary users, and terrestrial base station users (TUs) are the secondary users. Each terrestrial base station has multiple antennas, and the interference from TUs to SUs in the CSTN must be kept at or below a low threshold. Motivated by the diverse and time-varying nature of user requirements, a multi-agent deep Q-network algorithm under interference limitation (MADQN-IL) is proposed, in which the power of each antenna in the base station is allocated to maximize total system throughput while meeting the interference constraints in the CSTN. In the proposed MADQN-IL, the base stations act as intelligent agents; each agent selects its antenna power allocation and cooperates with the other agents by sharing system states and the total reward. Simulation comparisons show that the MADQN-IL algorithm achieves higher system throughput than the adaptive resource adjustment (ARA) algorithm and fixed power allocation methods.
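
At its core, each agent picks a discrete antenna power level under the interference cap and learns from a shared reward. A tabular Q-learning stand-in for one agent (the paper uses deep Q-networks; the state encoding, power levels, and interference model here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
POWER_LEVELS = [0.0, 0.5, 1.0, 2.0]      # candidate per-antenna powers (W)
Q = np.zeros((16, len(POWER_LEVELS)))    # 16 discretized system states

def select_action(state, interference, cap=1.5, eps=0.1):
    """Epsilon-greedy choice restricted to powers that respect the SU cap."""
    feasible = [a for a, p in enumerate(POWER_LEVELS) if interference(p) <= cap]
    if rng.random() < eps:
        return rng.choice(feasible)
    return max(feasible, key=lambda a: Q[state, a])

def update(state, action, shared_reward, next_state, alpha=0.1, gamma=0.95):
    """Standard Q-learning update driven by the total (shared) reward."""
    td_target = shared_reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (td_target - Q[state, action])

a = select_action(state=3, interference=lambda p: 0.6 * p)  # toy channel gain
```
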

23 pages, 1098 KB  
Article
HySecure: FPGA-Based Hybrid Post-Quantum and Classical Cryptography Platform for End-to-End IoT Security
by Bohao Zhang, Jinfa Hong, Gaoyu Mao, Shiyu Shen, Hao Yang, Guangyan Li, Shengzhe Lyu, Patrick S. Y. Hung and Ray C. C. Cheung
Electronics 2025, 14(19), 3908; https://doi.org/10.3390/electronics14193908 - 30 Sep 2025
Abstract
As the Internet of Things (IoT) continues to expand into mission-critical and long-lived applications, securing low-power wide-area networks (LPWANs) such as Narrowband IoT (NB-IoT) against both classical and quantum threats becomes imperative. Existing NB-IoT security mechanisms terminate at the core network, leaving transmission payloads exposed. This paper proposes HySecure, an FPGA-based hybrid cryptographic platform that integrates both classical elliptic-curve and post-quantum schemes to achieve end-to-end (E2E) security for NB-IoT communication. Our architecture, built upon the lightweight RISC-V PULPino platform, incorporates hardware accelerators for X25519, Kyber, Ed25519, and Dilithium. We design a hybrid key establishment protocol combining ECDH and Kyber through HKDF, and a dual-signature scheme using EdDSA and Dilithium to ensure authenticity and integrity during the handshake. Cryptographic functions are evaluated on FPGA, achieving a 32.2× to 145.4× speedup. NS-3 simulations under realistic NB-IoT configurations demonstrate acceptable latency and throughput for the proposed hybrid schemes, validating their practicality for secure, constrained IoT deployments and communications.
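
The hybrid key establishment idea is that both shared secrets feed one HKDF, so the session key survives a break of either primitive. A sketch using the Python `cryptography` package for the X25519 and HKDF legs; the Kyber shared secret is a placeholder, since the paper's hardware accelerator (and any particular Kyber library) is not assumed here:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical leg: X25519 ECDH.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()
ecdh_secret = alice_priv.exchange(bob_priv.public_key())

# Post-quantum leg: placeholder for the Kyber encapsulation output.
# (In HySecure this would come from the Kyber accelerator.)
kyber_secret = os.urandom(32)

# Hybrid KDF: both secrets are mixed, so compromising one is not enough.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-handshake-sketch",
).derive(ecdh_secret + kyber_secret)
```
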

18 pages, 2031 KB  
Article
The Impact of Security Protocols on TCP/UDP Throughput in IEEE 802.11ax Client–Server Network: An Empirical Study
by Nurul I. Sarkar, Nasir Faiz and Md Jahan Ali
Electronics 2025, 14(19), 3890; https://doi.org/10.3390/electronics14193890 - 30 Sep 2025
Abstract
IEEE 802.11ax (Wi-Fi 6) technologies provide high capacity, low latency, and increased security. While many network researchers have examined Wi-Fi security issues, the security implications of 802.11ax have not yet been fully explored. Therefore, in this paper, we investigate how security protocols (WPA2, WPA3) affect TCP/UDP throughput in IEEE 802.11ax client–server networks using a testbed approach. Through an extensive performance study, we analyze the effect of security on the transport-layer protocol (TCP/UDP), the internet-layer protocol (IPv4/IPv6), and the operating system (MS Windows and Linux). The impact of packet length on system performance is also investigated. The results show that WPA3 offers greater security while its impact on TCP/UDP throughput is insignificant, highlighting the robustness of WPA3 encryption in maintaining throughput even in secure environments. With WPA3, UDP offers higher throughput than TCP, and IPv6 consistently outperforms IPv4 in both TCP and UDP throughput. Linux outperforms Windows in all scenarios, especially with larger packet sizes and IPv6 traffic. These results suggest that WPA3 provides optimized throughput performance in both Linux and MS Windows in 802.11ax client–server environments. Our research provides insights into security issues in Gigabit Wi-Fi that can help network researchers and engineers contribute further towards developing stronger security for next-generation wireless networks.
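
Throughput studies of this kind boil down to timing how many application-layer bytes cross the link per second. A bare-bones UDP sender/receiver pair in the spirit of such a testbed (run the sink and source in separate processes; port, packet length, and duration are arbitrary choices, not the paper's configuration):

```python
import socket
import time

PKT_LEN, DURATION, ADDR = 1400, 5.0, ("127.0.0.1", 5005)

def udp_sink():
    """Receive for DURATION seconds and report goodput in Mbit/s."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(ADDR)
    s.settimeout(1.0)
    total, start = 0, time.perf_counter()
    while time.perf_counter() - start < DURATION:
        try:
            data, _ = s.recvfrom(65535)
            total += len(data)
        except socket.timeout:
            continue
    print(f"{total * 8 / DURATION / 1e6:.1f} Mbit/s")

def udp_source():
    """Blast fixed-size datagrams at the sink for DURATION seconds."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"x" * PKT_LEN
    end = time.perf_counter() + DURATION
    while time.perf_counter() < end:
        s.sendto(payload, ADDR)
```
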

18 pages, 4037 KB  
Article
Research on Hybrid Communication Strategy for Low-Power Battery-Free IoT Terminals
by Shichao Zhang, Deyu Miao, Na Zhang, Yi Han, Yali Gao, Jiaqi Liu and Weidong Gao
Electronics 2025, 14(19), 3881; https://doi.org/10.3390/electronics14193881 - 30 Sep 2025
Abstract
The sharp increase in Internet of Things (IoT) terminal numbers imposes significant pressure on energy and wireless spectrum resources. Battery-free IoT technology has become an effective solution to address the high power consumption and cost issues of traditional IoT systems. While leveraging backscatter communication, battery-free IoT faces challenges such as low throughput and poor fairness among wireless links. To tackle these problems, this study proposes a low-power hybrid communication mechanism for terminals. Within this mechanism, a time-frame partitioning method for hybrid communication strategies is designed based on sensing results of licensed spectrum channels. Considering terminal power constraints, quality of service (QoS) requirements of primary communication links, and time resource limitations, a hybrid communication strategy model is established to jointly optimize fairness and maximize throughput. To resolve the non-convexity of the Multi-objective Lexicographical Optimization Problem (MLOP), the Block Coordinate Descent (BCD) method and auxiliary variables are introduced. Simulation results demonstrate that, compared to the baseline scheme, the proposed approach reduces the throughput gap between links from 85.4% to 0.32% when the channel gain differences are small, while total system throughput decreases by only 8.81%. As the channel gain disparity increases, the baseline scheme exhibits a more pronounced fairness disadvantage, while the proposed approach still reduces the throughput gap between the best and worst links from 91.02% to 0.684% at the cost of a 9.18% decrease in total system throughput. These results demonstrate that the proposed scheme effectively balances fairness and throughput across diverse channel conditions, ensuring relatively equitable quality of service for all users in the IoT network.
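
Fairness-versus-throughput trade-offs of this kind are commonly scored with Jain's fairness index, J = (Σx)² / (n·Σx²), which equals 1 when all links get equal throughput and approaches 1/n when one link takes everything. A quick sketch (the link rates are made up for illustration, not the paper's data):

```python
def jain_index(throughputs):
    """Jain's fairness index: 1.0 = perfectly fair, 1/n = one link takes all."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

baseline = [9.5, 0.9]   # large gap between best and worst link
proposed = [5.1, 5.0]   # near-equal rates at a modest total-rate cost
print(jain_index(baseline), jain_index(proposed))   # ~0.59 vs ~1.00
```
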

17 pages, 4563 KB  
Article
Improving Solar Energy-Harvesting Wireless Sensor Network (SEH-WSN) with Hybrid Li-Fi/Wi-Fi, Integrating Markov Model, Sleep Scheduling, and Smart Switching Algorithms
by Heba Allah Helmy, Ali M. El-Rifaie, Ahmed A. F. Youssef, Ayman Haggag, Hisham Hamad and Mostafa Eltokhy
Technologies 2025, 13(10), 437; https://doi.org/10.3390/technologies13100437 - 29 Sep 2025
Abstract
Wireless sensor networks (WSNs) are an advanced solution for data collection in Internet of Things (IoT) applications and in remote and harsh environments. These networks rely on a collection of distributed sensors with wireless communication capabilities to collect small-scale data at low cost. WSNs face numerous challenges, including network congestion, slow speeds, high energy consumption, and a short network lifetime due to their need for a constant and stable power supply. Improving the energy efficiency of sensor nodes through solar energy harvesting (SEH) is therefore an attractive option for charging batteries, avoiding excessive energy consumption and battery replacement. In this context, modern wireless communication technologies such as Wi-Fi and Li-Fi emerge as promising solutions. Wi-Fi provides internet connectivity via radio frequencies (RF), making it suitable for open environments. Li-Fi, on the other hand, transmits data via light, offering higher speeds and better energy efficiency, making it ideal for indoor applications requiring fast and reliable data transmission. This paper integrates Wi-Fi and Li-Fi technologies into the SEH-WSN architecture to improve performance and efficiency across a wide range of applications. To achieve reliable, efficient, and high-speed bidirectional communication for multiple devices, the paper utilizes a Markov model, sleep scheduling, and smart switching algorithms to reduce power consumption, increase signal-to-noise ratio (SNR) and throughput, and reduce bit error rate (BER) and latency by selecting the appropriate technology and power supply for each node's sleep and active states.
(This article belongs to the Section Information and Communication Technologies)
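
The node-state logic can be pictured as a small Markov chain over sleep/active states, with the Li-Fi/Wi-Fi switch driven by the measured SNR. A toy sketch (the transition probabilities and the switching margin are assumptions, not the paper's calibrated values):

```python
import random

TRANSITIONS = {            # toy Markov chain over node states
    "sleep":  {"sleep": 0.7, "active": 0.3},
    "active": {"sleep": 0.2, "active": 0.8},
}

def next_state(state):
    """Sample the next node state from the Markov transition row."""
    r, acc = random.random(), 0.0
    for nxt, p in TRANSITIONS[state].items():
        acc += p
        if r <= acc:
            return nxt
    return state

def pick_link(lifi_snr_db, wifi_snr_db, lifi_margin_db=3.0):
    """Smart switching: prefer Li-Fi indoors unless Wi-Fi is clearly better."""
    return "lifi" if lifi_snr_db + lifi_margin_db >= wifi_snr_db else "wifi"

state = "sleep"
for _ in range(5):
    state = next_state(state)
    if state == "active":                    # radios matter only when awake
        print(pick_link(lifi_snr_db=18.0, wifi_snr_db=15.0))
```
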

17 pages, 1731 KB  
Article
Comparative Performance Analysis of Lightweight Cryptographic Algorithms on Resource-Constrained IoT Platforms
by Tiberius-George Sorescu, Vlad-Mihai Chiriac, Mario-Alexandru Stoica, Ciprian-Romeo Comsa, Iustin-Gabriel Soroaga and Alexandru Contac
Sensors 2025, 25(18), 5887; https://doi.org/10.3390/s25185887 - 20 Sep 2025
Abstract
The growth in Internet of Things (IoT) devices has introduced significant security challenges, primarily due to their inherent constraints in computational power, memory, and energy. This study provides a comparative performance analysis of selected modern cryptographic algorithms on a resource-constrained IoT platform, the Nordic Thingy:53. We evaluated a set of ciphers including the NIST lightweight standard ASCON; the eSTREAM finalists Salsa20, Rabbit, Sosemanuk, and HC-256; and the extended-nonce variant XChaCha20. Using a dual test-bench methodology, we measured energy consumption and performance under two distinct scenarios: a low-data-rate Bluetooth mesh network and a high-throughput bulk data transfer. The results reveal significant performance variations among the algorithms. In the high-throughput tests, ciphers like XChaCha20, Salsa20, and ASCON32 demonstrated superior speed, while HC-256 proved impractically slow for large payloads. The Bluetooth mesh experiments quantified the direct relationship between network activity and power draw, underscoring the critical impact of cryptographic choice on battery life. These findings offer an empirical basis for selecting cryptographic solutions that balance security, energy efficiency, and performance for real-world IoT applications.
(This article belongs to the Section Internet of Things)
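
The bulk-transfer benchmark pattern is easy to reproduce on a workstation: time encryption of a large payload and divide bytes by seconds. A sketch using ChaCha20-Poly1305 from the Python `cryptography` package as a readily available stand-in — the paper's ciphers run on the Thingy:53, and this script is not their test bench:

```python
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

def bench(payload_mib=16):
    """Time authenticated encryption of a large random payload."""
    key = ChaCha20Poly1305.generate_key()
    aead = ChaCha20Poly1305(key)
    nonce = os.urandom(12)
    data = os.urandom(payload_mib * 1024 * 1024)
    start = time.perf_counter()
    aead.encrypt(nonce, data, None)
    elapsed = time.perf_counter() - start
    print(f"ChaCha20-Poly1305: {payload_mib / elapsed:.1f} MiB/s")

bench()
```
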

18 pages, 5085 KB  
Article
Developments in Microbial Communities and Interaction Networks in Sludge Treatment Ecosystems During the Transition from Anaerobic to Aerobic Conditions
by Xiaoli Pan, Lijun Luo, Hui Wang, Xinyu Chen, Yongjiang Zhang, Yan Dai and Feng Luo
Microorganisms 2025, 13(9), 2178; https://doi.org/10.3390/microorganisms13092178 - 18 Sep 2025
Abstract
The transition between anaerobic and aerobic conditions represents a fundamental ecological process occurring ubiquitously in both natural ecosystems and engineered wastewater treatment systems. This study investigated microbial community succession and co-occurrence network dynamics during the transition from anaerobic sludge to aerobic cultivation. High-throughput 16S and 18S rDNA sequencing revealed two distinct succession phases: an initial "aerobic adaptation period" (Day 1) and a subsequent "aerobic stable period" (Day 15). Eukaryotic communities shifted from Cryptomycota dominance to dominance by unassigned eukaryotes, while prokaryotic communities maintained Firmicutes and Proteobacteria as core phyla, with persistent low-abundance archaea indicating functional adaptation. Network analysis highlighted predominant co-occurrence patterns between eukaryotic and prokaryotic communities, suggesting synergistic interactions. These findings provide insights into microbial ecological dynamics during anaerobic-to-aerobic transitions, offering potential applications for optimizing wastewater treatment processes.
(This article belongs to the Special Issue Advances in Genomics and Ecology of Environmental Microorganisms)
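
Co-occurrence networks of this kind are typically built by correlating taxon abundances across samples and keeping only strong, significant pairs as edges. A minimal sketch with Spearman correlation (the |ρ| and p-value cutoffs are common defaults, not necessarily this study's exact pipeline):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
abundance = rng.poisson(5, size=(20, 12))   # 20 samples x 12 taxa (toy data)

edges = []
for i in range(abundance.shape[1]):
    for j in range(i + 1, abundance.shape[1]):
        rho, p = spearmanr(abundance[:, i], abundance[:, j])
        if abs(rho) >= 0.6 and p < 0.05:    # keep strong, significant pairs
            edges.append((i, j, round(rho, 2)))
print(edges)                                 # edge list of the network
```
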

16 pages, 2720 KB  
Article
Multi-Trait Phenotypic Extraction and Fresh Weight Estimation of Greenhouse Lettuce Based on Inspection Robot
by Xiaodong Zhang, Xiangyu Han, Yixue Zhang, Lian Hu and Tiezhu Li
Agriculture 2025, 15(18), 1929; https://doi.org/10.3390/agriculture15181929 - 11 Sep 2025
Abstract
In situ detection of growth information in greenhouse crops is crucial for germplasm resource optimization and intelligent greenhouse management. To address the limitations of poor flexibility and low automation in traditional phenotyping platforms, this study developed a controlled-environment inspection robot. Using a SCARA robotic arm equipped with an information acquisition device consisting of an RGB camera, a depth camera, and an infrared thermal imager, high-throughput, in situ acquisition of lettuce phenotypic information can be achieved. Through semantic segmentation and point cloud reconstruction, 12 phenotypic parameters, such as lettuce plant height and crown width, were extracted from the acquired images as inputs for three machine learning models to predict fresh weight. Based on the training results, a Backpropagation Neural Network (BPNN) with an added feature dimension-increasing module (DE-BP) was proposed, achieving improved prediction accuracy. The R2 values for plant height, crown width, and fresh weight predictions were 0.85, 0.93, and 0.84, respectively, with RMSE values of 7 mm, 6 mm, and 8 g. This study achieved in situ, high-throughput acquisition of lettuce phenotypic information under controlled environmental conditions, providing a lightweight solution for crop phenotypic analysis algorithms tailored to inspection tasks.
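
The fresh-weight model maps the 12 extracted phenotypic parameters to a weight estimate. A compact stand-in with scikit-learn's MLP regressor shows the shape of that pipeline; the synthetic data and layer sizes are assumptions, and DE-BP's dimension-increasing module is not reproduced here:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.random((200, 12))                  # 12 phenotypic traits per plant
y = 50 * X[:, 0] + 30 * X[:, 1] + rng.normal(0, 2, 200)  # toy fresh weight (g)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                     random_state=0).fit(X_tr, y_tr)
print(f"R2 = {r2_score(y_te, model.predict(X_te)):.2f}")
```
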

22 pages, 2064 KB  
Review
Advances in Functional Genomics for Watermelon and Melon Breeding: Current Progress and Future Perspectives
by Huanhuan Niu, Junyi Tan, Wenkai Yan, Dongming Liu and Luming Yang
Horticulturae 2025, 11(9), 1100; https://doi.org/10.3390/horticulturae11091100 - 11 Sep 2025
Abstract
Watermelon (Citrullus lanatus) and melon (Cucumis melo) are globally important cucurbit crops, with China being the largest producer and consumer. Traditional breeding methods face difficulties in significantly improving yield and quality. Smart breeding, which combines genomics, gene editing, and artificial intelligence (AI), holds great promise but fundamentally depends on understanding the molecular mechanisms controlling important agronomic traits. This review summarizes the progress made over recent decades in discovering and understanding the functions of genes that control essential traits in watermelon and melon, focusing on plant architecture, fruit quality, and disease resistance. However, major challenges remain: relatively few genes have been fully validated, complex gene networks are not fully unraveled, and technical hurdles such as low genetic transformation efficiency and difficulties in large-scale trait phenotyping limit progress. To overcome these challenges and enable the development of superior new varieties, future research should prioritize the following: (1) systematic gene discovery using comprehensive genome collections (pan-genomes) and multi-level data analysis (multi-omics); (2) deeper study of gene functions and interactions using advanced gene editing and epigenetics; (3) faster integration of molecular knowledge into smart breeding systems; and (4) solving the problems of genetic transformation and enabling efficient large-scale trait and genetic data collection (high-throughput phenotyping and genotyping).
(This article belongs to the Special Issue Germplasm Resources and Genetics Improvement of Watermelon and Melon)

27 pages, 4238 KB  
Article
A Scalable Reinforcement Learning Framework for Ultra-Reliable Low-Latency Spectrum Management in Healthcare Internet of Things
by Adeel Iqbal, Ali Nauman, Tahir Khurshaid and Sang-Bong Rhee
Mathematics 2025, 13(18), 2941; https://doi.org/10.3390/math13182941 - 11 Sep 2025
Abstract
Healthcare Internet of Things (H-IoT) systems demand ultra-reliable and low-latency communication (URLLC) to support critical functions such as remote monitoring, emergency response, and real-time diagnostics. However, spectrum scarcity and heterogeneous traffic patterns pose major challenges for centralized scheduling in dense H-IoT deployments. This paper proposes a multi-agent reinforcement learning (MARL) framework for dynamic, priority-aware spectrum management (PASM), in which cooperative MARL agents jointly optimize throughput, latency, energy efficiency, fairness, and blocking probability under varying traffic and channel conditions. Six learning strategies are developed and compared, including Q-Learning, Double Q-Learning, Deep Q-Network (DQN), Actor–Critic, Dueling DQN, and Proximal Policy Optimization (PPO), within a simulated H-IoT environment that captures heterogeneous traffic, device priorities, and realistic URLLC constraints. A comprehensive simulation study across scalable scenarios ranging from 3 to 50 devices demonstrated that PPO consistently outperforms all baselines, improving mean throughput by 6.2%, reducing 95th-percentile delay by 11.5%, increasing energy efficiency by 11.9%, lowering blocking probability by 33.3%, and accelerating convergence by 75.8% compared to the strongest non-PPO baseline. These findings establish PPO as a robust and scalable solution for QoS-compliant spectrum management in dense H-IoT environments, while Dueling DQN emerges as a competitive deep RL alternative.
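
All six learners in the comparison share the same loop: observe queue/channel state, pick a channel, and reinforce on a priority-weighted reward. A tabular Q-learning skeleton of that loop — the state space, reward values, and channel count are placeholders, and the paper's PPO agent is far richer:

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_CHANNELS = 32, 4
Q = np.zeros((N_STATES, N_CHANNELS))

def step(state, eps=0.1, alpha=0.1, gamma=0.95):
    """One interaction: epsilon-greedy channel choice plus Q-update."""
    action = (rng.integers(N_CHANNELS) if rng.random() < eps
              else int(Q[state].argmax()))
    # Priority-weighted reward: successful high-priority delivery earns
    # most; partial service less; blocking is penalized (toy values).
    reward = rng.choice([1.0, 0.2, -1.0], p=[0.6, 0.3, 0.1])
    next_state = rng.integers(N_STATES)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                 - Q[state, action])
    return next_state

s = 0
for _ in range(1000):
    s = step(s)
```
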
