Topic Editors

Prof. Dr. Jian Li
College of Information Technology, Jilin Agricultural University, Changchun 130118, China
Prof. Dr. Xu Fang
School of Control Science and Engineering, Dalian University of Technology, Dalian 116024, China
Dr. Jiaqi Yan
School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China

Object Detection and Control of Networked Autonomous Systems: Theories, Analysis Tools and Applications

Abstract submission deadline: 30 June 2027
Manuscript submission deadline: 31 August 2027

Topic Information

Dear Colleagues,

Networked autonomous systems for object detection and control are rapidly transforming applications in agriculture, food systems, forestry, plant sciences, electrification, civil engineering, and beyond. Leveraging advances in computer vision, satellite and high-altitude remote sensing, unmanned aerial/surface/ground vehicles, robotics, and sensor networks, these platforms fuse data from satellites, drones, ground robots, and fixed sensors to deliver precise, real-time detection, state estimation, and decision making across production, processing, and distribution chains.

Such systems support tasks including crop and forest health monitoring, post-harvest quality assessment and food safety monitoring, invasive species localization, infrastructure inspection, power grid surveillance, and ecosystem management. Integrated sensing and AI-driven control enable real-time quality assurance and rapid response across the food supply chain. Recent breakthroughs in deep learning, edge computing, energy-harvesting sensor design, and wireless communications have greatly enhanced autonomy, reliability, safety, and scalability.

With sustainable resource management and resilient infrastructure growing in importance, and with increasing attention to food security and supply-chain resilience, there is an urgent need for new theoretical frameworks, control algorithms, hardware–software co-design methods, and field-validated case studies that advance object detection and control in networked autonomous systems from theory to practice. This Topic invites original contributions on emerging concepts, design and analysis tools, and innovative application cases, including, but not limited to, the following:

  • Multi-platform object detection and data fusion algorithms.
  • Vision-based perception and object tracking with UAVs and ground robots.
  • Satellite and high-altitude remote sensing methods for precise object recognition and localization.
  • Energy-autonomous sensor and actuator network design.
  • Deep learning for real-time object recognition and control decision making.
  • Cooperative task planning and control among aerial, surface, and ground vehicles.
  • Robotics and automation for precision operations (weeding, seeding, harvesting, and post-harvest sorting and packaging).
  • Thermal–infrared and multispectral imaging for state estimation and environmental modeling.
  • Digital twin development for farmland and forest ecosystem management and control.
  • Autonomous navigation and obstacle avoidance in complex terrains.
  • Fault-tolerant power electronics and energy management for unmanned systems.
  • Formal verification and safety certification of large-scale networked systems.
  • Data-driven modeling and control of smart irrigation and fertilization systems.
  • Remote robotic inspection and rehabilitation of aging infrastructure.
  • Integration of LiDAR, RADAR, and photogrammetry for 3D environmental perception and control.
  • Networked sensing and control for food science, engineering, and production.
  • Case studies on scalable deployment in agro-forestry, food supply chains, botanical research, and engineering projects.
  • Active disturbance rejection control in motor and generator control systems.
  • Optimization control of hybrid propulsion systems of aircraft, ships, and vehicles.
  • Mechanical vibration and noise control, friction nanogeneration technology, marine machinery condition monitoring, and fault diagnosis technology.
  • Data-driven modeling and distributed filtering for autonomous systems.
  • Resilient data-driven distributed filtering against complex cyberattacks.
  • Distributed multi-objective task allocation for autonomous systems.
  • Networked sensing and control for food processing and production—monitoring chemical and physical properties, real-time quality grading, and automated sorting.
  • Distributed detection and control for food safety and supply-chain resilience—microbial/contaminant surveillance, cold-chain and grain-storage monitoring, and traceability.

Prof. Dr. Jian Li
Prof. Dr. Xu Fang
Dr. Jiaqi Yan
Topic Editors

Keywords

  • networked autonomous systems
  • object detection
  • distributed control
  • food processing automation
  • food cold-chain monitoring
  • multi-platform data fusion
  • food safety
  • deep learning for real-time perception
  • edge computing and energy-harvesting sensors
  • UAVs and ground robots
  • multispectral/thermal imaging
  • digital twins for food and agro-ecosystems
  • distributed filtering and resilient estimation
  • robotics for post-harvest processing

Participating Journals

Journal         Impact Factor   CiteScore   Launched   First Decision (median)   APC
Agriculture     3.6             6.3         2011       18.8 days                 CHF 2600
Plants          4.1             7.6         2012       16.5 days                 CHF 2700
Robotics        3.3             7.7         2012       23.7 days                 CHF 1800
Foods           5.1             8.7         2012       15 days                   CHF 2900
Horticulturae   3.0             5.1         2015       16.7 days                 CHF 2200
Electronics     2.6             6.1         2012       16.4 days                 CHF 2400

Preprints.org is a multidisciplinary platform offering a preprint service designed to facilitate the early sharing of your research. It supports and empowers your research journey from the very beginning.

MDPI Topics is collaborating with Preprints.org and has established a direct connection between MDPI journals and the platform. Authors are encouraged to take advantage of this opportunity by posting their preprints at Preprints.org prior to publication:

  1. Share your research immediately: Disseminate your ideas prior to publication and establish priority for your work.
  2. Safeguard your intellectual contribution: Protect your ideas with a time-stamped preprint that serves as proof of your research timeline.
  3. Boost visibility and impact: Increase the reach and influence of your research by making it accessible to a global audience.
  4. Gain early feedback: Receive valuable input and insights from peers before submitting to a journal.
  5. Ensure broad indexing: Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (5 papers)

31 pages, 21618 KB  
Article
Cohesion-Based Flocking Formation Using Potential Linked Nodes Model for Multi-Robot Agricultural Swarms
by Kevin Marlon Soza-Mamani, Marcelo Saavedra Alcoba, Felipe Torres and Alvaro Javier Prado-Romo
Agriculture 2026, 16(2), 155; https://doi.org/10.3390/agriculture16020155 - 8 Jan 2026
Abstract
Accurately modeling and representing the collective dynamics of large-scale robotic systems remains one of the fundamental challenges in swarm robotics. Within the context of agricultural robotics, swarm-based coordination schemes enable scalable and adaptive control of multi-robot teams performing tasks such as crop monitoring and autonomous field maintenance. This paper introduces a cohesive Potential Linked Nodes (PLNs) framework, an adjustable formation structure that employs Artificial Potential Fields (APFs) and virtual node–link interactions to regulate swarm cohesion and coordinated motion (CM). The proposed model governs swarm formation, modulates structural integrity, and enhances responsiveness to external perturbations. The PLN framework facilitates swarm stability, maintaining high cohesion and adaptability, while the system’s tunable parameters enable online adjustment of inter-agent coupling strength and formation rigidity. Comprehensive simulation experiments were conducted to assess the performance of the model under multiple swarm conditions, including static aggregation and dynamic flocking behavior using differential-drive mobile robots. Additional tests within a simulated cropping environment were performed to evaluate the framework’s stability and cohesiveness under agricultural constraints. Swarm cohesion and formation stability were quantitatively analyzed using density-based and inter-robot distance metrics. The experimental results demonstrate that the PLN model effectively maintains formation integrity and cohesive stability throughout all scenarios.
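The core idea of coupling agents through virtual node–link interactions under artificial potential fields can be illustrated with a minimal sketch. This is a generic APF force law, not the authors' PLN implementation; the function name, gains, and desired spacing below are assumptions:

```python
import numpy as np

def pln_force(pos, links, d_des=2.0, k_att=1.0, k_rep=1.0):
    """Per-agent cohesion force from a simple artificial-potential-field model.

    pos:   (N, 2) array of agent positions
    links: list of (i, j) virtual node-link pairs
    Linked agents attract when farther apart than d_des and repel when
    closer, so each pair settles near the desired spacing.
    """
    forces = np.zeros_like(pos)
    for i, j in links:
        diff = pos[j] - pos[i]
        dist = np.linalg.norm(diff)
        if dist < 1e-9:
            continue  # coincident agents: no well-defined direction
        unit = diff / dist
        if dist > d_des:
            f = k_att * (dist - d_des) * unit            # spring-like attraction
        else:
            f = -k_rep * (d_des - dist) / dist * unit    # short-range repulsion
        forces[i] += f
        forces[j] -= f  # equal and opposite reaction on the linked node
    return forces
```

Tuning `k_att` and `k_rep` plays the role of the paper's adjustable coupling strength and formation rigidity: stiffer gains hold the formation together under perturbations, softer gains let it deform around obstacles.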

14 pages, 1426 KB  
Article
A Lightweight and Efficient Approach for Distracted Driving Detection Based on YOLOv8
by Fu Li, Shenghao Gu, Lei Lu, Binghua Ren, Lijuan Zhang and Wangyu Wu
Electronics 2026, 15(1), 34; https://doi.org/10.3390/electronics15010034 - 22 Dec 2025
Abstract
To overcome the issues of excessive computation and resource usage in distracted driving detection systems, this study introduces a compact detection framework named YOLOv8s-FPNE, built upon the YOLOv8 architecture. The proposed model incorporates FasterNet, Partial Convolution (PConv) layers, a Normalized Attention Mechanism (NAM), and the Focal-EIoU loss to achieve an optimal trade-off between accuracy and efficiency. FasterNet together with PConv enhances feature extraction while reducing redundancy, NAM strengthens the model’s sensitivity to key spatial and channel information, and Focal-EIoU refines bounding-box regression, particularly for hard-to-detect samples. Experimental evaluations on a public distracted driving dataset show that YOLOv8s-FPNE reduces the number of parameters by 21.7% and computational cost (FLOPs) by 23.6% relative to the original YOLOv8s, attaining an mAP@0.5 of 81.6%, which surpasses existing lightweight detection methods. Ablation analyses verify the contribution of each component, and comparative studies further confirm the advantages of NAM and Focal-EIoU. The results demonstrate that the proposed method provides a practical and efficient solution for real-time distracted driving detection on embedded and resource-limited platforms.
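The Focal-EIoU loss mentioned above augments IoU with explicit center-distance, width, and height penalties, then re-weights by IoU so well-overlapping boxes dominate the gradient. The sketch below is a generic rendering of the published Focal-EIoU formulation for axis-aligned boxes, not the authors' training code:

```python
def iou_xyxy(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def focal_eiou_loss(pred, target, gamma=0.5):
    iou = iou_xyxy(pred, target)
    # smallest enclosing box of the pair
    cw = max(pred[2], target[2]) - min(pred[0], target[0])
    ch = max(pred[3], target[3]) - min(pred[1], target[1])
    # squared center distance, normalized by the enclosing diagonal
    pcx, pcy = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    tcx, tcy = (target[0] + target[2]) / 2, (target[1] + target[3]) / 2
    rho2 = (pcx - tcx) ** 2 + (pcy - tcy) ** 2
    diag2 = cw ** 2 + ch ** 2 + 1e-9
    # separate width and height penalties (the "E" in EIoU)
    dw2 = ((pred[2] - pred[0]) - (target[2] - target[0])) ** 2
    dh2 = ((pred[3] - pred[1]) - (target[3] - target[1])) ** 2
    eiou = 1.0 - iou + rho2 / diag2 + dw2 / (cw ** 2 + 1e-9) + dh2 / (ch ** 2 + 1e-9)
    # focal re-weighting: boxes with higher IoU contribute more
    return (iou ** gamma) * eiou
```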

17 pages, 2669 KB  
Article
Extensible Heterogeneous Collaborative Perception in Autonomous Vehicles with Codebook Compression
by Babak Ebrahimi Soorchaei, Arash Raftari and Yaser Pourmohammadi Fallah
Robotics 2025, 14(12), 186; https://doi.org/10.3390/robotics14120186 - 10 Dec 2025
Abstract
Collaborative perception can mitigate occlusion and range limitations in autonomous driving, but deployment remains constrained by strict bandwidth budgets and heterogeneous agent stacks. We propose a communication-efficient and backbone-agnostic framework in which each agent’s encoder is treated as a black box, and a lightweight interpreter maps its intermediate features into a canonical space. To reduce transmission cost, we integrate codebook-based compression that sends only compact discrete indices, while a prompt-guided decoder reconstructs semantically aligned features on the ego vehicle for downstream fusion. Training follows a two-phase strategy: Phase 1 jointly optimizes interpreters, prompts, and fusion components for a fixed set of agents; Phase 2 enables plug-and-play onboarding of new agents by tuning only their specific prompts. Experiments on OPV2V and OPV2VH+ show that our method consistently outperformed early-, intermediate-, and late-fusion baselines under equal or lower communication budgets. With a codebook of size 128, the proposed pipeline preserved over 95% of the uncompressed detection accuracy while reducing communication cost by more than two orders of magnitude. The model also maintained strong performance under bandwidth throttling, missing-agent scenarios, and heterogeneous sensor combinations. Compared to recent state-of-the-art methods such as PolyInter, MPDA, and PnPDA, our framework achieved higher AP while using significantly smaller message sizes. Overall, the combination of prompt-guided decoding and discrete codebook compression provides a scalable, bandwidth-aware, and heterogeneity-resilient foundation for next-generation collaborative perception in connected autonomous vehicles.
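The bandwidth saving from codebook compression comes from transmitting nearest-codeword indices instead of raw feature vectors: N vectors of D floats shrink to N indices of log2(K) bits each. A minimal vector-quantization sketch of this idea (illustrative only; the paper's learned codebook and prompt-guided decoder are far richer):

```python
import numpy as np

def quantize(features, codebook):
    """Map each feature vector to the index of its nearest codeword.

    features: (N, D) array; codebook: (K, D) array -> (N,) int indices.
    Only these indices need to be transmitted over the network.
    """
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def dequantize(indices, codebook):
    """Receiver side: reconstruct approximate features by codebook lookup."""
    return codebook[indices]
```

With K = 128 as in the paper, each index costs 7 bits, versus 32·D bits for a float feature vector, which is where the "more than two orders of magnitude" reduction in communication cost comes from.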

26 pages, 5890 KB  
Article
Research on Accurate Weed Identification and a Variable Application Method in Maize Fields Based on an Improved YOLOv11n Model
by Xiaoan Chen, Hongze Zhang, Xingcheng Liu, Zhonghui Guo, Wei Zheng and Yingli Cao
Agriculture 2025, 15(23), 2456; https://doi.org/10.3390/agriculture15232456 - 27 Nov 2025
Abstract
Uniform spraying by conventional plant protection drones often results in low herbicide utilization efficiency and environmental contamination, both of which are critical issues in agricultural production. To address these challenges, this study proposed a precision weed management system for maize fields that combines an improved YOLOv11n-OSAW detection model with DJI drones for variable-rate herbicide application. The YOLOv11n-OSAW model was enhanced with Omni-dimensional Dynamic Convolution (OD-Conv), the SEAM attention mechanism, a lightweight ADown module, and the Wise-IoU (WIoU) loss function, aiming to improve the detection accuracy of small and occluded weeds in maize fields. When the model was deployed on an uncrewed aerial vehicle (UAV) operating at 5 m altitude, it achieved mean Average Precision (mAP@0.5) values of 97.8% and 97.0% for gramineous and broad-leaved weeds, respectively—representing increases of 2.9 and 1.6 percentage points over the baseline YOLOv11n model. Weed distribution maps generated from the detection results were used to develop site-specific herbicide prescription maps, guiding the drone to implement targeted spraying. Water-sensitive paper analysis verified that the system ensured effective droplet deposition and uniform coverage across different application rate areas. This integrated workflow, covering UAV image acquisition, weed detection, variable-rate application, and effect assessment, reduced herbicide consumption by 20.25% compared with conventional uniform spraying (450 L/ha) while maintaining excellent weed control efficiency and reducing environmental risks. The findings demonstrate that the proposed system provides a practical and sustainable solution for weed management in maize fields.
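The step from weed distribution maps to site-specific prescription maps can be sketched as grid-based thresholding: count detections per cell, then assign a spray rate per cell. The cell size, rate tiers, and density thresholds below are illustrative assumptions, not the paper's actual prescription logic:

```python
import math

def prescription_map(detections, field_w, field_h, cell=5.0,
                     rates=(0.0, 0.5, 1.0), thresholds=(1, 4)):
    """Build a per-cell spray-rate map from weed detection centers.

    detections: iterable of (x, y) weed positions in field coordinates (m).
    Cells with fewer than thresholds[0] weeds get rates[0] (no spray),
    cells below thresholds[1] get the reduced rate, denser cells get the
    full application rate.
    """
    nx = math.ceil(field_w / cell)
    ny = math.ceil(field_h / cell)
    counts = [[0] * nx for _ in range(ny)]
    for x, y in detections:
        row = min(int(y // cell), ny - 1)
        col = min(int(x // cell), nx - 1)
        counts[row][col] += 1

    def rate(c):
        if c < thresholds[0]:
            return rates[0]
        if c < thresholds[1]:
            return rates[1]
        return rates[2]

    return [[rate(c) for c in row] for row in counts]
```

Skipping weed-free cells entirely is what drives the herbicide savings relative to uniform spraying of the whole field.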

17 pages, 10904 KB  
Article
Self-Supervised Infrared Image Denoising via Adaptive Gradient-Perception Network for FPN Suppression
by Yue Tang, Chaobo Min, Runzhe Miao and Jiajia Lu
Electronics 2025, 14(21), 4334; https://doi.org/10.3390/electronics14214334 - 5 Nov 2025
Abstract
Current denoising algorithms in infrared imaging systems predominantly target either high-frequency stripe noise or Gaussian noise independently, failing to adequately address the prevalent hybrid noise in real-world scenarios. To tackle this challenge, we propose a convolutional neural network (CNN)-based approach with a refined composite loss function, specifically designed for hybrid noise removal in raw infrared images. Our method employs a residual network backbone integrated with an adaptive weighting mechanism and edge-preserving loss, enabling joint modeling of multiple noise types while safeguarding structural edges. Unlike reference-based CNN denoising methods requiring clean images, our solution leverages intrinsic gradient variations within image sequences for adaptive smoothing, eliminating dependency on ground-truth data during training. Rigorous experiments conducted on three public datasets demonstrated that our method achieves the best or second-best performance in mixed-noise suppression and detail preservation (PSNR > 32.13, SSIM > 0.8363).
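The edge-preserving ingredient of such a composite loss can be illustrated with a gradient-weighted total-variation term: smoothness is penalized everywhere except where the image gradient is strong, so true edges survive denoising. This is a simplified, generic sketch of the idea, not the paper's actual loss; the weighting form and `alpha` are assumptions:

```python
import numpy as np

def edge_aware_smoothness(img, alpha=10.0):
    """Edge-preserving smoothness penalty on a 2D image.

    Horizontal total variation, down-weighted by exp(-alpha * |gradient|)
    so strong edges (likely real structure, not noise) are penalized less.
    """
    gx = np.abs(np.diff(img, axis=1))   # horizontal gradient magnitudes
    w = np.exp(-alpha * gx)             # small weight at strong edges
    return float((w * gx).mean())
```

In a full training loop this term would be combined with a data-fidelity term, with the relative weights adapted per noise type, which is the role the paper's adaptive weighting mechanism plays.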
