Hardware and Software Co-Design in Intelligent Systems

A special issue of Electronics (ISSN 2079-9292).

Deadline for manuscript submissions: 15 September 2026

Special Issue Editors


Guest Editor
School of Engineering, Newcastle University, Newcastle upon Tyne NE1 7RU, UK
Interests: hardware-software co-design; very large scale integration (VLSI) design; hardware security; integrated circuit reliability

Guest Editor
School of Computer Science and Technology, University of Science and Technology of China, Hefei 230026, China
Interests: intelligent internet of things; wireless network; edge computing

Special Issue Information

Dear Colleagues,

We are seeking high-quality research papers for the Special Issue "Hardware and Software Co-Design in Intelligent Systems".

The rapid advancement of artificial intelligence (AI) is reshaping scientific and industrial domains, driving the development of increasingly sophisticated intelligent systems. As the complexity of AI models grows, hardware-software (HW/SW) co-design has become essential to the creation of efficient, high-performance intelligent systems. AI algorithms must be optimized to suit specific platforms, and domain-specific hardware accelerators should be implemented to ensure the efficient execution of AI workloads. By seamlessly integrating hardware and software, co-design facilitates the effective deployment of AI across a broad range of applications, supporting both cloud and edge computing paradigms.

While HW/SW co-design offers significant advantages, it also faces several limitations: compatibility issues, tight resource constraints (especially on edge devices), the need for continuous adaptation to evolving AI models, and a lack of standardization. To tackle these challenges, we are seeking submissions of original research articles and reviews from both academic and industrial experts in the field of HW/SW co-design for AI.

Potential topics include (but are not limited to):

  • Algorithms: Advanced AI and compression techniques, including pruning, quantization, and distillation, to enable low-cost, low-footprint deployment;
  • Hardware: Domain-specific architectures and implementations for machine learning (ML), including digital and analog circuits, FPGAs, SoCs, MCUs, and both in-memory and near-memory architectures;
  • Software: Compilers optimized for efficient ML execution; integrated environments for end-to-end AI development; embedded software and embedded AI solutions, as well as ML operations (MLOps) for streamlined model management;
  • Systems: Distributed ML systems; heterogeneous AI systems involving CPUs, GPUs, FPGAs, and domain-specific accelerators; edge AI systems;
  • Applications: Application-specific ML deployment tailored to industry needs and use cases;
  • Benchmarking and Evaluation: Performance benchmarking and evaluation for ML platforms.
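To make the compression topic above concrete, here is a minimal sketch of symmetric 8-bit post-training quantization, one of the techniques named in the Algorithms bullet. The function names and the random weight tensor are illustrative assumptions, not taken from any submission or toolchain.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor post-training quantization to int8.

    Maps float weights to int8 with a single scale factor, a common
    model-compression step for low-cost, low-footprint deployment.
    """
    scale = np.max(np.abs(w)) / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from the int8 codes."""
    return q.astype(np.float32) * scale

# Example: quantize random weights and bound the reconstruction error
rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = np.max(np.abs(w - w_hat))
assert max_err <= scale / 2 + 1e-6  # rounding error is at most half a step
```

Pruning and distillation follow the same spirit: trade a bounded accuracy loss for a smaller memory and compute footprint on the target platform.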

Dr. Shengyu Duan
Dr. Chenhong Cao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • hardware-software co-design
  • artificial intelligence system
  • machine learning
  • model compression
  • domain-specific architecture
  • end-to-end framework
  • heterogeneous AI
  • edge AI
  • ML benchmark

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (4 papers)


Research

27 pages, 3783 KB  
Article
FPGA-Based Front-End Low-Light Enhancement for Deterministic Vision-Only Driving Perception
by Fuwen Xie, Hanhui Jing, Zhiting Lu, Shaoxin Ju, Bochun Peng, Tianle Xie, Linfang Yang, Wenman Han, Zhizhong Wang and Gaole Sai
Electronics 2026, 15(6), 1224; https://doi.org/10.3390/electronics15061224 - 15 Mar 2026
Abstract
Vision-only driving perception systems are highly sensitive to illumination variations, particularly under low-light conditions where reduced contrast and structural degradation impair detection and segmentation accuracy. Rather than treating enhancement as a post-processing step, this work investigates the system-level impact of relocating low-light enhancement to the FPGA-based front end within a heterogeneous FPGA–ARM architecture. A hardware-accelerated visual pipeline is designed to perform color space conversion, fixed-point convolutional enhancement, and multi-channel fusion prior to high-level perception on the ARM processor. Experimental results demonstrate that the proposed FPGA-based front-end enhancement introduces only 13 ms of additional processing latency; this stage executes in parallel with the preceding frame's neural network inference and therefore imposes zero net overhead on the end-to-end pipeline. In contrast, an equivalent software-based back-end enhancement approach would add its full processing time serially to the inference stage, increasing total system latency proportionally. The system achieves a sustained throughput of 58 fps while supporting real-time multi-task perception, including lane detection (YOLOPv2, 539 ms per frame), object detection and emergency braking (YOLOv5, 432 ms per frame), and hardware-level multi-camera synchronization.
(This article belongs to the Special Issue Hardware and Software Co-Design in Intelligent Systems)
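The zero-overhead claim in the abstract follows from simple pipeline arithmetic: a front-end stage overlapped with the previous frame's inference only adds latency if it becomes the bottleneck. A hedged sketch of that reasoning, where the 13 ms enhancement latency comes from the abstract but the 17 ms inference budget is a hypothetical figure chosen only to match roughly 58 fps:

```python
def per_frame_time(enhance_ms, infer_ms, pipelined):
    """Per-frame processing time for a two-stage perception pipeline.

    A serial back-end enhancement adds its full cost to inference,
    while a front-end stage overlapped with the previous frame's
    inference only matters if it is the slower stage.
    """
    if pipelined:
        return max(enhance_ms, infer_ms)  # stages overlap across frames
    return enhance_ms + infer_ms          # stages run back to back

# 13 ms enhancement overlapped with a hypothetical 17 ms inference
# budget leaves the per-frame time unchanged; run serially it adds up.
assert per_frame_time(13, 17, pipelined=True) == 17
assert per_frame_time(13, 17, pipelined=False) == 30
```

This is exactly the kind of latency-hiding trade-off that HW/SW co-design exposes: moving a stage into hardware pays off not by making it faster in isolation, but by letting it run concurrently with the software stages.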

22 pages, 5621 KB  
Article
DocCLS_NMMH: A Benchmark for Native Multi-Modal Hybrid Document Classification in Enterprise Data Security Governance
by Zhenkai Wang, Yi Shen, Dong Zheng, Qi Liu, Peng Wang, Wutao Qin and Hongying Jia
Electronics 2026, 15(6), 1202; https://doi.org/10.3390/electronics15061202 - 13 Mar 2026
Abstract
In the practice of enterprise data security governance, document AI has emerged as a mission-critical component that underpins the prevention of document leakage via automatic, accurate classification and identification of sensitive content. This highlights the need to bring document classification benchmarks closer to real-world engineering applications. This paper identifies the lack of public datasets for native multi-modal hybrid document classification and, accordingly, proposes the dataset DocCLS_NMMH (Native Multi-Modal Hybrid Document Classification) along with its out-of-distribution (OOD) test subset. An experimental study on the proposed dataset demonstrates that current benchmarks have become outdated and need to be updated to evaluate native multi-modal hybrid documents. Meanwhile, accuracy degradation on heterogeneous documents and in few-shot scenarios is assessed, as both are prevalent in practice. The experimental results demonstrate that LayoutLM achieves state-of-the-art (SOTA) performance with 98.66% accuracy on DocCLS_NMMH, with only approximately 7% accuracy degradation on its OOD test subset, while training-free models (Qwen2.5-VL-32B and Gemma3-27B) consistently achieve over 95% accuracy across the full dataset. The SOTA performance of these models on our benchmark provides effective guidance for model selection in real engineering applications.
(This article belongs to the Special Issue Hardware and Software Co-Design in Intelligent Systems)

23 pages, 8187 KB  
Article
A Secure UAV Swarm Architecture Based on Dynamic Heterogeneous Redundancy and Cooperative Supervision
by Wutao Qin, Qiang Li, Qi Liu and Zhenkai Wang
Electronics 2026, 15(5), 1130; https://doi.org/10.3390/electronics15051130 - 9 Mar 2026
Abstract
Current Unmanned Aerial Vehicle (UAV) swarm designs prioritize physical reliability over network security, leaving systems vulnerable to increasingly sophisticated cyber threats in complex environments. Existing defense methods are mostly limited to peripheral network security technologies, such as encryption, authentication, and firewalls, and consequently lack deep integration at the formation architecture level. This separation results in a disconnect between system reliability design and security protection mechanisms, making it difficult to deal effectively with high-level security threats such as internal backdoor vulnerabilities. To this end, this paper proposes an endogenous security architecture for UAV swarms based on dynamic heterogeneous redundancy (DHR) and cooperative supervision. Firstly, a theoretical model of a DHR system for UAV swarms is constructed, in which discrete nodes are abstracted as dynamic heterogeneous resource pools. Through the formal definition of the heterogeneous executor space, redundancy adjudication mechanism, and dynamic scheduling method, we demonstrate how this architecture suppresses common-mode failures by introducing internal and external uncertainties, thereby unifying safety and security. Secondly, a distributed security control strategy based on cooperative supervision is proposed, which uses cross-validation between neighbors to replace the centralized adjudication of traditional DHR, solves the problem of anomaly detection in a decentralized environment, and combines reactive cleaning with periodic disturbance scheduling to give the system the ability to self-heal against unknown threats. Simulations in various attack scenarios demonstrate the proposed method's superiority over traditional architectures. Even in the simulated dormant multi-mode Advanced Persistent Threat (APT) scenario, the system maintains availability above 81%, verifying the key role of the coordinated mechanism of heterogeneity, redundancy, and dynamics in enhancing the safety and security of UAV swarms.
(This article belongs to the Special Issue Hardware and Software Co-Design in Intelligent Systems)
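The redundancy adjudication at the heart of DHR can be pictured as a majority vote over the outputs of heterogeneous executors. The toy sketch below is a hypothetical illustration of that idea only; the paper's cooperative supervision specifically replaces such a centralized vote with neighbor cross-validation, and the function and values here are not from the paper.

```python
from collections import Counter

def adjudicate(outputs):
    """DHR-style redundancy adjudication: majority vote over the outputs
    of heterogeneous executors. Returns the agreed value and the indices
    of dissenting executors, which would be flagged for reactive cleaning.
    """
    winner, votes = Counter(outputs).most_common(1)[0]
    if votes <= len(outputs) // 2:
        raise ValueError("no majority: adjudication inconclusive")
    dissenters = [i for i, out in enumerate(outputs) if out != winner]
    return winner, dissenters

# Three heterogeneous executors, one compromised by a backdoor:
value, bad = adjudicate(["waypoint_A", "waypoint_A", "waypoint_B"])
assert value == "waypoint_A" and bad == [2]
```

The security argument rests on heterogeneity: a backdoor baked into one implementation is unlikely to produce the same wrong answer across independently built executors, so the vote exposes it as a dissenter.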

14 pages, 2571 KB  
Article
RMP: Robust Multi-Modal Perception Under Missing Condition
by Xin Ma, Xuqi Cai, Yuansheng Song, Yu Liang, Gang Liu and Yijun Yang
Electronics 2026, 15(1), 119; https://doi.org/10.3390/electronics15010119 - 26 Dec 2025
Cited by 2
Abstract
Multi-modal perception is a core technology for edge devices to achieve safe and reliable environmental understanding in autonomous driving scenarios. In recent years, most approaches have focused on integrating complementary signals from diverse sensors, including cameras and LiDAR, to improve scene understanding in complex traffic environments, thereby attracting significant attention. However, in real-world applications, sensor failures frequently occur; for instance, cameras may malfunction in scenarios with poor illumination, which severely reduces the accuracy of perception models. To overcome this issue, we propose a robust multi-modal perception pipeline designed to improve model performance under missing-modality conditions. Specifically, we design a missing feature reconstruction mechanism to reconstruct absent features by leveraging intra-modal common clues. Furthermore, we introduce a multi-modal adaptive fusion strategy to facilitate adaptive multi-modal integration through inter-modal feature interactions. Extensive experiments on the nuScenes benchmark demonstrate that our method achieves SOTA-level performance under missing-modality conditions.
(This article belongs to the Special Issue Hardware and Software Co-Design in Intelligent Systems)
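A minimal sketch of fusion under a missing modality, assuming a simple availability-mask average as a stand-in for the paper's learned adaptive fusion; the function name, feature shapes, and values are illustrative assumptions only.

```python
import numpy as np

def fuse(features, present):
    """Fuse per-modality feature vectors under possible sensor dropout.

    Averages only the modalities whose sensors delivered data, so the
    fused feature keeps a consistent scale when a modality is missing
    (a simplified stand-in for a learned adaptive fusion strategy).
    """
    feats = np.stack(features)                    # (n_modalities, dim)
    mask = np.asarray(present, dtype=np.float32)  # 1 = sensor available
    return (feats * mask[:, None]).sum(axis=0) / mask.sum()

cam = np.ones(4)
lidar = np.full(4, 3.0)

# Camera dropped out (e.g. poor illumination): fall back to LiDAR only
fused = fuse([cam, lidar], present=[0, 1])
assert np.allclose(fused, 3.0)
```

A learned fusion would replace the binary mask with data-dependent weights, and the paper additionally reconstructs the absent features rather than simply zeroing them out; the mask here only shows why fusion must renormalize when a modality disappears.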
