Computers
  • Article
  • Open Access

25 December 2025

Towards Secure and Adaptive AI Hardware: A Framework for Optimizing LLM-Oriented Architectures

Department of Computer Engineering, Faculty of Engineering, The Hashemite University, Zarqa 13133, Jordan
* Author to whom correspondence should be addressed.
Computers 2026, 15(1), 10; https://doi.org/10.3390/computers15010010

Abstract

With the increasing computational demands of large language models (LLMs), there is a pressing need for specialized hardware architectures capable of supporting their dynamic, memory-intensive workloads. This paper examines recent studies on hardware acceleration for AI, focusing on three critical aspects: energy efficiency, architectural adaptability, and runtime security. While notable advances have been made in accelerating convolutional and deep neural networks using ASICs, FPGAs, and compute-in-memory (CIM) approaches, most existing solutions remain inadequate for the scalability and security requirements of LLMs. Our comparative analysis highlights two key limitations: restricted reconfigurability and insufficient support for real-time threat detection. To address these gaps, we propose a novel architectural framework grounded in modular adaptivity, memory-centric processing, and security-by-design principles. The paper concludes with a proposed evaluation roadmap and outlines promising future research directions, including RISC-V-based secure accelerators, neuromorphic co-processors, and hybrid quantum-AI integration.
