Article

Integrating Large Language Models into Fluid Antenna Systems: A Survey

1 School of Information Science and Technology, Harbin Institute of Technology, Shenzhen 518055, China
2 School of Electronic and Computer Engineering, Peking University, Beijing 100871, China
3 Guangdong Provincial Key Laboratory of Aerospace Communication and Networking Technology, Harbin Institute of Technology, Shenzhen 518055, China
4 National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, China
5 State Key Laboratory of Mathematical Sciences, AMSS, Chinese Academy of Sciences, Beijing 100190, China
6 Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(16), 5177; https://doi.org/10.3390/s25165177
Submission received: 7 July 2025 / Revised: 13 August 2025 / Accepted: 19 August 2025 / Published: 20 August 2025

Abstract

Fluid antenna system (FAS) has emerged as a promising technology for next-generation wireless networks, offering dynamic reconfiguration capabilities to adapt to varying channel conditions. However, FAS faces critical issues ranging from channel estimation to performance optimization. This paper provides a survey of how large language models (LLMs) can be leveraged to address these issues. We review potential approaches and recent advancements in LLM-based FAS channel estimation, LLM-assisted fluid antenna position optimization, and LLM-enabled FAS network simulation. Furthermore, we discuss the role of LLM agents in FAS management. As an experimental study, we evaluated the performance of our designed LLM-enhanced genetic algorithm. The results demonstrated a 75.9% performance improvement over the traditional genetic algorithm on the Rastrigin function.

1. Introduction

The fluid antenna system (FAS) has emerged as a promising reconfigurable antenna technology, attracting significant attention due to its ability to dynamically adjust physical characteristics, such as position, shape, or orientation, to actively adapt to time-varying wireless channel conditions [1,2,3]. Unlike conventional fixed antennas, FAS leverages software-defined control to extend spatial degrees of freedom, significantly enhancing communication performance in complex multipath environments. The growing interest in FAS stems primarily from the stringent demands of future wireless networks for high spectral efficiency, ultra-low latency, and environment-aware communications. For instance, in dense multi-user environments, FAS can optimize antenna position to mitigate co-channel interference and improve multi-user diversity gain [4,5]. In integrated sensing and communication (ISAC) systems, its flexible spatial reconfiguration capability enables simultaneous optimization of radar detection accuracy and data transmission rates [6]. Consequently, FAS is a key technology for overcoming the limitations of traditional antenna designs, with its development holding substantial significance for next-generation wireless networks.
Despite its advantages, FAS faces several issues that limit its practical applications. The first issue is channel estimation, which demands extensive resource-intensive measurements. The second issue is antenna position optimization, where the solution must achieve high performance while remaining computationally efficient. The third issue lies in FAS network simulation, which must accurately replicate real-world conditions while enabling rapid evaluation and dynamic scenario reconfiguration. These issues have attracted significant attention, yet they remain insufficiently addressed.
Large language models (LLMs) represent a class of deep-learning-based artificial intelligence systems characterized by their massive parameter scale and multimodal comprehension capabilities, enabling them to perform diverse tasks such as natural language processing, computer vision, behavioral understanding, and content generation. The advent of LLMs has precipitated transformative advances across multiple disciplines, including wireless communications [7,8,9,10] and autonomous driving [11,12,13]. In wireless communications, LLMs have been envisioned for application in extremely large-scale multiple-input multiple-output (XL-MIMO) [7], reconfigurable intelligent surface (RIS) [8], ISAC [9], and radio map generation [10], demonstrating significant performance improvements over traditional approaches. For instance, Dai et al. [7] applied an LLM to optimize near-field XL-MIMO communications for the low-altitude economy, addressing key challenges in beam focusing and spectrum efficiency maximization through a novel LLM-based scheme that outperforms existing benchmarks. Xu et al. [8] proposed an LLM-enhanced RIS framework for the 6G internet of vehicles (IoV), leveraging the LLM's analytical capabilities to dynamically optimize RIS configurations based on real-time vehicular data, thereby achieving energy-efficient and reliable communications while overcoming the challenges of vehicular environment dynamics, with simulations validating its superior performance. Li et al. [9] investigated a network of unmanned aerial vehicles (UAVs) with ISAC capabilities, formulating a multi-objective optimization problem to balance communication and sensing performance and proposing an LLM-enhanced evolutionary algorithm that outperforms baseline methods in achieving optimal trade-offs. Quan et al. [10] developed an automated LLM agent framework for radio map generation and wireless network planning, which significantly minimizes manual intervention.
To address these fundamental FAS challenges and unlock the system's potential, this paper systematically investigates how FAS can be transformed through the integration of LLMs. We examine the following: LLM-enhanced channel estimation for improved signal characterization, LLM-driven antenna position optimization for dynamic reconfiguration, LLM-based network traffic simulation for realistic performance evaluation, and specialized LLM agents for autonomous FAS management. The challenges and future research opportunities in applying LLMs to FAS are finally highlighted. Furthermore, motivated by the idea behind MindEvolution [14], we designed an LLM-enhanced genetic algorithm and evaluated its performance on the Rastrigin function and a three-path FAS channel function. Our experimental results show a 75.9% performance improvement over the traditional genetic algorithm on the Rastrigin function; we attribute this improvement to the LLM's ability to perform more effective crossover and mutation by leveraging its knowledge of the Rastrigin function. For the three-path FAS channel, both the LLM-enhanced and traditional genetic algorithms demonstrated comparable performance, each approaching theoretical optimality; this result may reflect the presence of numerous local optima with comparable performance. Our code is available at https://github.com/TingsongDeng/LLM-GAs.git (accessed on 8 August 2025).
The remainder of this paper is organized as follows: Section 1 introduces the background of FAS and the challenges, along with potential applications, of LLMs in FAS deployment. Section 2 provides a detailed analysis of LLM-based fluid antenna channel estimation. Section 3 focuses on LLM-driven antenna position optimization strategies, with four possible solutions being proposed and a comprehensive comparison of their performance and requirements being presented. Section 4 highlights LLM-assisted FAS network simulation techniques. Section 5 introduces LLM agents and discusses several potential application schemes for addressing specific problems that may arise during FAS deployment. Section 6 incorporates the LLM into the crossover and mutation operations of a genetic algorithm, with experimental tests conducted on both the Rastrigin function and the FAS channel function. Challenges and research opportunities are discussed in Section 7, and conclusions are provided in Section 8. An illustration of this organization is given in Figure 1.

2. LLM for Fluid Antenna Channel Estimation

In FAS, channel estimation is crucial as it identifies the high-performance antenna positions ("good" positions) while avoiding mediocre ones ("bad" positions). Accurate channel state information enables FAS to dynamically switch to optimal ports, significantly outperforming traditional fixed antenna arrays. However, since FAS typically employs a large number of ports (e.g., 256 or continuous ports) but can only measure a few at a time (e.g., four antennas due to hardware constraints), channel estimation demands substantial resources. In addition, unlike conventional MIMO systems, FAS features extremely small port spacing, resulting in high-dimensional channels with strong spatial correlation. To address this challenge, state-of-the-art FAS channel estimation research can be divided into two categories. The first category employs numerical methods, including linear minimum mean square error (LMMSE) [15], compressed sensing [16], successive Bayesian learning [17], sparse Bayesian learning [18], and orthogonal matching pursuit (OMP) [19]. The second leverages neural networks, including dedicated sub-networks [20], asymmetric graph masked autoencoders [21], conditional generative adversarial networks [22], and diffusion models [23]. However, existing methods still struggle to achieve accurate FAS channel estimation with limited samples.
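To make the compressed-sensing category concrete, the sketch below applies OMP to a toy FAS setting: the 256-port channel is assumed to be sparse in an angular dictionary, only a small subset of randomly chosen ports is measured, and the full channel is reconstructed from the recovered sparse coefficients. The port spacing, dictionary grid, and sparsity level are illustrative assumptions, not the setups of [16] or [19].

```python
import numpy as np

def omp(y, A, sparsity):
    """Orthogonal matching pursuit: find a sparse x such that y ≈ A @ x."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(sparsity):
        # Pick the dictionary column most correlated with the residual.
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        # Least-squares re-fit of the coefficients on the current support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0
        x[support] = coef
        residual = y - A @ x
    return x

rng = np.random.default_rng(0)
N, M, L = 256, 16, 3              # ports, pilot measurements, propagation paths
ports = np.arange(N) * 0.01       # port positions in wavelengths (illustrative)
grid = np.linspace(-1, 1, 64)     # grid of candidate spatial frequencies
D = np.exp(2j * np.pi * np.outer(ports, grid))     # angular dictionary

x_true = np.zeros(64, dtype=complex)               # L-sparse ground truth
x_true[[5, 20, 50]] = rng.standard_normal(3) + 1j * rng.standard_normal(3)
h = D @ x_true                                     # full 256-port channel
idx = rng.choice(N, M, replace=False)              # only M ports are measured
x_hat = omp(h[idx], D[idx, :], sparsity=L)
h_hat = D @ x_hat                                  # reconstruct all ports
nmse = np.linalg.norm(h - h_hat) ** 2 / np.linalg.norm(h) ** 2
```

The key point is the overhead reduction: all 256 port gains are estimated from only 16 measurements by exploiting sparsity, which is exactly the regime where FAS channel estimation is resource-constrained.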
The LLM introduces novel possibilities for FAS channel estimation by effectively modeling complex real-world signal propagation, enabling few-shot sampling and accurate prediction. Unlike conventional techniques, LLM adapts dynamically to varying channel conditions and leverages contextual data (e.g., geographic and environmental factors) to improve accuracy. Additionally, LLM can optimize signal training patterns through data-driven insights. A key enabler is tokenizing channel characteristics, which allows LLM to process FAS estimation efficiently. As demonstrated in [24], this approach significantly reduces estimation overhead via predictive modeling.
At present, research on leveraging LLM to improve fluid antenna channel estimation remains limited, with notable contributions from [25,26]. In [25], the authors explore LLM-enhanced channel estimation for low-earth orbit (LEO) satellite Internet of Things (IoT) networks. By employing a low-rank-adaptation (LoRA)-optimized LLaMA-3 model to process compressed channel representations, their approach achieves a 10 dB normalized mean squared error (NMSE) improvement over traditional predictors such as LSTM and GRU. Meanwhile, in ref. [26], the authors introduce a novel framework addressing two key challenges in massive MIMO channel prediction. First, it resolves the modality mismatch between linguistic knowledge in pre-trained LLMs and channel state information (CSI) via a cross-modal alignment module. This module projects CSI features into the LLM’s semantic space using principal-component-analysis (PCA)-reduced word embeddings and similarity-based semantic prompts. Second, to mitigate computational inefficiency, the authors propose CSI-ALM-Light, a lightweight variant distilled from CSI-ALM through attention matrix optimization. With only 0.34 million parameters, CSI-ALM-Light matches the performance of its larger counterpart while offering practical deployability.
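The low-rank adaptation used in [25] can be illustrated in a few lines: a frozen pre-trained weight matrix W is augmented with a trainable low-rank update BA, so fine-tuning touches only a small fraction of the parameters. The layer size and rank below are illustrative assumptions; this is a sketch of the LoRA idea, not the LLaMA-3 model of [25].

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 512, 512, 8          # layer width and LoRA rank (illustrative)

W = rng.standard_normal((d_out, d_in))        # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01     # trainable down-projection
B = np.zeros((d_out, r))                      # trainable up-projection; starts at zero

def forward(x, scale=1.0):
    # LoRA forward pass: y = W x + scale * B (A x); only A and B are trained.
    return W @ x + scale * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(forward(x), W @ x)         # B = 0, so the adapter is a no-op initially

ratio = (A.size + B.size) / W.size            # fraction of weights actually trained
```

Here `ratio` is about 3%, which is why LoRA-style fine-tuning makes adapting a large model to channel data tractable.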

3. LLM for Fluid Antenna Position Optimization

Fluid antenna position optimization is a critical issue for FAS. In particular, a suitable antenna position will greatly enhance FAS performance, as different antenna positions may have significant distinctions. Currently, research on fluid antenna position optimization can be categorized into three main approaches. The first approach employs convex and nonconvex optimization methods, such as gradient descent [27], alternating optimization (AO) [28], successive convex approximation (SCA) [29], and majorization–minimization (MM) [30]. These methods seek tractable approximations of the original problem formulation, typically yielding convergent solutions with moderate computational overhead. The second approach utilizes evolutionary algorithms, including standard particle swarm optimization (PSO) [31], multi-velocity particle swarm optimization (MVPSO) [32], and genetic algorithms [33]. The third approach leverages deep reinforcement learning (DRL) techniques, such as advantage actor-critic (A2C) [34], team-inspired DRL [35], and multi-agent deep deterministic policy gradient (MADDPG) [36].
Nevertheless, each of these methods exhibits distinct advantages and limitations. Convex and nonconvex optimization techniques achieve the fastest convergence rates but are prone to becoming trapped in stationary points. In contrast, evolutionary algorithms demonstrate superior performance by thoroughly exploring the objective function’s landscape, albeit at the cost of significantly slower convergence. DRL typically delivers the best overall performance; however, it demands substantial computational resources, extensive training time, and large datasets.
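As a point of reference for the evolutionary category, the following sketch runs standard PSO over a one-dimensional receive-antenna position to maximize the gain of a toy three-path channel. The path magnitudes, spatial frequencies, aperture, and PSO hyperparameters are illustrative assumptions rather than the setups of [31,32,33].

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 3-path channel gain versus 1-D antenna position x (in wavelengths).
freqs = np.array([0.8, 2.3, 4.1])                            # per-path spatial frequencies
gains = np.array([1.0, 0.7, 0.5]) * np.exp(1j * rng.uniform(0, 2 * np.pi, 3))

def channel_gain(x):
    h = np.exp(2j * np.pi * np.outer(np.atleast_1d(x), freqs)) @ gains
    return np.abs(h) ** 2

# Standard PSO over x in a [0, 2]-wavelength aperture.
n, iters, w, c1, c2 = 20, 60, 0.7, 1.5, 1.5
x = rng.uniform(0, 2, n)
v = np.zeros(n)
pbest, pval = x.copy(), channel_gain(x)       # personal bests
g = pbest[np.argmax(pval)]                    # global best position
for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
    x = np.clip(x + v, 0, 2)                                # stay in the aperture
    val = channel_gain(x)
    better = val > pval
    pbest[better], pval[better] = x[better], val[better]
    g = pbest[np.argmax(pval)]
best_gain = pval.max()
```

Because the gain landscape oscillates with the path frequencies, a population-based search like this explores multiple basins, which is precisely the advantage (and the convergence cost) noted above.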
LLM-driven optimization holds substantial promise for overcoming the limitations of traditional optimization methods. By leveraging the advanced reasoning capabilities and computational power of LLMs, recent work (e.g., [37]) has demonstrated their effectiveness in solving optimization problems. In the following subsections, we review the state-of-the-art LLM-driven optimization approaches relevant to FAS.

3.1. LLM as a Black-Box Optimization Search Model

LLM can explore solution spaces through generative search without requiring explicit problem structure modeling. In particular, LLM can generate candidate solutions without mathematical formulation. Furthermore, LLM can inherently avoid generating infeasible solutions that violate physical laws or common sense constraints.
An example is DeepMind’s Optimization by PROmpting (OPRO) framework [38], which introduces a specialized architecture for mathematical optimization through natural language processing. The framework uniquely transforms optimization objectives into natural language prompts while systematically incorporating historical solving attempts into its reasoning process. By generating new candidate solutions based on previous optimization trajectories and continuously updating its meta-prompt vocabulary with these results, the system creates a self-improving loop that progressively refines the LLM’s problem-solving strategy. This iterative approach enables the model to learn directly from its optimization history while maintaining the flexibility to adapt its search direction based on accumulated experience. However, this approach still exhibits significant limitations:
  • No convergence guarantee. LLM-based black-box optimization offers no convergence guarantee, and solution quality depends critically on prompt engineering and parameter tuning.
  • Susceptibility to local optima. The iterative generation process often converges to suboptimal solutions and exhibits instability in continuous variable optimization.
  • High computational cost. Each optimization iteration requires a complete forward pass of the LLM, making the process substantially less efficient than conventional optimization algorithms.
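The OPRO loop can be sketched as follows. The `propose` stub stands in for the LLM call; in the actual framework, the meta-prompt (the objective description plus the scored solution history) would be sent to the model and a new candidate parsed from its reply. The generate, score, and prune structure around the stub is the point, not the stub itself, and the one-dimensional objective is purely illustrative.

```python
import random

def propose(meta_prompt, history):
    """Stand-in for the LLM call: OPRO would send meta_prompt plus the scored
    history to the model. Here we perturb the best solution so far so that
    the loop is runnable without a model endpoint."""
    if not history:
        return random.uniform(-5, 5)
    best_x, _ = max(history, key=lambda t: t[1])
    return best_x + random.gauss(0, 1.0)

def score(x):
    """Objective to maximize: f(x) = -(x - 2)^2, optimum at x = 2."""
    return -(x - 2.0) ** 2

random.seed(0)
meta_prompt = "Maximize f(x) = -(x-2)^2. Past (solution, score) pairs follow."
history = []                                  # the optimization trajectory fed back each step
for _ in range(200):
    x = propose(meta_prompt, history)
    history.append((x, score(x)))
    history = sorted(history, key=lambda t: t[1])[-20:]   # keep top-20 in the meta-prompt

best_x, best_f = max(history, key=lambda t: t[1])
```

The high-computational-cost limitation above is visible directly in this structure: each of the 200 iterations would be a full LLM forward pass in the real framework.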

3.2. LLM-Guided Deep Reinforcement Learning

Deep reinforcement learning (DRL) employs neural networks to approximate value functions, policy functions, or both, thereby addressing the curse of dimensionality in high-dimensional state or action spaces. The end-to-end learning characteristic of DRL enables direct mapping from raw inputs to action outputs, eliminating the need for complex feature engineering and modular design in traditional RL and thus streamlining the process. Additionally, DRL's nonlinear approximation capability allows it to represent complex strategies, far surpassing the linear functions or tabular methods used in traditional RL. Despite its advantages, DRL inevitably has four weaknesses:
  • Low sample efficiency. DRL demands extensive environmental interaction samples, resulting in high training costs and challenges for real-world physical systems.
  • Difficulty in learning effectively under sparse rewards. DRL struggles to learn efficiently in sparse reward scenarios.
  • High sensitivity to hyperparameters. Extensive experimentation is needed for tuning, and training can be unstable due to unsuitable hyper-parameters.
  • Limited generalization in highly dynamic environments. DRL generalization performance may drop sharply when environmental conditions change significantly.
To combat the above weaknesses of DRL, LLM-guided DRL aims to leverage LLMs to enhance the training efficiency, generalization capability, and interpretability of DRL. LLM-guided DRL brings the semantic understanding, task decomposition, and reasoning abilities of LLMs into DRL, and excels at the following.
  • LLM-generated data for enhancing sample efficiency. LLMs can provide prior knowledge by generating reasonable initial policies or sub-goals based on existing knowledge, thereby reducing random exploration and accelerating DRL convergence. Additionally, LLMs can generate data to simulate expert trajectories. For example, Wang et al. [39] use LLM to generate construction plans while DRL manages execution, drastically cutting training time. For another example, Zhu et al. [40] proposed a novel approach called LAMARL (LLM-aided multi-agent reinforcement learning), which utilizes LLMs to generate prior policies and achieves an average 185.9% improvement in sample efficiency. Most recently, Du et al. [41] introduced RLLI, integrating RL with LLM interaction. It employs LLM-generated feedback as RL rewards to enhance convergence and uses LLM-assisted optimization to improve sample efficiency by reducing redundant computations.
  • LLM for handling sparse rewards. LLM can automatically generate dense reward signals by designing intermediate rewards based on task descriptions to guide the learning process. For example, Ma et al. [42] proposed Eureka, which leverages LLMs to automate and optimize reward functions for DRL tasks.
  • LLM-based hyperparameter optimization. LLM-based hyperparameter optimization automates the tuning process for DRL, efficiently discovering optimal configurations. For example, Liu et al. [43] proposed a framework named AgentHPO, which utilizes LLM agents to automate the hyperparameter optimization process.
  • LLM-enhanced generalization. LLM-guided DRL improves generalization by decomposing tasks and facilitating transfer learning. It breaks complex tasks into familiar subtasks, enabling policy reuse and zero-shot or few-shot adaptation through natural language understanding. For example, Ahn et al. [44] proposed SayCan, an innovative framework that leverages LLMs to interpret high-level instructions and decompose them into executable subtasks, which are then sequentially executed by DRL agents.
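The sparse-reward point above can be made concrete with potential-based shaping, a classical scheme that preserves the optimal policy. In the sketch below, the hand-written potential function plays the role an LLM-designed dense reward would play in a system like Eureka; the chain MDP, the potential, and the Q-learning hyperparameters are all illustrative assumptions.

```python
import random

# Sparse-reward chain MDP: states 0..N, reward 1 only on reaching state N.
N, gamma = 10, 0.95

def phi(s):
    """Dense potential an LLM might propose from the task description
    ('progress toward the goal state'); purely illustrative."""
    return s / N

def shaped_reward(s, s_next, sparse_r):
    # Potential-based shaping: F = gamma*phi(s') - phi(s) preserves the optimal policy.
    return sparse_r + gamma * phi(s_next) - phi(s)

def q_learning(use_shaping, episodes=300, alpha=0.5, eps=0.2, seed=3):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N + 1)]        # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        for _ in range(4 * N):                    # per-episode step limit
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda a: Q[s][a])
            s_next = min(N, s + 1) if a == 1 else max(0, s - 1)
            sparse_r = 1.0 if s_next == N else 0.0
            r = shaped_reward(s, s_next, sparse_r) if use_shaping else sparse_r
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
            if s == N:
                break
    return Q

Q = q_learning(use_shaping=True)
greedy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N)]
```

With shaping, every rightward step yields an immediate positive signal, so the agent learns the "move right" policy long before the sparse goal reward alone would propagate back.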

3.3. LLM-Guided Evolutionary Algorithms

Evolutionary algorithms are a class of biologically inspired, population-based optimization methods. As a key branch of computational intelligence, they excel at solving complex optimization problems by mimicking natural evolutionary processes, such as selection, mutation, and recombination. However, despite their advantages in addressing certain optimization challenges, evolutionary algorithms still exhibit notable weaknesses:
  • Trapped into local optima. Traditional evolutionary algorithms, such as genetic algorithms, often become trapped in suboptimal solutions due to limited population diversity or deceptive fitness landscapes.
  • Curse of dimensionality. Search efficiency declines exponentially as problem dimensionality increases, requiring specialized handling for problems exceeding thousands of dimensions. Certain optimization processes may require tens of thousands of fitness evaluations, leading to high computational expenses.
  • Constraint handling difficulties. Formalizing expert knowledge into fitness function constraints proves challenging.
To combat the above weaknesses, LLM-guided evolutionary algorithms fundamentally reposition LLMs from potential direct solvers to sophisticated design assistants that synergistically combine domain-specific knowledge with advanced analytical capabilities. This approach demonstrates the following distinct advantages over conventional evolutionary algorithms.
  • Helps to escape from local optima. By leveraging LLMs’ capabilities in crossover, mutation, and other exploration operations, LLM-enhanced evolutionary algorithms demonstrate improved ability to escape local optima. For example, as reported in [14], DeepMind developed MindEvolution, which adapts a genetic algorithm to solve natural language problems, using an LLM to perform crossover and mutation on text-based solutions.
  • LLM-powered dimensionality reduction. LLMs leverage natural language to describe complex individuals, mapping high-dimensional spaces into low-dimensional semantic representations and thus overcoming the limitations of traditional numerical encoding. The introduction of LLM can break through the efficiency bottleneck of hyperparameter optimization in evolutionary algorithms. For example, Romera et al. [45] proposed the FunSearch method, which combines LLM and evolutionary algorithms to compress the search space, achieving significant breakthroughs in solving problems such as CapSet. For another example, Hameed et al. [46] proposed an innovative approach leveraging ChatGPT-3.5 and Llama3 to generate optimized particle positions and velocity suggestions, which replace underperforming particles in PSO, thereby reducing model evaluation calls by 20–60% and significantly accelerating convergence.
  • LLM-based semantic constraint processing. LLM can automatically transform expert-described fuzzy constraints into computable mathematical expressions while dynamically adjusting constraint weights and integrating multimodal constraints, thereby eliminating the need for manual constraint design in traditional evolutionary algorithms. For example, Shinohara et al. [47] proposed a method called LMPSO. For constraint handling, this method enables users to directly specify constraints in natural language, with the LLM automatically adhering to these constraint requirements when generating solutions. Additionally, through a meta-prompt mechanism, it supports dynamic adjustment of constraints or heuristic rules during the optimization process.
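A minimal sketch of an LLM-guided genetic algorithm follows, in the spirit of MindEvolution: selection is classical, while crossover and mutation are delegated to a single operator that, in a real system, would be an LLM prompted with the parents and the fitness history. Here that operator is stubbed with a blend-plus-noise heuristic so the loop is runnable; this sketch is not the code released with this paper, and all hyperparameters are illustrative.

```python
import math, random

def rastrigin(x):
    """2-D Rastrigin: global minimum 0 at the origin, many nearby local minima."""
    return 10 * len(x) + sum(xi**2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

def llm_crossover_mutate(p1, p2, rng):
    """Stand-in for the LLM operator. An LLM-enhanced GA would serialize the two
    parents (and the fitness history) into a prompt and ask the model for a
    promising child; a blend plus Gaussian noise keeps this sketch runnable."""
    alpha = rng.random()
    child = [alpha * a + (1 - alpha) * b + rng.gauss(0, 0.3) for a, b in zip(p1, p2)]
    return [min(5.12, max(-5.12, c)) for c in child]   # clip to the Rastrigin box

def ga(pop_size=40, gens=80, seed=7):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.12, 5.12) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=rastrigin)
        elite = pop[: pop_size // 2]                   # truncation selection (elitist)
        children = [llm_crossover_mutate(rng.choice(elite), rng.choice(elite), rng)
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=rastrigin)

best = ga()
```

Swapping `llm_crossover_mutate` for an actual model call is the only structural change needed to turn this into the LLM-enhanced variant evaluated in Section 6.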

3.4. AlphaEvolve-like Approach: LLM as Solution Program Generator and Critic

In May 2025, Google DeepMind proposed AlphaEvolve, a programming agent that combines LLMs with evolutionary algorithms [48]. This agent integrates generative models (e.g., policy networks) and critics (e.g., value networks) to iteratively optimize solutions. This architecture can be adapted to LLM application scenarios, enabling LLMs to serve dual roles as both solution generators and quality evaluators, thereby constructing a self-optimizing closed-loop system.
The core of this AlphaEvolve-like methodology lies in leveraging LLMs as a collaborative system of solution generators and critics, as illustrated in Figure 2. The LLM-based optimization follows an iterative generate–evaluate–refine cycle. First, as a policy network, it generates diverse candidate solutions through multi-path reasoning, with domain-specific fine-tuning improving solution relevance. Then, as a critic, it evaluates solutions across key metrics such as correctness, efficiency, and robustness. The system retains top solutions, refines or discards others, and produces improved variants, repeating until meeting quality standards. Compared to conventional LLM inference, this evolutionary approach offers several distinct advantages:
  • Iterative solution refinement. The optimization solution evolves through iterations, mitigating early-stage error accumulation and outperforming single-pass self-optimization in closed-loop systems.
  • Feedback learning mechanism. This AlphaEvolve-like approach can demonstrate effective learning from performance feedback.
  • Creativity–rigor synergy. This approach successfully integrates the creative generation of optimization solutions with rigorous evaluation protocols.

3.5. Comparison and Discussion

When FAS is deployed in practice, real-time performance and solution quality should be the top priorities. In addition, lower memory and floating-point operation (FLOP) requirements, along with a simple design, are desirable. Real-time performance is crucial, as capturing channel dynamics requires rapid processing. In this regard, a neural network trained via LLM-guided DRL (LLM-DRL) undoubtedly offers fast inference speeds, meeting real-time demands. Furthermore, if evolving from a simple yet fast baseline solution, such as gradient descent, an LLM acting as both a solution generator and critic (LLM-Alpha) has the potential to produce a significantly better program. Solution quality directly governs FAS effectiveness. Here, LLM-DRL and LLM-guided evolutionary search (LLM-EA) likely perform similarly, as both strategies pursue near-complete search space coverage. LLM-Alpha, which searches for optimal programs through extensive computation, demands significantly higher FLOPs and memory resources than other methods due to its exhaustive search requirements. The detailed comparative results are presented in Table 1, where the number of stars is estimated based on our knowledge. For FAS deployment, we recommend LLM-DRL, as its trained model can deliver satisfactory performance with efficient real-time execution.

4. LLM for FAS Network Simulation

With the rapid advancement of FAS, the demand for accurate and efficient traffic simulation tailored to these new network architectures is steadily increasing [49]. Traditional traffic simulation methods are often limited in their ability to model the fine-grained, dynamic nature of FAS networks. These methods struggle to capture complex spatial-temporal effects and the diverse behaviors of users in realistic environments, making them less effective in FAS scenarios [50]. Statistical models such as Poisson and Markov processes also fail to reflect the high variability of traffic patterns in real FAS networks. LLM has demonstrated remarkable capabilities in data modeling, contextual understanding, task planning, and cross-modal reasoning for network simulation [51]. Integrating LLM into FAS network traffic simulation introduces a novel paradigm with significant advantages in the following aspects:
  • Human-like behavior modeling. LLMs are capable of simulating human behaviors in a highly realistic manner, thereby generating network traffic patterns that closely resemble real-world conditions. Human interactions with networks are typically personalized and diverse, involving variations in access time, usage habits, application types, and traffic fluctuations. In complex environments, such behaviors are multi-modal and context-dependent rather than uniform. LLMs can capture such dynamics, learn behavioral diversity and patterns from real-world data, and generate adaptive, anthropomorphic traffic that evolves over time and context.
  • Strong adaptivity to new scenarios. With the integration of LLMs, FAS systems can adapt in real-time to changing network conditions, such as congestion or signal degradation, by modifying user behaviors such as adjusting video quality or pausing downloads. While LLMs generally exhibit some inference latency, they can still simulate dynamic network environments by leveraging pre-trained models and responding quickly to shifts in network conditions. Achieving true real-time adaptability would require specialized hardware or further optimization, such as model compression, to enhance inference speed and efficiency.
  • Controllable and interpretable traffic generation. Unlike traditional black-box models, LLMs offer explicit control over simulation parameters through prompt engineering, such as quality of service (QoS) requirements and hardware limitations, improving the transparency and repeatability of traffic generation.
  • Reduced complexity and improved simulation efficiency. By converting complex simulation tasks into high-level natural language descriptions, LLMs streamline the simulation process, reducing computational burden and improving the speed of generating large-scale simulations.
For example, the recently proposed TrafficLLM framework in [52] employs a fine-tuning mechanism that enables LLMs to generate network traffic based on natural language instructions. This framework outperforms traditional models by improving detection task F1 scores (a measure of a model’s accuracy in classification tasks, considering both precision and recall) by over 10% and reducing the Jensen–Shannon divergence between generated and real traffic by approximately 39.3%. Similarly, the ChatSim framework in [53] utilizes multiple LLM agents working in collaboration to transform natural language descriptions into executable traffic scripts. This system showcases how LLMs can handle complex, multimodal scenarios and generate high-fidelity traffic that aligns with dynamic FAS demands. The flexibility and realism provided by ChatSim highlight the potential of LLMs to revolutionize traffic simulation in FAS environments.
Incorporating LLMs into FAS network traffic simulation not only enhances realism and efficiency but also enables adaptive modeling and real-time optimization. As LLM technology advances, it is expected to drive the development of generative network simulators, creating self-evolving systems that optimize network configurations and operations. This will play a pivotal role in the future of intelligent communication systems, especially in the context of ISAC and 6G networks.

5. Building Specialized LLM Agents for FAS

LLM agents now serve as the core of autonomous systems. These agents combine three critical functions—task planning through goal decomposition and strategy optimization, adaptive memory for both short-term context and long-term knowledge, and tool integration—to extend capabilities beyond pure language processing, enabling everything from software coding to hardware control. In the field of LLM agents, there exist three fundamental patterns:
  • Tool use and planning pattern. The model employs a multi-task coordination and strategic decomposition mechanism to break down objectives into sequentially executed subtasks, dynamically optimizing task priorities through real-time feedback. By integrating external tools, it effectively mitigates hallucination issues and knowledge obsolescence, enabling efficient handling of complex problems. For example, the LLM-Planner proposed by Song et al. [54] employs a hierarchical planning and dynamic re-planning coordination framework, significantly enhancing agent decision-making capabilities in complex scenarios. Experimental results on the ALFRED benchmark demonstrate the system’s strong generalization ability and dynamic environment adaptation under few-shot conditions.
  • ReAct pattern. The ReAct pattern enables cognitive–behavioral unity through its “reason–plan–act–optimize” loop, creating self-correcting intelligence via dynamic task processing. This paradigm moves beyond one-way generation, achieving “knowledge–action unity” through environmental interaction and continuous optimization. For example, Wang et al. [55] proposed an LLM-based agent framework that generates hyperparameter optimization strategies through a multi-path reasoning mechanism while dynamically integrating environmental feedback for strategy adaptation. The framework employs an enhanced WS-PSO-CM algorithm for hyperparameter evaluation, establishing an advanced ReAct pattern. Experimental results demonstrate that compared to conventional manual heuristic methods, this framework achieves a remarkable 54.34% performance improvement in hyperparameter optimization tasks, exhibiting significant optimization effectiveness.
  • Multi-agent pattern. The multi-agent pattern establishes a complex task-processing system through the collaboration of multiple agents. Task execution in multi-agent systems can be parallelized, unlike in single-agent systems where it is sequential, thereby improving efficiency and reducing latency. For example, Lowe et al. [56] proposed the MADDPG algorithm, which introduces a novel centralized training with decentralized execution (CTDE) paradigm. During training, each agent’s critic network receives the concatenated observations and actions of all agents as input, enabling the learning of global coordination patterns while maintaining policy independence. During execution, agents rely solely on local observations to make autonomous decisions, achieving full decentralization. Experimental results demonstrate this framework’s superior capability in addressing the non-stationarity challenges inherent in multi-agent reinforcement learning (MARL) environments.
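The ReAct loop described above can be sketched in a few lines of Python. The bisection `toy_policy` and the feedback `environment` below are purely illustrative stand-ins for an LLM and a real task; they are our assumptions, not part of any cited framework:

```python
# Minimal ReAct-style loop: reason about the last observation, act, observe,
# and refine. A toy bisection "policy" stands in for an LLM (illustrative only).

def toy_policy(lo, hi):
    """Reason step: propose the midpoint of the current search interval."""
    return (lo + hi) / 2.0

def environment(guess, target):
    """Act/observe step: the environment returns directional feedback."""
    if abs(guess - target) < 1e-3:
        return "done"
    return "higher" if guess < target else "lower"

def react_loop(target, lo=0.0, hi=100.0, max_steps=50):
    trajectory = []                            # memory of (action, feedback)
    guess = toy_policy(lo, hi)
    for _ in range(max_steps):
        guess = toy_policy(lo, hi)             # reason + plan
        feedback = environment(guess, target)  # act + observe
        trajectory.append((guess, feedback))
        if feedback == "done":
            break
        if feedback == "higher":               # optimize: shrink the interval
            lo = guess
        else:
            hi = guess
    return guess, trajectory

best, steps = react_loop(target=42.0)
```

The self-correcting behavior comes entirely from feeding each observation back into the next reasoning step, which is the essence of the pattern regardless of how the policy is implemented.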
The application of LLM agents in FAS enables automated adjustment, significantly outperforming traditional human-experience-dependent design approaches while adapting to rapidly changing communication environments. By monitoring CSI and interference conditions in real time, the agent can autonomously optimize antenna configurations to dynamically enhance signal quality without relying on fixed rules or manual intervention. Its data-driven decision-making overcomes the inherent limitations of human expertise and response latency in conventional design while also improving energy efficiency, coverage, and interference mitigation.

6. Experiment Study on LLM-Enhanced Genetic Algorithm for FAS

6.1. System Model and Problem Formulation

The considered system consists of one transmitter and one receiver, each equipped with a single fluid antenna. According to the field response model [57], the channel between the transmitter and the receiver is given by
$$h(\mathbf{v}, \mathbf{u}) = \mathbf{f}^H(\mathbf{v}) \, \boldsymbol{\Sigma} \, \mathbf{g}(\mathbf{u}), \tag{1}$$
where $\mathbf{f}(\mathbf{v}) = [e^{j\frac{2\pi}{\lambda}\rho_1^r(\mathbf{v})}, \ldots, e^{j\frac{2\pi}{\lambda}\rho_L^r(\mathbf{v})}]^T \in \mathbb{C}^{L\times 1}$ denotes the field response vector at the receiver and $\mathbf{v} = (x_r, y_r, z_r)$ denotes the position of the receive antenna; $\boldsymbol{\Sigma} = \operatorname{diag}\{\sigma_1, \ldots, \sigma_L\} \in \mathbb{C}^{L\times L}$ denotes the path response matrix with $L$ paths and $\sigma_j \sim \mathcal{CN}(0, g_0 d^{-\alpha}/L)$, $j \in [L]$; and $\mathbf{g}(\mathbf{u}) = [e^{j\frac{2\pi}{\lambda}\rho_1^t(\mathbf{u})}, \ldots, e^{j\frac{2\pi}{\lambda}\rho_L^t(\mathbf{u})}]^T \in \mathbb{C}^{L\times 1}$ denotes the field response vector at the transmitter, with $\mathbf{u} = (x_t, y_t, z_t)$ the position of the transmit antenna. In particular, $\rho_j^r(\mathbf{v}) = x_r\eta_j^r + y_r\beta_j^r + z_r\omega_j^r$ denotes the phase for path $j \in [L]$ at the receiver side, where, with pitch angle $\theta_j^r \in [-\pi/2, \pi/2]$ and azimuth angle $\phi_j^r \in [-\pi/2, \pi/2]$ for path $j$ at the receiver side,
$$\eta_j^r = \cos\theta_j^r \cos\phi_j^r, \quad \beta_j^r = \cos\theta_j^r \sin\phi_j^r, \quad \omega_j^r = \sin\theta_j^r. \tag{2}$$
Moreover, $\rho_i^t(\mathbf{u}) = x_t\eta_i^t + y_t\beta_i^t + z_t\omega_i^t$ denotes the phase for path $i \in [L]$ at the transmitter side, where, with pitch angle $\theta_i^t \in [-\pi/2, \pi/2]$ and azimuth angle $\phi_i^t \in [-\pi/2, \pi/2]$ for path $i$ at the transmitter side,
$$\eta_i^t = \cos\theta_i^t \cos\phi_i^t, \quad \beta_i^t = \cos\theta_i^t \sin\phi_i^t, \quad \omega_i^t = \sin\theta_i^t. \tag{3}$$
We aim to maximize the rate of this point-to-point FAS by optimizing the positions of the fluid antennas at the transmitter and receiver. This problem is mathematically formulated as
$$\mathcal{P}_0:\ \max_{\mathbf{v},\,\mathbf{u}}\ \log_2\!\left(1 + \frac{P\,|h(\mathbf{v},\mathbf{u})|^2}{\sigma^2}\right) \quad \text{s.t.}\ \mathbf{v} \in \mathcal{C}_r,\ \mathbf{u} \in \mathcal{C}_t, \tag{4}$$
where $P$ denotes the transmit power, $\sigma^2$ denotes the variance of the additive white Gaussian noise (AWGN), $\mathcal{C}_r$ denotes the movable region of the receive antenna, and $\mathcal{C}_t$ denotes the movable region of the transmit antenna. Note that Problem $\mathcal{P}_0$ is non-convex.
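As a concrete illustration, the field response channel model and the rate objective of Problem P0 can be evaluated in a few lines of NumPy. All numerical values below (wavelength, number of paths, large-scale parameters, transmit power, and noise variance) are illustrative assumptions, not values used in the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 0.01                       # carrier wavelength in meters (illustrative)
L = 3                            # number of propagation paths
g0, d, alpha = 1.0, 10.0, 2.0    # illustrative large-scale parameters

# Random pitch/azimuth angles in [-pi/2, pi/2] for receiver and transmitter
theta_r, phi_r = rng.uniform(-np.pi / 2, np.pi / 2, (2, L))
theta_t, phi_t = rng.uniform(-np.pi / 2, np.pi / 2, (2, L))

# Complex path gains sigma_j ~ CN(0, g0 * d^{-alpha} / L)
var = g0 * d ** (-alpha) / L
sigma = np.sqrt(var / 2) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
Sigma = np.diag(sigma)

def field_response(pos, theta, phi):
    """Field response vector: one phase term per path at antenna position pos."""
    x, y, z = pos
    rho = (x * np.cos(theta) * np.cos(phi)
           + y * np.cos(theta) * np.sin(phi)
           + z * np.sin(theta))
    return np.exp(1j * 2 * np.pi / lam * rho)

def rate(v, u, P=1.0, noise=1e-4):
    """Objective of Problem P0: achievable rate for antenna positions v and u."""
    f = field_response(v, theta_r, phi_r)
    g = field_response(u, theta_t, phi_t)
    h = f.conj() @ Sigma @ g      # h(v,u) = f^H(v) Sigma g(u)
    return np.log2(1 + P * abs(h) ** 2 / noise)

r = rate(v=(0.0, 0.0, 0.0), u=(0.0, 0.0, 0.0))
```

This `rate` function is exactly the kind of black-box objective that the genetic algorithms discussed in the next subsections optimize over the movable regions.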

6.2. Traditional Genetic Algorithm

The genetic algorithm is an evolutionary optimization technique inspired by natural selection and genetics [58]. It works by maintaining a population of candidate solutions, which evolve over generations through selection, crossover, and mutation, and is particularly useful for solving complex, non-linear, and non-convex problems. Specifically, the process begins by initializing a population with random or heuristic-based individuals, each encoded as a chromosome (binary, real-valued, etc.). A fitness function evaluates each solution’s quality, guiding selection, where high-fitness individuals are more likely to reproduce. Selected parents undergo crossover, exchanging genetic material to produce offspring, while mutation introduces small random changes to maintain diversity. This cycle repeats until a termination condition (e.g., maximum generations or convergence) is satisfied.
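The evolutionary loop described above can be sketched as follows. The population size, operator rates, and tournament selection below are illustrative choices, not the exact settings of our experiments:

```python
import random

random.seed(1)  # for reproducibility of this sketch

def genetic_algorithm(fitness, dim, bounds=(-5.12, 5.12),
                      pop_size=30, generations=100,
                      crossover_rate=0.9, mutation_rate=0.1):
    """Minimize `fitness` with a real-coded GA: selection, crossover, mutation."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: keep the better of two random individuals
            a, b = random.sample(pop, 2)
            return a if fitness(a) < fitness(b) else b
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = select(), select()
            if random.random() < crossover_rate:      # uniform crossover
                child = [random.choice(pair) for pair in zip(p1, p2)]
            else:
                child = list(p1)
            for i in range(dim):                      # Gaussian mutation
                if random.random() < mutation_rate:
                    child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.3)))
            offspring.append(child)
        pop = offspring
    return min(pop, key=fitness)

# Sanity check on a simple sphere objective
best = genetic_algorithm(lambda x: sum(v * v for v in x), dim=5)
```

The fixed crossover and mutation probabilities here are precisely the components that the LLM-enhanced variant in the next subsection replaces.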

6.3. LLM-Enhanced Genetic Algorithm

Motivated by the idea introduced in [14], we designed an LLM-enhanced genetic algorithm that leverages large language models to redefine evolutionary operations. Instead of traditional crossover and mutation, LLMs generate and refine candidate solutions by interpreting contextual prompts, enabling more intelligent exploration of the search space. We compare the traditional genetic algorithm with the proposed LLM-enhanced genetic algorithm in Figure 3: the crossover and mutation operations of the traditional genetic algorithm are replaced by the LLM in the LLM-enhanced genetic algorithm. The LLM is aware of the objective function and constraints; together with historical optimization knowledge, it can perform crossover and mutation superior to traditional randomized operators. This point was validated by the experiments described below.
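The division of labor in this design can be sketched as follows. Here `llm_propose` is a hypothetical stand-in for an actual LLM call (e.g., to DeepSeek R1): in the real algorithm, the prompt carries the objective, constraints, and elite history to the model, which returns new candidate vectors; this runnable placeholder merely imitates an informed recombination so the skeleton executes:

```python
import random

random.seed(2)  # for reproducibility of this sketch

def llm_propose(prompt, elites, n_children):
    """Hypothetical LLM call: in the real system the prompt (objective,
    constraints, elite history) is sent to an LLM, which returns candidate
    vectors. This placeholder recombines elites with small guided noise."""
    children = []
    for _ in range(n_children):
        p1, p2 = random.sample(elites, 2)
        w = random.random()
        child = [w * a + (1 - w) * b + random.gauss(0, 0.05)
                 for a, b in zip(p1, p2)]
        children.append(child)
    return children

def llm_enhanced_ga(fitness, dim, bounds=(-5.12, 5.12),
                    pop_size=30, generations=100, elite_frac=0.3):
    """GA skeleton in which crossover/mutation are replaced by LLM proposals."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elites = pop[:max(2, int(elite_frac * pop_size))]
        prompt = "minimize f; best so far: %.4f" % fitness(elites[0])
        pop = elites + llm_propose(prompt, elites, pop_size - len(elites))
    return min(pop, key=fitness)

best = llm_enhanced_ga(lambda x: sum(v * v for v in x), dim=5)
```

The key structural difference from the traditional loop is that the variation step receives context (objective description and elite solutions) rather than applying fixed-probability random operators.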

6.4. Experiment Study

To examine the performance of the LLM-enhanced genetic algorithm, we performed experiments on two objective functions, implementing the algorithm on DeepSeek R1. First, we considered the celebrated Rastrigin function, a classic multimodal test function commonly used to evaluate the performance of optimization algorithms (such as genetic algorithms and particle swarm optimization). It features a large number of local minima within the search space, with the global minimum located at the origin (or a specified offset point), making it well suited for testing an algorithm’s global search capability. The Rastrigin function is given by
$$f(\mathbf{x}) = 10n + \sum_{i=1}^{n} \left(x_i^2 - 10\cos(2\pi x_i)\right). \tag{5}$$
Next, we minimized the Rastrigin function with $\mathbf{x} = (x_1, \ldots, x_5)$ using a population size of 30. As shown in Figure 4, the LLM-enhanced genetic algorithm exhibited a significant advantage over the traditional genetic algorithm: the two algorithms achieved minima of 3.4642 and 14.3703, respectively, a 75.9% improvement. The performance gain may come from the following: (1) The traditional genetic algorithm relies on fixed-probability mutation and crossover rules, which often lead to local optima. In contrast, the LLM-enhanced algorithm dynamically analyzes population diversity and fitness distribution to intelligently adjust mutation rates or design superior crossover schemes. (2) The traditional genetic algorithm relies on numerical perturbations (e.g., Gaussian mutation) that lack directional guidance, whereas the LLM-enhanced method leverages natural language understanding to interpret problem constraints and objectives, generating semantically valid candidate solutions.
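For reference, the Rastrigin objective is straightforward to implement; its global minimum f(0) = 0 provides a quick sanity check:

```python
import math

def rastrigin(x):
    """Rastrigin function: f(x) = 10n + sum_i (x_i^2 - 10 cos(2 pi x_i))."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

# Global minimum at the origin
print(rastrigin([0.0] * 5))   # → 0.0
```

The cosine term creates a regular grid of local minima around the global one, which is what makes fixed-probability mutation prone to stalling on this function.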
$$h(y_t, z_t, y_r, z_r) = \sum_{i=1}^{3} A_i \exp\!\left(j\omega\gamma_i(y_t, z_t, y_r, z_r)\right), \tag{6}$$
where $\omega = 1256.6$, $\mathbf{A} = [0.634, 0.1768, 0.1768]$, and
$$\gamma_1 = 0.1545 + 0.4755\,y_t + 0.8660\,z_t + 0.3536\,y_r + 0.7071\,z_r,$$
$$\gamma_2 = 0.6371 + 0.3068\,y_t + 0.7071\,z_t + 0.3510\,y_r + 0.5878\,z_r,$$
$$\gamma_3 = 0.6123 + 0.6123\,y_t + 0.5000\,z_t + 0.2939\,y_r + 0.8660\,z_r.$$
Finally, we shifted our focus to FAS, using a population size of 10. As a representative case of the field response model, we considered the three-path channel shown in (6). The function has a theoretical upper bound, $|h(y_t, z_t, y_r, z_r)| = |\sum_{i=1}^{3} A_i \exp(j\omega\gamma_i)| \le \sum_{i=1}^{3} A_i = 0.9856$, obtained by the triangle inequality. The detailed parameters of this three-path channel are given in Table 2. Figure 5 shows that the LLM-enhanced genetic algorithm achieved performance levels similar to those of the traditional genetic algorithm, with both approaching the theoretical optimum. We attribute this to the structure of the objective function (6), which can be transformed into a summation of multiple cosine functions, yielding numerous local optima with comparable performance.
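The triangle-inequality bound used above can be verified numerically. The sketch below evaluates |h| for the three-path channel (6) at a sample position and checks it against the sum of the path amplitudes (the sample position is our arbitrary choice):

```python
import cmath

# Parameters of the three-path channel in (6)
omega = 1256.6
A = [0.634, 0.1768, 0.1768]
gamma_coeff = [  # [constant, y_t, z_t, y_r, z_r] for gamma_1..gamma_3
    [0.1545, 0.4755, 0.8660, 0.3536, 0.7071],
    [0.6371, 0.3068, 0.7071, 0.3510, 0.5878],
    [0.6123, 0.6123, 0.5000, 0.2939, 0.8660],
]

def channel(y_t, z_t, y_r, z_r):
    """Three-path channel h = sum_i A_i exp(j * omega * gamma_i)."""
    pos = [1.0, y_t, z_t, y_r, z_r]   # leading 1.0 picks up the constant term
    return sum(a * cmath.exp(1j * omega * sum(c * p for c, p in zip(coeff, pos)))
               for a, coeff in zip(A, gamma_coeff))

h = channel(0.01, 0.02, 0.03, 0.04)
bound = sum(A)   # triangle inequality: |h| <= sum_i A_i
```

Since each summand has magnitude exactly A_i, no antenna placement can push |h| above this bound; the optimization only aligns the three phases as closely as possible.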

7. Challenges and Research Opportunities

Despite the promising potential of LLMs in enabling FAS, several significant challenges remain that need to be addressed before their full integration into real-world systems. These challenges span technical, practical, and conceptual domains, and overcoming them will be essential for realizing the full potential of LLM-powered FAS:
  • Fusion and alignment of multi-modal data. FAS environments involve a variety of heterogeneous information sources, such as antenna configurations, CSI, environmental semantics, and user behavior patterns. However, most LLMs are primarily trained on text data, making it difficult to align and integrate multimodal inputs (e.g., numerical, visual, and semantic data). Developing unified frameworks capable of processing and aligning these diverse data sources remains a significant technical challenge [59]. Bridging this gap will require the development of sophisticated multimodal learning systems that can handle both structured data (e.g., CSI) and unstructured data (e.g., user behavior) seamlessly.
  • Low-latency inference and computational efficiency. FASs often operate in edge environments with limited computational resources and stringent latency requirements. Current LLMs are computationally expensive and exhibit high inference latency, limiting their real-time adaptability in dynamic FAS scenarios. Lightweight approaches such as LoRA and mixture-of-experts (MoE) models have shown promise in improving efficiency, but further research is required to optimize these models for low-latency inference in resource-constrained environments [60]. The development of efficient model architectures that can provide high-performance inference with minimal computational overhead will be crucial for enabling real-time FAS applications.
  • Security and interpretability. While LLM-driven optimization in FAS offers greater flexibility and autonomy, the black-box nature of these models presents significant risks, especially in mission-critical communication scenarios [61]. Unpredictable behaviors could arise if LLMs are left uncontrolled, leading to potential instability in system performance. Therefore, building interpretable, constraint-aware LLM controllers that allow users to understand and predict model decisions is essential for ensuring safe and trustworthy deployment. Research focused on enhancing the transparency and robustness of LLMs will play a key role in addressing these concerns.
  • Simulation fidelity and generalization. Despite the success of generative traffic models such as TrafficLLM and ChatSim in controlled environments, their performance in complex, real-world FAS networks with bursty interference or rare user scenarios remains limited. These models often struggle to generalize beyond the specific datasets they were trained on, particularly when exposed to unpredicted network conditions. Expanding and diversifying datasets, along with creating practical test beds that closely align with real-world FAS scenarios, is essential for improving the robustness and generalization of these models.
  • Standardization and evaluation of LLM-based agents. Current LLM agent designs for FAS are still fragmented, with significant variation in architectural choices, interaction protocols, and tool integrations across different use cases. There is a pressing need for a unified framework that allows for the benchmarking, evaluation, and development of reusable, FAS-specific intelligent agents [62]. Establishing standardized evaluation criteria and fostering collaboration within the research community will help streamline progress in this area and enable fair comparisons across different approaches.
Looking forward, several promising research opportunities can be pursued to overcome the aforementioned challenges:
  • Hybrid optimization frameworks. Integrating LLMs with black-box optimization search models, reinforcement learning, evolutionary algorithms, and AlphaEvolve-like approaches can lead to hybrid intelligence frameworks that improve both the adaptability and efficiency of FASs in dynamic environments.
  • Self-optimizing, continuously evolving autonomous agents. Developing such agents for FAS, such as an Auto-Agent, that adapt in real time based on feedback from the network environment will be crucial for building intelligent, autonomous systems capable of optimizing FAS operations without human intervention.
  • Generalizable simulation libraries and benchmark environments. Constructing generalizable simulation libraries and benchmark environments specifically designed for FAS, 6G, and ISAC scenarios will provide a strong foundation for evaluating and comparing various traffic simulation models. These platforms will be critical for validating new techniques and ensuring their effectiveness in real-world deployments.
  • LLM-empowered FAS-ISAC. Leveraging LLMs to enhance joint sensing and communication systems within FAS-ISAC networks offers significant potential for intelligent environmental perception, adaptive waveform optimization, and real-time data fusion. Such integrations will enable next-generation integrated networks that are more efficient, intelligent, and adaptable to the ever-changing network environment.
In addition to the technical challenges mentioned, the practical deployment of LLM-enhanced FAS faces critical issues related to model compression, fine-tuning, and domain adaptation, which require in-depth exploration for successful real-world applications. LLMs, while powerful, are often too large to be deployed effectively in resource-constrained environments such as edge devices. Model compression techniques [63], such as pruning, quantization, and knowledge distillation, offer potential solutions for reducing the size of these models without sacrificing performance. However, these approaches introduce challenges in maintaining accuracy, especially in dynamic FAS environments where real-time adaptation to user behavior is essential. Fine-tuning LLMs to specific network conditions and traffic scenarios is another avenue for improvement, enabling LLMs to perform optimally in specialized FAS contexts. This fine-tuning process requires continuous learning from real-world data, which adds complexity but is necessary for real-world deployment.
Another significant challenge lies in domain adaptation [64]. Most LLMs are pre-trained on generic datasets, which may not fully capture the diverse and dynamic nature of FAS network traffic. The ability of LLMs to adapt to specific domains, such as rural vs. urban environments or varying user behaviors in different regions, is crucial. Incorporating domain-specific data for training and fine-tuning LLMs ensures that models can generalize well across different FAS environments. This capability is particularly important for low-latency inference, where real-time decisions based on local network conditions are necessary. As FASs are often deployed in edge environments with stringent latency and computational resource limitations, optimizing computational efficiency is paramount. Research on lightweight models and edge-aware architectures is critical for ensuring that LLMs can provide real-time responses without excessive computational overhead. Techniques such as model distillation or parameter sharing across models could help mitigate the computational load while maintaining performance. Moreover, integrating LLMs with low-latency communication protocols will ensure that the system can adapt to rapid network changes, thereby improving the overall responsiveness of FAS in dynamic environments.

8. Conclusions

This survey examined the emerging role of LLMs in advancing FAS, demonstrating their effectiveness in overcoming key technical challenges. While these LLM-driven approaches show significant potential for optimizing channel estimation, antenna positioning, and network simulation, substantial work remains to address computational demands and ensure reliable deployment. The continued development of efficient algorithms and rigorous validation methods will be crucial for realizing practical implementations. As research progresses, the integration of artificial intelligence with reconfigurable antenna technology promises to transform future wireless communication systems.

Author Contributions

Writing—original draft, T.D. and Y.G.; Writing—review & editing, T.D., T.Z. and Y.G.; Supervision, T.Z., M.S., W.N. and H.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 62401340, in part by the Open Research Fund of the National Mobile Communications Research Laboratory, Southeast University, under Grant 2025D07, in part by the Fundamental Research Funds for the Central Universities under Grant 2242025R10001, and in part by the Natural Science Foundation of Shandong Province under Grant ZR2023QF103.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Wong, K.K.; Shojaeifard, A.; Tong, K.F.; Zhang, Y. Fluid Antenna Systems. IEEE Trans. Wirel. Commun. 2021, 20, 1950–1962. [Google Scholar] [CrossRef]
  2. Wong, K.K.; Tong, K.F. Fluid Antenna Multiple Access. IEEE Trans. Wirel. Commun. 2022, 21, 4801–4815. [Google Scholar] [CrossRef]
  3. Wong, K.K.; Shojaeifard, A.; Tong, K.F.; Zhang, Y. Performance Limits of Fluid Antenna Systems. IEEE Commun. Lett. 2020, 24, 2469–2472. [Google Scholar] [CrossRef]
  4. Wong, K.K.; Chae, C.B.; Tong, K.F. Compact Ultra Massive Antenna Array: A Simple Open-Loop Massive Connectivity Scheme. IEEE Trans. Wirel. Commun. 2024, 23, 6279–6294. [Google Scholar] [CrossRef]
  5. Xu, H.; Wong, K.K.; New, W.K.; Ghadi, F.R.; Zhou, G.; Murch, R.; Chae, C.B.; Zhu, Y.; Jin, S. Capacity Maximization for FAS-assisted Multiple Access Channels. IEEE Trans. Commun. 2024, 73, 4713–4731. [Google Scholar] [CrossRef]
  6. Zhang, Q.; Shao, M.; Zhang, T.; Chen, G.; Liu, J.; Ching, P.C. An Efficient Sum-Rate Maximization Algorithm for Fluid Antenna-Assisted ISAC System. IEEE Commun. Lett. 2025, 29, 200–204. [Google Scholar] [CrossRef]
  7. Xu, Z.; Zheng, T.; Dai, L. LLM-Empowered Near-Field Communications for Low-Altitude Economy. IEEE Trans. Commun. 2025, 1. [Google Scholar] [CrossRef]
  8. Liu, Q.; Mu, J.; Chen, D.; Zhang, R.; Liu, Y.; Hong, T. LLM Enhanced Reconfigurable Intelligent Surface for Energy-Efficient and Reliable 6G IoV. IEEE Trans. Veh. Technol. 2025, 74, 1830–1838. [Google Scholar] [CrossRef]
  9. Li, H.; Xiao, M.; Wang, K.; Kim, D.I.; Debbah, M. Large Language Model Based Multi-Objective Optimization for Integrated Sensing and Communications in UAV Networks. IEEE Wirel. Commun. Lett. 2025, 14, 979–983. [Google Scholar] [CrossRef]
  10. Quan, H.; Ni, W.; Zhang, T.; Ye, X.; Xie, Z.; Wang, S.; Liu, Y.; Song, H. Large Language Model Agents for Radio Map Generation and Wireless Network Planning. IEEE Netw. Lett. 2025, 1. [Google Scholar] [CrossRef]
  11. Wu, D.; Han, W.; Liu, Y.; Wang, T.; Xu, C.z.; Zhang, X.; Shen, J. Language prompt for autonomous driving. In Proceedings of the AAAI Conference on Artificial Intelligence, Philadelphia, PA, USA, 25 February–4 March 2025; Volume 39, pp. 8359–8367. [Google Scholar]
  12. Kou, W.B.; Zhu, G.; Ye, R.; Wang, S.; Tang, M.; Wu, Y.C. Label Anything: An Interpretable, High-Fidelity and Prompt-Free Annotator. arXiv 2025, arXiv:2502.02972. [Google Scholar] [CrossRef]
  13. Kou, W.B.; Lin, Q.; Tang, M.; Lei, J.; Wang, S.; Ye, R.; Zhu, G.; Wu, Y.C. Enhancing Large Vision Model in Street Scene Semantic Understanding through Leveraging Posterior Optimization Trajectory. arXiv 2025, arXiv:2501.01710. [Google Scholar] [CrossRef]
  14. Lee, K.H.; Fischer, I.; Wu, Y.H.; Marwood, D.; Baluja, S.; Schuurmans, D.; Chen, X. Evolving deeper llm thinking. arXiv 2025, arXiv:2501.09891. [Google Scholar]
  15. Skouroumounis, C.; Krikidis, I. Skip-Enabled LMMSE-Based Channel Estimation for Large-Scale Fluid Antenna-Enabled Cellular Networks. In Proceedings of the ICC 2023-IEEE International Conference on Communications, Rome, Italy, 28 May–1 June 2023; pp. 2779–2784. [Google Scholar] [CrossRef]
  16. Ma, W.; Zhu, L.; Zhang, R. Compressed Sensing Based Channel Estimation for Movable Antenna Communications. IEEE Commun. Lett. 2023, 27, 2747–2751. [Google Scholar] [CrossRef]
  17. Zhang, Z.; Zhu, J.; Dai, L.; Heath, R.W. Successive Bayesian Reconstructor for Channel Estimation in Fluid Antenna Systems. IEEE Trans. Wirel. Commun. 2025, 24, 1992–2006. [Google Scholar] [CrossRef]
  18. Xu, B.; Chen, Y.; Cui, Q.; Tao, X.; Wong, K.K. Sparse Bayesian Learning-Based Channel Estimation for Fluid Antenna Systems. IEEE Wirel. Commun. Lett. 2025, 14, 325–329. [Google Scholar] [CrossRef]
  19. Xiao, Z.; Cao, S.; Zhu, L.; Liu, Y.; Ning, B.; Xia, X.G.; Zhang, R. Channel Estimation for Movable Antenna Communication Systems: A Framework Based on Compressed Sensing. IEEE Trans. Wirel. Commun. 2024, 23, 11814–11830. [Google Scholar] [CrossRef]
  20. Ji, S.; Psomas, C.; Thompson, J. Correlation-Based Machine Learning Techniques for Channel Estimation with Fluid Antennas. In Proceedings of the ICASSP 2024–2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Republic of Korea, 14–19 April 2024; pp. 8891–8895. [Google Scholar] [CrossRef]
  21. Zhang, H.; Wang, J.; Wang, C.; Wang, C.C.; Wong, K.K.; Wang, B.; Chae, C.B. Learning-Induced Channel Extrapolation for Fluid Antenna Systems Using Asymmetric Graph Masked Autoencoder. IEEE Wirel. Commun. Lett. 2024, 13, 1665–1669. [Google Scholar] [CrossRef]
  22. Eskandari, M.; Burr, A.G.; Cumanan, K.; Wong, K.K. cGAN-Based Slow Fluid Antenna Multiple Access. IEEE Wirel. Commun. Lett. 2024, 13, 2907–2911. [Google Scholar] [CrossRef]
  23. Tang, E.; Guo, W.; He, H.; Song, S.; Zhang, J.; Letaief, K.B. Accurate and Fast Channel Estimation for Fluid Antenna Systems with Diffusion Models. arXiv 2025, arXiv:2505.04930. [Google Scholar] [CrossRef]
  24. Liu, B.; Liu, X.; Gao, S.; Cheng, X.; Yang, L. LLM4CP: Adapting Large Language Models for Channel Prediction. J. Commun. Inf. Netw. 2024, 9, 113–125. [Google Scholar] [CrossRef]
  25. Yang, H.; Lambotharan, S.; Derakhshani, M. FAS-LLM: Large Language Model-Based Channel Prediction for OTFS-Enabled Satellite-FAS Links. arXiv 2025, arXiv:2505.09751. [Google Scholar] [CrossRef]
  26. Li, Z.; Yang, Q.; Xiong, Z.; Shi, Z.; Quek, T.Q.S. Bridging the Modality Gap: Enhancing Channel Prediction with Semantically Aligned LLMs and Knowledge Distillation. arXiv 2025, arXiv:2505.12729. [Google Scholar] [CrossRef]
  27. Cheng, Z.; Li, N.; Zhu, J.; She, X.; Ouyang, C.; Chen, P. Sum-Rate Maximization for Fluid Antenna Enabled Multiuser Communications. IEEE Commun. Lett. 2024, 28, 1206–1210. [Google Scholar] [CrossRef]
  28. Chen, Y.; Chen, M.; Xu, H.; Yang, Z.; Wong, K.K.; Zhang, Z. Joint Beamforming and Antenna Design for Near-Field Fluid Antenna System. IEEE Wirel. Commun. Lett. 2025, 14, 415–419. [Google Scholar] [CrossRef]
  29. Qin, H.; Chen, W.; Li, Z.; Wu, Q.; Cheng, N.; Chen, F. Antenna Positioning and Beamforming Design for Fluid Antenna-Assisted Multi-User Downlink Communications. IEEE Wirel. Commun. Lett. 2024, 13, 1073–1077. [Google Scholar] [CrossRef]
  30. Yao, J.; Xin, L.; Wu, T.; Jin, M.; Wong, K.K.; Yuen, C.; Shin, H. FAS for Secure and Covert Communications. IEEE Internet Things J. 2025, 12, 18414–18418. [Google Scholar] [CrossRef]
  31. Xiao, Z.; Pi, X.; Zhu, L.; Xia, X.G.; Zhang, R. Multiuser Communications with Movable-Antenna Base Station: Joint Antenna Positioning, Receive Combining, and Power Control. IEEE Trans. Wirel. Commun. 2024, 23, 19744–19759. [Google Scholar] [CrossRef]
  32. Ding, J.; Zhou, Z.; Jiao, B. Movable Antenna-Aided Secure Full-Duplex Multi-User Communications. IEEE Trans. Wirel. Commun. 2025, 24, 2389–2403. [Google Scholar] [CrossRef]
  33. Guan, J.; Lyu, B.; Liu, Y.; Tian, F. Secure Transmission for Movable Antennas Empowered Cell-Free Symbiotic Radio Communications. In Proceedings of the 2024 16th International Conference on Wireless Communications and Signal Processing (WCSP), Hefei, China, 24–26 October 2024; pp. 578–584. [Google Scholar] [CrossRef]
  34. Wang, C.; Li, G.; Zhang, H.; Wong, K.K.; Li, Z.; Ng, D.W.K.; Chae, C.B. Fluid Antenna System Liberating Multiuser MIMO for ISAC via Deep Reinforcement Learning. IEEE Trans. Wirel. Commun. 2024, 23, 10879–10894. [Google Scholar] [CrossRef]
  35. Waqar, N.; Wong, K.K.; Chae, C.B.; Murch, R.; Jin, S.; Sharples, A. Opportunistic Fluid Antenna Multiple Access via Team-Inspired Reinforcement Learning. IEEE Trans. Wirel. Commun. 2024, 23, 12068–12083. [Google Scholar] [CrossRef]
  36. Weng, C.; Chen, Y.; Zhu, L.; Wang, Y. Learning-Based Joint Beamforming and Antenna Movement Design for Movable Antenna Systems. IEEE Wirel. Commun. Lett. 2024, 13, 2120–2124. [Google Scholar] [CrossRef]
  37. Huang, S.; Yang, K.; Qi, S.; Wang, R. When large language model meets optimization. Swarm Evol. Comput. 2024, 90, 101663. [Google Scholar] [CrossRef]
  38. Yang, C.; Wang, X.; Lu, Y.; Liu, H.; Le, Q.V.; Zhou, D.; Chen, X. Large language models as optimizers. arXiv 2023, arXiv:2309.03409. [Google Scholar] [PubMed]
  39. Wang, G.; Xie, Y.; Jiang, Y.; Mandlekar, A.; Xiao, C.; Zhu, Y.; Fan, L.; Anandkumar, A. Voyager: An open-ended embodied agent with large language models. arXiv 2023, arXiv:2305.16291. [Google Scholar] [CrossRef]
  40. Zhu, G.; Zhou, R.; Ji, W.; Zhao, S. LAMARL: LLM-Aided Multi-Agent Reinforcement Learning for Cooperative Policy Generation. IEEE Robot. Autom. Lett. 2025, 10, 7476–7483. [Google Scholar] [CrossRef]
  41. Du, H.; Zhang, R.; Niyato, D.; Kang, J.; Xiong, Z.; Cui, S.; Shen, X.; Kim, D.I. Reinforcement Learning with LLMs Interaction for Distributed Diffusion Model Services. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 1–18. [Google Scholar] [CrossRef]
  42. Ma, Y.J.; Liang, W.; Wang, G.; Huang, D.A.; Bastani, O.; Jayaraman, D.; Zhu, Y.; Fan, L.; Anandkumar, A. Eureka: Human-level reward design via coding large language models. arXiv 2023, arXiv:2310.12931. [Google Scholar]
  43. Liu, S.; Gao, C.; Li, Y. AgentHPO: Large language model agent for hyper-parameter optimization. In Proceedings of the The Second Conference on Parsimony and Learning (Proceedings Track), Stanford, CA, USA, 24 March 2025. [Google Scholar]
  44. Ahn, M.; Brohan, A.; Brown, N.; Chebotar, Y.; Cortes, O.; David, B.; Finn, C.; Fu, C.; Gopalakrishnan, K.; Hausman, K.; et al. Do as i can, not as i say: Grounding language in robotic affordances. arXiv 2022, arXiv:2204.01691. [Google Scholar] [CrossRef]
  45. Romera-Paredes, B.; Barekatain, M.; Novikov, A.; Balog, M.; Kumar, M.P.; Dupont, E.; Ruiz, F.J.; Ellenberg, J.S.; Wang, P.; Fawzi, O.; et al. Mathematical discoveries from program search with large language models. Nature 2024, 625, 468–475. [Google Scholar] [CrossRef]
  46. Hameed, S.; Qolomany, B.; Belhaouari, S.B.; Abdallah, M.; Qadir, J.; Al-Fuqaha, A. Large Language Model Enhanced Particle Swarm Optimization for Hyperparameter Tuning for Deep Learning Models. IEEE Open J. Comput. Soc. 2025, 6, 574–585. [Google Scholar] [CrossRef]
  47. Shinohara, Y.; Xu, J.; Li, T.; Iba, H. Large language models as particle swarm optimizers. arXiv 2025, arXiv:2504.09247. [Google Scholar]
  48. Novikov, A.; Vũ, N.; Eisenberger, M.; Dupont, E.; Huang, P.S.; Wagner, A.Z.; Shirobokov, S.; Kozlovskii, B.; Ruiz, F.J.; Mehrabian, A.; et al. AlphaEvolve: A coding agent for scientific and algorithmic discovery. arXiv 2025, arXiv:2506.13131. [Google Scholar] [CrossRef]
  49. Wang, C.; Wong, K.K.; Li, Z.; Jin, L.; Chae, C.B. Large Language Model Empowered Design of Fluid Antenna Systems: Challenges, Frameworks, and Case Studies for 6G. arXiv 2025, arXiv:2506.14288. [Google Scholar] [CrossRef]
  50. Pollacia, L.F. A survey of discrete event simulation and state-of-the-art discrete event languages. Acm Sigsim Simul. Dig. 1989, 20, 8–25. [Google Scholar] [CrossRef]
  51. Zhou, H.; Huang, X.; Deng, L. Enhancing Network Traffic Classification with Large Language Models. In Proceedings of the 2024 IEEE International Conference on Big Data, Washington, DC, USA, 15–18 December 2024; pp. 7282–7291. [Google Scholar] [CrossRef]
  52. Cui, T.; Lin, X.; Li, S.; Chen, M.; Yin, Q.; Li, Q.; Xu, K. TrafficLLM: Enhancing Large Language Models for Network Traffic Analysis with Generic Traffic Representation. arXiv 2025, arXiv:2504.04222. [Google Scholar] [CrossRef]
  53. Wei, Y.; Wang, Z.; Lu, Y.; Xu, C.; Liu, C.; Zhao, H.; Chen, S.; Wang, Y. Editable Scene Simulation for Autonomous Driving via Collaborative LLM-Agents. arXiv 2024, arXiv:2402.05746. [Google Scholar] [CrossRef]
  54. Song, C.H.; Wu, J.; Washington, C.; Sadler, B.M.; Chao, W.L.; Su, Y. Llm-planner: Few-shot grounded planning for embodied agents with large language models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–3 October 2023; pp. 2998–3009. [Google Scholar]
  55. Wang, W.; Peng, J.; Hu, M.; Zhong, W.; Zhang, T.; Wang, S.; Zhang, Y.; Shao, M.; Ni, W. LLM Agent for Hyper-Parameter Optimization. arXiv 2025, arXiv:2506.15167. [Google Scholar] [CrossRef]
  56. Lowe, R.; Wu, Y.I.; Tamar, A.; Harb, J.; Pieter Abbeel, O.; Mordatch, I. Multi-agent actor-critic for mixed cooperative-competitive environments. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar] [CrossRef]
  57. Zhu, L.; Ma, W.; Ning, B.; Zhang, R. Movable-Antenna Enhanced Multiuser Communication via Antenna Position Optimization. IEEE Trans. Wirel. Commun. 2024, 23, 7214–7229. [Google Scholar] [CrossRef]
  58. Sampson, J.R. Adaptation in Natural and Artificial Systems; MIT Press: Cambridge, MA, USA, 1976. [Google Scholar]
  59. Tian, Y.; Zhao, Q.; el abidine Kherroubi, Z.; Boukhalfa, F.; Wu, K.; Bader, F. Multimodal Transformers for Wireless Communications: A Case Study in Beam Prediction. arXiv 2023, arXiv:2309.11811. [Google Scholar] [CrossRef]
  60. Zoph, B.; Bello, I.; Kumar, S.; Du, N.; Huang, Y.; Dean, J.; Shazeer, N.; Fedus, W. ST-MoE: Designing Stable and Transferable Sparse Expert Models. arXiv 2022, arXiv:2202.08906. [Google Scholar]
  61. Shi, D.; Shen, T.; Huang, Y.; Li, Z.; Leng, Y.; Jin, R.; Liu, C.; Wu, X.; Guo, Z.; Yu, L.; et al. Large Language Model Safety: A Holistic Survey. arXiv 2024, arXiv:2412.17686. [Google Scholar] [CrossRef]
  62. Wang, S.; Long, Z.; Fan, Z.; Huang, X.; Wei, Z. Benchmark Self-Evolving: A Multi-Agent Framework for Dynamic LLM Evaluation. In Proceedings of the 31st International Conference on Computational Linguistics, Abu Dhabi, United Arab Emirates, 19–24 January 2025; pp. 3310–3328. [Google Scholar]
  63. Wang, W.; Chen, W.; Luo, Y.; Long, Y.; Lin, Z.; Zhang, L.; Lin, B.; Cai, D.; He, X. Model Compression and Efficient Inference for Large Language Models: A Survey. arXiv 2024, arXiv:2402.09748. [Google Scholar] [CrossRef]
  64. Ke, Z.; Ming, Y.; Joty, S. NAACL2025 Tutorial: Adaptation of Large Language Models. arXiv 2025, arXiv:2504.03931. [Google Scholar] [CrossRef]
Figure 1. The organization of this paper.
Figure 2. LLM as solution program generator and critic.
Figure 3. Comparison of traditional genetic algorithm and LLM-enhanced genetic algorithm.
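Figure 3's LLM-enhanced genetic algorithm can be read as a standard GA loop that periodically asks an LLM for promising candidate solutions. The sketch below is our illustrative reading, not the paper's implementation: `llm_propose` is a hypothetical callback standing in for the actual LLM interaction.

```python
import random

def ga_step(population, fitness, mutate):
    # One generation of a plain GA: keep the fitter half, refill via mutation.
    population.sort(key=fitness)
    parents = population[: len(population) // 2]
    children = [mutate(random.choice(parents))
                for _ in range(len(population) - len(parents))]
    return parents + children

def llm_enhanced_ga(fitness, mutate, init_pop, generations=50,
                    llm_propose=None, llm_period=5):
    # GA loop that periodically injects candidates from `llm_propose`,
    # a placeholder for the LLM-in-the-loop step sketched in Figure 3.
    pop = [list(ind) for ind in init_pop]
    size = len(pop)
    for gen in range(generations):
        pop = ga_step(pop, fitness, mutate)
        if llm_propose is not None and gen % llm_period == 0:
            pop.extend(llm_propose(pop))
            pop.sort(key=fitness)
            pop = pop[:size]  # keep the population size fixed
    return min(pop, key=fitness)
```

Because the fitter half always survives, the loop is elitist: the best fitness seen so far never degrades, with or without the injected candidates.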
Figure 4. Genetic algorithm performance comparison on the Rastrigin function (5).
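For reference, the Rastrigin benchmark behind Figure 4 is commonly defined as f(x) = 10n + Σᵢ(xᵢ² − 10 cos 2πxᵢ); we assume the paper's Equation (5) uses this standard form.

```python
import math

def rastrigin(x):
    """Standard Rastrigin function: 10*n + sum(x_i^2 - 10*cos(2*pi*x_i)).
    Highly multimodal, with its global minimum f = 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)
```

The cosine term creates a regular grid of local minima around the quadratic bowl, which is why it is a popular stress test for genetic algorithms.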
Figure 5. Genetic algorithm performance comparison on the three-path FAS channel function (6).
Table 1. Comparative evaluation of four LLM-based optimization methods. LLM-BB refers to LLM as the black-box optimization search model, LLM-DRL refers to LLM-guided deep reinforcement learning, LLM-EA refers to LLM-guided evolutionary algorithms, and LLM-Alpha refers to LLM as a solution program generator and critic; the maximum number of stars is 4.
LLM-BB | LLM-DRL | LLM-EA | LLM-Alpha
Design difficulty✩✩✩✩✩✩✩
Solution quality✩✩✩✩✩✩✩✩✩✩✩
FLOP requirement✩✩✩✩✩✩✩✩✩
Memory requirement✩✩✩✩✩✩✩
Real time✩✩✩✩✩✩✩✩
Table 2. Three-path parameters for performance evaluation.
Parameter | Value | Parameter | Value
Path 1 Rx pitch angle, θ_1^r | π/4 | Path 1 Tx pitch angle, θ_1^t | π/3
Path 1 Rx azimuth angle, φ_1^r | π/6 | Path 1 Tx azimuth angle, φ_1^t | 2π/5
Path 2 Rx pitch angle, θ_2^r | π/5 | Path 2 Tx pitch angle, θ_2^t | π/4
Path 2 Rx azimuth angle, φ_2^r | π/7 | Path 2 Tx azimuth angle, φ_2^t | π/7
Path 3 Rx pitch angle, θ_3^r | π/3 | Path 3 Tx pitch angle, θ_3^t | π/6
Path 3 Rx azimuth angle, φ_3^r | π/5 | Path 3 Tx azimuth angle, φ_3^t | π/4
Movable range | [−1,1] × [−1,1] | Path loss coefficient, α | 2.5
Path 1 gain, σ_1 | 0.634 | Path 2 gain, σ_2 | 0.1768
Path 3 gain, σ_3 | 0.1768 | Frequency, f | 60 GHz
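To make Table 2 concrete, the sketch below evaluates a receive-side three-path channel magnitude at a candidate antenna position, using the Rx angles and path gains above. The planar steering-vector model and positions measured in wavelengths are our assumptions; the paper's exact channel function (6) may differ.

```python
import cmath
import math

# (gain, Rx pitch, Rx azimuth) per path, taken from Table 2.
PATHS = [
    (0.634,  math.pi / 4, math.pi / 6),
    (0.1768, math.pi / 5, math.pi / 7),
    (0.1768, math.pi / 3, math.pi / 5),
]

def channel_gain(x, y, paths=PATHS):
    """|h(x, y)| at receive position (x, y) in wavelengths: each path adds
    sigma * exp(j*2*pi*(x*sin(th)*cos(ph) + y*sin(th)*sin(ph)))."""
    h = sum(s * cmath.exp(2j * math.pi * (x * math.sin(th) * math.cos(ph) +
                                          y * math.sin(th) * math.sin(ph)))
            for s, th, ph in paths)
    return abs(h)
```

At the origin all paths add in phase, so |h| = σ_1 + σ_2 + σ_3 = 0.9876; a genetic algorithm over the movable range [−1, 1]² then searches for the position where the three paths combine most favorably.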
Share and Cite
Deng, T.; Gao, Y.; Zhang, T.; Shao, M.; Ni, W.; Xu, H. Integrating Large Language Models into Fluid Antenna Systems: A Survey. Sensors 2025, 25, 5177. https://doi.org/10.3390/s25165177



