Article
Peer-Review Record

Large-Language-Model-Enabled Text Semantic Communication Systems

Appl. Sci. 2025, 15(13), 7227; https://doi.org/10.3390/app15137227
by Zhenyi Wang 1, Li Zou 1, Shengyun Wei 2, Kai Li 1,*, Feifan Liao 1, Haibo Mi 1 and Rongxuan Lai 1,*
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 19 May 2025 / Revised: 18 June 2025 / Accepted: 24 June 2025 / Published: 26 June 2025
(This article belongs to the Special Issue Recent Advances in AI-Enabled Wireless Communications and Networks)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

Some more specific comments addressing the following points:

  1. What is the main question addressed by the research?

The main question addressed by this research is how to enable textual data transmission within wireless communication systems through a novel LLM-SC framework. This framework uses a large language model (LLM) and its knowledge base as the means of semantic communication.

  2. Do you consider the topic original or relevant to the field? Does it address a specific gap in the field? Please also explain why this is/is not the case.

The suggested LLM-enabled decoding and demodulation method resiliently demodulates text by integrating the LLM's contextual understanding. The pre-training of LLMs provides the encoding and decoding mechanisms for semantic communication without altering the original training process, so existing pre-trained models can be used directly for semantic encoding and decoding. This makes the topic original and relevant to the research field.
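The encoder idea described above can be roughly illustrated as follows (a toy sketch for intuition only, not the authors' implementation; the miniature vocabulary stands in for a real LLM tokenizer):

```python
# Toy sketch: a pre-trained tokenizer's vocabulary as the semantic symbol set.
# The miniature vocabulary below is invented for illustration; a real system
# would reuse an existing LLM tokenizer without retraining it.
toy_vocab = {"hello": 0, "world": 1, "semantic": 2, "channel": 3}
inv_vocab = {i: w for w, i in toy_vocab.items()}

def semantic_encode(text):
    """Map words to token indices -- the semantic symbols to be transmitted."""
    return [toy_vocab[w] for w in text.split()]

def semantic_decode(tokens):
    """Map received token indices back to words at the receiver."""
    return " ".join(inv_vocab[t] for t in tokens)

symbols = semantic_encode("hello semantic world")  # [0, 2, 1]
```

In the paper's framework, the tokenizer obtained from LLM pre-training plays this role, which is why no additional encoder training is required.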

  3. What does it add to the subject area compared with other published material?

The authors propose an innovative LLM-enabled semantic communication system framework, named LLM-SC, that applies LLMs directly to physical-layer coding and decoding for the first time. By analyzing the relationship between the training process of LLMs and the optimization objectives of semantic communication, the paper proposes training a semantic encoder through LLMs’ tokenizer training and establishing a semantic knowledge base via the LLMs’ unsupervised pre-training process. Such a knowledge base aids in constructing the optimal decoder by providing the prior probability of the transmitted language sequence. Based on this, the paper proposes the optimal decoding criterion for the receiver and introduces the beam search algorithm to further reduce complexity.
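The beam-search decoding idea can be sketched schematically (the channel likelihood and language-model prior below are toy stand-ins invented for illustration, not the paper's actual models):

```python
# Schematic beam search: rank candidate token sequences by combined
# channel evidence and language-model prior, keeping only the top beams.
VOCAB = [0, 1, 2]  # toy token alphabet

def channel_loglik(token, position):
    # Toy channel evidence: pretend the received signal favors token == position % 3.
    return 0.0 if token == position % 3 else -2.0

def lm_logprior(prefix, token):
    # Toy LM prior: mildly penalize repeating the previous token.
    return -1.0 if prefix and prefix[-1] == token else -0.5

def beam_search(length, beam_size=2):
    beams = [([], 0.0)]  # (token sequence, cumulative log score)
    for pos in range(length):
        candidates = [
            (seq + [tok], score + channel_loglik(tok, pos) + lm_logprior(seq, tok))
            for seq, score in beams
            for tok in VOCAB
        ]
        # Prune to the top-scoring partial sequences.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return beams[0][0]  # best full-length sequence

best = beam_search(4)  # [0, 1, 2, 0] under the toy scores
```

The beam size trades decoding quality against complexity: larger beams approach exhaustive maximum-a-posteriori search, while smaller beams keep the search tractable.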

  4. What specific improvements should the authors consider regarding the methodology?

The authors could consider algorithms based on logical fuzzy rules for knowledge base development, to further reduce the computational complexity.

  5. Are the conclusions consistent with the evidence and arguments presented, and do they address the main question posed? Please also explain why this is/is not the case.

The conclusions are consistent with the evidence and arguments presented, and they address the main question posed in the research. The detailed numerical results presented in Section 4 support the conclusions.

  6. Are the references appropriate?

All references are appropriate and relevant.

  7. Any additional comments on the tables and figures.

The paper is interesting due to its focus on a specific form of communication, semantic communication.
However, there are some comments:

1) In the Introduction, it is necessary to clearly explain the rationale of the proposed framework (describe the structure of the proposed framework, focusing on the use of the knowledge base);

2) Formulas (7), (8), (18), and (19) should be presented in the traditional way, each on its own, rather than combined into one;

3) Many abbreviations are used, which complicates understanding of the text; it is recommended to spell out each abbreviation in brackets at first use.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The authors of the manuscript entitled “Large Language Model Enabled Text Semantic Communication Systems” focus on LLMs. They highlight the importance of LLMs and propose a framework based on them. They claim to be the first to apply LLMs to the physical layer. The results look promising, and the overall work is interesting. I suggest the following changes.

  1. The authors have selected two channel models. They need to write a few lines explaining why they selected these models and whether the performance would change if a different channel model were applied.
  2. The Conclusion section is too long.
  3. The authors could create a separate “Discussion” section and move some text from Sections 4 and 5 into it.
  4. The authors need to improve the quality of the English.
  5. The iThenticate report shows an 85% match. It seems that the authors copied their own published conference paper.
  6. The authors need to provide a reference for the statement at the start of the Abstract.
  7. Create a table in Section 2 that provides a review summary.
Comments on the Quality of English Language

The authors need to improve the quality of the English.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

The paper proposes a new LLM-based decoder for text semantic communication systems. The effectiveness of the proposed technique is verified through simulations comparing it with conventional techniques. However, the paper needs to address the following comments before publication.

  1. Since DeepSC is an important ML method used for the comparative performance analysis, more detail on this method should be added in Section 3.
  2. The logical reasoning and background explanation for the inclusion of the Viterbi algorithm should be added in Section 3.3. Furthermore, how the Viterbi algorithm is used for beam search is not clear.
  3. A better explanation of Algorithm 1 and of the relationship between the input parameters, beam size, and decoded tokens should be added.
  4. More detail on Vicuna, used as the LLM module, should be added in Section 4, Numerical Results.
  5. The simulation results and insights on the effect of beam size in Section 4.3 are very brief. Please add more detail.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 3 Report

Comments and Suggestions for Authors

The authors have adequately addressed all the review comments. Therefore, I have no further comments.
