Open Access Article
Innovative Guardrails for Generative AI: Designing an Intelligent Filter for Safe and Responsible LLM Deployment
by Olga Shvetsova, Danila Katalshov and Sang-Kon Lee *
School of Industrial Management, Korea University of Technology and Education (KOREATECH), Cheonan-si 31254, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(13), 7298; https://doi.org/10.3390/app15137298
Submission received: 13 May 2025 / Revised: 7 June 2025 / Accepted: 10 June 2025 / Published: 28 June 2025
Featured Application
The proposed intelligent filtering system can be seamlessly integrated into platforms utilizing large language models (LLMs), including customer service chatbots, educational tutors, healthcare assistants, and code-generation tools. By dynamically identifying and mitigating harmful, biased, or unethical outputs in real time, the system significantly enhances the safety and reliability of LLM-powered applications. This approach supports the responsible deployment of generative artificial intelligence (AI) technologies by ensuring adherence to ethical standards and regulatory frameworks while simultaneously preserving a high-quality user experience.
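The real-time, bidirectional filtering described above can be illustrated with a minimal sketch. This is a hypothetical example, not the authors' implementation: the function names (`screen_input`, `screen_output`, `safe_generate`) and the toy regex rules are illustrative stand-ins for the system's actual classifiers.

```python
import re

# Toy input rule: a crude prompt-injection heuristic (illustrative only).
BLOCKED_INPUT = re.compile(r"ignore (all )?previous instructions", re.I)
# Toy output rule: redact US-SSN-style number patterns as stand-in PII.
PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def screen_input(prompt: str) -> bool:
    """Return True if the prompt passes the (toy) input checks."""
    return not BLOCKED_INPUT.search(prompt)

def screen_output(reply: str) -> str:
    """Redact (toy) PII matches from the model reply."""
    return PII_PATTERN.sub("[REDACTED]", reply)

def safe_generate(prompt: str, llm) -> str:
    """Screen the prompt, call `llm` (any callable str -> str) only if it is
    safe, then sanitize the reply -- filtering in both directions."""
    if not screen_input(prompt):
        return "Request declined by safety filter."
    return screen_output(llm(prompt))
```

Because the wrapper treats the model as an opaque callable, the same checks apply whether the backend is a chatbot, a tutor, or a code-generation model.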
Abstract
This paper proposes a technological framework designed to mitigate the inherent risks associated with the deployment of artificial intelligence (AI) in decision-making and task execution within management processes. The Agreement Validation Interface (AVI) functions as a modular Application Programming Interface (API) Gateway positioned between user applications and LLMs. This gateway architecture is designed to be LLM-agnostic, meaning it can operate with various underlying LLMs without requiring specific modifications for each model. This universality is achieved by standardizing the interface for requests and responses and applying a consistent set of validation and enhancement processes irrespective of the chosen LLM provider, thus offering a consistent governance layer across a diverse LLM ecosystem. AVI facilitates the orchestration of multiple AI subcomponents for input–output validation, response evaluation, and contextual reasoning, thereby enabling real-time, bidirectional filtering of user interactions. A proof-of-concept (PoC) implementation of AVI was developed and rigorously evaluated using industry-standard benchmarks. The system was tested for its effectiveness in mitigating adversarial prompts, reducing toxic outputs, detecting personally identifiable information (PII), and enhancing factual consistency. The results demonstrated that AVI reduced successful prompt injection attacks by 82%, decreased toxic content generation by 75%, and achieved high PII detection performance (F1-score ≈ 0.95). Furthermore, the contextual reasoning module significantly improved the neutrality and factual validity of model outputs. Although the integration of AVI introduced a moderate increase in latency, the overall framework effectively enhanced the reliability, safety, and interpretability of LLM-driven applications. AVI provides a scalable and adaptable architectural template for the responsible deployment of generative AI in high-stakes domains such as finance, healthcare, and education, promoting safer and more ethical use of AI technologies.
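The LLM-agnostic gateway pattern the abstract describes can be sketched as follows. This is an assumed illustration of the general idea, not AVI itself: every provider is wrapped in an adapter that conforms to one standard request/response shape, and a single validator chain runs regardless of which backend is selected. All names (`GatewayRequest`, `Gateway`, the validator functions) are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class GatewayRequest:
    prompt: str
    provider: str  # e.g. "openai", "anthropic" -- resolved via an adapter map

@dataclass
class GatewayResponse:
    text: str
    flags: List[str] = field(default_factory=list)  # violation labels, if any

# A validator inspects text and returns zero or more violation labels.
Validator = Callable[[str], List[str]]

def injection_check(text: str) -> List[str]:
    # Toy stand-in for a real prompt-injection classifier.
    return ["prompt_injection"] if "ignore previous instructions" in text.lower() else []

def length_check(text: str) -> List[str]:
    return ["too_long"] if len(text) > 4000 else []

class Gateway:
    """One governance layer in front of many models: the validator chain is
    identical for every provider, which is what makes the gateway LLM-agnostic."""

    def __init__(self, adapters: Dict[str, Callable[[str], str]],
                 validators: List[Validator]):
        self.adapters = adapters      # provider name -> callable(prompt) -> str
        self.validators = validators

    def handle(self, req: GatewayRequest) -> GatewayResponse:
        # Input-side validation: block before the model is ever called.
        in_flags = [f for v in self.validators for f in v(req.prompt)]
        if in_flags:
            return GatewayResponse(text="", flags=in_flags)
        # Output-side validation: flag the reply on the way back out.
        reply = self.adapters[req.provider](req.prompt)
        out_flags = [f for v in self.validators for f in v(reply)]
        return GatewayResponse(text=reply, flags=out_flags)
```

Swapping providers means adding one adapter entry; the validation and governance logic is written once, which is the property the abstract attributes to AVI's standardized interface.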
Share and Cite
MDPI and ACS Style
Shvetsova, O.; Katalshov, D.; Lee, S.-K.
Innovative Guardrails for Generative AI: Designing an Intelligent Filter for Safe and Responsible LLM Deployment. Appl. Sci. 2025, 15, 7298.
https://doi.org/10.3390/app15137298
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.