This is an early access version; the complete PDF, HTML, and XML versions will be available soon.
Article

Neuro-Symbolic Verification for Preventing LLM Hallucinations in Process Control

by Boris Galitsky 1,* and Alexander Rybalov 2
1 Knowledge Trail Inc., San Jose, CA 95127, USA
2 LAMBDA Laboratory, Tel Aviv University, Ramat Aviv, Tel Aviv 6997801, Israel
* Author to whom correspondence should be addressed.
Processes 2026, 14(2), 322; https://doi.org/10.3390/pr14020322
Submission received: 11 December 2025 / Revised: 9 January 2026 / Accepted: 13 January 2026 / Published: 16 January 2026

Abstract

Large Language Models (LLMs) are increasingly used in industrial monitoring and decision support, yet they remain prone to process-control hallucinations—diagnoses and explanations that sound plausible but conflict with physical constraints, sensor data, or plant dynamics. This paper investigates hallucination as a failure of abductive reasoning, where missing premises, weak mechanistic support, or counter-evidence lead an LLM to propose incorrect causal narratives for faults such as pump restriction, valve stiction, fouling, or reactor runaway. We develop a neuro-symbolic framework in which Abductive Logic Programming (ALP) evaluates the coherence of model-generated explanations, counter-abduction generates rival hypotheses that test whether the explanation can be defeated, and Discourse-weighted ALP (D-ALP) incorporates nucleus–satellite structure from operator notes and alarm logs to weight competing explanations. Using our 500-scenario Process-Control Hallucination Dataset, we assess LLM reasoning across mechanistic, evidential, and contrastive dimensions. Results show that abductive and counter-abductive operators substantially reduce explanation-level hallucinations and improve alignment with physical process behavior, particularly in “easy-but-wrong” cases where a superficially attractive explanation contradicts historian trends or counter-evidence. These findings demonstrate that abductive reasoning provides a practical and verifiable foundation for improving LLM reliability in safety-critical process-control environments.
Keywords: abductive reasoning; process-control hallucinations; counter-abduction; discourse-weighted ALP; fault diagnosis; neuro-symbolic LLM verification
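
To make the abductive check described in the abstract concrete, the following is a minimal illustrative sketch in Python, not the paper's implementation: an LLM-proposed fault explanation is scored by how many observed symptoms it accounts for, and is defeated (the counter-abduction step) if any premise it must abduce is contradicted by historian data. All names, scenarios, and weights here are hypothetical.

# Hypothetical sketch: scoring LLM fault explanations abductively against
# observed process facts. Illustrative only; not the authors' code.
from dataclasses import dataclass


@dataclass
class Explanation:
    hypothesis: str                  # e.g. "valve_stiction"
    abduced_premises: frozenset      # facts the explanation must assume
    predicted_effects: frozenset     # symptoms the fault would produce


def abductive_score(expl: Explanation, observed: set, known_false: set) -> float:
    """Reward explained observations, penalize unexplained predictions,
    and defeat the explanation outright if an abduced premise is known false."""
    if expl.abduced_premises & known_false:   # counter-abduction defeat
        return float("-inf")
    explained = len(expl.predicted_effects & observed)
    unexplained = len(expl.predicted_effects - observed)
    return explained - 0.5 * unexplained


# Toy scenario: flow drop with falling discharge pressure and motor current.
observed = {"flow_drop", "discharge_pressure_drop", "motor_current_drop"}
known_false = {"valve_position_oscillation"}  # historian shows a steady valve

candidates = [
    Explanation("pump_suction_restriction",
                frozenset({"strainer_fouled"}),
                frozenset({"flow_drop", "discharge_pressure_drop",
                           "motor_current_drop"})),
    Explanation("valve_stiction",             # superficially attractive, but defeated
                frozenset({"valve_position_oscillation"}),
                frozenset({"flow_drop"})),
]

best = max(candidates, key=lambda e: abductive_score(e, observed, known_false))
print(best.hypothesis)                        # -> pump_suction_restriction

In this toy run the "easy-but-wrong" stiction hypothesis is rejected because the premise it relies on contradicts the historian trend, while the suction-restriction hypothesis explains all three observations; the discourse weighting (D-ALP) described in the abstract would further adjust such scores using nucleus–satellite structure from operator notes, which is not modeled in this sketch.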

