Algorithms
  • This is an early access version; the complete PDF, HTML, and XML versions will be available soon.
  • Article
  • Open Access

10 January 2026

Tiny Language Model Guided Flow Q Learning for Optimal Task Scheduling in Fog Computing

1 Department of CSE, Siddaganga Institute of Technology, Tumakuru 572103, Karnataka, India
2 Department of CS, University of Memphis, Memphis, TN 38152, USA
* Authors to whom correspondence should be addressed.
Algorithms 2026, 19(1), 60; https://doi.org/10.3390/a19010060

Abstract

Fog computing is a rapidly growing platform facing exponentially increasing demand for real-time data processing. The fog computing market is expected to reach USD 8358 million by 2030, with a compound annual growth rate of 50%. The wide adoption of fog computing by industries worldwide is due to advantages such as reduced latency, high operational efficiency, and strong data privacy. At the same time, the highly distributed and heterogeneous nature of fog computing raises significant challenges in resource management, data security, task scheduling, data privacy, and interoperability. A task typically represents a job generated by an IoT device, and an action indicates how that task is executed, a decision taken by the scheduler. Task scheduling is one of the prominent issues in fog computing: it is the process of distributing tasks among fog devices so that resources are utilized effectively and the Quality of Service (QoS) requirements of applications are met. Improper task scheduling leads to increased execution time, overutilization of resources, data loss, and poor scalability. Hence, proper task scheduling is needed to make optimal task-distribution decisions in a highly dynamic, resource-constrained, heterogeneous fog computing environment. Flow Q-learning (FQL) is a promising reinforcement learning algorithm that uses a flow-matching policy to model the action distribution. It can handle complex forms of data and multimodal action distributions, which makes it suitable for the highly volatile fog computing environment. However, FQL struggles to achieve a proper trade-off between the expressive flow model and degradation of the Q-function, as it relies on a one-step policy optimization that introduces bias into the estimated Q values. A Tiny Language Model (TLM) is a significantly smaller form of a Large Language Model (LLM) designed to operate in device-constrained environments, and it can provide fair and systematic guidance to disproportionately biased deep learning models. In this paper, a novel TLM-guided flow Q-learning framework is designed to address the task scheduling problem in fog computing: the neutrality and fine-tuning capability of the TLM are combined with the fast generative ability of the FQL algorithm. The framework is simulated using the Simcan2Fog simulator, considering the dynamic nature of the fog environment under both finite and infinite resources. The performance is found to be good with respect to parameters such as execution time, accuracy, response time, and latency. Further, the results are validated using the expected value analysis method and found to be satisfactory.
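
The trade-off the abstract describes, an expressive multi-step flow policy versus the bias introduced by one-step policy optimization against the Q-function, can be made concrete with a small sketch. The following Python/PyTorch snippet is illustrative only and is not the paper's implementation: the network sizes, the state and action encodings for fog nodes, and the loss weighting alpha are all assumptions made for demonstration.

```python
# Illustrative sketch of the flow-Q-learning idea from the abstract (not the
# paper's code): a flow-matching policy models the action distribution, and a
# one-step policy is distilled from it while being pushed uphill on Q. The
# dimensions, networks, and weighting below are assumptions.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 16, 4  # assumed: node-load features / relaxed node choice

class VectorField(nn.Module):
    """Velocity field v(x_t, t | s) for flow matching over scheduling actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ACTION_DIM + 1 + STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, ACTION_DIM))

    def forward(self, x, t, s):
        return self.net(torch.cat([x, t, s], dim=-1))

def flow_matching_loss(vf, state, action):
    """Conditional flow matching: regress the constant velocity of the
    straight-line path from Gaussian noise x0 to the dataset action x1."""
    x0 = torch.randn_like(action)
    t = torch.rand(action.shape[0], 1)
    xt = (1 - t) * x0 + t * action  # point on the linear interpolation path
    return ((vf(xt, t, state) - (action - x0)) ** 2).mean()

def sample_flow(vf, state, steps=10):
    """Generate an action by Euler-integrating the learned ODE from noise."""
    x = torch.randn(state.shape[0], ACTION_DIM)
    for i in range(steps):
        t = torch.full((state.shape[0], 1), i / steps)
        x = x + vf(x, t, state) / steps
    return x

def one_step_loss(policy, q_net, vf, state, alpha=1.0):
    """Distill the multi-step flow into a single-pass policy while maximizing Q.
    The Q term is exactly the one-step optimization the abstract identifies as
    the source of bias that TLM guidance is meant to counterbalance."""
    with torch.no_grad():
        flow_action = sample_flow(vf, state)  # target from the expressive flow
    a = policy(state)
    distill = ((a - flow_action) ** 2).mean()              # stay near the flow
    q_term = -q_net(torch.cat([state, a], dim=-1)).mean()  # climb the Q landscape
    return distill + alpha * q_term
```

A scheduler built this way would decode the sampled action vector into a concrete fog-node assignment, e.g., by taking the argmax over the relaxed node choice; how the TLM then audits or re-weights those biased decisions is the contribution of the paper itself and is not reproduced here.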
