Computers
  • Article
  • Open Access

16 December 2025

Optimizing Cloudlets for Faster Feedback in LLM-Based Code-Evaluation Systems

Computer Science Department, Faculty of Automatic Control and Computer Science, National University of Science and Technology Politehnica Bucharest, 060042 Bucharest, Romania
* Authors to whom correspondence should be addressed.
This article belongs to the Special Issue Transformative Approaches in Education: Harnessing AI, Augmented Reality, and Virtual Reality for Innovative Teaching and Learning

Abstract

This paper addresses the challenge of optimizing cloudlet resource allocation in a code evaluation system. The study models the relationship between system load and response time when users submit code to an online code-evaluation platform, LambdaChecker, which operates a cloudlet-based processing pipeline. The pipeline includes code correctness checks, static analysis, and design-pattern detection using a local Large Language Model (LLM). To optimize the system, we develop a mathematical model and apply it to the LambdaChecker resource management problem. The proposed approach is evaluated using both simulations and real contest data, with a focus on improvements in average response time, resource utilization efficiency, and user satisfaction. The results indicate that adaptive scheduling and workload prediction effectively reduce waiting times without substantially increasing operational costs. Overall, the study suggests that systematic cloudlet optimization can enhance the educational value of automated code evaluation systems by improving responsiveness while preserving sustainable resource usage.

1. Introduction

Writing maintainable, high-quality code and receiving prompt, clear feedback significantly enhance learning outcomes [1] in computer science education. As students develop programming skills, rapid, reliable evaluation helps reinforce good practices, correct misconceptions early, and maintain engagement—especially in campus-wide online judge systems. Similarly, in the software industry, timely code review and automated testing play crucial roles in ensuring code quality, reducing defects, and accelerating development cycles, underscoring the importance of feedback-driven workflows in both education and professional practice.
This paper presents optimization strategies for managing cloudlet resources of LambdaChecker, an online code evaluation system developed at the National University of Science and Technology Politehnica Bucharest. The system was designed to enhance both teaching and learning in programming-focused courses such as Data Structures and Algorithms and Object-Oriented Programming. It does this by supporting hands-on activities, laboratory sessions, examinations, and programming contests. A key advantage of LambdaChecker is its extensibility, which enables the integration of custom features tailored to specific exam formats and project requirements. Moreover, the platform facilitates the collection of coding-related metrics, including the detection of pasted code fragments, the recording of job timestamps, and the measurement of the average time required to solve a task.
In its current implementation, LambdaChecker assesses code correctness through input/output testing and evaluates code quality [2] using PMD [3], an open-source static code analysis tool. PMD, which is also integrated into widely used platforms such as SonarQube, provides a set of carefully selected object-oriented programming metrics. These metrics are sufficiently simple to be applied effectively in an educational context while still aligning with industry standards.
In our previous work [4], building on earlier studies [5,6], we extended LambdaChecker to include support for design pattern detection using Generative AI (GenAI). We have used the LLaMA 3.1 model [7] (70.6B parameters, Q4_0 quantization) to detect design patterns in the submitted user’s code. This functionality has proven particularly valuable for teaching assistants during the grading process, as it facilitates the assessment of higher-level software design principles beyond code correctness and quality. This integration leverages GenAI models to enhance the reliability of automated feedback while reducing the grading workload. Nonetheless, this advancement is associated with higher resource requirements and increased latency in code evaluation systems without optimization.
During a contest, the workload on our system differs significantly from regular semester activity, exhibiting a sharp increase in submission rates. While day-to-day usage remains relatively irregular and influenced by teaching assistants’ instructional styles, contest periods generate highly concentrated bursts of activity. The submission rate rises steadily as users progress with their tasks, culminating in peak workloads (e.g., the peak in Figure 1, approximately 54 submissions per minute) during the contest. Such intense demand places substantial pressure on computational and storage resources, making it essential to optimize resource allocation. Without efficient resource management, the platform risks performance degradation or downtime precisely when reliability is most critical. Consequently, adapting the system’s infrastructure to match workload patterns dynamically ensures both cost-effectiveness during low-activity periods and robustness during high-stakes contest scenarios.
Figure 1. Frequency of submissions in 5 min intervals during the OOP practical examination.
To address the resource inefficiencies observed in the previous version of LambdaChecker with LLM-based design pattern detection [4], we employed optimization strategies to manage computational resources within the cloudlet. Our main contributions are:
  • Mathematical modeling of resource allocation: We develop a model for dynamically allocating computational resources to ensure stable performance during contests and regular lab activities.
  • Simulation-based validation: We evaluate the model under high submission rates using real contest data, demonstrating reduced latency, improved throughput, and enhanced stability while remaining cost-effective.
  • Integration with LLM-assisted feedback: We demonstrate that LLM-powered design pattern detection can coexist with optimized resource management without compromising responsiveness.
The paper proceeds as follows: Section 2 reviews the related work in the field, while Section 3 describes the cluster, the LambdaChecker system, and the mathematical model and optimization techniques. Section 4 applies the mathematical model in a case study—LambdaChecker during a contest. Section 5 contains the discussion and user experience implications, and Section 6 concludes the paper.

3. Materials and Methods

In the initial iterations of the system, we observed the critical role played by platform performance and students’ sensitivity to the overhead introduced by the tooling. When integrating the LLM and PMD components, our primary concern was to avoid increasing feedback time. This constraint directly influenced the choice and applicability of the proposed model to the LambdaChecker data, as it reflects real-world performance requirements observed empirically during early deployments.

3.1. Cluster Setup, LLM-Based Code Evaluation Tool

Owing to limited computational resources, we initially assigned all submission-processing tasks for the LambdaChecker scheduler to a dedicated compute node. This node is equipped with an Intel® Xeon® E5-2640 v3 multicore processor and two NVIDIA H100 GPUs with 80 GB of HBM3 memory each, providing substantial memory capacity and computational power. This setup is ideal for large-scale models; for instance, a pair of these GPUs can efficiently run a 70B-parameter model like LLaMA 3.1 while maintaining high performance and optimized memory usage.
The machine supports the entire evaluation pipeline, handling CPU-intensive tasks such as PMD static analysis and correctness testing, as well as GPU-accelerated design-pattern detection workloads. Each step increases the load on the evaluation system, with LLM-based design pattern detection being the most resource-intensive, consuming up to 70% of the total average evaluation time.
Figure 2 presents the architecture of the LambdaChecker queue-driven code evaluation system deployed within the cloudlet environment. Users submit source code via a web-based frontend that communicates with a backend API responsible for managing submission requests and responses. Each submission is encapsulated as a job and placed into a centralized job queue within the cloudlet. A dynamically scaled pool of worker virtual machines retrieves jobs from the queue. Each worker executes an evaluation pipeline that begins with test case execution, then performs object-oriented code quality analysis with PMD, and concludes with software design pattern detection by the LLaMA 3.1 large language model. Once the entire evaluation pipeline is complete, the output is retrieved by the backend API and presented to the user through the frontend interface. System-level monitoring continuously observes job-queue metrics, which are fed into an auto-scaling subsystem that provisions or deprovisions worker VMs in response to workload fluctuations, ensuring scalable throughput and low-latency processing under varying demand conditions.
Figure 2. LambdaChecker queueing system in the cloudlet environment.
Evaluation tasks include precedence constraints within the worker pipeline. Each step depends on the successful completion of the previous one: test cases run first, followed by PMD analysis, and then LLaMA pattern detection. If the test cases fail or exceed the timeout, the subsequent steps are skipped.
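As an illustration, the following Python sketch captures this precedence logic; the helper callables (run_tests, run_pmd, run_llm) and the result structure are illustrative assumptions, not the platform’s actual interfaces:

```python
from typing import Any, Callable, Dict

def evaluate_submission(
    code: str,
    run_tests: Callable[[str, float], Dict[str, Any]],
    run_pmd: Callable[[str], Dict[str, Any]],
    run_llm: Callable[[str], Dict[str, Any]],
    test_timeout_s: float = 2.0,
) -> Dict[str, Any]:
    """Three-stage pipeline: later stages run only if the earlier ones succeed."""
    result: Dict[str, Any] = {"tests": None, "pmd": None, "patterns": None}

    result["tests"] = run_tests(code, test_timeout_s)   # correctness tests, 2 s budget
    if not result["tests"].get("passed", False):
        return result                                   # skip PMD and LLM on failure or timeout

    result["pmd"] = run_pmd(code)                       # ~1.5 s PMD static analysis
    result["patterns"] = run_llm(code)                  # ~8.5 s LLaMA 3.1 pattern detection
    return result
```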
A cloudlet-based LLM offers three main advantages over a traditional cloud deployment: (1) lower latency, since computation occurs closer to the user and enables faster, more interactive code analysis; (2) improved data privacy, as sensitive source code remains within a local or near-edge environment rather than being transmitted to distant cloud servers; and (3) reduced operational costs, because frequent analysis tasks avoid cloud usage fees and bandwidth charges while still benefiting from strong computational resources.
User-submitted code is incorporated into a pre-defined, improved prompt tailored to the specific coding problem to facilitate the detection of particular design patterns. The large language model then performs inference on a dedicated GPU queue supported by an NVIDIA H100 accelerator (NVIDIA Corporation, Santa Clara, CA, USA).
The prompt was refined to be clearer, more structured, and easier for a generative AI model to follow. It explicitly defines the JSON output, clarifies how to report missing patterns, and instructs the model to focus on real structural cues rather than class names. The rule mapping publisher–subscriber to the Observer pattern ensures consistent terminology. Overall, the prompt improves reliability and reduces ambiguity in pattern detection.
The prompt is concise, focuses only on essential instructions, and specifies a strict JSON output format, allowing the cloudlet-hosted LLM to process the code more efficiently without unnecessary reasoning or verbose explanations:
  • “You analyze source code to identify design patterns. Output ONLY this JSON:
 {
  "design_patterns": [
   {
    "pattern": "<pattern_name>",
    "confidence": <0-100>,
    "adherence": <0-100>
   }
  ]
 }
  • If no patterns are found, return {"design_patterns": []}."
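For illustration only, a minimal sketch of how a worker could validate this strict-JSON response before returning it to the backend; treating malformed output as an empty result is our assumption rather than documented platform behaviour:

```python
import json
from typing import Dict, List

def parse_pattern_response(raw: str) -> List[Dict]:
    """Parse the design-pattern JSON emitted by the prompt above."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return []  # assumption: malformed output is treated as "no patterns detected"

    cleaned = []
    for entry in data.get("design_patterns", []):
        if isinstance(entry, dict) and "pattern" in entry:
            cleaned.append({
                "pattern": str(entry["pattern"]),
                "confidence": int(entry.get("confidence", 0)),
                "adherence": int(entry.get("adherence", 0)),
            })
    return cleaned

# Example response reporting a single Observer pattern
print(parse_pattern_response(
    '{"design_patterns": [{"pattern": "Observer", "confidence": 85, "adherence": 70}]}'
))
```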
Using this prompt, we obtained the following precision results. As previously reported in [4], the LLaMA model exhibited varying effectiveness in detecting design patterns on a separate dataset of student homework submissions. In that study, the model achieved high precision for Singleton (1.00) and Visitor (0.938), moderate precision for Factory (0.903) and Strategy (0.722), and lower precision for Observer (0.606). Recall values ranged from 0.536 for Visitor to 0.952 for Observer, highlighting differences in the model’s performance across patterns. We include the model’s overall results in Table 1.
Table 1. Overall results for the LLaMA model on design pattern detection.
The system dynamically scales across multiple computing nodes in response to the number of active submissions. It continuously monitors the average service rate, which depends on the current submission workload, and adjusts resources in a feedback-driven loop, scaling nodes up or down as required to ensure efficient processing. The scaling mechanism is based on the mathematical model described in the following subsection.
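A rough sketch of such a feedback-driven sizing rule is given below; the target utilization, node bounds, and backlog adjustment are illustrative assumptions, not the production controller:

```python
import math

def desired_workers(queue_length: int, arrival_rate: float, mu_per_node: float,
                    min_nodes: int = 1, max_nodes: int = 16,
                    target_util: float = 0.8) -> int:
    """Choose a worker count that keeps estimated utilization below target_util.

    arrival_rate and mu_per_node are in jobs/min; all thresholds are illustrative.
    """
    needed = math.ceil(arrival_rate / (mu_per_node * target_util))
    if queue_length > 0:
        needed += 1          # add headroom while a backlog persists
    return max(min_nodes, min(max_nodes, needed))

# Example: the contest peak from Section 4 (54 jobs/min, ~4.99 jobs/min per node)
print(desired_workers(queue_length=30, arrival_rate=54, mu_per_node=4.99))  # -> 15
```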

3.2. Mathematical Model

In our analysis, we consider only submissions that successfully pass all test cases. Submissions that fail or exceed the allowed time at any stage are eliminated, ensuring that only items which progress through all three stages of the pipeline (Test Cases → PMD Analysis → LLaMA Pattern Detection) are included. This allows us to measure the complete processing time W for each item consistently.
Additionally, there is no serial dependency beyond per-user ordering. Tasks from different users are independent and can run concurrently, so the system-level analysis assumes parallel execution across users, while each user’s pipeline remains sequential.
We consider a generic queueing system and introduce the following notation:
  • L(t): average queue length at time t;
  • λ(t): arrival rate at time t (jobs per unit time);
  • λ_eff(t): effective arrival rate at time t (jobs per unit time);
  • μ: service rate per machine (jobs processed per unit time);
  • m: number of parallel processing machines (queues).
Even if the exact inter-arrival and service-time distributions are unknown, empirical traces in real systems typically exhibit diurnal variability, burstiness near deadlines, and transient queue-drain phases. A useful analytical approximation is to treat the system as an M/M/m queue with effective rates
$$\mu_{\mathrm{eff}} = m\,\mu, \qquad \lambda_{\mathrm{eff}}(t) \le \lambda(t),$$
leading to an approximate expected queue length
$$L(t) \approx \frac{\lambda_{\mathrm{eff}}(t)}{\mu_{\mathrm{eff}} - \lambda_{\mathrm{eff}}(t)}.$$
This approximation is valid whenever the stability condition
$$\lambda_{\mathrm{eff}}(t) < \mu_{\mathrm{eff}}$$
is satisfied; otherwise, the system becomes overloaded, and the queue diverges.
Although submission traces (Figure 1) exhibit burstiness and deviations from Poisson arrivals, we use the M/M/m model as a first-order approximation to capture the average system load. While the Poisson/exponential assumptions do not strictly hold, this approximation is valuable because it provides analytical insight into expected queue lengths and utilization. To partially account for burstiness, we introduce an effective arrival rate λ_eff(t) ≤ λ(t), which smooths short-term bursts and avoids overestimating congestion. The approximate expected queue length is then the one presented in the equations above.
For periods when λ ( t ) > μ eff and the system is overloaded, no stationary distribution exists. In this case, we use a fluid approximation that captures average queue growth:
$$\frac{dL}{dt} = \lambda - \mu_{\mathrm{eff}}.$$
If arrivals are indexed deterministically so that the n-th submission occurs at
$$t_n \approx \frac{n}{\lambda},$$
then, noting that for a single server μ_eff = μ, the approximate waiting time experienced by the n-th job is as follows:
$$W_n \approx \frac{L(t_n)}{\mu} = \frac{\lambda - \mu}{\lambda\,\mu}\, n.$$
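The following short Python helpers restate these approximations numerically (symbols as defined above); they are a sketch of the formulas, not the production scheduler:

```python
def mmm_queue_length(lam_eff: float, mu: float, m: int) -> float:
    """Approximate expected queue length L ≈ λ_eff / (μ_eff − λ_eff) for an M/M/m pool.

    Returns infinity when the stability condition λ_eff < m·μ is violated.
    """
    mu_eff = m * mu
    if lam_eff >= mu_eff:
        return float("inf")          # overloaded: the queue diverges
    return lam_eff / (mu_eff - lam_eff)

def fluid_wait(n: int, lam: float, mu: float) -> float:
    """Fluid-regime wait of the n-th job, W_n ≈ (λ − μ) n / (λ μ), for λ > μ (single server)."""
    assert lam > mu, "fluid approximation applies only to the overloaded case"
    return (lam - mu) / (lam * mu) * n

# Examples with rates in jobs/min
print(round(mmm_queue_length(40, 4.99, 12), 2))   # ~2.01 jobs waiting on average
print(round(fluid_wait(100, 10, 4), 1))           # 15.0 min for the 100th job
```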
Limitations: This model may underestimate peak queue lengths and does not accurately capture tail latencies. It is therefore intended primarily for average-load analysis rather than for fine-grained delay predictions.

4. Results

4.1. Case Study: LambdaChecker in a Programming Contest Increasing the Service Rate

We begin by analyzing a highly loaded contest scenario through a hypothetical M/M/1 queueing model.
We take these values from our dataset. As shown in Figure 1, we observe an average peak of approximately 270 submissions within 5-min intervals during the OOP practical examination. This gives an arrival rate of
$$\lambda = \frac{270\ \text{jobs}}{5\ \text{min}} = 54\ \text{jobs/min}.$$
Analysis of a historical contest dataset shows that the total evaluation time for submitted code ranges from 6.5 s to 17 s, with an average duration of 12.02 s. The code correctness evaluation is strictly limited to 2 s; executions that exceed this threshold result in a timeout exception and are excluded from further analysis. The PMD code quality analysis has a relatively fixed execution time of approximately 1.5 s. The remaining portion of the evaluation time is dominated by LLM inference, which varies with the input size (i.e., the number of tokens) and takes approximately 8.5 s on average. Given the average total evaluation time, this corresponds to a service rate of
$$\mu = \frac{1}{12.02\ \text{s}} \times 60\ \frac{\text{s}}{\text{min}} \approx 4.99\ \text{jobs/min}.$$
The traffic intensity is
$$\rho = \frac{\lambda}{\mu} = \frac{54}{4.99} \approx 10.82,$$
indicating severe overload and unbounded queue growth.
Considering a single-queue system, we can compute the waiting time of the final submission, n = 2119:
$$\lambda - \mu = 49.01, \qquad \lambda\mu = 269.46, \qquad \frac{\lambda - \mu}{\lambda\mu} = \frac{49.01}{269.46} \approx 0.1818, \qquad W_{2119} \approx 0.1818 \cdot 2119 \approx 385\ \text{min}.$$
Thus, the 2119th job waits roughly 6 h and 25 min before service begins.
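As a quick check, the peak-load figures above can be reproduced with a few lines of arithmetic (a verification sketch using the same rounding as in the text):

```python
lam = 270 / 5                              # 54 jobs/min at the contest peak
mu = 60 / 12.02                            # ≈ 4.99 jobs/min for a 12.02 s average evaluation
rho = lam / mu                             # ≈ 10.82: severe overload for a single queue
w_last = (lam - mu) / (lam * mu) * 2119    # wait of the 2119th submission, in minutes
print(round(rho, 2), round(w_last), divmod(round(w_last), 60))   # 10.82 385 (6, 25)
```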

4.2. Determining the Required Number of Machines

To prevent unbounded queue growth during peak contest activity, we model LambdaChecker as an M/M/c system with c identical processing nodes.
  • Peak Load Parameters (from our dataset)
    $$\lambda = 54\ \text{jobs/min}, \qquad \mu = 4.99\ \text{jobs/min per machine}.$$
  • Total Service Capacity
    $$\mu_{\mathrm{total}} = c\,\mu.$$
  • Stability Condition
    $$\rho = \frac{\lambda}{\mu_{\mathrm{total}}} \le 1 \quad\Longleftrightarrow\quad \frac{\lambda}{c\,\mu} \le 1.$$
  • Minimum Machines Required
    $$c \ge \frac{\lambda}{\mu} = \frac{54}{4.99} \approx 10.8 \;\Longrightarrow\; c \ge 11.$$
Thus, at least 11 machines are needed to sustain the peak arrival rate without unbounded queueing. However, c = 11 puts the system close to critical load (ρ = 1), meaning small fluctuations can still cause delays. In practice, provisioning at least 12 machines yields significantly more stable performance.
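The sizing step can be verified numerically; the sketch below simply restates the ceiling calculation and the resulting utilizations for 11 and 12 nodes:

```python
import math

lam, mu = 54, 4.99                 # peak arrival rate and per-machine service rate (jobs/min)
c_min = math.ceil(lam / mu)        # minimum machines for stability -> 11
rho_11 = lam / (11 * mu)           # ≈ 0.98: uncomfortably close to the critical load
rho_12 = lam / (12 * mu)           # ≈ 0.90: leaves headroom for bursts
print(c_min, round(rho_11, 2), round(rho_12, 2))   # 11 0.98 0.9
```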
By constraining the number of jobs permitted per user and discarding any excess submissions, the system’s effective load is reduced, thereby decreasing the required number of nodes. This represents just one specific instance of a broader class of resource-management strategies commonly employed to limit system load. Such methods, while effective at stabilizing resource usage, may result in discarded submissions; thus, an appropriate balance between system robustness and user experience must be carefully maintained.

5. Discussion

We evaluated the impact of our optimization strategies, based on our mathematical model, on system performance as the number of processing nodes increased. In addition, we discuss applying queue policies to limit the number of submissions per user. To validate and explore the queuing behavior empirically, we implemented a discrete-event simulation in Python [24].
In our simulation, arrivals enter a single global queue managed in first-come, first-served (FCFS) order by arrival time. When a processing node becomes free, the job at the head of the global queue is assigned to that node for immediate processing. Thus, the system operates as a single pooled resource. This discipline corresponds exactly to the standard M/M/m queueing model used in Section 3.2, which assumes a pooled set of identical servers with a shared waiting line. This ensures that no VMs remain idle while jobs wait in the queue. The results are visualized in Figure 3 and Figure 4 as plots of wait times per submission, providing a clear picture of how increasing the number of processing nodes improves performance.
Figure 3. Per-submission wait times across varying numbers of nodes, illustrating the impact of node availability on processing delays. For our case study, using 12 nodes makes the system stable.
Figure 4. Submission frequency and wait times with a fixed number of 12 processing nodes.
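A condensed sketch of the pooled FCFS discipline described above is shown below. It is not the published simulation script [24]; the toy arrival and service distributions are illustrative assumptions rather than the real contest trace:

```python
import heapq
import random

def simulate_fcfs(arrival_times, service_times, m):
    """Single global FCFS queue served by m identical nodes.

    Returns the queueing delay (time between arrival and start of service)
    for each job, in the same unit as the inputs.
    """
    free_at = [0.0] * m                      # next instant each node becomes free
    heapq.heapify(free_at)
    waits = []
    for arrive, service in zip(arrival_times, service_times):
        node_free = heapq.heappop(free_at)   # earliest-available node
        start = max(arrive, node_free)
        waits.append(start - arrive)
        heapq.heappush(free_at, start + service)
    return waits

# Toy workload: 2119 submissions spread over ~160 min, 6.5-17 s evaluations, 12 nodes
random.seed(0)
arrivals = sorted(random.uniform(0, 160 * 60) for _ in range(2119))    # seconds
services = [random.uniform(6.5, 17.0) for _ in range(2119)]
print(f"max queueing delay: {max(simulate_fcfs(arrivals, services, 12)):.1f} s")
```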

5.1. Effect of Scaling Machines

Figure 3 illustrates the expected waiting time W(t) for different numbers of machines (processing nodes). Consistent with M/M/m queueing theory, increasing the number of nodes reduces the waiting time, particularly during peak submission periods. For the original single-server setup, the traffic intensity ρ ≫ 1, resulting in extremely long queues; our calculations show that the last submissions could experience waiting times exceeding 6 h. Doubling or appropriately scaling the number of machines significantly reduces W(t) and stabilizes the queue. While effective, this approach requires additional infrastructure, highlighting the trade-off between system performance and operational costs.
In Figure 4, we include a simulation with 12 processing nodes to validate the accuracy of our previous computations. Among the 2119 submissions, the maximum queueing delay observed is approximately 10.3 s. With 12 processing nodes, the queueing delay remains below 11 s throughout the contest. Given a maximum job evaluation time of approximately 15 s, the resulting end-to-end user-perceived latency, defined as the sum of queueing delay and evaluation time, remains below 25 s. In addition, we overlay the same submission frequency shown in Figure 1 over the contest duration to visualize the submission workload on LambdaChecker for a contest with 200 participants.

5.2. Effect of Queue Policies

By limiting the number of jobs per user and discarding excess submissions, the system reduces load and lowers the number of required nodes. This is an example of a common resource-management strategy that stabilizes usage but may discard submissions, necessitating a careful balance between system robustness and user experience.

5.3. Contest Insights from Code Evaluation System

Several insights arise from our analysis. First, limiting the effective arrival rate is especially impactful when the system is heavily loaded. Due to the non-linear relationship between waiting time and traffic intensity, small reductions in λ_eff can lead to disproportionately large decreases in W(t). Second, adding new processing nodes improves performance but increases operational costs, whereas queue policies provide significant benefits at minimal expense. Finally, controlling submissions not only reduces average waiting times but also stabilizes the queue, leading to more predictable processing times for users.
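This non-linearity can be illustrated with the approximation from Section 3.2; the specific arrival rates below are chosen for illustration only:

```python
# Effect of a ~5% reduction in effective arrivals near saturation
# (M/M/m approximation, 12 nodes at ~4.99 jobs/min each).
mu_eff = 12 * 4.99                      # ≈ 59.88 jobs/min total capacity
for lam_eff in (57, 54):                # jobs/min
    L = lam_eff / (mu_eff - lam_eff)    # approximate expected queue length
    print(lam_eff, round(L, 1))         # 57 -> 19.8 jobs, 54 -> 9.2 jobs
```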
Beyond system metrics, these strategies have a direct impact on user experience and performance. When feedback was delayed or unavailable during the contest, participants experienced uncertainty and frustration, often resulting in lower engagement and lower scores. In the initial run, some students waited up to five minutes to receive feedback. In contrast, when feedback was returned more quickly (e.g., in under 25 s), users could identify and correct errors promptly, boosting both learning and confidence. To ensure comparability, participants were offered problems of similar difficulty across conditions. Our data show that with faster feedback, the average score increased from 53 to 69 out of 100. This demonstrates that system-level optimizations not only improve operational efficiency but also meaningfully enhance users’ outcomes and engagement.
While several alternative queueing strategies exist in the literature, such as Shortest Job First (SJF) [25] and Multilevel Feedback Queues (MLFQ) [26], these approaches are typically motivated by environments with high variability in job runtimes, preemption support, or runtime estimates. In contrast, student submissions in our system have short, tightly bounded evaluation times (typically 6–18 s) and are executed non-preemptively. Under these conditions, a First-Come-First-Served (FCFS) policy provides a simple, fair, and effective scheduling strategy without the overhead or assumptions required by more complex schedulers. In addition, we incorporate LLM-based design pattern detection, which accelerates grading and improves the accuracy of design-level feedback.

5.4. Limitations and Future Work

The M/M/m approximation assumes exponential inter-arrival and service times, which may not fully capture the real-world behavior of submissions. Variability in machine performance and user submission patterns may affect the observed service rate μ , and queue-limiting policies could influence user behavior in ways not captured by this static analysis. Despite these limitations, our combined approach provides a practical and data-driven strategy for managing waiting times effectively, balancing system performance, stability, and operational cost. Our mathematical model may apply to other institutions or online judge platforms with different user populations, submission behaviors, or infrastructure. Future work could explore more realistic arrival distributions and system configurations to capture these practical considerations better.
While our results demonstrate significant performance improvements over our previous system, we did not perform a direct experimental comparison against queueing disciplines such as SJF or MLFQ. We acknowledge this as a limitation of the current study and identify the empirical evaluation of these alternative scheduling policies as a subject for future work.

6. Conclusions

Our analysis shows that a combination of server scaling and queue-control policies can effectively manage system performance during peak submission periods. Analytical calculations indicate that a single-server setup becomes severely overloaded under peak demand, leading to waiting times of several hours. Adding new processing nodes reduces waiting times, although it incurs additional infrastructure costs.
Queue-management policies that limit the number of active submissions per user substantially reduce the effective arrival rate, decreasing both the average waiting time and the queue variability. This approach delivers significant performance improvements at minimal cost and helps stabilize the system, preventing extreme delays even during high submission rates.
The combination of server scaling and submission control yields the best results, maintaining manageable queues and near-optimal waiting times throughout peak periods. Timely feedback directly improves participants’ experience and performance. Delays create uncertainty and frustration, lowering engagement, while feedback within 25 s lets users quickly correct errors, boosting understanding, confidence, and contest results. We found that providing faster feedback boosted average scores by approximately 30% on tasks of similar difficulty, highlighting how reducing wait times can improve performance outcomes. These findings demonstrate that system-level optimizations not only improve operational efficiency but also meaningfully enhance users’ outcomes and engagement. This emphasizes the importance of balancing technical performance with participants’ satisfaction.

Author Contributions

  Conceptualization, D.-F.D. and A.-C.O.; methodology, N.Ț.; software, D.-F.D.; validation, A.-C.O. and N.Ț.; formal analysis, D.-F.D.; investigation, D.-F.D. and A.-C.O.; resources, N.Ț.; data curation, D.-F.D.; writing—original draft preparation, D.-F.D.; writing—review and editing, A.-C.O. and N.Ț.; visualization, D.-F.D.; supervision, N.Ț.; project administration, N.Ț.; funding acquisition, A.-C.O. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Program for Research of the National Association of Technical Universities under Grant GNAC ARUT 2023.

Institutional Review Board Statement

This study utilized fully anonymized data with no collection of personally identifiable information. Based on these characteristics, the study qualified for exemption from Institutional Review Board (IRB) review.

Data Availability Statement

Paper dataset and scripts can be found at https://tinyurl.com/2z4tf4yc (accessed on 10 December 2025).

Acknowledgments

The authors would like to thank Emil Racec for coordinating the development of LambdaChecker.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kuklick, L. When Computers Give Feedback: The Role of Computer-Based Feedback and Its Effects on Motivation and Emotions. IPN News. 24 June 2024. Available online: https://www.leibniz-ipn.de/en/the-ipn/current/news/when-computers-give-feedback-the-role-of-computer-based-feedback-and-its-effects-on-motivation-and-emotions (accessed on 10 December 2025).
  2. Dosaru, D.-F.; Simion, D.-M.; Ignat, A.-H.; Negreanu, L.-C.; Olteanu, A.-C. A Code Analysis Tool to Help Students in the Age of Generative AI. In Proceedings of the European Conference on Technology Enhanced Learning, Krems, Austria, 16–20 September 2024; pp. 222–228. [Google Scholar]
  3. PMD. An Extensible Cross-Language Static Code Analyzer. Available online: https://pmd.github.io/ (accessed on 10 December 2025).
  4. Dosaru, D.-F.; Simion, D.-M.; Ignat, A.-H.; Negreanu, L.-C.; Olteanu, A.-C. Using GenAI to Assess Design Patterns in Student Written Code. IEEE Trans. Learn. Technol. 2025, 18, 869–876. [Google Scholar] [CrossRef]
  5. Bavota, G.; Linares-Vásquez, M.; Poshyvanyk, D. Generative AI for Code Quality and Design Assessment: Opportunities and Challenges. IEEE Softw. 2022, 39, 17–24. [Google Scholar]
  6. Chen, M.; Tworek, J.; Jun, H.; Yuan, Q.; de Oliveira Pinto, H.P.; Kaplan, J.; Edwards, H.; Burda, Y.; Joseph, N.; Brockman, G.; et al. Evaluating Large Language Models Trained on Code. arXiv 2021, arXiv:2107.03374. [Google Scholar] [CrossRef]
  7. Meta AI. Introducing Meta Llama 3.1: Our Most Capable Models to Date. 23 July 2024. Available online: https://ai.meta.com/blog/meta-llama-3-1/ (accessed on 10 December 2025).
  8. Lu, J.; Chen, Z.; Zhang, L.; Qian, Z. Design and Implementation of an Online Judge System Based on Cloud Computing. In Proceedings of the IEEE 2nd International Conference on Cloud Computing and Big Data Analysis, Chengdu, China, 28–30 April 2017; pp. 36–40. [Google Scholar]
  9. Singh, A.; Sharma, T. A Scalable Online Judge System Architecture Using Container-Based Sandboxing. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 245–252. [Google Scholar]
  10. Keuning, H.; Jeuring, J.; Heeren, B. A Systematic Literature Review of Automated Feedback Generation for Programming Exercises. ACM Trans. Comput. Educ. (TOCE) 2018, 19, 1–43. [Google Scholar] [CrossRef]
  11. Frolov, A.; Buliaiev, M.; Sandu, R. Automated Assessment of Programming Assignments: A Survey. Inform. Educ. 2021, 20, 551–580. [Google Scholar]
  12. Li, X.; Tang, W.; Yuan, Y.; Li, K. Dynamic Task Offloading and Resource Scheduling for Edge-Cloud Collaboration. Future Gener. Comput. Syst. 2019, 95, 522–533. [Google Scholar]
  13. Wang, S.; Zhang, X.; Zhang, Y.; Wang, L.; Yang, J.; Wang, W. A Survey on Mobile Edge Networks: Convergence of Computing, Caching and Communications. IEEE Access 2020, 8, 197689–197709. [Google Scholar] [CrossRef]
  14. Chouliaras, S.; Sotiriadis, S. An adaptive auto-scaling framework for cloud resource provisioning. Future Gener. Comput. Syst. 2023, 148, 173–183. [Google Scholar] [CrossRef]
  15. Zhang, Q.; He, Q.; Chen, W.; Chen, S.; Xiang, Y. Adaptive Autoscaling for Cloud Applications via Reinforcement Learning. IEEE Trans. Cloud Comput. 2021, 9, 1162–1176. [Google Scholar]
  16. Khan, M.G.; Taheri, J.; Al-Dulaimy, A.; Kassler, A. PerfSim: A Performance Simulator for Cloud Native Microservice Chains. IEEE Trans. Cloud Comput. 2021, 11, 1395–1413. [Google Scholar]
  17. da Silva Pinheiro, T.F.; Pereira, P.; Silva, B.; Maciel, P.R. A performance modeling framework for microservices-based cloud infrastructures. J. Supercomput. 2022, 79, 7762–7803. [Google Scholar] [CrossRef]
  18. Moiseeva, S.; Polin, E.; Moiseev, A.; Sztrik, J. Performance Modeling of Cloud Systems by an Infinite-Server Queue Operating in Rarely Changing Random Environment. Future Internet 2025, 17, 462. [Google Scholar] [CrossRef]
  19. Ahmed, U.Z.; Srivastava, N.; Sindhgatta, R.; Karkare, A. Characterizing the pedagogical benefits of adaptive feedback for compilation errors by novice programmers. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering: Software Engineering Education and Training (ICSE-SEET), Seoul, Republic of Korea, 27 June–19 July 2020; pp. 139–150. [Google Scholar]
  20. Law, Y.K.; Tobin, R.W.; Wilson, N.R.; Brandon, L.A. Improving student success by incorporating instant-feedback questions and increased proctoring in online science and mathematics courses. J. Teach. Learn. Technol. 2020, 9, 1. [Google Scholar] [CrossRef]
  21. Govea, J.; Edye, E.O.; Revelo-Tapia, S.; Villegas-Ch, W. Optimization and scalability of educational platforms: Integration of artificial intelligence and cloud computing. Computers 2023, 12, 223. [Google Scholar] [CrossRef]
  22. Kim, Y.; Lee, K.; Park, H. Watcher: Cloud-based coding activity tracker for fair evaluation of programming assignments. Sensors 2022, 22, 7284. [Google Scholar] [CrossRef] [PubMed]
  23. Mas, L.; Vilaplana, J.; Mateo, J.; Solsona, F. A Queuing Theory Model for Fog Computing. J. Supercomput. 2022, 78, 11138–11155. [Google Scholar] [CrossRef]
  24. Dosaru, D. Optimized Code Evaluation Simulation; GitHub Repository, Main Branch. Available online: https://github.com/dosarudaniel/optimized_code_evaluation_simulation/tree/main (accessed on 10 December 2025). Software version used: Python script as available in the main branch at the time of access.
  25. Alworafi, M.A.; Dhari, A.; Al-Hashmi, A.A.; Darem, A.B.; Suresha. An improved SJF scheduling algorithm in cloud computing environment. In Proceedings of the 2016 International Conference on Electrical, Electronics, Communication, Computer and Optimization Techniques (ICEECCOT), Mysuru, India, 9–10 December 2016; pp. 208–212. [Google Scholar]
  26. Khan, A.; Khan, M.A.; Kim, S.W. Exploring multilevel feedback queue combinations and dynamic time quantum adjustments. J. Inf. Sci. Eng. 2020, 36, 1045–1063. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
