Article

Analysis of Core Temperature Dynamics in Multi-Core Processors

1 Department of Information Technology, SIES Graduate School of Technology, University of Mumbai, Navi Mumbai 400706, India
2 Department of Computer Engineering, Sardar Patel Institute of Technology, University of Mumbai, Mumbai 400058, India
3 Department of Electronics & Telecommunication Engineering, Sardar Patel Institute of Technology, University of Mumbai, Mumbai 400058, India
* Author to whom correspondence should be addressed.
J. Low Power Electron. Appl. 2025, 15(4), 68; https://doi.org/10.3390/jlpea15040068
Submission received: 1 September 2025 / Revised: 5 November 2025 / Accepted: 18 November 2025 / Published: 2 December 2025

Abstract

As technologies like Artificial Intelligence, Blockchain, and Virtual Reality advance, High-Performance Computers and multi-core processors are in high demand and find many applications in today’s Cyber–Physical World. Multi-core systems have therefore become ubiquitous. The core temperature is affected by intensive computational tasks, parallel execution of tasks, thermal coupling effects, and limitations of cooling methods. High temperatures may further decrease the performance of the chip and the overall system. In this paper, we study different parameters related to core performance. The MSI Afterburner utility is used to extract the hardware parameters. Single-variate and multivariate analyses are carried out on core temperature, core usage, and core clock to study the performance of all cores. Single-variate analysis shows the need for action when core temperatures, core usage, and clock speeds exceed threshold values. Multivariate analysis reveals correlations between these parameters, guiding optimization strategies. We have also implemented the ARIMA model for core temperature estimation and obtained an average RMSE of 2.44 °C. Our analysis and ARIMA model for temperature estimation are useful in developing smart scheduling algorithms that optimize thermal management and energy efficiency.

1. Introduction

Embedded systems have a vital role today across various industries and applications. Some key areas where embedded systems have significance include Consumer Electronics, Automotive Industry, Vehicle Control Systems, Healthcare, Control Systems, Communication Systems, Aerospace and Defense, Internet of Things (IoT), Entertainment and Gaming, Energy Management, etc. Multi-core systems commonly find applications in all devices [1]. Due to the fast-growing demands for computational power, high performance, and efficiency, there is a paradigm shift in the architecture of processors, making multi-core processors more significant. Multi-core designs incorporate multiple processing units or cores, leading to a reduction in hardware footprint and minimizing inter-core communication latency, thereby enhancing system performance [2]. S. Borkar [3] has presented the architecture of a many-core system that delivers extraordinary computational performance. The advent of multi-core processors aligns with the surge in applications using intensive data, such as big data analytics. By distributing tasks across multiple cores, applications can exploit parallelism to achieve faster execution times and handle more extensive computational work. Technological advancements drive an increase in the number of cores within multi-core chips. These technological trends directly influence the chip’s temperature [4]. The core temperature of a multi-core processor can increase due to several factors, and understanding these reasons is crucial for effective thermal management. Some key factors contribute to an increase in core temperature in multi-core processors, such as increased workload, clock speed, voltage, poor cooling systems, spatial and temporal power variability, thermal interference, and environmental factors.
Maintaining core temperature under a certain threshold in multi-core processors is crucial for reliability, longevity, performance optimization, stability, consistency, and energy efficiency. Traditional scheduling algorithms often lack adaptability, demanding the integration of intelligent, data-driven solutions. Thermal-aware scheduling in multi-core processors leverages machine learning techniques to dynamically optimize task allocation and scheduling decisions based on the processor temperature at the current time. As transfer learning supports EEG models to adapt to varying subjects [5], thermal-aware scheduling aims for power and performance optimization in multi-core systems. The integration of machine learning enables the system to learn and adapt to complex thermal patterns, predicting and mitigating potential thermal issues before they impact performance. Addressing thermal challenges through intelligent scheduling gives way to more sustainable computing practices.
The primary contributions of this paper are real-time data set generation to study system parameters in a High-Performance Computing environment, detailed analysis of core temperature, core usage, and core clock in multi-core processors, and an evaluation of how various system parameters affect the temperature of the core. This study is useful in implementing a thermal-aware scheduling algorithm for multi-core processors.
Our paper has five sections. Section 2 presents the work of other researchers, which conveys the importance of minimizing power consumption and temperature at the core level and the processor level. The section mainly focuses on the necessity of task scheduling based on the core temperature in multi-core systems. Section 3 discusses the methodology of our research work, which highlights the analysis of core temperature dynamics in multi-core processors. The results of the single-variate and multivariate analyses are presented in Section 4. Section 5 includes the conclusion of the work we have carried out.

2. Related Work

Many researchers are now focusing on the issues arising from the energy consumption of computational devices [6], as the use of battery-powered devices is increasing. Paper [7] discusses the crucial factors of power and temperature in processors used in embedded systems, emphasizing their growing significance in ensuring reliability and efficiency across diverse applications. Through a comprehensive literature review, the study provides insights into the intricate relationship between temperature and power consumption, evaluating existing techniques such as temperature modeling, power consumption analysis, and thermal management strategies like Dynamic Voltage and Frequency Scaling (DVFS) and power gating. DVFS and Dynamic Power Management (DPM) are the commonly used methods for reducing power consumption in the processing unit. DVFS is a method in which the voltage and/or frequency of a core or processor is varied according to the requirement [8]. DVFS policies can balance energy efficiency and performance and help avoid hotspots [9], and adaptive, behavior-driven frameworks adjust thermal management according to workload patterns in heterogeneous systems [10]. Putting these theoretical scheduling approaches into practice has shown clear energy and thermal benefits on commercial processors, though uneven DVFS steps and sensor delays still impose a few constraints [11]. Paper [12] discusses that workload-based adjustments in voltage and frequency can reduce heat generation.
Eva et al. present different techniques of energy consumption approximation [13]. The authors have also discussed the tool used to monitor performance and power. The thermal modeling of various processors was explained by Sheikh et al. [14]. Guowei Wu et al. [15] have described the process of calculating the core temperature. Techniques for reducing temperature and power for heterogeneous System-on-Chip (SoC) platforms are discussed by Bhat et al. [16]. The review of methods adopted for energy management is presented in [17]. The research to reduce the total consumption of power is carried out for the systems comprising multiprocessors [18]. Mona A. et al. have presented a literature review on power–energy simulation methodologies tailored for processors with multi-cores within the context of benchmarking [19]. They have considered factors such as workload distribution, task scheduling algorithms, and architectural characteristics. The authors of [20] have provided systematic explanations for temperature calculation and approaches to examine the feasibility of temperature constraints. In [21], the authors focused on processors with heterogeneous multi-cores for calculating the minimum completion time required for all tasks under the given constraints. Zhou et al. [22] have conducted research in the field of diverse real-time multiprocessor system-on-chip (MPSoC) platforms. Authors have analyzed the relation between power and temperature, identifying conditions that help avoid thermal issues in multiprocessor systems [23].
The paper [24] presents a survey of many energy-cognizant schedulers and their role in multi-core architecture. Bambagini et al. [25] have presented a classification of energy-aware scheduling algorithms for single and multiple cores, detailing approaches built on Dynamic Voltage and Frequency Scaling (DVFS) and Dynamic Power Management (DPM) as well as combinations of the two; the algorithms are categorized into Independent Frequencies and Voltage Islands. Researchers have also focused on reliability, power conversion, and long-term thermal resilience: on the hardware side, innovative controllers for DC–DC buck converters, such as switched-capacitor and twin-frequency designs, reduce thermal stress in embedded power modules while improving efficiency [26,27]. An overview of thermal-aware scheduling is presented by Chrobak M et al. [28]. They have also discussed various alternatives and challenges that exist in this domain. Existing research exploring various techniques for thermal management in 3D multi-core systems, including DVFS, workload scheduling, and thermal-aware task allocation algorithms, is discussed by A. Coskun et al. [29]. The paper [30] discusses simulation techniques for evaluating power consumption across both hardware and software components in multi-core architectures, considering factors such as workload characterization, power modeling, and performance analysis. In the paper [31], a design approach for better computational performance and improved energy efficiency is presented for Central Processing Unit (CPU), Field Programmable Gate Array, and Graphics Processing Unit multi-core architectures. A spatiotemporal optimization algorithm is suggested in [32] to reduce power consumption and voltage offsets for multi-core computers.
Smart scheduling can reduce power and temperature significantly and maintain performance in multi-core architectures [33]. The fuzzy-logic-based approaches for optimizing task scheduling to mitigate temperature rise and improve energy efficiency in heterogeneous MPSoC architectures are discussed by Zohreh et al. in [34]. They have considered factors such as workload distribution, temperature prediction, and energy consumption. Bartlomiej Kocot et al. have discussed how, as the need for computational power is increasing, energy consumption is becoming a serious concern in High-Performance Computing (HPC). Bio-inspired algorithms and Reinforcement Learning (RL) are a few of the emerging trends [35]. In the paper [36], many effective ways of managing power in multi-core processors are discussed. The software approach for efficient management of power can be implemented through better task scheduling algorithms based on the temperature of the cores. Santiago Pagani et al. thoroughly explored the complexities of energy, power, and temperature management in multi-core processors [37]. Through a detailed review encompassing power management, energy optimization, and thermal management, the authors evaluate the effectiveness of machine learning algorithms in optimizing power settings, minimizing energy consumption, and regulating temperature levels. Simplified learning models are integrated into task schedulers, which keep CPU temperatures within safe values [38]. In the paper [39], the authors have used adaptive estimation and online identification to improve real-time temperature forecasting for CPU–GPU systems. Graph Convolutional Neural Networks can accurately map spatial temperature distributions, supporting precise runtime control [40]. Statistical learning methods for allocation of tasks to cores help minimize thermal hotspots without affecting throughput [41]. 
A few co-design approaches jointly optimize task scheduling and temperature control, ensuring that deadlines are met under strict thermal limits [42,43]. Machine learning models are also used in reliability-aware control strategies to predict the lifetime of components, thus maintaining safe temperature ranges in industrial cyber–physical systems and enhancing device lifespan [44]. Paper [45] also states that machine-learning-based techniques can be used in multi-core systems to optimize various parameters. A Reinforcement Learning algorithm is also used by Yang et al. [46] to reduce temperature in multi-core systems. An RL policy for DVFS and task scheduling has been applied in multi-core systems [47], but it could not schedule multiple applications simultaneously. Our research work focuses primarily on the research gaps identified above. Effective energy management is crucial for embedded systems using battery-powered devices, as industrial studies have shown that the lifespan of a device can be increased by reducing its operating temperature [48]. To reduce the temperature, software-based techniques are comparatively simple, less costly, and involve less risk compared to hardware approaches.

3. Methodology

The research carried out is explained through the flowchart shown in Figure 1 below. The entire methodology is divided into five major steps. We start by generating the data sets required for our study. This data set is then preprocessed to check if any data values are missing or have any inconsistencies. This is followed by an algorithm implementation to extract the features. A detailed study is performed, which consists of single-variate and multivariate analysis. The impact of system parameters on core temperature in the multi-core processor environment is discussed in detail. Finally, we have implemented ARIMA-based core temperature estimation, which can enable the development of intelligent scheduling algorithms.

3.1. Data Set Generation

As the data sets required for our analysis were not available, we generated our own. We used the MSI Afterburner utility (Micro-Star International Co., Ltd., Taipei City, Taiwan) [49], a widely used graphics card software that works with any card. With this tool, we can monitor hardware parameters in real time. Whenever the system executes heavy applications, such as multimedia and gaming, it is necessary to explore the limits of the system in general. This tool helps in accessing system-level parameters, which further helps in striking a balance between the temperature of the cores or processors and system performance. The tool was run on a system comprising a hexa-core Intel i5 processor (with 6 efficient cores) and Intel(R) UHD Graphics 620. The various system parameters extracted through the logs are shown in Table 1 below.
For single-variate analysis, two logs of 5 MB each were extracted when the number of applications running was 2 and 13, respectively. The third log is 17 MB when more than 20 applications are running. For multivariate analysis, we are using only one log of 17 MB. Each of the logs has the specifications mentioned in Table 2 below.

3.2. Preprocessing of Data

The 5 MB and 17 MB log files had 58 initial rows containing primary information about the hardware monitoring log of the processor, which were deleted; from the 59th row onwards, core-wise data is maintained. A few rows in which not all values were fetched were also deleted. The data was sampled every second. For convenience of analysis and visualization, the values were averaged over one-minute intervals. The timestamp column is in the form “27-02-2024 10:57:37”, i.e., “dd-mm-yyyy hr:mn:se”.
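The preprocessing steps above can be sketched with pandas; the column names and the inline sample below are hypothetical stand-ins for the real MSI Afterburner log, which carries 58 header rows and one sample per second:

```python
import io
import pandas as pd

# Hypothetical excerpt standing in for an MSI Afterburner log; real files
# would be read with pd.read_csv(path, skiprows=58) to drop the header rows.
raw = io.StringIO(
    "timestamp,core1_temp,core1_usage\n"
    "27-02-2024 10:57:37,55,42\n"
    "27-02-2024 10:57:38,56,44\n"
    "27-02-2024 10:58:10,58,51\n"
)

df = pd.read_csv(raw)
# Parse the dd-mm-yyyy hr:mn:se timestamps described above
df["timestamp"] = pd.to_datetime(df["timestamp"], format="%d-%m-%Y %H:%M:%S")
df = df.dropna()  # drop rows where not all values were fetched
# Average the one-second samples over one-minute intervals
per_minute = df.set_index("timestamp").resample("1min").mean()
```

The resampled frame then carries one averaged row per minute, which is what the plots in Section 4 are drawn from.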

3.3. Feature Extraction

Correlation analysis is a very important part of research that makes use of machine learning (ML). It is an essential statistical tool that helps reveal the relationships between variables. In ML, it helps in many ways, such as selecting the features that have the greatest impact in training the model. Correlation allows exploring data for primary patterns and relationships. It is influential in interpreting the model and in finding multicollinearity issues in regression analysis. Therefore, correlation analysis serves as a foundation for carrying out research in ML, improving the performance of the model. In our analysis, core temperature and core usage are the parameters of interest. The correlation is expressed [50] using Equations (1) and (2).
$$\rho_{temp,\,usage} = \frac{\operatorname{cov}(temp,\,usage)}{\sigma_{temp}\,\sigma_{usage}} \tag{1}$$
where
  • ρ(temp, usage): Pearson correlation coefficient between temperature and usage.
  • cov(temp, usage): covariance between temperature and usage.
  • σ_temp: standard deviation of temperature.
  • σ_usage: standard deviation of usage.
In sample form, the same coefficient is computed directly from the observations as
$$\rho_{X,Y} = \frac{\sum_{i=1}^{n}(X_i-\bar{X})(Y_i-\bar{Y})}{\sqrt{\sum_{i=1}^{n}(X_i-\bar{X})^2}\,\sqrt{\sum_{i=1}^{n}(Y_i-\bar{Y})^2}} \tag{2}$$
where
  • X, Y: the two random variables (temperature and usage).
  • X_i, Y_i: individual observations of X and Y, respectively.
  • X̄, Ȳ: means of variables X and Y.
  • n: number of data points (sample size).
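Equations (1) and (2) can be checked numerically with a short sketch; the per-core samples below are hypothetical, not values taken from our logs:

```python
import numpy as np

# Hypothetical per-core samples (temperature in °C, usage in %)
temp = np.array([50.0, 52.0, 55.0, 60.0, 58.0, 62.0])
usage = np.array([20.0, 30.0, 45.0, 70.0, 60.0, 80.0])

# Equation (1): rho = cov(temp, usage) / (sigma_temp * sigma_usage)
cov = np.mean((temp - temp.mean()) * (usage - usage.mean()))
rho = cov / (temp.std() * usage.std())

# Equation (2) is the same coefficient written out over the observations;
# NumPy's correlation matrix yields the identical value.
rho_np = np.corrcoef(temp, usage)[0, 1]
```

The two computations agree to floating-point precision, which is a quick sanity check that the covariance-over-deviations form and the sample form describe the same quantity.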

4. Results

4.1. Single-Variate Analysis

The single-variate analysis involves the study of only one variable without considering the other. Three major parameters are analyzed here, namely core temperature, core usage, and core frequency, for each of the logs, with a different number of applications running in the system. In Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10, each core is represented using a unique colour to differentiate it from the others. Core 1 is shown in pastel red, Core 2 in orange, Core 3 in yellowish green, Core 4 in green, Core 5 in olive green, Core 6 in teal, Core 7 in light blue, Core 8 in medium blue, Core 9 in steel blue, Core 10 in violet, Core 11 in lavender, and Core 12 in pink. In the box plots, the box denotes the interquartile range (Q1–Q3), the median is represented by the horizontal line inside the box, the vertical whiskers represent the minimum and maximum value within 1.5 times the interquartile range, and outliers that fall beyond the whiskers are shown by black dots. The following figures, Figure 2, Figure 3 and Figure 4, are plotted for all 12 cores, presenting only one variable, which is “core temperature”.
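The box-plot convention just described (box = Q1 to Q3, whiskers at 1.5 times the interquartile range, points beyond them marked as outliers) can be sketched as a small helper; the input temperatures are hypothetical:

```python
import numpy as np

def box_stats(values, k=1.5):
    """Box-plot summary as used in Figures 2-10: the box spans Q1-Q3, the
    whiskers reach the furthest points within k*IQR of the quartiles, and
    everything beyond the whiskers is treated as an outlier."""
    v = np.sort(np.asarray(values, dtype=float))
    q1, median, q3 = np.percentile(v, [25, 50, 75])
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - k * iqr, q3 + k * iqr
    inside = v[(v >= lo_fence) & (v <= hi_fence)]
    return {
        "Q1": q1, "median": median, "Q3": q3,
        "whiskers": (float(inside.min()), float(inside.max())),
        "outliers": v[(v < lo_fence) | (v > hi_fence)].tolist(),
    }

# Hypothetical per-core temperature samples (°C) with one hot spike
stats = box_stats([52, 54, 55, 55, 56, 57, 58, 90])
```

Here the 90 °C spike falls beyond the upper fence and is reported as an outlier, while the whiskers stop at the furthest in-range samples.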
Figure 5, Figure 6 and Figure 7, shown below, are plotted for all 12 cores, presenting only one variable, “core usage”. The association of core temperature and core usage is complex in nature, especially in multi-core processor environments, as it is impacted by many parameters. Some of the common observations are discussed here.
The graphs shown below in Figure 8, Figure 9 and Figure 10 are plotted for all 12 cores, presenting a variable “core clock”. The relationship between core temperature and core clock speed in a multi-core processor is generally found to be linear. Higher clock speeds increase power consumption, which in turn generates more heat within the processor.

4.2. Multivariate Analysis

Multivariate analysis includes a more detailed analysis of multiple parameters simultaneously. This analysis helps to learn patterns that may not be seen while understanding the individual variables. In this analysis, again, three major parameters are considered, namely core temperature, core usage, and core frequency. The data set with a 17 MB log is considered here for the study.
Figure 11 below shows the temperature of all 12 cores plotted against the timestamp when the log was extracted.
Figure 12 below demonstrates the usage of all 12 cores plotted against the timestamp when the log was extracted. We observe that all 12 cores have the same pattern for usage. Core usage is the degree to which individual cores are actively involved in executing computational tasks at any given time.
The clock variation in all 12 cores is plotted against the timestamp in Figure 13 below. We again observe that all 12 cores have the same pattern for clock speed. When the clock speed increases, it also leads to an increase in power consumption, as more electrical energy is required.

4.3. Estimation of Core Temperature

Our above analysis is useful in implementing task scheduling algorithms based on core temperature, in which case the estimation of core temperature is essential in mapping the tasks to cores. Paper [51] discusses the use of Graph Convolutional Neural Networks (GCNs) for predicting the core temperatures. We have implemented three models for estimating the core temperatures, namely Linear Regression, Polynomial Regression, and ARIMA.
The Linear Regression model explains how independent parameters relate to a dependent variable. The Polynomial Regression model is an extension of Linear Regression; it helps explain a non-linear relation between dependent and independent variables. In our case, the independent parameters are, for all 12 cores, the core-wise temperature, core-wise usage, and core-wise clock, along with overall CPU usage, overall CPU clock, overall CPU power, overall CPU temperature, and RAM usage. The dependent parameter was the core-wise temperature. The equation for Linear Regression is given in Equation (3), and the equation for Polynomial Regression is given in Equation (4) below.
$$T = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_n X_n + \varepsilon \tag{3}$$
where
  • T = predicted core temperature (°C)
  • X_1, X_2, ..., X_n = independent variables
  • β_0 = intercept term (baseline temperature when all X_i = 0)
  • β_i = regression coefficients representing the sensitivity of temperature to each variable
  • ε = random error term
$$T = \beta_0 + \beta_1 X + \beta_2 X^2 + \beta_3 X^3 + \cdots + \beta_d X^d + \varepsilon \tag{4}$$
where
  • T = predicted core temperature (°C)
  • X = independent variable
  • d = degree of the polynomial
  • β_0, β_1, ..., β_d = model coefficients
  • ε = random error term
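Equations (3) and (4) can be sketched as ordinary least-squares fits in NumPy; the predictors, targets, and coefficients below are synthetic placeholders, not the fitted values from our data set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic predictors: core usage (%) and core clock (GHz); target: temp (°C)
usage = rng.uniform(10, 100, 200)
clock = rng.uniform(0.8, 4.5, 200)
temp = 35.0 + 0.2 * usage + 4.0 * clock + rng.normal(0.0, 1.0, 200)

# Equation (3): least-squares fit of T = b0 + b1*usage + b2*clock + eps
X = np.column_stack([np.ones_like(usage), usage, clock])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)

# Equation (4): polynomial regression in one variable (degree 3 in usage)
Xp = np.column_stack([usage ** d for d in range(4)])
beta_poly, *_ = np.linalg.lstsq(Xp, temp, rcond=None)

rmse_lin = float(np.sqrt(np.mean((X @ beta - temp) ** 2)))
```

With enough samples the recovered coefficients approach the generating ones, and the residual RMSE approaches the noise level, which is the same RMSE criterion used to compare the models in Section 4.3.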
We also implemented the AutoRegressive Integrated Moving Average (ARIMA) model to estimate the core temperatures. The ARIMA model is a statistical method used for time series forecasting; thus, it is very effective in estimating core-level temperature. Since core temperature has patterns and interdependencies over time, ARIMA is effective in predicting future values from past observations. The model has three major components: AutoRegression (AR), which considers previous temperature values; Integration (I), which differences the data to remove trends; and Moving Average (MA), which considers previous forecasting errors. ARIMA can effectively model the temperature fluctuations, providing accurate predictions. The general equation for an ARIMA (p, d, q) model is given in Equation (5) below:
$$Y_t = c + \sum_{i=1}^{p} \varphi_i\, Y_{t-i} + \sum_{j=1}^{q} \theta_j\, \varepsilon_{t-j} + \varepsilon_t \tag{5}$$
where
  • Y_t = value at time t (after d-th order differencing);
  • c = constant term;
  • p = order of the AutoRegressive (AR) part;
  • φ_i = AR coefficients;
  • d = degree of differencing;
  • q = order of the Moving Average (MA) part;
  • θ_j = MA coefficients;
  • ε_t = white noise error term.
An ARIMA (1,1,1) model is given by Equation (6) below.
$$Y_t - Y_{t-1} = c + \varphi_1\,(Y_{t-1} - Y_{t-2}) + \theta_1\,\varepsilon_{t-1} + \varepsilon_t \tag{6}$$
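Equation (6) can be turned into a small one-step forecaster; the coefficients and temperature series below are illustrative, not the values fitted in our experiments:

```python
import numpy as np

def arima_111_forecast(y, c=0.0, phi1=0.6, theta1=0.3):
    """One-step-ahead forecasts from the ARIMA(1,1,1) recursion:
    Y_t - Y_{t-1} = c + phi1*(Y_{t-1} - Y_{t-2}) + theta1*eps_{t-1} + eps_t.
    Coefficients here are illustrative, not fitted values from the paper."""
    y = np.asarray(y, dtype=float)
    preds = []          # forecasts for t = 2 .. len(y)-1
    eps_prev = 0.0      # last one-step forecast error
    for t in range(2, len(y)):
        d_hat = c + phi1 * (y[t - 1] - y[t - 2]) + theta1 * eps_prev
        y_hat = y[t - 1] + d_hat        # undo the differencing
        preds.append(y_hat)
        eps_prev = y[t] - y_hat         # realized error feeds the MA term
    return np.array(preds)

temps = [52, 53, 55, 58, 57, 56, 58, 60, 61, 60]   # synthetic core temps (°C)
pred = arima_111_forecast(temps)
rmse = float(np.sqrt(np.mean((np.array(temps[2:]) - pred) ** 2)))
```

Each forecast adds the predicted difference back onto the last observed value, and the realized error feeds the MA term of the next step; the RMSE computed this way is the same accuracy metric reported for the fitted model below.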
We have obtained the results shown in Table 3 below. The average RMSE is observed to be 2.44 °C.
The RMSE values obtained for all 12 cores range from 1.651 (Core 9) to 3.414 (Core 2). The fluctuations indicate that although the ARIMA model is yielding a fairly accurate temperature prediction, some cores can have greater deviations, likely caused by variations in workload, dynamic power usage, or model constraints. For Core 10, the predicted values are as shown in Figure 14 and Figure 15 below.
Figure 14 is a time-series plot of the temperature of Core 10. The blue graph represents the actual temperature values, i.e., the reference against which the ARIMA estimates are compared. The red graph represents the estimated Core 10 temperature. Figure 15 shows the console output comparing the estimated temperature values with the actual temperature values for rows 2904, 2905, and 2906.

5. Discussion

5.1. Single-Variate Analysis

Based on the graphical analysis of Figure 2, Figure 3 and Figure 4, it is observed that as the number of applications running on the system increases, the temperature of the cores keeps increasing. There is also an increase in the outliers, which indicates the need to take some action when temperatures exceed a certain threshold value, e.g., 60 °C. Maintaining the processor temperature is important to ensure optimal performance, reliability, and safety. It is observed from Figure 5, Figure 6 and Figure 7 that as core usage increases, power consumption increases, which further leads to more heat generation. This causes a proportionate temperature increase at the core and processor levels. When core usage reaches its maximum level, the core temperature rises quickly due to thermal limitations. Such situations can lead to degradation of performance. In processors having multiple cores, tasks can be distributed unevenly, resulting in some cores experiencing higher temperatures than others. In such cases, a few cores are underutilized and a few are loaded heavily, which can lead to an uneven distribution of heat. In multi-core processors, core usage thus directly affects core temperatures through increased power consumption and heat generation during task execution. It is also observed that as the number of applications running on the system increases, the usage of the cores increases. The box plots drawn for the three cases show outliers for usage going beyond 60% and sometimes reaching nearly 100%. These core usage values can potentially impact the system and hence require more detailed examination to learn their significance for overall performance.
It is observed from Figure 8, Figure 9 and Figure 10 that, as the number of applications running on the system increases, the clock speed of all the cores increases above 1000 MHz. The box plots drawn for the three cases also show outliers for clock values beyond 3000 MHz, with a few values exceeding 4500 MHz. These clock values can potentially impact the system and hence require more detailed examination to learn their significance for overall performance.

5.2. Multivariate Analysis

From Figure 11, it is observed that Core 1 and Core 2 have higher temperature values as compared to other cores. One reason for this could be that Core 1 and Core 2 may be executing more challenging or rigorous tasks compared to the other cores. System-level processes could also be assigned to Core 1 and Core 2, which results in increased activity and an increase in their temperatures as compared to other cores. Figure 12 illustrates that the type of workload distribution, scheduling policies, settings of task affinity, etc., can make all cores exhibit similar patterns of core usage. It is also observed that Core 1 and Core 2 have larger usage values as compared to other cores. As was explained earlier, Core 1 and Core 2 may be executing more challenging or rigorous tasks compared to the other cores, due to which their usages are also higher. Any changes in the pattern of core usage can influence core temperature dynamics. Heavy computational tasks may cause a speedy temperature rise, while idle time slots may permit cores to cool down. Figure 13 shows that sometimes, processors are overclocked, increasing their speed beyond their threshold values to obtain higher performance. This can lead to generating more heat. The temperature variations at various clock frequencies can be one of the reasons that affect the reliability of the cores. This can further reduce the performance of the processor.
Therefore, making a good balance in clock speeds, workload stress, and thermal conditions is very crucial for optimizing the performance of the processor. This also ensures long-term reliability of the processor and the device.

5.3. Estimation of Core Temperature

We implemented Linear Regression with an average RMSE of 2.58 °C and a Polynomial Regression model with an average RMSE of 2.48 °C for estimating the core temperature, but continued working with ARIMA, as it is the most suitable for time series forecasting. In multi-core processors, the evolution of temperature is distinct per core; the ARIMA framework adapts easily to these dynamics, making it suitable for real-time thermal estimation and prediction. We obtained an average RMSE of 2.44 °C for the ARIMA model. If the parameters of the ARIMA model are fine-tuned, or if hybrid models are used for further experimentation, the prediction accuracy can be improved for better thermal management.

6. Conclusions

In multi-core systems, the technological trends affect the temperature at the core level and processor level, which in turn reduces the performance and life span of the processor. Therefore, it is essential to maintain the temperature below the threshold levels. Software methods are relatively simpler than hardware methods. Thermal-aware scheduling allocates tasks to cores based on the core temperature, which helps maintain the temperature below a certain threshold. Therefore, to build a thermal-aware scheduling algorithm, one can integrate information about core temperature, core usage, and core clock into the scheduling process. To examine these parameters, we have generated a data set to extract the system-level parameters while running different applications on the multi-core system.
The single-variate analysis of core temperature indicates that some action is essential when core temperatures exceed a threshold, such as 60 °C, to prevent performance degradation and hardware damage. The single-variate analysis of core usage shows outliers with core usage exceeding 60% and occasionally reaching nearly 100%. These high usage levels may affect system performance, demanding further analysis to evaluate their impact and address potential risks. The single-variate analysis of core clocks reveals outliers with values exceeding 3000 MHz, some greater than 4500 MHz, potentially affecting system performance. The multivariate analysis also focuses on core temperature, core usage, and core clock, where all 12 cores follow the same pattern. Since core temperature is correlated with both core usage and core clock, regression can be used to derive meaningful insights that inform thermal-aware scheduling and optimization strategies. Balancing core temperature, usage, and clock speeds is important for optimizing the performance of the multi-core processor and ensuring long-term reliability of both the processor and the device.
We obtained an average RMSE of 2.44 °C for the ARIMA model. By fine-tuning the ARIMA parameters or using hybrid models, the prediction accuracy can be improved further for better thermal management. One possible direction is AI-driven Dynamic Thermal Management (DTM), combining ARIMA with Reinforcement Learning or Deep Learning models to make real-time adjustments to CPU scheduling and cooling mechanisms. Another is an adaptive Dynamic Voltage and Frequency Scaling (DVFS) approach driven by ARIMA's predictions. In multi-core or cloud-based systems, ARIMA's predictions can also guide offloading tasks to cooler cores. The developed estimation model can be integrated with a scheduling algorithm to anticipate thermal hotspots in real time and improve energy efficiency, enabling proactive allocation of tasks across all cores to balance the temperature.
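The prediction-driven offloading idea can be sketched as below. This is a hand-rolled stand-in for ARIMA(1,1,0), fitting an AR(1) coefficient on the first-differenced temperature trace of each core, since the authors' exact ARIMA orders and implementation are not restated here; the traces are illustrative.

```python
def forecast_next(series):
    """One-step-ahead forecast via AR(1) on first differences
    (a minimal stand-in for ARIMA(1,1,0))."""
    d = [b - a for a, b in zip(series, series[1:])]  # differenced series
    x, y = d[:-1], d[1:]
    denom = sum(v * v for v in x)
    phi = sum(a * b for a, b in zip(x, y)) / denom if denom else 0.0
    return series[-1] + phi * d[-1]                  # undo differencing

def coolest_core(traces):
    """Pick the core with the lowest predicted next temperature."""
    preds = {core: forecast_next(t) for core, t in traces.items()}
    return min(preds, key=preds.get), preds

# Illustrative per-core temperature traces in degrees Celsius.
traces = {
    "core0": [55, 57, 60, 62, 65],  # heating up
    "core1": [50, 51, 51, 52, 52],  # roughly stable
}
core, preds = coolest_core(traces)
print(core)  # → core1 — the scheduler would offload the next task here
```

A production scheduler would replace `forecast_next` with a properly fitted ARIMA model (e.g., from statsmodels) and refit as new samples arrive.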

Author Contributions

Conceptualization, L.L.; methodology, L.L.; software, L.L.; validation, L.L.; formal analysis, L.L.; investigation, L.L.; resources, L.L.; data curation, L.L.; writing—original draft preparation, L.L.; writing—review and editing, Y.S.R.; visualization, L.L.; supervision, Y.S.R.; project administration, Y.S.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Flowchart for Methodology.
Figure 2. Core temperatures when 2 applications are running.
Figure 3. Core temperatures when 13 applications are running.
Figure 4. Core temperatures when more applications are running (17 MB file).
Figure 5. Core usages when 2 applications are running.
Figure 6. Core usages when 13 applications are running.
Figure 7. Core usages when more applications are running (17 MB file).
Figure 8. Core clock when 2 applications are running.
Figure 9. Core clock when 13 applications are running.
Figure 10. Core clock when more applications are running (17 MB file).
Figure 11. Comparison of all core temperatures over the timestamp (17 MB file).
Figure 12. Comparison of all core usages over the timestamp (17 MB file).
Figure 13. Comparison of all core frequencies over the timestamp (17 MB file).
Figure 14. Graph of the estimated temperature of Core 10.
Figure 15. Estimated values of Core 10 temperature.
Table 1. Parameters extracted from the system.
Timestamp | CPU1 temperature | CPU1 usage | CPU1 clock
GPU1 temperature | CPU2 temperature | CPU2 usage | CPU2 clock
GPU1 usage | CPU3 temperature | CPU3 usage | CPU3 clock
GPU2 usage | CPU4 temperature | CPU4 usage | CPU4 clock
GPU1 FB usage | CPU5 temperature | CPU5 usage | CPU5 clock
GPU1 VID usage | CPU6 temperature | CPU6 usage | CPU6 clock
GPU2 VID usage | CPU7 temperature | CPU7 usage | CPU7 clock
GPU1 BUS usage | CPU8 temperature | CPU8 usage | CPU8 clock
GPU1 memory usage | CPU9 temperature | CPU9 usage | CPU9 clock
GPU2 memory usage | CPU10 temperature | CPU10 usage | CPU10 clock
GPU1 core clock | CPU11 temperature | CPU11 usage | CPU11 clock
GPU2 core clock | CPU12 temperature | CPU12 usage | CPU12 clock
GPU1 memory clock | CPU temperature | CPU usage | CPU clock
GPU2 power | CPU power | RAM usage | Commit charge
GPU1 voltage limit | GPU1 no load limit
Table 2. Parameter Specifications.
Parameter | Unit | Min | Max | Parameter | Unit | Min | Max | Parameter | Unit | Min | Max
CPU1 temperature | °C | 10 | 100 | CPU1 usage | % | 10 | 100 | CPU1 clock | MHz | 10 | 5000
CPU2 temperature | °C | 10 | 100 | CPU2 usage | % | 10 | 100 | CPU2 clock | MHz | 10 | 5000
CPU3 temperature | °C | 10 | 100 | CPU3 usage | % | 10 | 100 | CPU3 clock | MHz | 10 | 5000
CPU4 temperature | °C | 10 | 100 | CPU4 usage | % | 10 | 100 | CPU4 clock | MHz | 10 | 5000
CPU5 temperature | °C | 10 | 100 | CPU5 usage | % | 10 | 100 | CPU5 clock | MHz | 10 | 5000
CPU6 temperature | °C | 10 | 100 | CPU6 usage | % | 10 | 100 | CPU6 clock | MHz | 10 | 5000
CPU7 temperature | °C | 10 | 100 | CPU7 usage | % | 10 | 100 | CPU7 clock | MHz | 10 | 5000
CPU8 temperature | °C | 10 | 100 | CPU8 usage | % | 10 | 100 | CPU8 clock | MHz | 10 | 5000
CPU9 temperature | °C | 10 | 100 | CPU9 usage | % | 10 | 100 | CPU9 clock | MHz | 10 | 5000
CPU10 temperature | °C | 10 | 100 | CPU10 usage | % | 10 | 100 | CPU10 clock | MHz | 10 | 5000
CPU11 temperature | °C | 10 | 100 | CPU11 usage | % | 10 | 100 | CPU11 clock | MHz | 10 | 5000
CPU12 temperature | °C | 10 | 100 | CPU12 usage | % | 10 | 100 | CPU12 clock | MHz | 10 | 5000
CPU temperature | °C | 10 | 100 | CPU usage | % | 10 | 100 | CPU clock | MHz | 10 | 5000
Table 3. Root Mean Square Error (RMSE) for Each Core.
Core No. | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
RMSE (°C) | 3.356 | 2.414 | 2.736 | 2.756 | 2.742 | 2.780 | 2.082 | 2.069 | 1.651 | 1.656 | 2.557 | 2.488
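The average RMSE of 2.44 °C quoted in the text can be reproduced directly from the per-core values in Table 3:

```python
# Per-core RMSE values from Table 3 (degrees Celsius).
rmse = [3.356, 2.414, 2.736, 2.756, 2.742, 2.780,
        2.082, 2.069, 1.651, 1.656, 2.557, 2.488]

avg = sum(rmse) / len(rmse)
print(round(avg, 2))  # → 2.44
```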
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
