Article

Power Estimation and Energy Efficiency of AI Accelerators on Embedded Systems

Department of Computer Science & Engineering, Incheon National University, Incheon 22012, Republic of Korea
*
Author to whom correspondence should be addressed.
Energies 2025, 18(14), 3840; https://doi.org/10.3390/en18143840
Submission received: 5 June 2025 / Revised: 7 July 2025 / Accepted: 17 July 2025 / Published: 19 July 2025
(This article belongs to the Special Issue Energy, Electrical and Power Engineering: 4th Edition)

Abstract

The rapid expansion of IoT devices poses new challenges for AI-driven services, particularly in terms of energy consumption. Although cloud-based AI processing has been the dominant approach, its high energy consumption calls for more energy-efficient alternatives. Edge computing offers an approach for reducing both latency and energy consumption. In this paper, we propose a methodology for estimating the power consumption of AI accelerators on an embedded edge device. Through experimental evaluations involving GPU- and Edge TPU-based platforms, the proposed method demonstrated estimation errors below 8%. The estimation errors were partly due to unaccounted power consumption from main memory and storage access. The proposed approach provides a foundation for more reliable energy management in AI-powered edge computing systems.

1. Introduction

The exponential growth of connected sensors and Internet of Things (IoT) devices has resulted in a significant increase in data generation at the network edge. While this data-rich environment enables a broad range of AI-driven services, it also introduces significant challenges, particularly in terms of energy consumption and environmental impact. Traditional AI systems rely heavily on centralized cloud infrastructures to process and analyze vast streams of data, a model that contributes substantially to carbon emissions. Data centers alone are currently responsible for around 2% of global CO2 emissions, and with rising demand, their energy consumption is expected to grow by 10% annually [1].
In response to these challenges, edge computing has emerged as a technically viable and energy-efficient alternative to centralized architectures. By processing and storing data closer to where it is generated—at the “edge” of the network—AI applications can reduce their dependency on power-hungry cloud servers, thereby minimizing data transmission requirements and lowering overall energy consumption. Performing AI inference locally on edge devices mitigates network dependency, minimizes latency, and contributes to substantial improvements in overall system energy efficiency. Recent advancements in low-power AI accelerators, lightweight neural network architectures, and model optimization techniques have further enhanced the feasibility of executing complex AI workloads directly on resource-constrained edge devices.
To leverage hardware accelerators, machine learning models trained with standard deep learning libraries must be converted into the precision and format required by the hardware accelerator used in the embedded system. Even for the same deep learning model, the mapped operations can differ between libraries, and performance (execution time) and energy consumption can vary depending on the computing resources used, such as CPUs (Central Processing Units), GPUs (Graphics Processing Units), and other accelerators. Additionally, AI hardware accelerators often employ quantization techniques to increase speed, which can impact the accuracy of the results [2]. Unlike data centers, where energy resources are comparatively abundant and scalable, edge devices often operate within limited power budgets, constrained thermal envelopes, and battery-powered contexts. In such scenarios, achieving system-wide energy efficiency depends not only on deploying low-power hardware but also on accurately estimating and managing the power consumption of AI accelerators.
Estimation of power consumption is essential for several reasons. It enables system designers to configure AI accelerators effectively for a given application, balancing computational performance with energy efficiency requirements. Power estimation can be utilized for dynamic power management, such as voltage and frequency scaling, thereby extending device operational lifetime and reducing overall energy consumption. In mission-critical and energy-sensitive sectors such as energy management, healthcare, and autonomous systems, maintaining predictable and efficient power profiles is crucial for ensuring reliable system behavior and preventing operational disruptions.
This paper aims to develop a low-overhead methodology for estimating the power consumption of AI accelerators under varying workloads and operational conditions. To achieve this, we conducted experiments with AI accelerators for embedded systems, including GPUs and an Edge TPU, which support different precision levels of operations.
The remainder of the paper is organized as follows. Section 2 overviews previous works related to this paper. In Section 3, the experimental environment and the target device for power estimation are described, then the power estimation method and data gathering method are explained. The evaluation results for the power estimation method are presented in Section 4. Section 5 discusses the cause of errors in the power estimation. Finally, Section 6 concludes our work and suggests a future research direction to improve the estimation method.

2. Related Works

Significant resources are required to perform deep learning inference, which can be a hindrance when executing AI applications on embedded systems. Several approaches have been researched to address this challenge. Firstly, libraries that require large memory capacity can be replaced with specialized libraries designed for embedded systems, primarily targeting CPU-based inference without the use of hardware accelerators. Inference tools for embedded systems often quantize model weights or input values, reducing model size and inference time. Research by Hadidi et al. [3] compared the performance of various frameworks, including TensorFlow 2.0, PyTorch, Caffe2, and Darknet, using GPUs, TPUs (Tensor Processing Units), and VPUs (Vision Processing Units) as accelerators. However, that study did not use a single embedded board but experimented with different embedded boards, each capable of using various accelerators, making direct comparisons challenging.
Various GPU-based edge AI platforms with different computational capabilities have been released by NVIDIA under the Jetson board series [4]. Although GPUs are not inherently designed as AI-specific accelerators, their integration into embedded systems significantly enhances on-device inference performance. In addition to GPUs, several commercially available AI accelerators featuring dedicated hardware have been introduced, including Google’s Edge TPU [5] and Intel’s Neural Compute Stick (NCS) [6].
A comparative evaluation of the Edge TPU and NCS for 3D object detection tasks is presented in [7]. The work in [8] benchmarks the NVIDIA Jetson Xavier, Edge TPU, and NCS2 using AlexNet and GoogLeNet models, reporting that the NCS2 achieves higher efficiency on AlexNet, while the Edge TPU delivers higher performance with GoogLeNet. Further investigation into architectural trade-offs between computational performance and energy efficiency on Edge TPU and Cortex-A53 platforms is presented in [9], where model size was identified as a primary factor affecting Edge TPU’s performance. Subsequent research in [10] finds that the Edge TPU’s inference performance is highly dependent on the model size.
Tu et al. propose kernel-level energy predictors and scoring metrics to enhance transparency in on-device deep learning energy consumption [11]. Similarly, the MLPerf Power benchmark offers standardized frameworks for comparing energy efficiency across AI systems ranging from small edge devices to large cloud infrastructures, helping to quantify the energy/performance trade-off for edge deployments [12]. Empirical studies of embedded AI inference platforms in [13] show that real-time power profiling could allow more efficient scheduling and resource management in constrained environments. In [14], energy-aware device-level techniques, such as supply voltage scaling and quantization-aware inference, are shown to be effective in improving the energy efficiency of neural network accelerators on edge devices.

3. Estimation of Power Consumption

3.1. Target Embedded Hardware and Accelerators

The present experiments use NVIDIA’s Jetson AGX Xavier as the target board, with brief specifications shown in Table 1 [4]. The target board features 8 power modes, allowing for dynamic adjustment of CPU and GPU clock frequencies according to the selected power mode. The minimum frequencies for CPU and GPU are 1190.4 MHz and 114.75 MHz, respectively. The maximum frequencies for power modes are summarized in Table 2.
We measured the power consumption of the target system in the idle state using ADpower’s HPM-300A [15], which allowed us to record power consumption at a maximum of 4 times per second. Figure 1 depicts the setup for the power measurement. The target board is connected to an AC/DC converter, at which the power meter measures the power and energy consumption of the target system. DC input is automatically adjusted by the target board according to its power mode described in Table 2. The GPU is embedded in the target board, while the TPU is connected via a USB 3.0 connection.
Table 3 shows the power consumption at each mode with the lowest CPU frequency and the minimum online cores. MAXN mode is excluded because the clock speed is fixed at the maximum frequency in that mode.
The AI accelerators used in our experiments were NVIDIA GPU and Google’s Coral USB accelerator. The Coral USB accelerator is connected to the target board via USB and utilizes a Tensor Processing Unit (TPU) [5]. The precision available for the Coral USB is INT8 (Integer 8-bit) for the TPU.

3.2. Power Estimation Method

The power consumption of a CPU can be categorized into two components: dynamic power and static (or leakage) power [16]. Static power is directly influenced by the supply voltage and the leakage current, exhibiting a linear relationship with both. In contrast, dynamic power, which is also referred to as switching power, is typically modeled as being proportional to the product of the switching activity factor α, effective capacitance C, operating frequency f, and the square of the supply voltage V (P ∝ αCfV²) [17]. Although this conventional power model does not capture the full complexity of modern processors, it provides a useful framework to estimate CMOS-based processor power.
In CMOS-based designs, dynamic power consumption dominates total power usage [17]. The processor speed f is linearly proportional to the clock speed [18]. However, lowering the supply voltage increases the delay of logic circuits, which in turn requires reducing the operating frequency [19]. This relationship means that changes to operating frequency have a cubic effect on dynamic power consumption, as both frequency and voltage are interdependent. Techniques such as Dynamic Voltage and Frequency Scaling (DVFS) exploit this relationship to reduce power consumption, though they introduce a trade-off: lowering frequency and voltage improves energy efficiency at the cost of reduced processing throughput and computational performance [20].
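As a rough numerical illustration of the cubic relationship above, the conventional model can be sketched in Python (the constants, and the assumption that supply voltage scales linearly with frequency, are illustrative only, not measured values from this work):

```python
def dynamic_power(freq_ghz, alpha=0.5, capacitance=1.0, volts_per_ghz=1.0):
    """Conventional CMOS dynamic power P = alpha * C * f * V^2,
    assuming the supply voltage V scales linearly with frequency f."""
    voltage = volts_per_ghz * freq_ghz
    return alpha * capacitance * freq_ghz * voltage ** 2

# Under this assumption, halving frequency (and voltage) cuts dynamic power 8x.
ratio = dynamic_power(2.0) / dynamic_power(1.0)
```

This is why DVFS can trade a modest performance loss for a disproportionately large power saving.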
We measured the power consumption of the CPU and the accelerators in various settings to determine the estimated power consumption based on the CMOS-based quadratic power dissipation relationship using benchmark programs. Linpack [21] for measuring the CPU power, MatrixMul from CUDA samples [22] for the GPU power, and MobileNet V1 [23] for the TPU power are used as benchmarks. In principle, any program capable of fully utilizing the computing resources can serve as a power consumption benchmark. These particular benchmarks were selected because they are widely adopted and well-recognized for testing the maximum computational workload on a processor.
To measure the CPU power consumption, the GPU clock was set at its minimum, and measurements were taken while varying the number of active CPU cores and CPU clock frequencies. First, to evaluate the effect of CPU frequency on the power consumption, the number of CPU cores was fixed while varying the frequency, and the resulting power consumption was measured. The measured power consumption includes not only the CPU but also the total power usage of the target board. Therefore, the additional power consumed as the CPU frequency increases was estimated by subtracting the idle power consumption from the total measured power. Figure 2 shows the results along with a regression analysis-derived estimation formula for the additional power consumption according to CPU frequency.
To assess the effect of the number of online CPU cores, we also measured the power consumption when varying the number of active cores at each CPU frequency level. The measurement results showed that the number of cores has a linear impact on power consumption. Since the benchmark program fully utilizes the online core at 100%, we must take core utilization rates into account when we estimate power consumption in practical situations.
The estimation model was based on the power consumption of CMOS circuits, and a linear model was derived by observing the increase in power according to the number of active cores from the idle power state. Then, the power consumption of the target board with CPU frequency f (in GHz) and n online cores is modeled as follows:
Power = Idle Power + (0.5f − 0.1) × n × (1.666f² − 4.044f + 3.309)  (1)
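For illustration, Equation (1) maps directly onto a small helper function (a sketch; `idle_power_w` is the idle power of the selected power mode from Table 3):

```python
def cpu_power(idle_power_w, freq_ghz, n_cores):
    """Estimated board power (W) per Equation (1): idle power plus a
    per-core term that depends quadratically on CPU frequency f (GHz)."""
    per_core = (0.5 * freq_ghz - 0.1) * (1.666 * freq_ghz**2
                                         - 4.044 * freq_ghz + 3.309)
    return idle_power_w + n_cores * per_core

# Example: idle power of the 10 W mode, two cores at its 1.2 GHz maximum.
p = cpu_power(8.772, 1.2, 2)
```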
For the GPU power, the number of CPU cores and their clock frequencies were set to the minimum values, and power consumption was measured while varying the GPU clock. The power consumption of the GPU was estimated by subtracting the system’s power consumption without the GPU from the total power consumption. Figure 3 shows the results with an estimation formula for the additional power consumption according to GPU frequency. The resulting GPU power consumption is formulated as follows:
GPU Power = 2.78f² + 6.63f + 1.727  (2)
Since the Edge TPU operates via a USB connection, it is necessary to distinguish the power consumption of the TPU from that of the USB interface. When the Coral USB device was connected, the static power consumption was measured at an average of 1.013 W. If the model size exceeds the internal memory capacity of the TPU, it accesses the target board’s memory, resulting in additional power consumption. To ensure that power consumption from external memory access did not affect the results, we used the MobileNet V1 model, which fits within the TPU’s 8 MB internal memory. Experimental results showed that the TPU’s power consumption remained almost constant at approximately 1.65 W, regardless of the target board’s power mode. Therefore, TPU power is given as follows:
TPU Power = 1.013 + 1.65u  (3)
where u = 1 when the Edge TPU is used. Total power consumption of the target board is estimated by summing Equations (1)–(3).
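Summing the three components can be sketched as follows (frequencies in GHz; the 1.013 W static term applies only while the Coral USB device is attached, and the 1.65 W term only while the TPU is active):

```python
def total_power(idle_power_w, cpu_f, n_cores, gpu_f,
                tpu_active=False, tpu_attached=False):
    """Total board power (W) as the sum of Equations (1)-(3)."""
    cpu = idle_power_w + (0.5 * cpu_f - 0.1) * n_cores * (
        1.666 * cpu_f**2 - 4.044 * cpu_f + 3.309)               # Equation (1)
    gpu = 2.78 * gpu_f**2 + 6.63 * gpu_f + 1.727                # Equation (2)
    tpu = (1.013 if tpu_attached else 0.0) \
        + (1.65 if tpu_active else 0.0)                         # Equation (3)
    return cpu + gpu + tpu

# Example: 10 W mode, two cores at 1.2 GHz, GPU at its 114.75 MHz minimum.
p = total_power(8.772, 1.2, 2, 0.11475)
```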

3.3. Data Collection for Power Estimation

To estimate power consumption in real time, we developed a monitoring program based on the eBPF (extended Berkeley Packet Filter) framework on Linux. It allows programs to monitor events at both the kernel and user application levels in an event-driven manner by attaching to hook points such as system calls, functions, kernel trace points, and network events. Since the default kernel of the Jetson Xavier does not support eBPF, it is necessary to rebuild the kernel using the source code provided by NVIDIA. Additionally, to trace kernel execution and manipulate programs, we used BCC (BPF Compiler Collection), which offers the toolkits needed for working with eBPF.
Performance counters were monitored using eBPF to obtain the number of CPU cycles used per core, and this information was reflected in the number of cores in the CPU power model. We define “effective number of cores” ne as
ne = (sampling frequency) × (number of CPU cycles) / (CPU frequency)
Since the CPU frequency is the maximum possible number of cycles per 1 s, (CPU frequency)/(sampling frequency) is the maximum cycles at the sampling frequency. Thus, the effective number of cores represents the actual utilization of the CPU cores with switching activities. We replace n in Equation (1) with ne in real-time power estimation.
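A minimal sketch of the effective-core computation (assuming per-core cycle counts are collected once per sampling interval):

```python
def effective_cores(cycles_per_core, cpu_freq_hz, sampling_freq_hz=1.0):
    """Effective number of cores n_e: measured cycles across all cores in one
    sampling interval, divided by the cycles one fully busy core could execute."""
    max_cycles = cpu_freq_hz / sampling_freq_hz
    return sum(cycles_per_core) / max_cycles

# Example: two cores each 50% busy on a 2 GHz CPU, sampled at 1 Hz,
# are equivalent to one fully busy core.
n_e = effective_cores([1e9, 1e9], 2e9)
```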
The other information gathered in real time includes: execution time of applications, USB data transfer for monitoring TPU usage, GPU load, and GPU frequency. For GPU information, NVIDIA’s jtop tool was used, while other information was collected directly from the kernel using the eBPF framework.

4. Evaluation of Power Estimation Method

To evaluate the presented power estimation model, we tested using CIFAR-10 image classification data [24] with the YOLO v8 model. The dataset consists of 10 categories, each containing 1000 images, for a total of 10,000 images. Data for estimating power consumption was gathered at a sampling rate of 1 Hz, while actual power consumption was measured simultaneously using a power meter. Figure 4 shows the experimental results with the TPU, and Figure 5 shows those with the GPU.
The total energy consumed for inference, measured with the power meter, is compared in Table 4 with the value estimated by the presented method. As shown in Figure 4 and Figure 5, the presented method overestimates the power consumption of the TPU but underestimates that of the GPU. Although the error is below 8%, the presented model suggested that inference with the TPU would consume more energy than with the GPU, whereas the measured energy consumption was actually lower with the TPU. We discuss the estimation error in the next section.
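The error figures in Table 4 follow directly from the measured and estimated energies:

```python
def estimation_error(measured_j, estimated_j):
    """Signed estimation error in joules and as a percentage of measurement."""
    diff = estimated_j - measured_j
    return diff, 100.0 * diff / measured_j

# Values from Table 4:
tpu_diff, tpu_pct = estimation_error(910.20, 978.72)  # positive: overestimate
gpu_diff, gpu_pct = estimation_error(950.44, 896.29)  # negative: underestimate
```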

5. Discussion

As shown in the previous section, the presented estimation method captures the overall trend of power consumption, but some inaccuracies were observed. To identify the source of the error, we investigated memory usage. As mentioned earlier, the Coral USB with Edge TPU has 8 MB of internal memory. If the model size exceeds the capacity of the internal memory, the off-chip memory is used for streaming the model parameters. Furthermore, because the internal memory is nearly exhausted, too little of it remains to cache the model parameters effectively. Table 5 summarizes YOLO v8's memory use.
Accesses to the main memory of the target board would affect the power consumption. However, it is difficult to measure the number of main memory accesses because the performance counter of the target board does not provide such information. Loading images from storage before inference also consumes a certain amount of power, yet the presented model currently ignores storage access as well, which is another source of estimation error. It is therefore necessary to develop a method for obtaining information on main memory and storage usage, so that the power consumption associated with these accesses can be incorporated into the estimation method.

6. Conclusions

Developing low-overhead methodologies for estimating the power consumption of AI accelerators under varying workloads and operational conditions is a key enabler for scalable and efficient AI deployment at the network edge. Integrating such estimation frameworks into edge computing platforms could improve energy management. This study presents a methodology for estimating the power consumption of AI accelerators. The experimental results showed that while the proposed estimation model successfully captured overall power consumption trends, it exhibited certain deviations, particularly underestimating GPU consumption and overestimating that of the Edge TPU. The observed estimation errors, although within 8%, may stem from the exclusion of main memory and storage access power in the current model. As AI inference workloads involve not only computational processes but also significant memory and storage operations, future work should focus on developing mechanisms for monitoring these accesses and integrating their associated power costs into estimation models. By refining power estimation methodologies to account for these factors, more accurate and energy-efficient AI systems can be realized, enabling sustainable and reliable operation of AI services at the edge.

Author Contributions

Conceptualization, methodology, M.P.; software, validation, M.K.; formal analysis, investigation, M.P.; resources, data curation, M.K.; writing—original draft preparation, M.K.; writing—review and editing, M.P.; visualization, M.K.; supervision, M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Incheon National University Research Fund in 2021.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Energy and AI, International Energy Agency, April 2025. Available online: https://www.iea.org/reports/energy-and-ai (accessed on 28 May 2025).
  2. Wu, H.; Judd, P.; Zhang, X.; Isaev, M.; Micikevicius, P. Integer quantization for deep learning inference: Principles and empirical evaluation. arXiv 2020, arXiv:2004.09602. [Google Scholar] [CrossRef]
  3. Hadidi, R.; Cao, J.; Xie, Y.; Asgari, B.; Krishna, T.; Kim, H. Characterizing the deployment of deep neural networks on commercial edge devices. In Proceedings of the 2019 IEEE International Symposium on Workload Characterization, Orlando, FL, USA, 3–5 November 2019. [Google Scholar]
  4. Power Management for Jetson Xavier NX Series and Jetson AGX Xavier Series Devices. Available online: https://docs.nvidia.com/jetson/archives/l4t-archived/l4t-3275/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide/power_management_jetson_xavier.html (accessed on 26 June 2025).
  5. USB Accelerator. Available online: https://coral.ai/products/accelerator/ (accessed on 30 May 2025).
  6. Intel Neural Compute Stick 2 (Intel NCS2). Available online: https://software.intel.com/content/www/us/en/develop/hardware/neural-compute-stick.html (accessed on 30 May 2025).
  7. Wisultschew, C.; Otero, A.; Protilla, J.; de la Torre, E. Artificial Vision on Edge IoT Devices: A Practical Case for 3D Data Classification. In Proceedings of the 34th Conference on Design of Circuits and Integrated Circuits, Bilbao, Spain, 20–22 November 2019. [Google Scholar]
  8. Kljucaric, L.; Johnson, A.; George, A.D. Architectural Analysis of Deep Learning on Edge Accelerators. In Proceedings of the IEEE Conference on High Performance Extreme Computing, Waltham, MA, USA, 22–24 September 2020. [Google Scholar]
  9. Hosseininoorbin, S.; Layeghy, S.; Sarhan, M.; Jurda, R.; Portmann, M. Exploring Edge TPU for Network Intrusion Detection in IoT. J. Parallel Distrib. Comput. 2023, 179, 104712. [Google Scholar] [CrossRef]
  10. Hosseininoorbin, S.; Layeghy, S.; Kusy, B.; Jurdak, R.; Portmann, M. Exploring Edge TPU for Deep Feed-Forward Neural Networks. Internet Things 2023, 22, 100749. [Google Scholar] [CrossRef]
  11. Tu, X.; Mallik, A.; Chen, D.; Han, K.; Altintas, O.; Wang, H.; Xie, J. Unveiling Energy Efficiency in Deep Learning: Measurement, Prediction, and Scoring across Edge Devices. In Proceedings of the 8th ACM/IEEE Symposium on Edge Computing, Wilmington, DE, USA, 6–9 December 2023. [Google Scholar]
  12. Tschand, A.; Rajan, A.T.R.; Idgunji, S.; Ghosh, A.; Holleman, J.; Király, C.; Ambalkar, P.; Borkar, R.; Chukka, R.; Cockrell, T.; et al. MLPerf Power: Benchmarking the Energy Efficiency of Machine Learning Systems from μWatts to MWatts for Sustainable AI. In Proceedings of the IEEE International Symposium on High-Performance Computer Architecture, Las Vegas, NV, USA, 1–5 March 2025. [Google Scholar]
  13. Ning, Z.; Vandersteegen, M.; Beeck, K.V.; Goedemé, T.; Vandewalle, P. Power Consumption Benchmark for Embedded AI Inference. In Proceedings of the 21st International Conference on Applied Computing, Zagreb, Croatia, 26–28 October 2024. [Google Scholar]
  14. Nabavinejad, S.M.; Salami, B. On the Impact of Device-Level Techniques on Energy-Efficiency of Neural Network Accelerators. arXiv 2021, arXiv:2106.14079. [Google Scholar]
  15. Power Meter & Analyzer (HPM-300A). Available online: https://adpower21com.cafe24.com/shop2/product/power-meter-analyzer-hpm-300a/18/category/51/display/1/ (accessed on 26 June 2025).
  16. Mittal, S. A Survey of Techniques for Improving Energy Efficiency in Embedded Computing Systems. Int. J. Comput. Aided Eng. Technol. 2014, 6, 440–459. [Google Scholar] [CrossRef]
  17. Burd, T.D.; Brodersen, R.W. Energy efficient CMOS microprocessor design. In Proceedings of the 28th Annual Hawaii International Conference on System Sciences, Wailea, HI, USA, 3–6 January 1995; pp. 288–297. [Google Scholar]
  18. Chandrakasan, A.; Sheng, S.; Brodersen, R. Low-power CMOS digital design. IEEE J. Solid-State Circuit 1992, 27, 473–484. [Google Scholar] [CrossRef]
  19. Hong, I.; Kirovski, D.; Qu, G.; Potkonjak, M.; Srivastava, M. Power optimization of variable voltage core-based systems. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 1999, 18, 1702–1714. [Google Scholar] [CrossRef]
  20. Corcoran, P.; Coughlin, T. Safe Advanced Mobile Power Workshop. IEEE Consum. Electron. Mag. 2015, 4, 10–20. [Google Scholar] [CrossRef]
  21. Available online: https://github.com/ereyes01/linpack (accessed on 30 May 2025).
  22. Available online: https://github.com/NVIDIA/cuda-samples (accessed on 30 May 2025).
  23. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar] [CrossRef]
  24. Available online: https://www.kaggle.com/competitions/mu-cifar10 (accessed on 4 June 2025).
Figure 1. Setup for power measurement.
Figure 2. Additional power consumption due to the variation of the CPU frequency.
Figure 3. Additional power consumption due to the variation of the GPU frequency.
Figure 4. Actual and estimated power consumption with TPU.
Figure 5. Actual and estimated power consumption with GPU.
Table 1. Specification of Jetson AGX Xavier.

Characteristic   Description
GPU              512-core Volta GPU with Tensor cores
CPU              8-core ARM v8.2
Memory           32 GB 256-bit LPDDR4x @ 137 GB/s
Storage          32 GB eMMC 5.1
Table 2. Clock configuration of Jetson AGX Xavier.

Property                      MAXN     10 W   15 W   30 W   30 W   30 W   30 W   15 W
Online CPU cores              8        2      4      8      6      4      2      4
CPU maximal frequency (MHz)   2265.6   1200   1200   1200   1450   1780   2100   2188
GPU maximal frequency (MHz)   1377     520    670    900    900    900    900    670
Table 3. Power consumption in the idle state (Watt).

Mode    10 W    15 W    30 W    30 W    30 W    30 W    15 W
Power   8.772   8.871   9.796   8.837   9.878   9.150   10.487
Table 4. Total power consumption and error.

Accelerator   Energy Used   Estimation   Error
Edge TPU      910.20 J      978.72 J     68.52 J (7.53%)
GPU           950.44 J      896.29 J     −54.15 J (−5.70%)
Table 5. YOLO v8 internal memory usage.

On-Chip Memory   Cache   Off-Chip Memory
7.74 MB          512 B   1.73 MB
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
