Article

A Methodology for Extracting Power-Efficient and Contrast Enhanced RGB Images

Department of Computer Engineering and Informatics, University of Patras, 265 04 Patras, Greece
* Author to whom correspondence should be addressed.
Sensors 2022, 22(4), 1461; https://doi.org/10.3390/s22041461
Submission received: 13 January 2022 / Revised: 8 February 2022 / Accepted: 10 February 2022 / Published: 14 February 2022
(This article belongs to the Special Issue Opportunities and Challenges in Energy Harvesting and Smart Sensors)

Abstract:
Smart devices have become an integral part of people's lives. The most common energy-consuming activities for users of such devices are voice calls, text messages (SMS) or email, browsing the World Wide Web, streaming audio/video, and using components such as cameras, GPS, Wifi, 4G/5G, and Bluetooth, either for entertainment or for the convenience of everyday life. Further power sinks are the device screen, RAM, and CPU. The need for communication, entertainment, and computing makes the optimal management of the power consumption of these devices crucial and necessary. In this paper, we employ a computationally efficient linear mapping algorithm known as Concurrent Brightness and Contrast Scaling (CBCS), which transforms the initial intensity values of the pixels in the YCbCr color system. We introduce a methodology that allows the user to select a picture and modify it using the suggested algorithm in order to make it more energy-friendly, with or without the application of histogram equalization (HE). The experimental results verify the efficacy of the presented methodology through various metrics from the field of digital image processing, which guide the choice of the optimal values of the parameters $a, b$ that meet the user's preferences (low- or high-contrast images) and power-saving requirements. For both low-contrast and low-power images, histogram equalization should be omitted, and the appropriate $a$ should be much lower than one. To create high-contrast and low-power images, the application of HE is essential. Finally, quantitative and qualitative evaluations have shown that the proposed approach achieves remarkable performance.

1. Introduction

The penetration of smart devices into their users’ daily lives (irrespective of age) is now prevalent. There is no doubt that such devices have been a necessary part of the professional and personal lives of their owners. Due to the limited size and capacity of the batteries, the efficient and effective use of energy is critical to the functionality and usability of the mobile device. These devices include an operating system with advanced computing ability and connectivity. High-speed data access is provided via Wifi and mobile broadband services [1].
The power consumption of a mobile device depends on many factors [2]. These include the technical characteristics of the device, environmental conditions, and the user's behavior. The main sources of energy consumption associated with user activities on such smart devices include video and voice calls, sending and receiving messages, browsing the World Wide Web, and audio/video streaming.
The usage patterns of a smart device determine the underlying power consumption. Making a voice call requires loading the call application, selecting from the contact list, and dialing the number. Points taken into account in the assessment of energy consumption are the duration of the call, the time to accept the call, and the time to connect. Video calling, however, is even more energy-consuming. Many components of the mobile device (such as the CPU, display, and wireless interfaces) work together to load a web page, each consuming power to achieve the purpose (page loading). Other energy-intensive activities include Google Maps (a web mapping application using the CPU, screen, Wifi, 5G, or GPS), Google Talk (an instant messaging application using the CPU, screen, and audio), YouTube (a web-based video sharing application using the CPU, screen, Wifi, and 5G), and notifications from social media.
In [3], the authors attempted to determine the most important factors describing the overall power consumption during video streaming, and in [4], they aim to optimize the power consumption of mobile devices for video streaming over 4G LTE networks. Modern devices have various built-in Wireless Network Interfaces (Bluetooth, 4G LTE/5G, Wifi) [5] whose power consumption represents a significant portion of the total power of the system, even when idle. The user activities are also connected with the use of screen, the memory, and CPU [6] and device sensors (e.g., GPS, camera) [7,8].
The management of CPU power consumption is subject to hardware interventions more than software ones. The CPU can save energy by running fewer applications on the device. The memory component (RAM) and the storage media (e.g., SD card) consume very little energy when they are not reading/writing files or streaming video. The power consumption of the CPU and RAM is affected not only by the benchmark (gzip, equake, etc.) being executed but also by the operating frequency. In fact, the higher the frequency, the more energy is expended on both of these components, with the CPU consuming significantly more power than the RAM [2]. In addition, the mode in which the device operates has a significant effect on its energy consumption. Specifically, when the mobile device is in idle mode with the screen backlight off, the power consumption of the display is still significant, ranking third after graphics and GSM. In this mode, the device is fully awake, but no application is running while the monitor subsystem is on [9]. The power consumed by the screen of a mobile device depends on the screen brightness level and the pixel brightness levels of the images rendered on it. In particular, the screen energy consumption depends exponentially on the brightness of the image pixels, but linearly on the screen brightness level [10].
Organic Light Emitting Diode (OLED) displays have become prevalent in modern electronic devices due to the power savings they offer compared to their predecessors, Liquid Crystal Displays (LCD) and Light Emitting Diodes (LED) [11]. The power required to display content on an OLED panel depends mainly on the content itself, at any screen brightness level: a white screen draws the maximum power, whereas a black screen draws almost none [12]. Dash and Hu [13] experimentally confirmed that the content displayed on such screens has a significant effect on power consumption. The pixel values of the images that users set as the background on their device, even at the minimum screen brightness, play an important role in the power consumed by the screen and therefore in the total power consumed by the device. Since the screen accounts for a considerable share of the total power of the mobile device, it is desirable to develop an image processing algorithm that saves energy in the display panel (screen). This is the topic that will concern us in the following sections. In particular, we aim to process candidate wallpapers to reduce the energy required for their display and enhance their contrast.
The problem of power saving in OLED displays has been approached in the literature in various ways for both images and videos. Some methods tackle the problem as power-constrained contrast enhancement using histogram-related priors, while more recent works employ Sparse Coding-based techniques [14,15]. With the advent of Artificial Intelligence (AI) and Machine Learning (ML), an unsupervised CNN-based method [16] has been suggested to produce high perceptual quality with lower power consumption.
The present work builds on backlight-scaling techniques for LCDs and the need to reduce the power consumption of content-dependent displays, such as OLED, by content enhancement using the YCbCr model and the luminance component Y of the displayed image, following the philosophy of [17]. We adopt different processing functions, namely, a linear transformation followed by adaptive histogram equalization of Y, instead of the sigmoid function that the authors employ in [17]. Here, we will present a computationally simple algorithm that applies a linear transformation to the input image's luminance pixels to simultaneously affect the contrast and brightness, so that the resulting image consumes less energy when displayed and keeps its quality at an acceptable level. The extracted image will have low contrast, since the linear scaling limits the dynamic range. Therefore, histogram equalization is applied to obtain high-contrast images. Such a choice may depend on the user's preferences.
The structure of the paper is organized as follows. Section 2 presents the histogram technique and equalization process. Moreover, it describes in detail the proposed methodology, the performance metrics, and the power-constrained perspective of our approach. Section 3 describes the results and performance evaluation of the suggested method in terms of power savings and image quality under the involved parameters. Section 4 discusses the elaborated method in relation to similar studies with the same objective and outlines some future directions of the current research. Finally, Section 5 concludes the paper.

2. Materials and Methods

The elaborated approach will be founded on the histogram of a digital image. Therefore, in the next section, we will present some background knowledge about this technique. In addition, our methodology will be described, and the definition of some useful metrics for the evaluation part will be given.

2.1. Histogram

The histogram shows the distribution of data and, in image processing, it is used to show the distribution of pixel values in an image. Usually, we normalize the histogram by dividing each value by the total number of pixels in the image, say $n$. The normalized histogram is then given by the function $p_k = n_k / n$, for $k = 0, 1, \ldots, L-1$ gray levels, where $L = 2^l$ ($l = 8$ bits), and it can be viewed as an approximation of the probability of occurrence of each gray level.
The histogram of a digital image with gray levels in the interval $[0, L-1]$ is a discrete function that expresses the number of pixels of the image at each gray level. It is also a technique that can describe the contrasts of an image and achieve contrast improvements through histogram modification techniques. Contrast expresses the differences in brightness between light and dark areas of a natural scene. Brightness affects the overall appearance of an image and refers to its overall lightness or darkness. Contrast stretching is an image enhancement technique that improves the contrast of an image by “extending” its pixel range to $[0, L-1]$.
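The normalized histogram and contrast stretching described above can be sketched in a few lines of NumPy (a minimal illustration of ours; the paper's experiments use MATLAB, and the function names below are hypothetical):

```python
import numpy as np

def normalized_histogram(gray, levels=256):
    """p_k = n_k / n: the fraction of pixels at each gray level k."""
    counts = np.bincount(gray.ravel(), minlength=levels)
    return counts / gray.size

def contrast_stretch(gray, levels=256):
    """Linearly 'extend' the occupied pixel range to [0, L-1]."""
    lo, hi = gray.min(), gray.max()
    if hi == lo:                      # flat image: nothing to stretch
        return gray.copy()
    out = (gray.astype(np.float64) - lo) * (levels - 1) / (hi - lo)
    return np.round(out).astype(np.uint8)

# a toy 8-bit image occupying only the narrow range [100, 150]
img = np.tile(np.arange(100, 151, dtype=np.uint8), (64, 1))
p = normalized_histogram(img)        # sums to 1 (a discrete probability mass)
stretched = contrast_stretch(img)
print(stretched.min(), stretched.max())  # 0 255
```

After stretching, the narrow input range is mapped onto the full dynamic range, which is exactly the contrast improvement the histogram reveals.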
Histogram modification techniques are classified into linear, non-linear, and adaptive [18]. Linear and non-linear techniques transform pixel values using linear and non-linear mathematical functions, respectively. Histogram matching (or histogram specification) also belongs to these categories. Some traditional techniques of these categories are presented in Table 1.
Among the adaptive techniques, the most representative is histogram equalization. This method smooths the contrasts in an image either locally or globally. Global histogram equalization is applied to the whole image at once. Local equalization scans the image using an overlapping sliding window (small or medium) and applies global equalization within the window to transform its central pixel. To transform the boundary pixels, it is necessary to pad the image perimeter with an appropriate number of symmetric values; alternatively, to reduce the time and computational complexity of the implementation, the boundary pixels are usually left unprocessed. Local equalization is characterized by high computational complexity, which grows with image size.
The histogram equalization is a histogram specification technique in which the specified histogram is uniformly distributed. In [19], the authors suggest a different histogram specification method in which the brightness and contrast are tuned by adjusting the shape of the probability density function (pdf) of the 1D and 2D Gaussian distribution using the mean and variance of the histogram of the original image.
Histogram equalization can be applied to a variety of color systems [20,21,22], such as RGB, LAB, YIQ, YCbCr, HSV, and HSI. The images to be processed are in the RGB color system and are first transformed into one of the other color systems. Histogram equalization is usually applied to the intensity channel; if it is applied to the chromatic components, the chromaticity of the new image will differ from the original [23]. The histogram equalization is applied to the component $L$, $Y$, $V$, or $I$, respectively, and the image is reconstructed in the current color system. Finally, it is restored to the original RGB color system, and thus the new contrast-enhanced RGB image is recovered. Some techniques based on histogram equalization [24] are the following:
  • Brightness Preserving Histogram Equalization with Maximum Entropy;
  • Brightness Preserving Bihistogram Equalization (BPHE);
  • Bihistogram Equalization (BBHE);
  • Dualistic Subimage Histogram Equalization (DSHE);
  • Recursive Mean-Separate Histogram Equalization;
  • Minimum Within-Class Variance Multi-Histogram Equalization (MWCVMHE);
  • Minimum Middle Level Squared Error Multi-Histogram Equalization (MMLSEMHE).

2.2. Proposed Approach

The proposed approach aims to save display power by reducing the image illumination while preserving its quality through brightness and contrast control (namely, linear transformation) and histogram equalization. The ultimate goal of this work is to reduce the battery power consumption of a mobile device in order to increase its lifespan.

2.2.1. Description

Users select an image of their choice as the wallpaper on their device screen. The energy consumed by the screen is related to the brightness and color values of the pixels in the image. In [25], color transformation algorithms are studied for the purpose of replacing colors (background/foreground, theme) in mobile GUIs with new ones that consume less energy. The results show that the proposed solutions in [25], tested on OLED screens, can save over 75% of the energy with acceptable quality. Lee et al. in [26] suggest a new power management scheme for GUI applications with multi-color objects in OLED-based mobile devices, adapting colors based on the Euclidean distance in the CIELAB color space and the mobile power budget. These algorithms focus on “creating” new, less energy-intensive colors, while our approach leaves colors unaffected and focuses on brightness and contrast. Since white consumes the most and black the least energy, we can reduce image energy by reducing its brightness. The proposed solution attempts to reduce the energy consumed by the screen of the smart device in order to reduce the total energy it consumes. Specifically, the basis of our proposal is a linear and then an adaptive transformation of the pixels' intensity values to smooth the contrasts and reduce the overall brightness of the candidate wallpaper. The darker the image, the less energy the device's screen will consume to display it.

2.2.2. The Main Steps of the Algorithm

1. Read the RGB input image $I$.
2. Convert the image from RGB to YCbCr and retrieve the three components $Y$, $C_b$, $C_r$.
3. Apply the Concurrent Brightness and Contrast Scaling technique to the intensity component $Y$ (not the color components). It is a linear function of the form $Y_{CBCS} = aY + b$, where $Y_{CBCS}$ captures the intensity pixel values of the transformed image and $Y$ denotes the intensity of the original image. We should find the appropriate $a > 0$ and $b$ that reduce the image energy without compromising its quality. Parameter $a$ adjusts the contrast, and parameter $b$ adjusts the image brightness.
4. Check whether the resulting values of step 3 lie between 0 and 255. Values below 0 are set to 0, while values above 255 are set to 255.
5. The component $Y_{CBCS}$ (after the previous transformation) is additionally subjected to a global histogram equalization that redistributes its gray levels.
6. Replace the values of the component $Y$ with those that emerged from the previous step, reassemble the image components in YCbCr, and convert the processed image back to RGB.
7. Estimate the energy consumed by the new image, which is expected to be lower than the original, and evaluate the quality of the resulting image through the Peak Signal-to-Noise Ratio ($PSNR$), Mean Squared Error ($MSE$), and Structural Similarity Index ($SSIM$) performance metrics.
The above steps are also illustrated in Figure 1.
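The steps above can be sketched as follows (an illustrative Python/NumPy translation of ours, not the authors' code; we substitute a full-range ITU-R BT.601 conversion for MATLAB's rgb2ycbcr/ycbcr2rgb, which use studio-range scaling, so absolute values differ slightly):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    # full-range ITU-R BT.601 conversion (an assumption; MATLAB's rgb2ycbcr
    # uses studio-range scaling, so the numbers differ slightly)
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128.0)
    g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b = y + 1.772 * (cb - 128.0)
    rgb = np.stack([r, g, b], axis=-1)
    return np.clip(np.round(rgb), 0, 255).astype(np.uint8)

def hist_equalize(y):
    # global HE of an 8-bit luminance plane via the cumulative distribution
    vals = np.round(y).astype(np.uint8)
    cdf = np.cumsum(np.bincount(vals.ravel(), minlength=256)) / vals.size
    lut = np.round(255.0 * cdf)
    return lut[vals]

def he_cbcs(rgb, a, b, apply_he=True):
    y, cb, cr = rgb_to_ycbcr(rgb)          # steps 1-2
    y_cbcs = np.clip(a * y + b, 0, 255)    # steps 3-4: CBCS, then clipping
    if apply_he:
        y_cbcs = hist_equalize(y_cbcs)     # step 5: global HE (optional)
    return ycbcr_to_rgb(y_cbcs, cb, cr)    # step 6: reassemble and restore RGB

# uniform mid-bright image: a = 0.5, b = 0 halves the luminance
img = np.full((8, 8, 3), 200, dtype=np.uint8)
out = he_cbcs(img, a=0.5, b=0, apply_he=False)
print(int(out[0, 0, 0]))  # 100
```

Only the luminance plane is modified; the chrominance planes pass through unchanged, which is what keeps the colors unaffected.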

2.2.3. Histogram Equalization of $Y_{CBCS}$ with and without Power Constraint

Here, we will present the main steps of the histogram equalization technique as described by Lee et al. in [27]. It should be noted that this technique will be applied to the linearly transformed luminance component, denoted as $Y_{CBCS}$.
Let $\mathbf{h}$ be the column vector whose elements $h_k$ are the numbers of pixels with intensity $k$. The probability of the intensity $k$ is defined as $p_k = h_k / (\mathbf{1}^T \mathbf{h})$. The cumulative distribution function (CDF) of the intensity $k$ is given as
$$c_k = \sum_{i=0}^{k} p_i. \qquad (1)$$
Histogram equalization maps each input pixel to a new output value so that the histogram of the new image is uniform. The transformation function maps the intensity $k$ of the input image to the value $y_k$ in the output image. For an $l$-bit image, the transformation function is
$$y_k = \left\lfloor (2^l - 1)\, c_k + 0.5 \right\rfloor, \quad k = 0, 1, \ldots, 2^l - 1. \qquad (2)$$
Ignoring the rounding operator and combining (1) and (2), the following relation for 8-bit images is obtained:
$$y_k - y_{k-1} = 255\, p_k, \quad 0 < k \le 255, \qquad y_0 = 255\, p_0. \qquad (3)$$
Equations (2) and (3) can be summarized as
$$D \mathbf{y} = \bar{\mathbf{h}}, \qquad (4)$$
where
$$D = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 & 0 \\ -1 & 1 & 0 & \cdots & 0 & 0 \\ 0 & -1 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & -1 & 1 \end{pmatrix} \qquad (5)$$
and $\bar{\mathbf{h}} = 255\, \mathbf{h} / (\mathbf{1}^T \mathbf{h})$ is the normalized histogram vector.
It is known that the power of an image is mainly determined by $Y$. Incorporating the power model into the proposed technique, we both enhance the contrast (by equalizing the histogram of the linearly transformed component $Y_{CBCS} = aY + b$) and save power (by reducing the histogram values at large intensities of $Y$). The Lagrangian multiplier technique is employed to compromise between these two goals.
Histogram equalization can be performed by solving $D\mathbf{y} = \bar{\mathbf{h}}$ or, equivalently, by minimizing $\| D\mathbf{y} - \bar{\mathbf{h}} \|^2$, while power consumption can be saved by decreasing $\mathbf{y}^T H \mathbf{y}$. Therefore, contrast enhancement and power saving can be achieved simultaneously by minimizing the sum of these two terms. The Lagrangian cost function is given by
$$C(\mathbf{y}) = (D\mathbf{y} - \bar{\mathbf{h}})^T (D\mathbf{y} - \bar{\mathbf{h}}) + \lambda\, \mathbf{y}^T H \mathbf{y}, \qquad (6)$$
where $\lambda$ is a Lagrangian multiplier and $H = \mathrm{diag}(h_0, h_1, \ldots, h_{255})$. Differentiating the cost function $C(\mathbf{y})$ with respect to $\mathbf{y}$ and setting the derivative to 0, the power-constrained solution is
$$\mathbf{y} = (D^T D + \lambda H)^{-1} D^T \bar{\mathbf{h}}. \qquad (7)$$
The proposed methodology is based on the case $\lambda = 0$, in which the equalized gray levels $\mathbf{y}$ are estimated by
$$\mathbf{y} = D^{-1} \bar{\mathbf{h}}. \qquad (8)$$
The vector $\mathbf{y}$ captures the new intensity levels of the equalized component $Y_{CBCS}$, so that the transformed image consumes less energy when displayed and has a uniformly distributed contrast. The solution in (7) depends on the value of the parameter $\lambda$, which controls the contribution of the power term and the visual quality of the resulting image. Since the two terms in (6) have different orders of magnitude, the parameter $\lambda$ should be of the inverse order of magnitude of $\mathbf{y}^T H \mathbf{y}$, to avoid suppressing the contrast enhancement by the power-saving term and to achieve better power savings with acceptable quality loss. Dominance of the power term makes the image very dark, and thus of lower energy, but the resulting image will then be clearly distinguishable from the original. The elaborated methodology is also presented in algorithmic form in Algorithm 1.
Algorithm 1 HE-CBCS.
1: Input: read the input image $I$
2: Convert to YCbCr: $J = rgb2ycbcr(I)$
3: $Y = J(:,:,1)$, $C_b = J(:,:,2)$, $C_r = J(:,:,3)$
4: Input: read the sets of parameters $a, b$
5: Output: new RGB image $I_{new}$
6: for $p = 1$ to $|a|$ do
7:   for $q = 1$ to $|b|$ do
8:     $a = a(p)$, $b = b(q)$, $Y_{CBCS} = aY + b$
9:     Limit $Y_{CBCS}$ into $[0, 255]$
10:    Apply HE through (4) or (7) to $Y_{CBCS}$ and recover $\mathbf{y}$
11:    Replace each pixel intensity in $Y$ by the intensity levels of $\mathbf{y}$ to acquire $Y_{new}$
12:    Reshape the image in YCbCr: $J_{new} = \mathrm{reshape}(Y_{new}, C_b, C_r)$
13:    Convert $J_{new}$ from YCbCr to RGB: $I_{new} = ycbcr2rgb(J_{new})$
14:    Estimate metrics: $ssim(p,q) = SSIM(I, I_{new})$, $psnr(p,q) = PSNR(I, I_{new})$,
15:    $mse(p,q) = MSE(I, I_{new})$, $prr(p,q) = PRR(I, I_{new})$
16:  end for
17: end for
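The HE system (4) and the power-constrained solution (7) can be reproduced numerically; the sketch below (our own NumPy illustration, not the authors' code) builds the difference matrix D, normalizes the histogram, and solves for the output gray levels:

```python
import numpy as np

L = 256
# first-difference matrix D of Eq. (5): (D y)_0 = y_0, (D y)_k = y_k - y_{k-1}
D = np.eye(L) - np.diag(np.ones(L - 1), k=-1)

def power_constrained_he(h, lam):
    """Solve y = (D^T D + lam * H)^{-1} D^T h_bar (Eq. (7)); lam = 0 is plain HE."""
    h = np.asarray(h, dtype=np.float64)
    h_bar = 255.0 * h / h.sum()          # normalized histogram vector
    H = np.diag(h)
    return np.linalg.solve(D.T @ D + lam * H, D.T @ h_bar)

# uniform histogram: plain HE (lam = 0) yields the full ramp up to 255
h = np.ones(L)
y0 = power_constrained_he(h, lam=0.0)    # equals D^{-1} h_bar, Eq. (8)
y1 = power_constrained_he(h, lam=1e-4)   # power term pulls the levels down
print(np.isclose(y0[-1], 255.0), bool(y1[-1] < y0[-1]))  # True True
```

With λ = 0 the solution reduces to the cumulative sums of the normalized histogram, i.e., classic histogram equalization; any λ > 0 lowers the output levels, which is precisely the power-saving effect described above.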

2.3. Image Quality Evaluation Metrics

In this section, we demonstrate the most common loss functions [28,29,30] that are utilized to capture the difference between the original ( I ) and the reconstructed ( J ) images.
$MSE$ is the squared Euclidean (L2) distance that measures the degradation of image quality relative to the original image. Mathematically, it is defined as:
$$MSE = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} (J_{i,j} - I_{i,j})^2, \qquad (9)$$
where $M, N$ are the dimensions of the image.
$PSNR$ gives the quality of the reconstructed image after processing; the higher the value, the better the quality of the resulting image. Mathematically, it is defined as:
$$PSNR = 20 \log_{10} \frac{255}{\sqrt{MSE}}. \qquad (10)$$
For color images, the values of $MSE$ and $PSNR$ are calculated as the average of the values for each of the components $R$, $G$, $B$ separately.
$SSIM$ is a metric that evaluates the similarity between two images and is the most reliable indicator of how close the processed image is to the original. It takes values in the range $[0, 1]$, and the closer the two images are, the closer this index is to one. According to [31,32], its general mathematical form weights the components that capture the luminance and contrast comparisons, respectively. A specific form of the $SSIM$ is
$$SSIM(I, I_{new}) = \frac{(2 \mu_I \mu_{I_{new}} + C_1)(2 \sigma_{I, I_{new}} + C_2)}{(\mu_I^2 + \mu_{I_{new}}^2 + C_1)(\sigma_I^2 + \sigma_{I_{new}}^2 + C_2)}, \qquad (11)$$
where $\mu_I$ and $\mu_{I_{new}}$ are the average intensities of the input image $I$ and the output image $I_{new}$, respectively, $\sigma_I$ and $\sigma_{I_{new}}$ are the standard deviations of $I$ and $I_{new}$, respectively, $\sigma_{I, I_{new}}$ is the covariance of $I$ and $I_{new}$, and $C_1$ and $C_2$ are small constants.

2.4. Power Rating Metrics

The power consumption of an OLED panel is based on the metric:
$$Power = \omega_0 + \sum_{i=1}^{N} \sum_{j=1}^{M} \left( \omega_R R_{i,j}^{\gamma} + \omega_G G_{i,j}^{\gamma} + \omega_B B_{i,j}^{\gamma} \right), \qquad (12)$$
where $N, M$ denote the width and height of the panel, respectively, and $R_{i,j}, G_{i,j}, B_{i,j}$ are the red, green, and blue intensities of the $(i,j)$-th pixel, respectively. The coefficients $\gamma$ and $\omega_R, \omega_G, \omega_B$ are display-dependent, with $2 \le \gamma \le 3$ [33]. Alternatively, the power can be approximated by using only the $Y$-component in the YCbCr color space as follows:
$$Power = \sum_{i=1}^{N} \sum_{j=1}^{M} Y_{i,j}^{\gamma}, \qquad (13)$$
where $Y_{i,j}$ is the intensity of the $Y$-component of the $(i,j)$-th pixel.
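Equations (12) and (13) are straightforward to evaluate; the sketch below (an illustration of ours) uses the display coefficients (γ, ω0, ωR, ωG, ωB) = (2, 0, 70, 115, 154) adopted in Section 3.1:

```python
import numpy as np

def oled_power_rgb(img, gamma=2.0, w0=0.0, wr=70.0, wg=115.0, wb=154.0):
    """Eq. (12): weighted sum of the gamma-raised R, G, B channel intensities."""
    rgb = img.astype(np.float64)
    return w0 + np.sum(wr * rgb[..., 0] ** gamma
                       + wg * rgb[..., 1] ** gamma
                       + wb * rgb[..., 2] ** gamma)

def oled_power_y(y, gamma=2.0):
    """Eq. (13): the luminance-only approximation of the panel power."""
    return np.sum(y.astype(np.float64) ** gamma)

# white draws the maximum power, black (almost) none
white = np.full((4, 4, 3), 255, dtype=np.uint8)
black = np.zeros((4, 4, 3), dtype=np.uint8)
print(oled_power_rgb(black), oled_power_rgb(white) > 0.0)  # 0.0 True
```

The larger blue and green weights reflect the fact that, on typical OLED panels, equal intensities in different channels do not cost equal power.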
A key metric for the evaluation is the power reduction rate ($PRR$) of the transformed image, which is defined as
$$PRR = \left( 1 - \frac{P_{final}}{P_{original}} \right) \times 100\%. \qquad (14)$$
If this metric takes negative values, the processing has increased the energy of the image. We are looking for the set of parameters that reduces the energy but also keeps the quality of the image at levels acceptable to the user.
Remark 1.
Taking into account that $Y_{CBCS} = aY + b$ and the dependence of the power in (13) on the intensities $Y_{i,j}$, the $PRR$ is rewritten as
$$PRR = \left( 1 - \frac{\sum_{i=1}^{N} \sum_{j=1}^{M} (a Y_{i,j} + b)^{\gamma}}{\sum_{i=1}^{N} \sum_{j=1}^{M} Y_{i,j}^{\gamma}} \right) \times 100\%. \qquad (15)$$
To achieve the desired goal, it suffices that $a Y_{i,j} + b \le Y_{i,j}$ for every pixel, i.e., $(a - 1) Y_{i,j} + b \le 0$. Given that $Y_{i,j} > 0$, the conditions $0 < a < 1$ and $b \le (1 - a) Y_{i,j}$ should hold.
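A small numerical check of Remark 1 (our own illustration): on a uniform luminance plane with Y = 100, choosing 0 < a < 1 and b ≤ (1 − a)Y yields a positive PRR:

```python
import numpy as np

def prr(y_orig, y_new, gamma=2.0):
    """Power reduction rate of Eq. (14), using the Y-only power model (13)."""
    p_original = np.sum(y_orig.astype(np.float64) ** gamma)
    p_final = np.sum(y_new.astype(np.float64) ** gamma)
    return (1.0 - p_final / p_original) * 100.0

y = np.full((8, 8), 100.0)        # uniform luminance plane, Y = 100
a, b = 0.5, 40.0                  # b <= (1 - a) * Y holds (40 <= 50)
y_new = np.clip(a * y + b, 0, 255)
print(round(prr(y, y_new), 1))    # 19.0, i.e., 1 - (90/100)^2
```

Each pixel drops from 100 to 90, so with γ = 2 the power ratio is (90/100)² = 0.81 and the reduction is 19%, exactly as Equation (15) predicts.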

3. Results

3.1. Experiments Environment and Data

The environment in which the experiments were carried out has the following characteristics: Intel(R) Core(TM) i7-9750H CPU @ 2.60 GHz, 16 GB memory, Windows 10 Home 64-bit operating system, and MATLAB 2019a. The results reported here concern the Lena, Baboon, and Car (MATLAB demo images) color images of size 512 × 512 pixels. The color depth is 24 bits, i.e., eight bits per color channel in the range of 0 to 255. In addition, to estimate the power consumption of an OLED panel, the coefficients have been set to $(\gamma, \omega_0, \omega_R, \omega_G, \omega_B) = (2, 0, 70, 115, 154)$. The input images are shown in Figure 2.

3.2. Evaluation

Now, we will describe the evaluation of the proposed approach. All processing takes place in the component $Y$ [34], since the color and luminance channels in the YCbCr color system are independent. However, the evaluation metrics are estimated and depicted in the RGB color system, since we aim to minimize the total power for displaying a color image while keeping its perceptual quality at an acceptable level. Let us recall that $a$ controls the contrast and $b$ the image brightness.
Figure 3 and Figure 4 illustrate the power reduction rate and the impact of the processing on the Lena image. We evaluate the CBCS method with and without the application of histogram equalization. Through the experiments, we found that the values of the parameters $a, b$ that make the $PRR$ positive, maximize the $SSIM$ and $PSNR$, and minimize the $MSE$ lie in the ranges $0.5 \le a \le 1$ and $10 \le b \le 200$, respectively.
First, we assess the impact of the CBCS method without the use of HE. According to Figure 3, $a = 0.5$ achieves the best power saving with satisfactory performance from the image quality loss perspective (as the $SSIM$ reveals). In Figure 5 and Figure 6, we depict the output images, and in Table 2 and Table 3, we record the $PRR$ for the three pairs of parameters $a, b$ that achieve the highest $PRR$ and $SSIM$. We do not report results for $a = 1$ (and above), because in this case, for any value of the parameter $b$, the $PRR$ is negative and the $SSIM$ is very low.
The CBCS method decreases the image power by narrowing the dynamic range of the gray levels of $Y$ towards lower boundary values. This holds when $a < 1$ and for low values of $b$. Such a technique can save power, but the output image has low contrast.
Focusing on the figure that depicts the $PRR$, we observe that histogram equalization plays a decisive role in reducing the image power when $a = 1$ and the parameter $b$ that controls the brightness moves from medium to high levels. In addition, the $PRR$ curves (without and with the application of HE) converge to different points. This opposite behavior is caused by $D^{-1}$, which affects the gray levels $\bar{\mathbf{h}}$ (see Equation (8)). According to (13), the image power depends on $\mathbf{y}^{\gamma} = (D^{-1} \bar{\mathbf{h}})^{\gamma}$, which, when combined with (14), makes the curves with HE converge in different directions.
Considering the power-constrained solution (7) of (6), which tries both to apply histogram equalization and to reduce the image power, the value of the parameter $\lambda$ can make the term that captures the power of $Y_{CBCS}$ (the linearly transformed luminance $Y$ of the input image) dominant. From the experiments, we verified that this parameter, even when it takes values on the order of $10^{-4}$, makes the minimization of the cost function in (6) independent of the parameters $a, b$, and the behavior of the performance metrics becomes flat. Specifically, it significantly decreases the image power, by 96%, but it simultaneously deteriorates the image quality, as expected.
In Table 4 and Figure 7 and Figure 8, the results concern the case of HE without the power constraint ($\lambda = 0$), which makes the underlying problem controllable by the parameters $a, b$. In particular, it is observed that if the parameter $a$ that controls the image contrast is equal to 0.5 (see Figure 3) and the parameter $b$ (brightness) is between 10 and 70, the processing increases the pixels' power, making the image quite bright. As $b$ takes higher values, and especially for $b \ge 80$ (see Figure 3 and Figure 7), the image power is reduced, which is reflected in the darkening of the image. The same behavior is observed for $a = 0.7$, where a higher power reduction is obtained at the cost of lower visual quality. In Figure 7 and Figure 8, we observe that for $a = 1$, $b = 20$, a power decline of 16.4% is achieved, whereas for $a = 0.5$, $b = 80$ is needed to reduce the image power by at least 6.4% (approximately 3 times lower than with $a = 1$).
From Table 4, we see that when $a$ is doubled, namely $a = 1$ ($b = 90$), the total power savings become three times greater than with $a = 0.5$ ($b = 90$), at the cost of roughly halving the similarity. Hence, maintaining the brightness and enhancing the image contrast can achieve power saving. Nonetheless, the optimal parameters should meet both the power and the quality requirements. The highest $PRR$ (16.4%) was recorded at $SSIM = 0.786$, with $a = 1$, $b = 20$. This outcome is also depicted in Figure 8.
To reinforce the validity of the methodology, we present the quantitative and qualitative results for two more images (Baboon and Car). Specifically, we chose to present the visual results for those parameters where the images’ power is reduced.
In Table 5 and Table 6, we summarize the research outcomes for the Baboon image without the application of HE for $a = 0.5$ and $a = 0.7$, respectively. In Table 7 and Table 8, we outline the performance metrics of the Car image for the same case.
Figure 9 and Figure 10 show the visual impact of the processing without HE. Moreover, Figure 11 and Figure 12 depict the visual effects on the Car image. In both of these images (as in Lena), if $a = 1$, the image power increases. Again, we observe that $a = 0.5$ is suitable for power reduction, with slightly lower $SSIM$ values than for Lena.
In Table 9 and Table 10, we summarize the Baboon and Car outcomes after the application of HE. For Baboon, with $a = 1$ and $b = 10$, a $PRR$ of 2.63% is achieved with an $SSIM$ of 0.742. With the same parameters on the Lena image, the power increases. In the Car image, however, a significant power reduction of 25.34% is achieved, which shows that the performance varies depending on the input image.
In Figure 13 and Figure 14, we illustrate the processed images after HE. As expected, high-contrast and low-power images are derived. We note that curves similar to those depicted in Figure 3 and Figure 4 are obtained for the Baboon and Car images; we omit them since their behavior is similar to that of Lena. Finally, the application of the power-constrained model to these images has the same impact as for Lena.

4. Discussion

Most approaches to energy saving focus primarily on wireless interfaces and device sensors. Our proposal presents a different approach to the issue of energy saving in modern mobile devices.
The histogram can be a useful tool for producing images with improved contrast and brightness as well as low power. In the literature, several solutions have been proposed that use existing histogram modification techniques (linear, nonlinear, adaptive) to produce images with improved contrast that require less power to display. The so-called Power-Constrained Contrast Enhancement (PCCE) employs a nonlinear luminance transformation to reduce OLED power and improve contrast [27,35,36,37]. PCCE performs an unconstrained optimization of power and contrast controlled by a single parameter, while the Low-Overhead Adaptive Power Saving and Contrast Enhancement (LAPSE) method [38] minimizes power under a mean S S I M ( M S S I M ) constraint. Although LAPSE shares a similar goal with our proposal, its luminance transformation of each pixel is based on Y_new(i,j) = a3·Y(i,j)^3 + a2·Y(i,j)^2 + a1·Y(i,j) + a0, a more general polynomial function of cubic order. Notice that by setting a3 = a2 = 0, this luminance processing function reduces to our proposal. Moreover, both our work and LAPSE utilize S S I M as the key metric for determining the optimal parameters; however, the main focus of LAPSE is keeping the time overhead low. Recent work in [39] produces contrast-enhanced images by applying a modified Land-Effect method that uses white balance and a retinex filter. The resulting image has visually better color contrast than the original and saves 13.16% of power.
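The relation between the two transformations can be made concrete in a short sketch (our illustration, not code from either paper): setting a3 = a2 = 0 in the LAPSE-style cubic polynomial yields exactly the linear CBCS mapping.

```python
def lapse_style(y, a3, a2, a1, a0):
    # Cubic per-pixel luminance polynomial, as in LAPSE [38].
    return a3 * y**3 + a2 * y**2 + a1 * y + a0

def cbcs_linear(y, a, b):
    # The linear mapping used in this work: Y_new = a*Y + b.
    return a * y + b

# With a3 = a2 = 0, the cubic model collapses to the linear one.
for y in range(0, 256, 15):
    assert lapse_style(y, 0, 0, 0.7, 20) == cbcs_linear(y, 0.7, 20)
```

In other words, CBCS is the first-order special case of the LAPSE polynomial, which is why the two methods can pursue similar power/quality trade-offs with very different per-pixel costs.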
Recent works on power saving in OLED displays adopt sparse representation theory in the power-constrained model. In [14,15], the authors propose a mixed-norm Power-Constrained Sparse Representation (PCSR) model in which the processing is applied at the patch level. Their method is evaluated using Visual Information Fidelity (VIF), with satisfactory performance for power levels ranging from 40% to 70%. From this perspective, if we apply our HE-based approach inside a patch, global histogram equalization becomes equivalent to local equalization; the procedure is then repeated iteratively over all image patches. In [16], in contrast to the aforementioned works, Yin et al. exploit a deep CNN to produce power-saving images. The input image is transformed into the YUV color space, and the visual quality is assessed through various metrics, such as S S I M and VIF, for 10%, 20%, and 30% power reduction.
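The patch-level idea can be illustrated with a hedged sketch (ours, not the PCSR method itself) that simply applies histogram equalization tile by tile, turning the global operation into a local one:

```python
import numpy as np

def local_hist_equalize(Y, patch=8):
    # Apply histogram equalization independently inside each
    # patch x patch tile; repeating plain HE per tile makes it local.
    out = np.empty(Y.shape, dtype=np.float64)
    H, W = Y.shape
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            tile = Y[i:i + patch, j:j + patch].astype(np.uint8)
            hist = np.bincount(tile.ravel(), minlength=256)
            cdf = np.cumsum(hist) / tile.size    # per-tile empirical CDF
            out[i:i + patch, j:j + patch] = cdf[tile] * 255.0
    return out
```

Note that equalizing tiles independently can create visible seams at tile borders; practical local-HE variants (e.g., CLAHE-style methods) interpolate between neighboring tile mappings to avoid this.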
The elaborated approach is founded on the well-known CBCS algorithm, which processes the luminance of each pixel linearly, rendering the brightness and contrast of the enhanced image user-controlled. By incorporating this step prior to histogram equalization, the image processing becomes parameter-dependent and makes the power minimization task controllable as well. We chose to do the processing in the YCbCr color system for specific reasons. First of all, YCbCr is a family of color spaces commonly used for the digital representation of images and videos in digital photography systems. The simplicity of the transformation and the immediate decoupling of the luminance from the chrominance components make this color space attractive. Moreover, when a still RGB image is being prepared for encoding under the JPEG encoding and compression system, it is transformed into the YCbCr color system, where the component Y carries the luminance information [38]. Another reason is that the JPEG standard has proven to be more energy-efficient than other frequently used formats, and thus our idea could contribute to even more energy-friendly JPEG images.
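For reference, the full-range BT.601 conversion adopted by JPEG separates luminance from chrominance as follows (a standard formula, sketched by us in Python):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    # Full-range BT.601 RGB -> YCbCr, the convention used by JPEG/JFIF.
    # Y carries luminance; Cb and Cr carry the chrominance.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

# A neutral gray has Cb = Cr = 128 (no chrominance), so a brightness
# change only touches Y -- the decoupling the methodology relies on.
gray = np.array([[[128.0, 128.0, 128.0]]])
print(rgb_to_ycbcr(gray).round(3))
```

Because the CBCS step and the histogram equalization act on Y alone, the Cb and Cr planes pass through unchanged and the color balance of the image is preserved.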
In addition, the intensity transformation can now be treated as a linear regression problem: the new intensity is estimated from one weighted previous value of the corresponding pixel intensity (the weight captures the image contrast) plus a parameter that captures the overall brightness. The resulting intensity is then equalized to preserve image quality and reduce display power simultaneously. Inspired by this analogy, (traditional) data-driven regression methods from statistical signal processing (such as Auto-Regressive (AR), Moving Average (MA), and Exponentially Weighted Moving Average (EWMA) models) as well as machine and deep learning techniques could be evaluated as challenging alternatives for predicting the new intensity component Y. Our purpose is to thoroughly investigate the frameworks discussed in [40,41] and formulate the methodology that best suits our objectives. In addition, the pixels in a predefined neighborhood around the candidate pixel could be considered when estimating the new intensity value; however, this would increase the complexity of the problem (the number of involved parameters), making it harder to handle.
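As a taste of the EWMA alternative mentioned above (a hypothetical sketch of ours, not part of the proposed method), a scan-order EWMA would predict each new intensity from a decaying weighted history of previous ones:

```python
def ewma(values, alpha=0.3):
    # Exponentially Weighted Moving Average over a 1-D intensity sequence:
    #   s[t] = alpha * x[t] + (1 - alpha) * s[t-1],  s[0] = x[0].
    # alpha plays a role loosely analogous to the contrast weight a.
    s = float(values[0])
    out = [s]
    for x in values[1:]:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out

# A sudden jump in intensity is smoothed toward the running history.
print(ewma([0, 100], alpha=0.5))   # [0.0, 50.0]
```

Unlike the memoryless CBCS mapping, such a predictor couples each output intensity to earlier pixels, which is exactly the added modeling power (and added complexity) the discussion refers to.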
Our proposal could concern not only candidate wallpapers on a user's device but also general-purpose (or application-independent) images, making them even more energy-efficient by reducing their brightness. A positive aspect in this case is that such images could be part of websites, so energy may also be saved on the part of the browser, which, as presented above, consumes a significant share of the total energy and shortens battery life. A future direction of this work is to investigate the above processing on video frames, to reduce display energy during playback, and to incorporate more effective contrast enhancement techniques into the formulation.

5. Conclusions

In conclusion, in this paper we proposed a methodology for extracting power-efficient images while maintaining perceptual quality at an acceptable level. To validate the suggested algorithm, a quantitative and qualitative analysis was conducted, revealing the optimal parameters under which the processed image exhibits the lowest distortion while consuming less power. In the context of the qualitative analysis, the visual results illustrate the impact of the proposed algorithm. The experimental results showed that, without histogram equalization, the parameter a should be much lower than one to save power, at the cost of low-contrast images. With the histogram equalization step integrated, the parameter a should be equal to one in order to balance the user's goals, i.e., a high-contrast image with less visual distortion and lower power.

Author Contributions

E.D. and M.T. conceived of the idea, designed and performed the experiments, analyzed the results, drafted the initial manuscript, and revised the final manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Thantharate, A.; Beard, C.; Marupaduga, S. An Approach to Optimize Device Power Performance Towards Energy Efficient Next Generation 5G Networks. In Proceedings of the 2019 IEEE 10th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), New York, NY, USA, 10–12 October 2019; pp. 749–754. [Google Scholar]
  2. Pramanik, P.K.D.; Sinhababu, N.; Mukherjee, B.; Padmanaban, S.; Maity, A.; Upadhyaya, B.K.; Holm-Nielsen, J.B.; Choudhury, P. Power consumption analysis, measurement, management, and issues: A state-of-the-art review of smartphone battery and energy usage. IEEE Access 2019, 7, 182113–182172. [Google Scholar] [CrossRef]
  3. Herglotz, C.; Coulombe, S.; Vazquez, C.; Vakili, A.; Kaup, A.; Grenier, J.C. Power modeling for video streaming applications on mobile devices. IEEE Access 2020, 8, 70234–70244. [Google Scholar] [CrossRef]
  4. Zhang, J.; Wang, Z.J.; Quan, Z.; Yin, J.; Chen, Y.; Guo, M. Optimizing power consumption of mobile devices for video streaming over 4G LTE networks. Peer-to-Peer Netw. Appl. 2018, 11, 1101–1114. [Google Scholar] [CrossRef]
  5. Yan, M.; Chan, C.A.; Gygax, A.F.; Yan, J.; Campbell, L.; Nirmalathas, A.; Leckie, C. Modeling the total energy consumption of mobile network services and applications. Energies 2019, 12, 184. [Google Scholar] [CrossRef] [Green Version]
  6. Tawalbeh, M.; Eardley, A.; Tawalbeh, L. Studying the energy consumption in mobile devices. Procedia Comput. Sci. 2016, 94, 183–189. [Google Scholar] [CrossRef] [Green Version]
  7. Riaz, M.N. Energy consumption in hand-held mobile communication devices: A comparative study. In Proceedings of the 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), Sukkur, Pakistan, 3–4 March 2018; pp. 1–5. [Google Scholar]
  8. Ahmadoh, E.; Lo’ai, A.T. Power consumption experimental analysis in smart phones. In Proceedings of the 2018 Third International Conference on Fog and Mobile Edge Computing (FMEC), Barcelona, Spain, 23–26 April 2018; pp. 295–299. [Google Scholar]
  9. Carroll, A.; Heiser, G. An analysis of power consumption in a smartphone. In Proceedings of the USENIX Annual Technical Conference, Boston, MA, USA, 23–25 June 2010; Volume 14, p. 21. [Google Scholar]
  10. Kennedy, M.; Venkataraman, H.; Muntean, G.M. Energy consumption analysis and adaptive energy saving solutions for mobile device applications. In Green IT: Technologies and Applications; Springer: Berlin/Heidelberg, Germany, 2011; pp. 173–189. [Google Scholar]
  11. Kang, S.J. Image-quality-based power control technique for organic light emitting diode displays. J. Disp. Technol. 2015, 11, 104–109. [Google Scholar] [CrossRef]
  12. Asnani, S.; Canu, M.G.; Montrucchio, B. Producing Green Computing Images to Optimize Power Consumption in OLED-Based Displays. In Proceedings of the 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC), Milwaukee, WI, USA, 2019; Volume 1, pp. 529–534. [Google Scholar]
  13. Dash, P.; Hu, Y.C. How much battery does dark mode save? An accurate OLED display power profiler for modern smartphones. In Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services, Virtual, Online, USA, 24 June–2 July 2021; pp. 323–335. [Google Scholar]
  14. Lai, E.H.; Chen, B.H.; Shi, L.F. Power constrained contrast enhancement by joint L2, 1-norm regularized sparse coding for OLED display. In Proceedings of the 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), Miami, FL, USA, 10–12 April 2018; pp. 309–314. [Google Scholar]
  15. Yin, J.L.; Chen, B.H.; Lai, E.H.; Shi, L.F. Power-constrained image contrast enhancement through sparse representation by joint mixed-norm regularization. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 2477–2488. [Google Scholar] [CrossRef]
  16. Yin, J.L.; Chen, B.H.; Peng, Y.T.; Tsai, C.C. Deep Battery Saver: End-to-End Learning for Power Constrained Contrast Enhancement. IEEE Trans. Multimed. 2020, 23, 1049–1059. [Google Scholar] [CrossRef]
  17. Gupta, B.; Agarwal, T.K. New contrast enhancement approach for dark images with non-uniform illumination. Comput. Electr. Eng. 2018, 70, 616–630. [Google Scholar] [CrossRef]
  18. Wang, X.; Chen, L. An effective histogram modification scheme for image contrast enhancement. Signal Process. Image Commun. 2017, 58, 187–198. [Google Scholar] [CrossRef]
  19. Xiao, B.; Tang, H.; Jiang, Y.; Li, W.; Wang, G. Brightness and contrast controllable image enhancement based on histogram specification. Neurocomputing 2018, 275, 2798–2809. [Google Scholar] [CrossRef]
  20. Jeon, G.; Lee, Y.S. Histogram Equalization-Based Color Image Processing in Different Color Model. Adv. Sci. Technol. Lett. 2013, 28, 54–57. [Google Scholar]
  21. Kamandar, M. Automatic color image contrast enhancement using Gaussian mixture modeling, piecewise linear transformation, and monotone piecewise cubic interpolant. Signal Image Video Process. 2018, 12, 625–632. [Google Scholar] [CrossRef]
  22. Jeon, G. Color image enhancement by histogram equalization in heterogeneous color space. Int. J. Multimed. Ubiquitous Eng. 2014, 9, 309–318. [Google Scholar] [CrossRef]
  23. García-Lamont, F.; Cervantes, J.; López-Chau, A.; Ruiz, S. Contrast enhancement of RGB color images by histogram equalization of color vectors’ intensities. In International Conference on Intelligent Computing; Springer: Berlin/Heidelberg, Germany, 2018; pp. 443–455. [Google Scholar]
  24. Cao, Q.; Shi, Z.; Wang, R.; Wang, P.; Yao, S. A brightness-preserving two-dimensional histogram equalization method based on two-level segmentation. Multimed. Tools Appl. 2020, 79, 27091–27114. [Google Scholar] [CrossRef]
  25. Dong, M.; Choi, Y.S.K.; Zhong, L. Power-saving color transformation of mobile graphical user interfaces on OLED-based displays. In Proceedings of the 2009 ACM/IEEE International Symposium on Low Power Electronics and Design, San Francisco, CA, USA, 19–21 August 2009; pp. 339–342. [Google Scholar]
  26. Lee, Y.; Song, M. Adaptive Color Selection to Limit Power Consumption for Multi-Object GUI Applications in OLED-Based Mobile Devices. Energies 2020, 13, 2425. [Google Scholar] [CrossRef]
  27. Lee, C.; Lee, C.; Kim, C.S. Power-constrained contrast enhancement for OLED displays based on histogram equalization. In Proceedings of the 2010 IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 1689–1692. [Google Scholar]
  28. Kalaiselvi, T.; Vasanthi, R.; Sriramakrishnan, P. A Study on Validation Metrics of Digital Image Processing. In Proceedings of the Computational Methods, Communication Techniques and Informatics, Dindigul, India, 27–28 January 2017; p. 396. [Google Scholar]
  29. Snell, J.; Ridgeway, K.; Liao, R.; Roads, B.D.; Mozer, M.C.; Zemel, R.S. Learning to generate images with perceptual similarity metrics. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 4277–4281. [Google Scholar]
  30. Tufail, Z.; Khurshid, K.; Salman, A.; Nizami, I.F.; Khurshid, K.; Jeon, B. Improved dark channel prior for image defogging using RGB and YCbCr color space. IEEE Access 2018, 6, 32576–32587. [Google Scholar] [CrossRef]
  31. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
  32. Kang, S.J. Perceptual quality-aware power reduction technique for organic light emitting diodes. J. Disp. Technol. 2015, 12, 519–525. [Google Scholar] [CrossRef]
  33. Pagliari, D.J.; Di Cataldo, S.; Patti, E.; Macii, A.; Macii, E.; Poncino, M. Low-overhead adaptive brightness scaling for energy reduction in OLED displays. IEEE Trans. Emerg. Top. Comput. 2019, 9, 1625–1636. [Google Scholar] [CrossRef]
  34. Tan, Y.; Qin, J.; Xiang, X.; Ma, W.; Pan, W.; Xiong, N.N. A robust watermarking scheme in YCbCr color space based on channel coding. IEEE Access 2019, 7, 25026–25036. [Google Scholar] [CrossRef]
  35. Lee, C.; Lee, C.; Lee, Y.Y.; Kim, C.S. Power-constrained contrast enhancement for emissive displays based on histogram equalization. IEEE Trans. Image Process. 2011, 21, 80–93. [Google Scholar] [PubMed]
  36. Shin, J.; Park, R.H. Power-constrained contrast enhancement for organic light-emitting diode display using locality-preserving histogram equalisation. IET Image Process. 2016, 10, 542–551. [Google Scholar] [CrossRef]
  37. Lee, C.; Lam, E.Y. Computationally efficient brightness compensation and contrast enhancement for transmissive liquid crystal displays. J. Real-Time Image Process. 2018, 14, 733–741. [Google Scholar] [CrossRef]
  38. Pagliari, D.J.; Macii, E.; Poncino, M. LAPSE: Low-overhead adaptive power saving and contrast enhancement for OLEDs. IEEE Trans. Image Process. 2018, 27, 4623–4637. [Google Scholar] [CrossRef]
  39. Asnani, S.; Canu, M.G.; Farinetti, L.; Montrucchio, B. On producing energy-efficient and contrast-enhanced images for OLED-based mobile devices. Pervasive Mob. Comput. 2021, 75, 101384. [Google Scholar] [CrossRef]
  40. Li, H.; Deng, J.; Feng, P.; Pu, C.; Arachchige, D.D.; Cheng, Q. Short-Term Nacelle Orientation Forecasting Using Bilinear Transformation and ICEEMDAN Framework. Front. Energy Res. 2021, 9, 780928. [Google Scholar] [CrossRef]
  41. Li, H.; Deng, J.; Yuan, S.; Feng, P.; Arachchige, D.D. Monitoring and Identifying Wind Turbine Generator Bearing Faults Using Deep Belief Network and EWMA Control Charts. Front. Energy Res. 2021, 9, 799039. [Google Scholar] [CrossRef]
Figure 1. Block diagram of the proposed approach.
Figure 2. Data for evaluation: Lena, Car, and Baboon RGB images.
Figure 3. P R R and S S I M of Y_CBCS (without HE) and Y_new (Y_CBCS with HE) for the Lena image.
Figure 4. P S N R and M S E of Y_CBCS (without HE) and Y_new (Y_CBCS with HE) for the Lena image.
Figure 5. Resulting image I_new using Y_CBCS (without HE) for a = 0.5 and b = [10:10:30].
Figure 6. Resulting image I_new using Y_CBCS (without HE) for a = 0.7 and b = [10:10:30].
Figure 7. Resulting Lena image I_new for a = {0.5, 0.7} and b = [80:10:100].
Figure 8. Resulting Lena image I_new for a = 1 and b = [20:10:40].
Figure 9. Resulting Baboon image I_new using Y_CBCS (without HE) for a = 0.5 and b = [10:10:30].
Figure 10. Resulting Baboon image I_new using Y_CBCS (without HE) for a = 0.7 and b = [10:10:30].
Figure 11. Resulting Car image I_new using Y_CBCS (without HE) for a = 0.5 and b = [10:10:30].
Figure 12. Resulting Car image I_new using Y_CBCS (without HE) for a = 0.7 and b = [10:10:30].
Figure 13. Resulting Baboon image I_new using Y_CBCS (with HE) for a = 1 and b = [10:10:30].
Figure 14. Resulting Car image I_new using Y_CBCS (with HE) for a = 1 and b = [10:10:30].
Table 1. Linear and nonlinear histogram modification techniques, where 0 = f_min < g < f_max = L − 1.

Linear:
- Contrast stretching: g(f) = a·f + b, mapping [s1, s2] → [t1, t2], with a = (t2 − t1)/(s2 − s1) and b = t1 − s1·a.
- Piecewise linear: g(f) = ((f − f_k)/(f_{k+1} − f_k))·(g_{k+1} − g_k) + g_k for f_k < f ≤ f_{k+1}, with breakpoints (f_k, g_k), k = 0, 1, …, K − 1.
- Thresholding: g(f) = g_0 for f ≤ T; g(f) = g_1 for f > T.
- Multi-level thresholding: g(f) = g_k for f_k < f ≤ f_{k+1}.

Nonlinear:
- Logarithmic function: g(f) = b·log(a·f + 1).
- Exponent function: g(f) = b·(e^(a·f) − 1).
- Power law: g(f) = a·f^k, where k < 1 reinforces dark areas and k > 1 suppresses light areas; k = 2 gives the square law (exponent function) and k = 3 the cubic law (logarithmic function).
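The linear and two of the nonlinear entries from Table 1 can be sketched directly in Python (our illustration; the default parameter values are arbitrary choices, and the power-law variant below is normalized to the 8-bit range for convenience):

```python
import math

def contrast_stretch(f, s1, s2, t1, t2):
    # Linear stretching g(f) = a*f + b mapping [s1, s2] onto [t1, t2].
    a = (t2 - t1) / (s2 - s1)
    b = t1 - s1 * a
    return a * f + b

def log_transform(f, a=1.0, b=45.986):
    # Logarithmic mapping g(f) = b*log(a*f + 1); expands dark values.
    # b ~= 255/log(256) maps the full 8-bit range back onto itself.
    return b * math.log(a * f + 1)

def power_law(f, k, a=255.0):
    # Normalized power law g(f) = a * (f/255)**k; per Table 1,
    # k < 1 reinforces dark areas and k > 1 suppresses light areas.
    return a * (f / 255.0) ** k
```

For instance, contrast_stretch(f, 50, 150, 0, 255) expands a mid-range band of intensities to the full display range, which is the classic linear contrast-enhancement step.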
Table 2. Lena image without HE for a = 0.5.

            b = 10   b = 20   b = 30
P R R (%)    66.26    57.42    47.16
S S I M      0.828    0.893    0.932
P S N R (dB) 12.15    13.75    15.59
Table 3. Lena image without HE for a = 0.7.

            b = 10   b = 20   b = 30
P R R (%)    40.59    28.37    14.75
S S I M      0.959    0.980    0.986
P S N R (dB) 17.55    20.59    23.94
Table 4. Lena image performance metrics with HE for a = {0.5, 0.7, 1} and b = [10:10:100].

                      b = 10   b = 20   b = 30   b = 40   b = 50   b = 60   b = 70   b = 80   b = 90   b = 100
a = 0.5  P R R (%)   −164.07  −146.75  −126.29  −103.37   −78.71   −50.99   −22.12     6.39    30.22    49.52
         S S I M       0.383    0.440    0.461    0.493    0.546    0.586    0.592    0.562    0.513    0.450
         P S N R (dB)   7.88     8.58     9.38    10.30    11.33    12.33    12.82    12.37    11.25    10.02
a = 0.7  P R R (%)   −107.06   −82.69   −57.47   −31.19    −4.79    19.55    40.00    56.35    70.42    81.61
         S S I M       0.634    0.670    0.691    0.702    0.687    0.650    0.586    0.509    0.428    0.357
         P S N R (dB)  10.81    12.07    13.32    14.29    14.34    13.26    11.78    10.46     9.36     8.47
a = 1    P R R (%)     −5.14    16.40    35.03    51.05    64.97    82.92    88.08    91.65    94.09    95.42
         S S I M       0.834    0.786    0.722    0.644    0.560    0.481    0.408    0.344    0.289    0.244
         P S N R (dB)  17.24    15.85    13.95    12.23    10.78     9.64     8.75     8.07     7.54     7.14
Table 5. Baboon image without HE for a = 0.5.

            b = 10   b = 20   b = 30
P R R (%)    69.45    60.94    51.04
S S I M      0.730    0.805    0.856
P S N R (dB) 11.78    13.31    15.12
Table 6. Baboon image without HE for a = 0.7.

            b = 10   b = 20   b = 30
P R R (%)    43.02    30.98    17.54
S S I M      0.928    0.956    0.967
P S N R (dB) 17.09    20.12    23.93
Table 7. Car image without HE for a = 0.5.

            b = 10   b = 20   b = 30
P R R (%)    73.53    66.15    57.61
S S I M      0.747    0.813    0.853
P S N R (dB) 10.52    11.76    13.14
Table 8. Car image without HE for a = 0.7.

            b = 10   b = 20   b = 30
P R R (%)    46.27    35.84    24.33
S S I M      0.932    0.949    0.954
P S N R (dB) 15.71    17.99    20.38
Table 9. Baboon image performance metrics with HE for a = {0.5, 0.7, 1} and b = [10:10:100].

                      b = 10   b = 20   b = 30   b = 40   b = 50   b = 60   b = 70   b = 80   b = 90   b = 100
a = 0.5  P R R (%)   −178.89  −160.38  −138.18  −112.14   −84.79   −56.67   −28.41    −1.16    26.20    52.88
         S S I M       0.416    0.456    0.495    0.536    0.560    0.556    0.523    0.479    0.428    0.358
         P S N R (dB)   7.44     8.16     9.06    10.17    11.24    11.86    11.75    11.16    10.35     9.38
a = 0.7  P R R (%)   −112.55   −86.69   −60.85   −33.42    −5.83    21.36    46.41    66.59    81.07    90.13
         S S I M       0.612    0.641    0.649    0.634    0.602    0.549    0.474    0.386    0.292    0.207
         P S N R (dB)  10.63    11.92    12.95    13.30    12.83    11.88    10.65     9.38     8.23     7.32
a = 1    P R R (%)      2.63    26.95    47.36    63.62    76.34    85.53    91.29    94.55    96.33    97.27
         S S I M       0.742    0.678    0.596    0.502    0.403    0.307    0.224    0.159    0.114    0.085
         P S N R (dB)  15.30    13.71    11.97    10.44     9.16     8.13     7.36     6.81     6.42     6.14
Table 10. Car image performance metrics with HE for a = {0.5, 0.7, 1} and b = [10:10:100].

                      b = 10   b = 20   b = 30   b = 40   b = 50   b = 60   b = 70   b = 80   b = 90   b = 100
a = 0.5  P R R (%)   −100.17   −83.08   −63.47   −44.05   −26.61   −11.24     2.24    13.48    21.81    27.34
         S S I M       0.663    0.691    0.712    0.715    0.689    0.634    0.562    0.488    0.418    0.355
         P S N R (dB)   9.53    10.86    12.63    14.43    15.30    14.81    13.53    12.12    10.86     9.91
a = 0.7  P R R (%)    −46.05   −31.09   −16.59    −3.18     8.42    17.89    36.75    50.14    54.22    57.45
         S S I M       0.796    0.791    0.760    0.702    0.624    0.538    0.433    0.386    0.330    0.294
         P S N R (dB)  15.09    16.90    17.40    16.16    14.29    12.56    11.12     9.97     9.28     8.83
a = 1    P R R (%)     25.34    36.21    43.78    49.72    54.39    58.15    62.21    67.58    74.47    81.52
         S S I M       0.766    0.685    0.596    0.506    0.425    0.360    0.313    0.279    0.249    0.221
         P S N R (dB)  16.54    14.26    12.52    11.18    10.16     9.42     8.87     8.40     7.89     7.34
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Dritsas, E.; Trigka, M. A Methodology for Extracting Power-Efficient and Contrast Enhanced RGB Images. Sensors 2022, 22, 1461. https://doi.org/10.3390/s22041461


