Article

Three-Stage Tone Mapping Algorithm

1 The Higher Educational Key Laboratory for Measuring & Control Technology and Instrumentations of Heilongjiang Province, Harbin University of Science and Technology, Harbin 150080, China
2 Chinese Martial Arts Department, Harbin Sport University, Harbin 150008, China
3 School of Information Engineering, Quzhou College of Technology, Quzhou 324000, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(24), 4072; https://doi.org/10.3390/electronics11244072
Submission received: 5 November 2022 / Revised: 3 December 2022 / Accepted: 4 December 2022 / Published: 7 December 2022
(This article belongs to the Special Issue Deep Learning in Image Processing and Pattern Recognition)

Abstract

In this paper, a tone mapping algorithm is presented to map real-world luminance into displayed luminance. Our purpose is to reveal the local contrast of real-world scenes on a conventional monitor. To this end, we propose a three-stage algorithm to visualize high dynamic range images. All pixels of a high dynamic range image are classified into three groups. In the first stage, we introduce piecewise linear mapping as the global tone mapping operator for the luminance of the first group, which provides the overall impression of brightness. In the second stage, the luminance of the second group is determined by the weighted average of its neighborhood pixels, which are drawn from the first group. In the third stage, the luminance of the third group is determined by the weighted average of its neighborhood pixels, which are drawn from the second group. Experimental results on several real-world images and the TMQI database show that our algorithm improves the visibility of real-world scenes, with mean opinion score and tone-mapped image quality index scores about 12% and 9% higher than the closest competing tone mapping methods. Compared with existing tone mapping methods, our algorithm produces visually compelling results without halo artifacts or loss of detail.

1. Introduction

The real-world scene covers a wide range of luminance levels, on the order of 10^14:1 [1]. Through a complicated process of self-adaptation, the human visual system gradually adapts over nine orders of magnitude. The dynamic range of modern image sensors is far smaller than that of real-world scenes or of the human visual system. Nevertheless, multi-exposure imaging [2] can extend this limit, so high dynamic range (HDR) images can be obtained with conventional digital cameras.
The contrast ratio of some advanced conventional displays can reach up to 10^4:1 [3], and some high dynamic range display systems claim to achieve five orders of magnitude [4]. However, the dynamic range of display systems is still much smaller than that of real-world scenes. Furthermore, the price of high-contrast-ratio displays is not affordable for household consumers. This limitation results in shadow and highlight regions that cannot convey the appearance of the original scene. Tone mapping has been developed to convert an HDR image to a low dynamic range (LDR) image whose dynamic range is compatible with conventional displays. A valid tone mapping algorithm can reveal both dark and bright pixels clearly on a conventional display. It should also preserve the details of the original image and avoid common artifacts such as halos, gradient reversals, or loss of local contrast [5].
Unfortunately, existing tone mapping algorithms suffer from reductions in image quality, e.g., detail loss and an unsuitable overall impression of brightness. To solve the rendering problem of HDR images, we present a three-stage tone mapping algorithm. The main idea is to map real-world luminance to displayable luminance while preserving the local contrast of the real-world scene. For mapping real-world luminance, previous tone mapping methods mainly compress high luminances [6]; such tone curves suffer from over-compression of the global contrast. To deal with this problem, we propose a piecewise linear mapping as the first stage to determine the brightness and enhance the global contrast of tone-mapped images. Each segment point of the piecewise linear mapping is selected by a tiny luminance probability threshold, and the remapped luminance is estimated from the cumulative probability of the luminance histogram. For visual detail protection, Ashikhmin [7] focused on preserving local contrast while compressing the dynamic range; however, such a method can preserve visual detail insufficiently. We introduce another two stages to ensure local contrast; both stages employ the weighted average of neighborhood pixels to estimate local contrast. The luminance derived at each stage is related to the previous stage through the local contrast expression. Experiments conducted on radiance maps from various real-world scenes show that the proposed algorithm produces pleasing images.

2. Related Work

In an early attempt, global tone mapping operators applied a single curve to compress the dynamic range and were designed for computational efficiency [8]. A practical general framework to match real-world brightness and display brightness was built by Tumblin and Rushmeier [9]. They reviewed the sigmoid responses to light in the film encoding process. Using this kind of power-law relation, they created an observer model that converts world luminance to perceived brightness, with the coefficient and exponent determined by Stevens's experiments. Ward [10] applied a designated multiplier to relate the minimum discernible difference on the display to that in the real-world scene. Note that this method deals with perceived contrast; consequently, it provides the same contrast visibility in bright scenes and in dark scenes. Larson et al. [11] suggested that the eye is sensitive to region brightness, i.e., adaptation levels, rather than absolute luminance. To estimate adaptation levels, they averaged real-world luminance over a 1° visual angle. Histogram equalization with a linear ceiling was applied to compress the dynamic range while ensuring that region contrast could not exceed that of the original image. Khan et al. [12] presented a histogram-based tone mapping algorithm to visualize HDR images on an LDR display. The algorithm restricts the pixel counts in the histogram to solve the over-compression and over-enhancement problems introduced by classic histogram equalization.
Recent tone mapping research focuses on local tone mapping operators. Meylan et al. [13] rendered real-world images with a Retinex-based adaptive filter. This filter processes luminance and chrominance in two parallel procedures. The chrominance processing employs principal component analysis to keep the color rendition unaffected while the luminance is compressed. Gu et al. [14] developed a local edge-preserving (LEP) filter and applied it to decompose real-world scene images into several detail layers and one base layer based on the Retinex model. Kuang et al. [15] proposed a valid image appearance model to reproduce the same visual perception across media based on the iCAM framework. This model decomposes an image into a base layer and a detail layer with the bilateral filter; the cone and rod response functions are applied to the base layer to compress the dynamic range, and a power-function adjustment is applied to the detail layer to predict the Stevens effect. Fattal et al. [5] took the luminance gradient as the local metric to preserve details and compress the dynamic range. Their work attenuates large gradient magnitudes at each pixel while leaving small gradient magnitudes unchanged, and introduces a Gaussian pyramid to avoid halo artifacts.
In recent years, deep-learning-based tone mapping algorithms have been presented to generate LDR images. Rana et al. [16] proposed a fast, parameter-free, and scene-adaptable deep tone mapping operator (DeepTMO) based on a conditional generative adversarial network. DeepTMO explores four possible combinations of generator-discriminator architectural designs to specifically address prominent issues in HDR-related deep-learning frameworks, such as blurring, tiling patterns, and saturation artifacts. Panetta et al. [17] designed a deep-learning-based tone mapping operator (TMO-Net), which offers an efficient and parameter-free method capable of generalizing effectively across a wider spectrum of HDR content. Patel et al. [18] proposed a novel generative adversarial network that learns a combination of several classic tone mapping operators; a deep network selects the tone-mapped image with the best TMQI [19] score among those generated by the classic operators.

3. Algorithm

3.1. Global Luminance Mapping

We apply the scene’s key value proposed by Reinhard [20] to make an analogy to camera exposure. The approximation to the key of the scene is given by
$$\bar{L} = \exp\!\left(\frac{1}{M}\sum_{x,y}\log\big(\delta + \tilde{L}_{x,y}\big)\right) \qquad (1)$$
where $M$ is the total number of pixels, $\delta$ is a small value, $\tilde{L}$ is the world luminance, $(x, y)$ is the pixel location, and $\bar{L}$ is the key of the scene. The scaled luminance is computed by
$$L_{x,y} = \frac{a}{\bar{L}} \times \tilde{L}_{x,y} \qquad (2)$$
where $a$ is the key value that maps the log-average world luminance to an appropriate level; $a$ is estimated adaptively following Reinhard's work [21].
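Equations (1) and (2) can be sketched in a few lines of NumPy. This is a minimal illustration, with the key value `a` fixed at 0.18 rather than estimated adaptively as in Reinhard's work [21]:

```python
import numpy as np

def scale_luminance(world_lum, a=0.18, delta=1e-6):
    """Scale world luminance by the key of the scene (Eqs. (1)-(2))."""
    # Key of the scene: exponential of the mean log-luminance.
    key = np.exp(np.mean(np.log(delta + world_lum)))
    # Map the log-average world luminance to the key value a.
    return (a / key) * world_lum

# Toy 2x2 radiance map whose log-average is ~1, so scaling is ~0.18x.
L_world = np.array([[0.5, 2.0], [8.0, 0.125]])
L_scaled = scale_luminance(L_world)
```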
We found that the tone distribution curves of a great number of radiance maps present a similar shape, as partially shown in Figure 1a, which is obtained from a well-known radiance map called "memorial church".
It can be seen that most luminance levels are concentrated at the low end of the range, and the curve has a very long, low tail. This indicates that the high luminance levels need to be compressed. Meanwhile, the majority of luminance levels, i.e., the low luminance levels, should be stretched to increase the global contrast.
Not all luminance histograms look like Figure 1a. The luminance levels of some scenes occur in both low-level and high-level bins, as shown in Figure 1b. This histogram comes from another radiance map called "moto". Here, the low-level and high-level bins are equally important. This type of curve suggests that the low-level and high-level bins should both be stretched.
Essentially, to pick a luminance mapping operator is to choose a proper shape for the mapping curve. If linear mapping were employed, the remapped image would in principle be consistent with the scene as reconstructed by the human visual system. However, linear mapping is incapable of compressing the dynamic range of HDR images. For this reason, a piecewise linear mapping is advisable in place of linear mapping.
The next issue is to determine the segment points of the piecewise linear mapping. Since bins containing very few luminance levels should be compressed, we use the empirical value $T_a = 10^{-4}$ as a probability threshold to locate the segment points. When the probability of the previous bin is larger than the threshold while the probability of the current bin is not, the current bin is a segment point; likewise, when the probability of the previous bin is not greater than the threshold while the probability of the current bin is, the current bin is also a segment point. Moreover, the remapping value of a segment point is determined by the cumulative probability between the previous segment point and the current one. Let $B(\cdot)$ denote the bin index and $L(B(\cdot))$ its remapped value. The remapping value of the $l$-th bin between two segment points $B(i)$ and $B(j)$ is given by
$$L(B(l)) = \frac{L(B(j)) - L(B(i))}{B(j) - B(i)} \times \big(B(l) - B(i)\big) + L(B(i)) \qquad (3)$$
where $L(B(l))$, $L(B(i))$, and $L(B(j))$ are the remapping values of the $l$-th, $i$-th, and $j$-th bins, respectively. Note that the remapping values of the first and last bins are 0 and 1, respectively. Thus, the remapping value of every bin is known from formula (3). Using (3) after segmenting the histogram with the empirical threshold, we obtain the luminance mapping curves shown in Figure 2. Because the histogram varies slowly at its high end, as shown in Figure 1a, the piecewise linear function mainly compresses the high-luminance range of the memorial church image, while the low luminance levels are stretched to enhance the contrast in dark areas. For the moto image, most bins are densely distributed at both the beginning and the end of the luminance histogram, as shown in Figure 1b, so both the low and high luminance levels are stretched and the middle luminance range is compressed.
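The segment-point selection and histogram-based remapping described above can be sketched as follows. This is a simplified approximation: it detects threshold crossings of the bin probabilities and assigns each segment point its global cumulative probability, rather than reproducing the exact between-segment accumulation; the bin count `n_bins` is an assumed parameter.

```python
import numpy as np

def piecewise_remap(lum, n_bins=256, t_a=1e-4):
    """Piecewise linear tone curve from the luminance histogram (Eq. (3)).

    Bins where the probability crosses the threshold t_a become segment
    points; each segment point is remapped to its cumulative probability,
    so dense luminance ranges are stretched and sparse ones compressed.
    """
    hist, edges = np.histogram(lum, bins=n_bins)
    prob = hist / lum.size
    cum = np.cumsum(prob)

    # Segment points: bins where the probability crosses t_a, plus the
    # first and last bins (remapped to 0 and 1 respectively).
    crossings = np.nonzero((prob[1:] > t_a) != (prob[:-1] > t_a))[0] + 1
    seg = np.unique(np.concatenate(([0], crossings, [n_bins - 1])))
    seg_vals = cum[seg]
    seg_vals[0], seg_vals[-1] = 0.0, 1.0

    # Linear interpolation between segment points gives the tone curve.
    curve = np.interp(np.arange(n_bins), seg, seg_vals)
    idx = np.clip(np.digitize(lum, edges[1:-1]), 0, n_bins - 1)
    return curve[idx]

# Example on a synthetic long-tailed luminance distribution.
rng = np.random.default_rng(0)
lum = rng.exponential(size=10_000)
mapped = piecewise_remap(lum)
```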

3.2. Local Contrast Preservation

To preserve local contrast, all pixels of the input image are divided into three groups, i.e., $U_1$, $U_2$, and $U_3$. Figure 3 shows the classification procedure for an image of size 9 × 9. The row and column numbers of the first group's pixels are both odd, as shown in Figure 3a. Exactly one spatial coordinate of each second-group pixel is odd, as shown in Figure 3b. The row and column numbers of the third group's pixels are both even, as shown in Figure 3d. The second group is further divided into two subgroups, as shown in Figure 3c: the row number of the first subgroup is even, whereas that of the second subgroup is odd.
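The parity-based classification of Figure 3 can be expressed compactly with boolean masks. This is a small sketch assuming 1-based row and column numbering, as in the figure:

```python
import numpy as np

def classify_pixels(h, w):
    """Split pixel coordinates into the three parity groups of Figure 3.

    Group 1: row and column both odd (1-based); group 3: both even;
    group 2: exactly one odd, split into two subgroups by row parity.
    """
    rows, cols = np.mgrid[1:h + 1, 1:w + 1]  # 1-based coordinates
    odd_r, odd_c = rows % 2 == 1, cols % 2 == 1
    u1 = odd_r & odd_c          # group 1: both odd
    u3 = ~odd_r & ~odd_c        # group 3: both even
    u2a = ~odd_r & odd_c        # group 2, subgroup 1: even row, odd column
    u2b = odd_r & ~odd_c        # group 2, subgroup 2: odd row, even column
    return u1, u2a, u2b, u3

u1, u2a, u2b, u3 = classify_pixels(9, 9)
```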
We use the global luminance mapping described in Section 3.1 to compress the dynamic range of the first group's pixels. Then, the local contrast of the two subgroups is computed by
$$c_{x,y}^{(1)} = \frac{3 I_{x,y}}{I_{x-1,y} + I_{x,y} + I_{x+1,y}}, \quad (x,y) \in U_2^{(1)}$$
$$c_{x,y}^{(2)} = \frac{3 I_{x,y}}{I_{x,y-1} + I_{x,y} + I_{x,y+1}}, \quad (x,y) \in U_2^{(2)} \qquad (4)$$
where $c$ represents local contrast, $I$ represents displayed luminance, and the superscripts of $c$ and $U$ denote the subgroup number. For local contrast preservation, the local contrast value of every pixel of the tone-mapped image should equal that of the input, which can be formulated as:
$$c_{x,y}^{(1)} = \frac{3 L_{x,y}}{L_{x-1,y} + L_{x,y} + L_{x+1,y}}, \quad (x,y) \in U_2^{(1)}$$
$$c_{x,y}^{(2)} = \frac{3 L_{x,y}}{L_{x,y-1} + L_{x,y} + L_{x,y+1}}, \quad (x,y) \in U_2^{(2)} \qquad (5)$$
Since the luminance of the first group's pixels has been computed by the global luminance mapping, i.e., $L_{x-1,y}$, $L_{x+1,y}$, $L_{x,y-1}$, and $L_{x,y+1}$ are known, the luminance of the second group's pixels, derived by combining (4) and (5), is computed by:
$$I_{x,y} = \begin{cases} \dfrac{c_{x,y}^{(1)} \times (L_{x-1,y} + L_{x+1,y})}{3 - c_{x,y}^{(1)}}, & (x,y) \in U_2^{(1)} \\[2ex] \dfrac{c_{x,y}^{(2)} \times (L_{x,y-1} + L_{x,y+1})}{3 - c_{x,y}^{(2)}}, & (x,y) \in U_2^{(2)} \end{cases} \qquad (6)$$
The local contrast of the third group’s pixels is computed by:
$$c_{x,y}^{(3)} = \frac{5 I_{x,y}}{I_{x-1,y} + I_{x+1,y} + I_{x,y} + I_{x,y-1} + I_{x,y+1}} \qquad (7)$$
Similar to the second group’s pixels, the luminance of the third group’s pixels is computed by:
$$I_{x,y} = \frac{c_{x,y}^{(3)} \times (L_{x-1,y} + L_{x+1,y} + L_{x,y-1} + L_{x,y+1})}{5 - c_{x,y}^{(3)}} \qquad (8)$$
Figure 4 shows the flowchart of the three-stage approach.
All the pixels of the input real-world image are first classified into three groups. In Stage 1, the luminances of the pixels in group one are remapped by the piecewise linear mapping. Stages 2 and 3 compute the local contrast of every pixel in groups two and three. After the piecewise linear mapping of Stage 1, Stage 2 computes the luminance of all the pixels in group two according to Equation (6). After Stage 2, Stage 3 computes the luminance of all the pixels in group three according to Equations (7) and (8). The pixels of the three groups are then collected to generate the tone-mapped image.
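A rough sketch of Stages 2 and 3 follows, under the simplifying assumptions that local contrast is measured on the scaled input luminance and enforced on the output, and that image borders are left untouched; the exact boundary handling is not reproduced here:

```python
import numpy as np

def stage2_stage3(L_in, L_out):
    """Fill group-2 and group-3 luminances from local contrast (Eqs. (4)-(8)).

    L_in is the scaled input luminance; L_out holds the group-1 values
    already remapped in Stage 1 (other entries are placeholders that get
    rewritten). Border pixels are skipped in this sketch.
    """
    h, w = L_in.shape
    out = L_out.copy()
    # Stage 2: 3-neighbour contrast, solved for the centre pixel.
    for x in range(1, h - 1):
        for y in range(1, w - 1):
            r, c = x + 1, y + 1  # 1-based parity as in Figure 3
            if r % 2 == 0 and c % 2 == 1:      # subgroup 1: even row, odd col
                ctr = 3 * L_in[x, y] / (L_in[x-1, y] + L_in[x, y] + L_in[x+1, y])
                out[x, y] = ctr * (out[x-1, y] + out[x+1, y]) / (3 - ctr)
            elif r % 2 == 1 and c % 2 == 0:    # subgroup 2: odd row, even col
                ctr = 3 * L_in[x, y] / (L_in[x, y-1] + L_in[x, y] + L_in[x, y+1])
                out[x, y] = ctr * (out[x, y-1] + out[x, y+1]) / (3 - ctr)
    # Stage 3: 5-neighbour contrast using the Stage-2 results.
    for x in range(1, h - 1):
        for y in range(1, w - 1):
            if (x + 1) % 2 == 0 and (y + 1) % 2 == 0:  # both even: group 3
                ctr = 5 * L_in[x, y] / (L_in[x-1, y] + L_in[x+1, y] + L_in[x, y]
                                        + L_in[x, y-1] + L_in[x, y+1])
                out[x, y] = ctr * (out[x-1, y] + out[x+1, y]
                                   + out[x, y-1] + out[x, y+1]) / (5 - ctr)
    return out

# A flat input with flat Stage-1 output must stay flat (all contrasts = 1).
out = stage2_stage3(np.ones((5, 5)), np.ones((5, 5)))
```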

3.3. Color Image

We separate lightness and color features by converting RGB images into HSV space. Figure 5 shows the data flow diagram for dealing with a color image.
The image is separated into H, S, and V channels; only the V channel is used for dynamic range compression, while the H and S channels are left unchanged. The last step is gamma correction, which compensates for the non-linear response of the display device.
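The per-pixel data flow of Figure 5 can be sketched with Python's standard-library `colorsys` module; here `compress` stands in for the three-stage luminance mapping, which operates on the V channel only:

```python
import colorsys

def tone_map_pixel(r, g, b, compress, gamma=2.2):
    """Tone-map one RGB pixel: compress only the V channel in HSV,
    keep H and S, then apply gamma correction for the display."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    v = compress(v)                      # any dynamic-range compressor
    r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
    # Gamma correction compensates the display's non-linear response.
    return tuple(c ** (1.0 / gamma) for c in (r2, g2, b2))

# Halve the value channel of a single pixel, then gamma-encode.
out = tone_map_pixel(0.8, 0.4, 0.2, compress=lambda v: v * 0.5)
```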

4. Results

Our method is simulated in Matlab 2016a on a desktop computer with an Intel i5-7400 CPU and 8 GB of Kingston 2400 MHz DDR4 memory. Tone-mapped images are shown on a Dell S2721DGF display whose gamma parameter is approximately 2.2. The test radiance maps comprise three well-known scenes and the TMQI database [19], all with a very high dynamic range.

4.1. Subjective Performance Evaluation

In this part, we use three well-known radiance maps to compare several tone mapping algorithms. The compared tone mapping algorithms include DeepTMO [16], Thai [22], TMO-Net [17], L1-L0 [23], and Khan [12]. The three test radiance maps are shown directly in Figure 6. Figure 7, Figure 8 and Figure 9 demonstrate a comparison of tone-mapped images.
The images produced by our algorithm are visually pleasing, with clear details and distinguishable information in both dark and bright areas. The compared tone mapping algorithms suffer from various reductions in image quality. TMO-Net provides a clear result in the darker areas, while the contrast in the brighter areas is decreased, which leads to overexposure. DeepTMO improves the global contrast of the original real-world image: the content of the dark areas is revealed and the brightness becomes more apparent. However, the reproduced images are blurred and details are reduced in both low-light and bright areas. Both Thai and Khan exhibit contrast loss in low-light areas, which stems from over-compression of some intensity levels; as a result, some image quality is lost, particularly in darker areas. L1-L0 improves the visibility of the dark areas, and its tone-mapped images strike a good balance between detail enhancement and visual naturalness for both indoor and outdoor scenes. However, the LDR images obtained by L1-L0 suffer from halo artifacts and over-enhanced contrast, and show slight brightness distortion.
Another experiment was conducted to test the validity of the tone mapping algorithms. We selected the TMQI database [19], which contains 15 radiance maps, to compute the mean opinion score (MOS). The TMQI database is shown in Figure 10; it includes six indoor scenes and nine outdoor scenes. Because the resolutions are unequal, the images are resized without preserving the aspect ratio for display purposes. The images in Figure 10 have unsatisfactory visibility caused by their extremely high contrast. The compared algorithms and the proposed algorithm were used to convert every radiance map of the TMQI database to a displayed image, yielding 90 LDR images. Figure 11 shows four groups of LDR images. It can be seen from Figure 11 that our method improves visibility more than the other competitive tone mapping methods. The subjective visual quality of each tone-mapped image is consistent with the reproductions shown in Figure 7, Figure 8 and Figure 9.
Then, the tone-mapped images of the TMQI database were shown on the S2721DGF display. To determine the MOS, 31 volunteers, including 16 females and 15 males aged between 23 and 34, were invited to score each image produced by the above tone mapping algorithms. The score ranges from 1 to 10, where 1 means the worst visual quality and 10 the best. The mean and standard deviation of the MOS values are shown in Figure 12.
It can be seen that our method has a higher mean score (8.9) than the other compared methods. Meanwhile, the lowest standard deviation (0.175) indicates that the performance of our algorithm is widely agreed upon among the volunteers. The mean scores and standard deviations of the other tone mapping methods are DeepTMO (7.9, 0.3), Thai (5.8, 0.6), Khan (5.1, 1.1), TMO-Net (7.2, 0.25), and L1-L0 (7.7, 0.4). According to the mean scores, our method scores about 12% higher than the closest tone mapping method (DeepTMO).

4.2. Objective Performance Evaluation

Besides the subjective performance evaluation, we employed the tone-mapped image quality index (TMQI) [19] and the natural image quality evaluator (NIQE) [25] as objective evaluations to further assess the performance of the related tone mapping algorithms. The TMQI score ranges from 0 (worst quality) to 1 (best quality), and a smaller NIQE indicates higher image quality. Table 1 and Table 2 show the TMQI and NIQE scores of the related tone mapping algorithms. According to Table 1 and Table 2, our algorithm scores higher than the compared tone mapping algorithms. The mean TMQI score of our method is about 9%, 30%, 12%, 8%, and 41% higher than those of the DeepTMO, Thai, TMO-Net, L1-L0, and Khan tone mapping methods, respectively. The mean NIQE score of our method is lower than those of the competitive methods. The highest TMQI and lowest NIQE scores indicate that the tone-mapped images obtained by our algorithm provide better structural fidelity and statistical naturalness than the other methods.

4.3. Hardware Platform Test

Generally, our method is purely software-based, and no specific hardware is required; hence, it is easily ported to many hardware platforms. In this section, our method was implemented on three hardware platforms, based on ARM CPU, FPGA, and DSP architectures, to analyze its feasibility on embedded platforms. The ARM CPU is Qualcomm's Snapdragon 865 mobile platform, with a quad-core Kryo CPU at 2.84 GHz running Android 10.0. Our target FPGA platform is the Xilinx Virtex-7 XC7VX690T FPGA with 5 million gates and 52 Mbit of SRAM for data exchange. The DSP is the DaVinci digital media processor DM648 with a 1.1 GHz maximum processing clock rate. Three implementations of the proposed method were produced for these platforms: Java for the ARM CPU, VHDL for the FPGA, and C for the DSP. Table 3 summarizes the average computational time cost on each hardware platform when running our method on the TMQI database at six resolutions.
As expected, the proposed method can be implemented on all three platforms. Both the FPGA and DSP platforms achieve real-time performance, unlike the ARM CPU. At the largest spatial resolution (803 × 535), our method runs on the FPGA and DSP platforms at about 39 FPS and 42 FPS, respectively. Since our method depends heavily on multiplications, the ARM CPU runs markedly slower than the other platforms.

5. Conclusions

This paper focuses on reproducing real-world luminance on a conventional display. The displayed image is obtained by global luminance mapping and local contrast preservation. Piecewise linear mapping is introduced to achieve global dynamic range compression. To preserve local contrast, this work computes each pixel as a weighted average of its neighborhood pixels, whose luminance values are known from the piecewise linear mapping. We compared the performance of our algorithm with five existing state-of-the-art tone mapping algorithms; each algorithm was run on three well-known HDR images and on the TMQI database. The objective metrics TMQI and NIQE and the subjective evaluation MOS were employed to evaluate every tone-mapped image. The proposed algorithm obtained the best TMQI, MOS, and NIQE among all the tone mapping algorithms, which indicates that our method keeps the quality of tone-mapped images close to the real-world scene and produces natural-looking images of high visual quality for various real-world scenes.

Author Contributions

Conceptualization, L.Z.; methodology, L.Z. and J.W.; software, R.S.; validation, L.Z., R.S. and J.W.; formal analysis, R.S.; investigation, R.S.; resources, J.W.; data curation, L.Z.; writing—original draft preparation, R.S.; writing—review and editing, J.W.; visualization, J.W.; supervision, L.Z.; project administration, L.Z. and J.W.; funding acquisition, L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Quzhou Science and Technology Plan Project, grant number 2022K108 and Heilongjiang Provincial Natural Science Foundation of China, grant number YQ2022F014.

Acknowledgments

The authors acknowledge Quzhou Science and Technology Plan Project (grant number 2022K108), Heilongjiang Provincial Natural Science Foundation of China (grant number YQ2022F014), and Basic Scientific Research Foundation Project of Provincial Colleges and Universities in Heilongjiang Province (2022KYYWF-FC05).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kalantari, N.K.; Ramamoorthi, R. Deep high dynamic range imaging of dynamic scenes. ACM Trans. Graph. 2017, 36, 144.
2. Ou, Y.; Ambalathankandy, P.; Takamaeda, S.; Motomura, M.; Asai, T.; Ikebe, M. Real-time tone mapping: A survey and cross-implementation hardware benchmark. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 2666–2686.
3. Yue, G.; Yan, W.; Zhou, T. Referenceless quality evaluation of tone-mapped HDR and multiexposure fused images. IEEE Trans. Ind. Inform. 2019, 16, 1764–1775.
4. Jiang, M.; Shen, L.; Hu, M.; An, P.; Gu, Y.; Ren, F. Quantitative measurement of perceptual attributes and artifacts for tone-mapped HDR display. IEEE Trans. Instrum. Meas. 2022, 71, 1–11.
5. Fattal, R.; Lischinski, D.; Werman, M. Gradient domain high dynamic range compression. In Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, New York, NY, USA, 23–26 July 2002.
6. Eilertsen, G.; Mantiuk, R.K.; Unger, J. A comparative review of tone-mapping algorithms for high dynamic range video. Comput. Graph. Forum 2017, 36, 565–592.
7. Ashikhmin, M. A tone mapping algorithm for high contrast images. In Proceedings of the Eurographics Workshop on Rendering, Pisa, Italy, 26–28 June 2002.
8. Zai, G.J.; Liu, Y. An improved tone mapping algorithm for high dynamic range images. In Proceedings of the International Conference on Computer Application and System Modeling, Taiyuan, China, 22–24 October 2010.
9. Tumblin, J.; Rushmeier, H. Tone reproduction for realistic images. IEEE Comput. Graph. Appl. 1993, 13, 42–48.
10. Ward, G. A contrast-based scalefactor for luminance display. Graph. Gems 1994, 4, 415–421.
11. Larson, G.W.; Rushmeier, H.; Piatko, C. A visibility matching tone reproduction operator for high dynamic range scenes. IEEE Trans. Vis. Comput. Graph. 1997, 3, 291–306.
12. Khan, I.R.; Aziz, W.; Shim, S.O. Tone-mapping using perceptual-quantizer and image histogram. IEEE Access 2020, 8, 31350–31358.
13. Meylan, L.; Süsstrunk, S. High dynamic range image rendering using a retinex-based adaptive filter. IEEE Trans. Image Process. 2006, 15, 2820–2830.
14. Gu, B.; Li, W.; Zhu, M.; Wang, M. Local edge-preserving multiscale decomposition for high dynamic range image tone mapping. IEEE Trans. Image Process. 2012, 22, 70–79.
15. Kuang, J.; Johnson, G.M.; Fairchild, M.D. iCAM06: A refined image appearance model for HDR image rendering. J. Vis. Commun. Image Represent. 2007, 18, 406–414.
16. Rana, A.; Singh, P.; Valenzise, G.; Dufaux, F.; Komodakis, N.; Smolic, A. Deep tone mapping operator for high dynamic range images. IEEE Trans. Image Process. 2019, 29, 1285–1298.
17. Panetta, K.; Kezebou, L.; Oludare, V.; Agaian, S.; Xia, Z. TMO-Net: A parameter-free tone mapping operator using generative adversarial network, and performance benchmarking on large scale HDR dataset. IEEE Access 2021, 9, 39500–39517.
18. Patel, V.A.; Shah, P.; Raman, S. A generative adversarial network for tone mapping HDR images. In Proceedings of the National Conference on Computer Vision, Pattern Recognition, Image Processing, and Graphics, Mandi, India, 16–19 December 2017.
19. Yeganeh, H.; Wang, Z. Objective quality assessment of tone-mapped images. IEEE Trans. Image Process. 2012, 22, 657–667.
20. Reinhard, E.; Stark, M.; Shirley, P.; Ferwerda, J. Photographic tone reproduction for digital images. ACM Trans. Graph. 2002, 22, 267–276.
21. Reinhard, E. Parameter estimation for photographic tone reproduction. J. Graph. Tools 2002, 7, 45–51.
22. Thai, B.C.; Mokraoui, A.; Matei, B. Contrast enhancement and details preservation of tone mapped high dynamic range image. J. Vis. Commun. Image Represent. 2019, 58, 589–599.
23. Liang, Z.; Xu, J.; Zhang, D.; Cao, Z.; Zhang, L. A hybrid l1-l0 layer decomposition model for tone mapping. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018.
24. Narwaria, M.; Da Silva, M.P.; Le Callet, P.; Pépion, R. Tone mapping based HDR compression: Does it affect visual experience? Signal Process. Image Commun. 2014, 29, 257–273.
25. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a "completely blind" image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212.
Figure 1. Luminance histograms of the test radiance maps. (a) Memorial Church; (b) Moto.
Figure 2. Piecewise linear function. (a) Memorial Church; (b) Moto.
Figure 3. Pixel classification. (a) Pixels of group one; (b) Pixels of group two; (c) Subgroups for group two; (d) Pixels of group three.
Figure 4. The flowchart of the three-stage algorithm.
Figure 5. Data flow diagram for dealing with a color image.
Figure 6. The picture shows the test radiance maps rendering on standard LCD. The left image, called “memorial church”, is an HDR image from TMQI database [19]. The middle image, called “moto”, and the right image, called “apartment”, are reprinted with permission from Ref. [24]. 2022 Elsevier B.V. (Reprinted from Signal Processing: Image Communication, Vol (29), Authors (Narwaria, M., Da Silva, M.P., Le Callet, P. and Pépion, R.), Title of article (Tone mapping based HDR compression: Does it affect visual experience?), Pages (257–273), Copyright (2013), with permission from Elsevier).
Figure 7. A comparison of tone mapping algorithms on the memorial church image. (a) Result by TMO-Net. (b) Result by DeepTMO. (c) Result by Thai. (d) Result by L1-L0. (e) Result by Khan. (f) Result by our algorithm.
Figure 8. A comparison of tone mapping algorithms on the moto image. (a) Result by TMO-Net. (b) Result by DeepTMO. (c) Result by Thai. (d) Result by L1-L0. (e) Result by Khan. (f) Result by our algorithm.
Figure 9. A comparison of tone mapping algorithms on the apartment image. (a) Result by TMO-Net. (b) Result by DeepTMO. (c) Result by Thai. (d) Result by L1-L0. (e) Result by Khan. (f) Result by our algorithm.
Figure 10. The 15 radiance maps from the TMQI database.
Figure 11. Four groups of tone-mapped images obtained by our algorithm and the compared algorithms. The order of the sub-images matches Figure 7.
Figure 12. Mean and std of subjective rankings of the six tone mapping algorithms.
Table 1. Objective evaluations using 6 tone mapping algorithms and 3 radiance maps.
Each cell lists TMQI / NIQE.

| Algorithm | Memorial Church | Moto | Apartment |
|---|---|---|---|
| L1-L0 | 0.893 / 3.62 | 0.876 / 3.73 | 0.834 / 3.82 |
| DeepTMO | 0.905 / 3.35 | 0.904 / 3.59 | 0.798 / 3.31 |
| Khan | 0.652 / 5.29 | 0.622 / 6.21 | 0.649 / 5.45 |
| TMO-Net | 0.852 / 4.28 | 0.814 / 5.07 | 0.824 / 4.34 |
| Thai | 0.711 / 6.67 | 0.685 / 6.75 | 0.659 / 5.75 |
| Our | 0.929 / 3.21 | 0.919 / 3.42 | 0.854 / 3.03 |
Table 2. Objective evaluations using 6 tone mapping algorithms and 15 radiance maps from the TMQI database.
Each cell lists TMQI / NIQE.

| Image Index | DeepTMO | Thai | TMO-Net | L1-L0 | Khan | Our |
|---|---|---|---|---|---|---|
| 1 | 0.801 / 3.21 | 0.733 / 6.66 | 0.810 / 3.60 | 0.850 / 3.15 | 0.788 / 5.35 | 0.900 / 3.11 |
| 2 | 0.829 / 3.24 | 0.701 / 5.65 | 0.844 / 3.97 | 0.796 / 3.13 | 0.553 / 5.82 | 0.888 / 3.06 |
| 3 | 0.843 / 3.87 | 0.710 / 5.69 | 0.875 / 4.14 | 0.879 / 3.81 | 0.654 / 5.93 | 0.891 / 3.32 |
| 4 | 0.885 / 3.84 | 0.671 / 5.82 | 0.854 / 4.33 | 0.863 / 3.58 | 0.721 / 6.36 | 0.897 / 3.46 |
| 5 | 0.812 / 3.84 | 0.681 / 4.75 | 0.760 / 4.47 | 0.837 / 3.57 | 0.619 / 5.53 | 0.904 / 3.47 |
| 6 | 0.831 / 3.55 | 0.684 / 6.94 | 0.722 / 3.99 | 0.850 / 4.13 | 0.632 / 4.68 | 0.888 / 2.94 |
| 7 | 0.825 / 3.12 | 0.705 / 6.38 | 0.781 / 3.47 | 0.839 / 3.74 | 0.552 / 5.24 | 0.880 / 3.14 |
| 8 | 0.784 / 3.57 | 0.684 / 5.84 | 0.851 / 3.71 | 0.779 / 3.71 | 0.541 / 6.20 | 0.887 / 2.93 |
| 9 | 0.817 / 3.92 | 0.699 / 6.91 | 0.753 / 3.61 | 0.838 / 4.19 | 0.586 / 5.14 | 0.901 / 2.86 |
| 10 | 0.787 / 3.48 | 0.656 / 5.36 | 0.877 / 4.74 | 0.816 / 3.36 | 0.684 / 6.03 | 0.901 / 3.11 |
| 11 | 0.846 / 3.69 | 0.682 / 6.17 | 0.799 / 3.76 | 0.810 / 3.88 | 0.664 / 5.66 | 0.901 / 3.60 |
| 12 | 0.807 / 3.57 | 0.675 / 6.10 | 0.860 / 4.21 | 0.803 / 3.93 | 0.621 / 6.24 | 0.897 / 3.17 |
| 13 | 0.829 / 3.17 | 0.663 / 6.38 | 0.744 / 3.90 | 0.828 / 3.56 | 0.705 / 4.76 | 0.899 / 3.23 |
| 14 | 0.815 / 3.40 | 0.696 / 6.38 | 0.760 / 4.25 | 0.769 / 3.71 | 0.638 / 5.52 | 0.894 / 3.05 |
| 15 | 0.834 / 3.39 | 0.692 / 5.79 | 0.736 / 3.71 | 0.888 / 3.24 | 0.584 / 5.08 | 0.905 / 3.47 |
| Mean | 0.823 / 3.52 | 0.689 / 6.05 | 0.800 / 3.96 | 0.830 / 3.65 | 0.636 / 5.57 | 0.896 / 3.19 |
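As a sanity check, the means reported in the last row of Table 2 can be reproduced directly from the per-image scores. The short sketch below does this for the "Our" column; the score lists and variable names are transcribed by us from the table, not part of the published implementation.

```python
# Recompute the Table 2 column means for the "Our" algorithm from the
# fifteen per-image scores (values transcribed from the table above).
our_tmqi = [0.900, 0.888, 0.891, 0.897, 0.904, 0.888, 0.880, 0.887,
            0.901, 0.901, 0.901, 0.897, 0.899, 0.894, 0.905]
our_niqe = [3.11, 3.06, 3.32, 3.46, 3.47, 2.94, 3.14, 2.93,
            2.86, 3.11, 3.60, 3.17, 3.23, 3.05, 3.47]

mean_tmqi = round(sum(our_tmqi) / len(our_tmqi), 3)  # TMQI: higher is better
mean_niqe = round(sum(our_niqe) / len(our_niqe), 2)  # NIQE: lower is better
print(mean_tmqi, mean_niqe)  # matches the "Mean" row: 0.896 and 3.19
```

The same two-line averaging applies to any of the other five algorithm columns.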
Table 3. The average computational time cost when running implementations on different hardware platforms.
| Platform | 357 × 535 | 512 × 380 | 401 × 535 | 720 × 480 | 713 × 535 | 803 × 535 |
|---|---|---|---|---|---|---|
| ARM CPU | 27.24 ms | 27.75 ms | 30.60 ms | 49.29 ms | 54.41 ms | 61.28 ms |
| FPGA | 11.31 ms | 11.53 ms | 12.71 ms | 20.47 ms | 22.60 ms | 25.45 ms |
| DSP | 10.48 ms | 10.69 ms | 11.78 ms | 18.97 ms | 20.95 ms | 23.59 ms |
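The per-frame latencies in Table 3 translate directly into achievable throughput, since frames per second = 1000 / latency in milliseconds. A minimal sketch of this conversion for the largest tested resolution (803 × 535), using our own variable names:

```python
# Convert the measured per-frame latencies at 803x535 (Table 3, last
# column) into throughput in frames per second.
latency_ms = {"ARM CPU": 61.28, "FPGA": 25.45, "DSP": 23.59}
fps = {platform: 1000.0 / ms for platform, ms in latency_ms.items()}
for platform, rate in fps.items():
    print(f"{platform}: {rate:.1f} fps")
```

Even at this resolution the FPGA and DSP implementations stay above 30 fps, while the ARM CPU implementation falls to roughly 16 fps.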
Zhao, L.; Sun, R.; Wang, J. Three-Stage Tone Mapping Algorithm. Electronics 2022, 11, 4072. https://doi.org/10.3390/electronics11244072