Article

Study on the Effect of Gaze Position and Image Brightness on Peripheral Dimming Technique

Department of Information Display, Kyung Hee University, Seoul 02447, Korea
* Author to whom correspondence should be addressed.
Electronics 2021, 10(16), 1896; https://doi.org/10.3390/electronics10161896
Submission received: 7 July 2021 / Revised: 31 July 2021 / Accepted: 3 August 2021 / Published: 7 August 2021
(This article belongs to the Special Issue Circuits, Systems, and Signal Processing for Display Applications)

Abstract

In this work, we study peripheral dimming, a low-power display technique based on gaze tracking. We investigate the threshold levels of the lightness reduction ratio (LRR) at which people notice differences in brightness, as a function of gaze position and image brightness. A psychophysical experiment with five gaze positions and three image brightness conditions is performed, and the threshold levels are estimated. To assess the significance of the differences between the threshold levels, the overlap method and Bayesian estimation (BEST) analysis are performed. The analysis results show that the differences between the threshold levels across conditions are insignificant. Thus, the proposed technique can operate with a constant LRR level, regardless of gaze position or image brightness, while maintaining the perceptual image quality. In addition, the proposed technique reduces the power consumption of virtual reality (VR) displays by 12–14% on average. We believe that peripheral dimming can help reduce the power consumption of the self-luminous displays used in VR headsets with an integrated eye tracker.

1. Introduction

The use of displays has been steadily increasing, particularly in mobile devices. Long battery life is an important issue for mobile devices [1], yet the display is one of their most power-consuming components [2,3].
Various approaches to reducing the power consumption of displays have been attempted. Among them, methods that reduce display brightness have been widely studied [4,5,6,7,8,9,10,11] because they are simple yet effective. Brightness reduction, however, degrades image quality, so it is important to balance power saving against image quality. An image quality assessment (IQA) index is commonly used as the criterion. IQA methods can be categorized as objective or subjective. Objective IQA uses a model that predicts the quality of an image automatically, so the index is relatively easy to obtain. Many techniques therefore rely on objective IQAs to reach a compromise between power saving and image quality [4,5,6,7,8,9]. Subjective evaluation, on the other hand, is more expensive and time-consuming. However, it is the most reliable method for assessing the quality of images, as human observers are the ultimate users [12]. For this reason, methods based on subjective IQAs [10,11] have been studied less, but they have achieved effective power reduction without degrading the perceived image quality, even under more aggressive dimming conditions such as turning off some pixels.
A low-power display technique based on subjective IQA, known as peripheral dimming, has been studied [13]. It exploits the human visual characteristic that the acuity and sensitivity of central vision are considerably higher than those of the periphery. Owing to the non-uniform distribution of photoreceptor cells in the retina, peripheral vision has lower resolution and lower sensitivity to color and brightness [14,15]. The technique gradually darkens the image in the peripheral vision area, as shown in Figure 1a. The foveal zone is a circular area extending 5° from the fixation point (the inner part of the blue dashed circle), and the outer area is the peripheral zone. While the foveal zone keeps its original brightness, the brightness in the peripheral zone gradually decreases with viewing angle. The amount of brightness reduction is determined by the lightness component defined in the CIELAB color space. Figure 1b shows the lightness weight ratio as a function of viewing angle. The slope of this graph is called the lightness reduction ratio (LRR); the higher the LRR, the more rapidly the lightness in the peripheral zone decreases. We previously performed subjective tests for the central gaze position, as shown in Figure 1a, and obtained threshold levels of LRR (LRRTH) at which power can be saved effectively while maintaining the perceptual image quality. In that study, LRRTH also appeared to be inversely proportional to image brightness.
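For concreteness, the lightness weighting of Figure 1b can be written as a piecewise-linear function of eccentricity. The following minimal sketch (our own illustration, not code from the implementation) assumes a linear falloff beyond the 5° foveal radius, with the LRR given in %/degree as defined above:

```python
def lightness_weight(eccentricity_deg: float, lrr: float,
                     foveal_radius_deg: float = 5.0) -> float:
    """Lightness weight ratio (0..1) at a given angular distance from the fixation point.

    Inside the foveal zone the original lightness is kept (weight = 1).
    Beyond it, the lightness falls off linearly at `lrr` percent per degree.
    """
    if eccentricity_deg <= foveal_radius_deg:
        return 1.0
    reduction = (lrr / 100.0) * (eccentricity_deg - foveal_radius_deg)
    return max(0.0, 1.0 - reduction)

# Example: at 25 deg eccentricity with LRR = 0.4 %/degree, the weight is
# 1 - 0.004 * (25 - 5) = 0.92, i.e., an 8% lightness reduction.
print(lightness_weight(25.0, lrr=0.4))
```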
Because peripheral dimming is applied based on the user's gaze, an eye tracker must be integrated into the VR headset. Commercially available devices such as the VIVE Pro Eye and FOVE VR already have integrated eye trackers [16,17]. An eye tracker is indispensable for foveated rendering, which reduces the computational cost and power of VR headsets [18,19]. In addition, smooth pursuit eye movement (SPEM) has been investigated for clinical diagnoses [20,21,22,23,24], so VR headsets with eye trackers are also expected to become promising health care devices. Eye trackers are further used for emotion recognition and eye-fatigue assessment [25,26]. Thus, the proposed method can contribute to lowering the power consumption of VR headsets with an eye tracker.
In a previous study, we performed a pilot test of peripheral dimming while changing the gaze position [27] and found that LRRTH differed depending on the gaze position. However, because the data were not sufficient to establish a correlation, we could only presume that LRRTH might be related to the gaze position. This is a critical issue for peripheral dimming, which actively reacts to the user's gaze. Thus, in this study, we investigated this correlation with more data and statistical analyses. In addition, to improve the model of LRRTH as a function of image brightness, we collected more LRRTH data under various image brightness conditions.
In this paper, we focus on two factors: the gaze position and image brightness. A psychophysical experiment with five gaze positions and three image brightness conditions was conducted, and a quantitative model of LRRTH depending on gaze position and image brightness was investigated through statistical analysis.

2. Psychophysical Experiment

2.1. Hardware Setup

Figure 2 shows the experimental setup. The experiment was conducted in a dark room using a 24-inch LCD monitor with a 1920 × 1080 resolution, a chin rest, an eye tracker, and a keypad. All participants rested their heads on the chin rest and were asked to move their gaze by rotating only their eyes, without turning their heads. The monitor was 70 cm from the chin rest, i.e., the viewing distance was 70 cm, and the field of view (FoV) was 41.6° × 24.1° (horizontal × vertical). A Tobii EyeX was used to monitor the participants' gaze positions during the experiment, recording gaze data at a sampling rate of 60 Hz. The participants entered their answers on the keypad.
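The quoted FoV follows directly from the screen geometry. A quick sanity check, assuming a standard 16:9 active area for the 24-inch panel (an assumption; the exact panel dimensions are not stated):

```python
import math

def fov_deg(extent_cm: float, distance_cm: float) -> float:
    """Full visual angle subtended by a screen extent at a given viewing distance."""
    return 2.0 * math.degrees(math.atan(extent_cm / (2.0 * distance_cm)))

diag_cm = 24 * 2.54                          # 24-inch diagonal
width_cm = diag_cm * 16 / math.hypot(16, 9)  # ~53.1 cm for a 16:9 panel
height_cm = diag_cm * 9 / math.hypot(16, 9)  # ~29.9 cm
print(f"{fov_deg(width_cm, 70.0):.1f} x {fov_deg(height_cm, 70.0):.1f} deg")
# -> approximately 41.6 x 24.1 deg, matching the FoV stated above.
```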

2.2. Conditions

In the experiment, three kinds of conditions were controlled: the gaze position, image brightness, and LRR. Firstly, there were five gaze positions: top (T), left (L), center (C), right (R), and bottom (B). As shown in Figure 3, T (or B) is vertically 5°, and L (or R) is horizontally 10° away from the screen center.
Figure 4a shows the frame images corresponding to each gaze position. To avoid the influence of color on brightness perception, monochrome images were used. The gaze position was marked with a gray cross subtending a 1° viewing angle. Figure 4b shows how the frame images for the L and R gaze conditions were generated. The source images were extracted from a 3840 × 2160 video released under the Creative Commons CC0 license [28]. The blue solid square represents a cropping boundary with a 1920 × 1080 resolution; by adjusting its position, images corresponding to the gaze positions were generated. After cropping, each image was converted to grayscale. To minimize the differences between images across gaze positions, the right and bottom images were made by horizontally and vertically flipping the left and top images, respectively, leaving the visual stimulus in the foveal zone as unchanged as possible.
Secondly, three image brightness conditions were used. As mentioned above, image brightness was quantified by the lightness defined in the CIELAB color space. To exclude the effect of image content, all brightness levels were generated from a single source image. As shown in Figure 5a, adjusting the lightness by ±15% from the original (source) image produced one brighter and one darker image (a sketch of this adjustment is given after the list of conditions below). As shown in Figure 5b, the darker, original, and brighter images had average lightness (AL) values of 52.2, 61.5, and 70.7, respectively, and are therefore called AL50, AL60, and AL70.
Lastly, seven LRR levels were used: 0.10, 0.25, 0.40, 0.55, 0.70, 0.85, and 1.00%/degree.
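The ±15% lightness adjustment can be reproduced with a standard sRGB-to-CIELAB round trip. The sketch below is our own illustration (the paper does not specify the software used); it scales the CIELAB L* channel with scikit-image, and "source_frame.png" is a placeholder filename. Note that 61.5 × 0.85 ≈ 52.3 and 61.5 × 1.15 ≈ 70.7, consistent with the AL values reported above.

```python
import numpy as np
from skimage import color, io

def adjust_lightness(rgb: np.ndarray, scale: float) -> np.ndarray:
    """Scale the CIELAB L* channel of an sRGB image by `scale` (e.g., 0.85 or 1.15)."""
    lab = color.rgb2lab(rgb)
    lab[..., 0] = np.clip(lab[..., 0] * scale, 0.0, 100.0)
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)

gray = color.rgb2gray(io.imread("source_frame.png"))  # grayscale conversion after cropping
rgb = np.stack([gray] * 3, axis=-1)                   # replicate channels for the Lab round trip
darker = adjust_lightness(rgb, 0.85)                  # AL50
brighter = adjust_lightness(rgb, 1.15)                # AL70
print(color.rgb2lab(rgb)[..., 0].mean())              # average lightness (AL) of the original
```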

2.3. Participants

The number of participants in each session is described in Table 1. Sixteen, nineteen, and fifteen observers participated in the AL50, AL60, and AL70 sessions, respectively. They were paid about 10 USD per hour for their participation.

2.4. Procedure

The experiment consisted of three sessions according to image brightness. Each session used five gaze positions with seven LRR levels. The AL60 session, using the original image, was conducted first, followed by the AL50 and AL70 sessions. The experiment was designed as a temporal two-alternative forced choice (2-AFC), a psychophysical method for probing behavioral or perceptual responses: two stimuli were presented sequentially, and the participants were asked to choose the brighter of the two.
Before the experiment began, each participant rested their head on the chin rest, and the eye tracker was calibrated for that participant. Participants were asked to move their gaze without turning their heads. During the experiment, the assessment unit shown in Figure 6 was repeated. Each trial began with a fixation image indicating the gaze position for 3 s; the gaze position was chosen randomly among the five positions. After the fixation image, the reference video and the test video (with peripheral dimming) were displayed sequentially for 6 s each, in random order, with a blank image shown for 3 s between them. After the two visual stimuli, the participants had 8 s to choose the brighter one using the keypad. While the reference and test videos were displayed, the eye tracker recorded the gaze position. A beep sounded whenever the participant's gaze strayed more than 4° from the fixation point; if the beep sounded more than three times within one assessment unit, that participant's answers were excluded from the experimental results.
As mentioned above, each session contained five gaze conditions with seven LRRs. Each LRR condition was repeated four times, i.e., four answers were obtained per LRR, from which a correct answer rate was computed. As a result, each participant answered 140 (5 × 7 × 4) times per session, and one session took about 1 h to complete.
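The gaze-compliance rule above (a beep when the gaze strays more than 4° from the fixation point, exclusion after more than three beeps per assessment unit) can be sketched as follows; treating each entry into the out-of-bounds zone as one beep is our assumption:

```python
import numpy as np

def count_beeps(gaze_deg: np.ndarray, fixation_deg: np.ndarray,
                limit_deg: float = 4.0) -> int:
    """Count beep events for one assessment unit.

    `gaze_deg` is an (N, 2) array of gaze samples in degrees of visual angle
    (recorded at 60 Hz); `fixation_deg` is the (x, y) fixation point.
    """
    outside = np.linalg.norm(gaze_deg - fixation_deg, axis=1) > limit_deg
    # One beep per transition from inside to outside the allowed zone.
    return int(outside[0]) + int(np.sum(outside[1:] & ~outside[:-1]))

def unit_is_valid(beep_count: int) -> bool:
    """Answers are kept only if the beep sounded at most three times."""
    return beep_count <= 3
```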

3. Results

3.1. Psychophysical Results

A psychometric function is an inferential model for statistically estimating the threshold of a stimulus intensity. It models the relationship between the stimulus intensity and the underlying probability of a response. In this work, the response and the stimulus intensity correspond to the correct answer rate and the LRR, respectively. The LRR level at which the correct answer rate is 75% is defined as LRRTH.
The psignifit toolbox for MATLAB (version R2020a, MathWorks) presented by Schütt et al. [29] was used to fit the psychometric function to our experimental data. This tool is an open-source package for Bayesian psychometric function estimation by fitting a beta-binomial model to the correct responses at each stimulus level. It computes the posterior distributions for the parameters based on the Bayesian inference, and then estimates the psychometric function. Figure 7 shows an example outcome obtained from the tool. The red circles are the data. The blue and black curves represent the estimated posterior distribution of the threshold and the psychometric function, respectively. The red error bar represents a 95% credible interval (CI) for the threshold. The CI is a Bayesian confidence interval within which an unobserved parameter value falls with a particular probability. It is in the domain of a posterior probability distribution or a predictive distribution [30].
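psignifit performs a full Bayesian beta-binomial fit; as a lightweight stand-in, the 75% point of a 2-AFC psychometric function can also be located with an ordinary least-squares logistic fit. In the parameterization below, the fitted `threshold` is exactly the LRR at which the predicted correct answer rate is 75%; the data values are illustrative, not taken from the experiment:

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(lrr, threshold, slope):
    """2-AFC psychometric function rising from the 50% guess rate toward 100%.

    By construction, psychometric(threshold) = 0.75.
    """
    return 0.5 + 0.5 / (1.0 + np.exp(-slope * (lrr - threshold)))

lrr = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.85, 1.00])        # %/degree
p_correct = np.array([0.52, 0.60, 0.71, 0.83, 0.90, 0.95, 0.98])  # illustrative rates

(threshold, slope), _ = curve_fit(psychometric, lrr, p_correct, p0=[0.4, 5.0])
print(f"LRRTH ~ {threshold:.2f} %/degree")
```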
Figure 8a–c shows the obtained subject data and the psychophysical results. The graphs in the left, center, and right columns present the results for the AL50, AL60, and AL70 conditions, respectively. The average correct answer rates as a function of LRR are plotted in Figure 8a; under all conditions, the correct answer rate increased with the LRR. Figure 8b,c shows the estimated psychometric functions fitted to the subject data and the posterior distributions for LRRTH.
The resulting LRRTHs for each condition are plotted in Figure 9, with error bars representing the 95% CIs. The mean of the obtained LRRTHs was 0.39%/degree, and they ranged from 0.3 to 0.5%/degree. The LRRTH values varied slightly with both gaze position and image brightness. To analyze the significance of these differences, we compared the CI ranges; examining the overlap among CIs is a quick way to compare data groups [31,32]. The 95% CI ranges are shown in Table 2, where "Common" in the final row or column denotes the overlapping CI range for the same gaze position or image brightness condition, respectively. Under every gaze position and image brightness condition, an overlapping CI range existed, as shown in Table 2. This suggests that the differences in LRRTH across conditions may be insignificant, and hence that a common LRR level can be applied regardless of gaze position or image brightness. However, the overlap method is only a rough analysis [32], so we performed an additional analysis to verify the differences between the LRRTHs.
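The overlap check itself is mechanical: intervals share a common range exactly when the largest lower bound does not exceed the smallest upper bound. A minimal sketch, using the AL50 row of Table 2:

```python
def common_interval(intervals):
    """Return the range shared by all intervals, or None if they do not all overlap."""
    lo = max(a for a, _ in intervals)
    hi = min(b for _, b in intervals)
    return (lo, hi) if lo <= hi else None

# 95% CIs for T, L, C, R, B under AL50 (Table 2); the shared range is 0.40-0.42.
print(common_interval([(0.33, 0.51), (0.23, 0.42), (0.28, 0.48),
                       (0.23, 0.48), (0.40, 0.58)]))
```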

3.2. Bayesian Estimation Analysis

Psychophysical analysis is very useful for finding threshold levels, but is limited when analyzing their statistical significance. Thus, we additionally performed Bayesian estimation (BEST). Since the psychophysical results provide the posterior distribution for LRRTH, we compared posterior distributions, one of the various methods of BEST analysis [30,33]. This method rests on the premise that different random samples drawn from two different groups will show different trends because of random variation [33]. If random samples from the two groups are compared repeatedly, the distribution of the differences between the samples is obtained. Figure 10 shows two example results. The horizontal axis indicates the difference; the black dashed line and blue solid line denote the zero point and the 95% highest density interval (HDI), respectively. If two groups have very similar data distributions, the difference between two random samples will most frequently be near zero, as shown in Figure 10a; otherwise, the histogram will be skewed to the left or right, as shown in Figure 10b. Consequently, the degree of skew in the histogram reflects the similarity of the two distributions. In this study, the distribution of the differences was computed using Markov chain Monte Carlo sampling [34] with a sample size of 50,000. We took the 95% HDI as the criterion and concluded that there was no significant difference if the 95% HDI included zero.
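The core of this comparison, the histogram of differences between posterior samples and its 95% HDI, reduces to a few lines. In the sketch below, normal draws stand in for the MCMC samples of LRRTH under two conditions (the means and spreads are placeholders, not fitted values):

```python
import numpy as np

def hdi(samples: np.ndarray, mass: float = 0.95) -> tuple:
    """Highest density interval: the narrowest interval containing `mass` of the samples."""
    s = np.sort(samples)
    n_in = int(np.ceil(mass * len(s)))
    widths = s[n_in - 1:] - s[: len(s) - n_in + 1]
    start = int(np.argmin(widths))
    return s[start], s[start + n_in - 1]

rng = np.random.default_rng(0)
post_a = rng.normal(0.40, 0.05, 50_000)  # stand-in posterior for condition A
post_b = rng.normal(0.42, 0.05, 50_000)  # stand-in posterior for condition B
diff = post_a - post_b

lo, hi = hdi(diff)
print(f"95% HDI of the difference: [{lo:.3f}, {hi:.3f}]; includes zero: {lo <= 0 <= hi}")
```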
Figure 11 shows the differences between the posterior distributions. First, consider the comparisons involving C (i.e., T–C, C–B, L–C, and C–R). In most cases, the distributions for the same comparison condition were skewed in the same direction; for example, the C–B and C–R results were skewed to the left and right, respectively. However, in no case did the 95% HDI exclude zero. LRRTH thus seemed to be affected by the gaze position, but at a level we consider negligible. Next, the comparisons between the symmetrical gaze positions, T–B and L–R, gave mixed results. For symmetrical gaze positions, no difference was expected, because the degree of darkening is the same. The L–R results were in line with this prediction, whereas the T–B results were not. However, since the 95% HDI included zero in these cases as well, we judged the difference in trends to be within an allowable error. Therefore, we conclude that, for an FoV of 41.6° × 24.1°, LRRTH does not depend significantly on the gaze position.
Figure 12 shows the comparison results for the image brightness conditions. If image brightness affected LRRTH, the distribution of the differences between AL70 and AL50 would be more skewed than the other distributions. The obtained results, however, show no such effect: the skew patterns were erratic regardless of image brightness, and the only feature common to all results was that the 95% HDI included zero. The results thus show no significant correlation between LRRTH and image brightness. Because the experimental conditions were more strictly controlled in this study, as mentioned above, we suggest that LRRTH may be independent of image brightness.

3.3. Performance Evaluation for Power-Saving

The total dissipated power (TDP) of an OLED display is given by [35]:

TDP = \sum_{i=1}^{N} \sum_{j=1}^{M} \left( w_R R_{i,j}^{\gamma} + w_G G_{i,j}^{\gamma} + w_B B_{i,j}^{\gamma} \right)    (1)
where i and j denote the horizontal and vertical indices of a pixel, respectively, and N and M denote the numbers of horizontal and vertical pixels. R_{i,j}, G_{i,j}, and B_{i,j} denote the RGB levels of pixel (i, j). w_R, w_G, and w_B are weighting coefficients and γ is the gamma value; these are panel-dependent. The values used for w_R, w_G, and w_B were 70, 115, and 154, respectively, and γ was 2.2 [4,35].
Based on the TDP, we computed the ratio of power saving (PSAVED) as follows:
P_{SAVED} = \left( 1 - \frac{TDP_{TH}}{TDP_{ORG}} \right) \times 100    (2)
where TDP_{ORG} and TDP_{TH} represent the TDP of the original image and that of the same image with peripheral dimming applied at LRRTH, respectively.
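Equations (1) and (2) map directly to array code. A sketch with the panel coefficients quoted above; normalizing the 8-bit RGB levels to [0, 1] is our convention and cancels out in the power-saving ratio:

```python
import numpy as np

W_R, W_G, W_B, GAMMA = 70.0, 115.0, 154.0, 2.2  # panel-dependent coefficients [4,35]

def tdp(rgb: np.ndarray) -> float:
    """Total dissipated power of one frame, Equation (1).

    `rgb` is an (M, N, 3) array with channel values normalized to [0, 1].
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return float(np.sum(W_R * r**GAMMA + W_G * g**GAMMA + W_B * b**GAMMA))

def p_saved(original: np.ndarray, dimmed: np.ndarray) -> float:
    """Power-saving ratio in percent, Equation (2)."""
    return (1.0 - tdp(dimmed) / tdp(original)) * 100.0
```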
Table 3 describes the PSAVED values for each condition. On average, the peripheral dimming technique reduced power consumption by 11.5%. The differences of PSAVED depending on the gaze positions were within 3%. The obtained PSAVED values of the AL50, AL60, and AL70 conditions were 11.6%, 12.2%, and 10.9%, respectively. Interestingly, the resulting PSAVED values were very similar, even for different image brightness conditions.
To investigate whether PSAVED was independent of image brightness, we applied peripheral dimming with different LRRs to additional images with different lightness [27] and computed PSAVED, as shown in Figure 13. LRRs of 0.3, 0.4, and 0.5%/degree were applied with the gaze position fixed at the center of the image; the image resolution was 1920 × 1080. As shown in Figure 13, the PSAVED values remained almost constant for the same LRR, regardless of the average lightness of the images. We believe that PSAVED depends only on the LRR because the peripheral dimming technique decreases lightness by a ratio rather than by an absolute value. Considering that the average of the LRRTHs is about 0.4%/degree, we conclude that the peripheral dimming technique reduces average power consumption by about 12–14% while maintaining a good perceptual image quality.
Peripheral dimming is well suited to virtual reality (VR) head-mounted display (HMD) devices. VR requires higher resolutions and faster refresh rates for a more immersive experience, which results in more power consumption [36,37,38,39]. Moreover, VR systems are likely to utilize cloud computing for image rendering in the near future, by virtue of cloud VR [40,41,42]. Currently, the display accounts for 7–17% of the total power of a VR device [43,44,45], but for about 30% of the total power of a cloud VR system. Considering the maximum power saving of 14%, our proposed method can thus save roughly 4% of the overall system power.

4. Discussion

In this paper, we investigated the effect of gaze position and image brightness on the peripheral dimming technique and found that the proposed technique can reduce the power consumption of the display panel by about 12–14%. In this study, however, the field of view (FoV) was limited to 41.6° × 24.1° by the given monitor size and the viewing distance required for proper operation of the eye tracker. Because the technique reduces lightness as a function of viewing angle, the power-saving efficiency increases with distance from the gaze position: the larger the display, the more power is saved. A wide FoV is indispensable for a more immersive VR experience, so larger displays are preferred. Therefore, our proposed method can save even more power on VR headsets with a wider FoV.
In our previous work, we observed that LRRTH was inversely proportional to the AL of the video. However, this trend was not observed in this study: for most gaze conditions, LRRTH for AL70 was the smallest, but LRRTH for AL60 was larger than that for AL50. In addition, the CIs overlapped considerably. It thus appears that LRRTH is hardly related to the AL of the images. In the previous experiment, three full-color videos with different contents were used, whereas monochromatic videos with the same content were used here. Because the experimental environment in this study was more strictly controlled, we consider the present results reliable even though the two sets of results conflict. We therefore conclude that image brightness in itself is hardly correlated with LRRTH, and speculate that hidden factors other than image brightness affect LRRTH in a complex way and produced the correlation observed in the previous study. Further study is needed to resolve this issue.
The results show that the peripheral dimming technique can be applied simply, with a constant dimming level, while tracking the user's gaze. These findings demonstrate the broad applicability and usefulness of the technique. In addition, it consistently saves 12–14% of power consumption without degrading the perceptual image quality. The power consumption of a self-luminous display, such as an organic light-emitting diode display, is proportional to its luminance, so the technique helps most when the display consumes power heavily. We believe it is an attractive low-power technique for head-mounted display devices with an integrated eye tracker.

Author Contributions

Conceptualization, J.-S.K. and S.-W.L.; formal analysis, J.-S.K., W.-B.J., and B.H.A.; investigation, J.-S.K., W.-B.J., and B.H.A.; writing—original draft preparation, J.-S.K. and S.-W.L.; writing—review and editing, S.-W.L.; supervision, S.-W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT), no. 2020R1F1A1076479.

Institutional Review Board Statement

The study was approved by the Institutional Review Board of Kyung Hee University (KHSIRB-20-322).

Informed Consent Statement

Written informed consent was obtained from all participants involved in the study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nathan, A.; Chaji, G.R.; Ashtiani, S.J. Driving schemes for a-Si and LTPS AMOLED displays. J. Display Technol. 2005, 1, 267–277. [Google Scholar] [CrossRef]
  2. Chen, D.; Huang, J.; Lee, C.Y.; Lee, Y.J.; Wu, Y.C.; Zhang, X.; Chun, P.; Kim, S. On the Display Technology Trends Resulting from Rapid Industrial Development. SID’s Dig. Tech. Pap. 2018, 48, 316–321. [Google Scholar] [CrossRef]
  3. Chen, X.; Chen, Y.; Ma, Z.; Fernandes, F.C. How is energy consumed in smartphone display applications? In Proceedings of the 14th Workshop on Mobile Computing Systems and Applications, Jekyll Island, GA, USA, 26–27 February 2013; Association for Computing Machinery: New York, NY, USA, 2013. [Google Scholar]
  4. Lee, C.; Lee, C.; Lee, Y.Y.; Kim, C.S. Power-constrained contrast enhancement for emissive displays based on histogram equalization. IEEE Trans. Image Process. 2012, 21, 80–93. [Google Scholar]
  5. Chen, X.; Chen, Y.; Xue, C.J. DaTuM: Dynamic tone mapping technique for OLED display power saving based on video classification. In Proceedings of the 52nd ACM/EDAC/IEEE Design Automation Conference, San Francisco, CA, USA, 8–12 June 2015; IEEE: New York, NY, USA, 2015. [Google Scholar]
  6. Pagliari, D.J.; Macii, E.; Poncino, M. LAPSE: Low-overhead adaptive power saving and contrast enhancement for OLEDs. IEEE Trans. Image Process. 2018, 27, 4623–4637. [Google Scholar] [CrossRef] [PubMed]
  7. Pagliari, D.J.; Cataldo, S.D.; Patti, E.; Macii, A.; Macii, E.; Poncino, M. Low-overhead adaptive brightness scaling for energy reduction in OLED displays. IEEE Trans. Emerg. Top. Comput. 2019. early access. [Google Scholar]
  8. Ahn, Y.; Kang, S.J. OLED Power Reduction Algorithm Using Gray-level Mapping Conversion. SID’s Dig. Tech. Pap. 2015, 46, 254–257. [Google Scholar] [CrossRef]
  9. Shin, Y.G.; Park, S.; Yeo, Y.J.; Yoo, M.J.; Ko, S.J. Unsupervised deep contrast enhancement with power constraint for OLED displays. IEEE Trans. Image Process. 2019, 29, 2834–2844. [Google Scholar] [CrossRef]
  10. Choubey, P.K.; Singh, A.K.; Bankapur, R.B.; Vaisakh, P.C.; Manoj, P.B. Content aware targeted image manipulation to reduce power consumption in OLED panels. In Proceedings of the 8th International Conference on Contemporary Computing, Noida, India, 20–22 August 2015; IEEE: New York, NY, USA, 2015. [Google Scholar]
  11. Chen, X.; Nixon, K.W.; Zhou, H.; Liu, Y.; Chen, Y. FingerShadow: An OLED Power Optimization based on Smartphone Touch Interactions. In Proceedings of the 6th Workshop on Power-Aware Computing and Systems, Broomfield, CO, USA, 5 October 2014; USENIX Association: Berkeley, CA, USA, 2014. [Google Scholar]
  12. Mohammadi, P.; Ebrahimi-Moghadam, A.; Shirani, S. Subjective and objective quality assessment of image: A survey. arXiv 2014, arXiv:1406.7799. [Google Scholar]
  13. Kim, J.S.; Lee, S.W. Peripheral Dimming: A New Low-Power Technology for OLED Display Based on Gaze Tracking. IEEE Access 2020, 8, 209064–209073. [Google Scholar] [CrossRef]
  14. Strasburger, H.; Rentschler, I.; Jüttner, M. Peripheral vision and pattern recognition: A review. J. Vis. 2011, 11, 1–82. [Google Scholar] [CrossRef] [Green Version]
  15. Hansen, T.; Pracejus, L.; Gegenfurtner, K.R. Color perception in the intermediate periphery of the visual field. J. Vis. 2009, 9, 1–12. [Google Scholar] [CrossRef] [PubMed]
  16. HTC Co. VIVE. Available online: https://www.vive.com/us/ (accessed on 30 July 2021).
  17. FOVE Co. FOVE. Available online: https://fove-inc.com/ (accessed on 30 July 2021).
  18. Patney, A.; Kim, J.; Salvi, M.; Kaplanyan, A.; Wyman, C.; Benty, N.; Lefohn, A.; Luebke, D. Perceptually-based foveated virtual reality. In Proceedings of the SIGGRAPH 16: ACM SIGGRAPH 2016 Emerging Technologies, New York, NY, USA, 24–28 July 2016; Association for Computing Machinery: New York, NY, USA, 2016. [Google Scholar]
  19. Patney, A.; Salvi, M.; Kim, J.; Kaplanyan, A.; Wyman, C.; Benty, N.; Luebke, D.; Lefohn, A. Towards foveated rendering for gaze-tracked virtual reality. ACM Trans. Graph. 2016, 35, 1045. [Google Scholar] [CrossRef]
  20. Hutton, J.T.; Nagel, J.A.; Loewenson, R.B. Eye tracking dysfunction in Alzheimer-type dementia. Neurology 1984, 34, 99–102. [Google Scholar] [CrossRef] [PubMed]
  21. MacAvoy, M.G.; Gottlieb, J.P.; Bruce, C.J. Smooth-pursuit eye movement representation in the primate frontal eye field. Cereb. Cortex. 1991, 1, 95–102. [Google Scholar] [CrossRef]
  22. Bott, N.; Lange, A.L.; Cosgriff, R.; Hsiao, R.; Dolin, B. Method and System for Correlating an Image Capturing Device to a Human User for Analysis of Cognitive Performance. U.S. Patent US20180125404A1, 10 May 2018. [Google Scholar]
  23. Alves, J.; Vourvopoulos, A.; Bernardino, A.; Badia, S.B. Eye Gaze Correlates of Motor Impairment in VR Observation of Motor Actions. Methods Inf. Med. 2016, 55, 79–83. [Google Scholar]
  24. Morita, K.; Miura, K.; Kasai, K.; Hashimoto, R. Eye movement characteristics in schizophrenia: A recent update with clinical implications. Neuropsychopharmacol. Rep. 2020, 40, 2–9. [Google Scholar] [CrossRef] [Green Version]
  25. Lim, J.Z.; Mountstephens, J.; Teo, J. Emotion Recognition Using Eye-Tracking: Taxonomy, Review and Current Challenges. Sensors 2020, 20, 2384. [Google Scholar] [CrossRef]
  26. Wang, Y.; Zhai, G.; Chen, S.; Min, X.; Gao, Z.; Song, X. Assessment of eye fatigue caused by head-mounted displays using eye-tracking. Biomed. Eng. Online 2019, 18, 111. [Google Scholar] [CrossRef] [Green Version]
  27. Jeong, W.B.; Kim, J.S.; Lee, S.W. Dependence of Brightness Sensitivity on Gaze Point. In Proceedings of the 20th International Meeting on Information Display, Seoul, Korea, 25–28 August 2020; KIDS: Seoul, Korea, 2020. [Google Scholar]
  28. Pixabay. Available online: https://pixabay.com/ (accessed on 27 November 2019).
  29. Schütt, H.H.; Harmeling, S.; Macke, J.H.; Wichmann, F.A. Painfree and accurate Bayesian estimation of psychometric functions for (potentially) overdispersed data. Vis. Res. 2016, 122, 105–123. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Kruschke, J.K. Doing Bayesian Data Analysis, 2nd ed.; Elsevier: Amsterdam, The Netherlands, 2015. [Google Scholar]
  31. Cumming, G. The new statistics: Why and how. Psychol. Sci. 2014, 25, 7–29. [Google Scholar] [CrossRef]
  32. Schenker, N.; Gentleman, J.F. On judging the significance of differences by examining the overlap between confidence intervals. Am. Statist. 2001, 55, 182–186. [Google Scholar] [CrossRef]
  33. Kruschke, J.K.; Liddell, T.M. The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective. Psychon. Bull. Rev. 2018, 25, 178–206. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Andrieu, C.; Freitas, N.; Doucet, A.; Jordan, M.I. An introduction to MCMC for machine learning. Mach. Learn. 2003, 50, 5–43. [Google Scholar] [CrossRef] [Green Version]
  35. Dong, M.; Choi, Y.S.K.; Zhong, L. Power modeling of graphical user interfaces on OLED displays. In Proceedings of the 46th Design Automation Conference, San Francisco, CA, USA, 26–31 July 2009; Association for Computing Machinery: New York, NY, USA, 2009. [Google Scholar]
  36. Keil, J.; Edler, D.; Schmitt, T.; Dickmann, F. Creating Immersive Virtual Environments Based on Open Geospatial Data and Game Engines. KN J. Cartogr. Geogr. Inf. 2021, 71, 53–65. [Google Scholar] [CrossRef]
  37. Zhan, T.; Yin, K.; Xiong, J.; He, Z.; Wu, S.T. Augmented Reality and Virtual Reality Displays: Perspectives and Challenges. iScience 2020, 23, 101397. [Google Scholar] [CrossRef] [PubMed]
  38. Gou, F.; Chen, H.; Li, M.C.; Lee, S.L.; Wu, S.T. Submillisecond-response liquid crystal for high-resolution virtual reality displays. Opt. Express 2017, 25, 7984–7997. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Cuervo, E.; Chintalapudi, K.; Kotaru, M. Creating the Perfect Illusion: What will it take to Create Life-Like Virtual Reality Headsets? In Proceedings of the HotMobile 18: 19th International Workshop on Mobile Computing Systems & Applications, New York, NY, USA, 12–13 February 2018; Association for Computing Machinery: New York, NY, USA, 2018. [Google Scholar]
  40. Hou, X.; Lu, Y.; Dey, S. Wireless VR/AR with Edge/Cloud Computing. In Proceedings of the 2017 26th International Conference on Computer Communication and Networks, Vancouver, QC, Canada, 31 July–3 August 2017; IEEE: New York, NY, USA, 2017. [Google Scholar]
  41. Xiao, G.; Li, H.; Han, C.; Liu, Y.; Li, Y.; Liu, J. Cloud Rendering Scheme for Standalone Virtual Reality Headset. In Proceedings of the 2020 International Conference on Virtual Reality and Visualization, Recife, Brazil, 13–14 November 2020; IEEE: New York, NY, USA, 2020. [Google Scholar]
  42. Huawei iLab. Cloud VR Network Solution White Paper. Available online: https://www.huawei.com/minisite/pdf/ilab/cloud_vr_network_solution_white_paper_en.pdf (accessed on 31 July 2021).
  43. Leng, Y.; Chen, C.C.; Sun, Q.; Huang, J.; Zhu, Y. Energy-efficient video processing for virtual reality. In Proceedings of the ISCA ‘19: 46th International Symposium on Computer Architecture, New York, NY, USA, 22–26 June 2019; Association for Computing Machinery: New York, NY, USA, 2019. [Google Scholar]
  44. Jiang, N.; Liu, Y.; Guo, T.; Xu, W.; Swaminathan, V.; Xu, L.; Wei, S. QuRate: Power-efficient mobile immersive video streaming. In Proceedings of the MMSys ‘20: 11th ACM Multimedia Systems Conference, Istanbul, Turkey, 8–11 June 2020; Association for Computing Machinery: New York, NY, USA, 2020. [Google Scholar]
  45. Jiang, N.; Swaminathan, V.; Wei, S. Power Evaluation of 360 VR Video Streaming on Head Mounted Display Devices. In Proceedings of the NOSSDAV’17: 27th Workshop on Network and Operating Systems Support for Digital Audio and Video, Taipei, China, 20–23 June 2017; Association for Computing Machinery: New York, NY, USA, 2017. [Google Scholar]
Figure 1. (a) A concept image of the peripheral dimming technique and (b) a lightness weight ratio depending on the viewing angle.
Figure 2. A photo of the experimental setup.
Figure 3. Five gaze positions used in the experiment.
Figure 4. (a) Frame images corresponding to the five gaze positions and (b) a framework of the image generation process.
Figure 5. (a) The images of AL50, AL60, and AL70 and (b) the average lightness of all of the frame images.
Figure 6. Schematic diagram of the assessment unit.
Figure 7. The estimated psychometric function with the parameters and the posterior distribution of the threshold.
Figure 8. The results of (a) the average correct answer rate, (b) the estimated psychometric function, and (c) the posterior distribution for LRRTH of AL50 (left), AL60 (center), and AL70 (right) conditions.
Figure 9. The resulting LRRTHs grouped by (a) AL50, (b) AL60, and (c) AL70 conditions.
Figure 10. The example histograms of the differences obtained from two (a) similar and (b) different distributions.
Figure 11. The results of the BEST analysis for the gaze positions for the (a) AL50, (b) AL60, and (c) AL70 conditions.
Figure 12. The results of the BEST analysis for the image brightness conditions.
Figure 13. The additional images (left) and the PSAVED values after applying the peripheral dimming (right).
Table 1. Participant information in each session.

                          AL50    AL60    AL70
Number of participants      16      19      15
Mean age                  25.4    24.7    25.2
Table 2. The 95% CI ranges of the LRRTHs (%/degree).

          T            L            C            R            B            Common
AL50      0.33–0.51    0.23–0.42    0.28–0.48    0.23–0.48    0.40–0.58    0.40–0.42
AL60      0.33–0.55    0.27–0.51    0.33–0.61    0.21–0.47    0.39–0.60    0.39–0.47
AL70      0.25–0.50    0.17–0.39    0.23–0.47    0.16–0.38    0.32–0.55    0.32–0.38
Common    0.33–0.50    0.27–0.39    0.33–0.47    0.23–0.38    0.40–0.55
Table 3. Amount of saved power.

Gaze Position    AL50     AL60     AL70     Mean
T                12.0%    12.4%    11.5%    12.0%
L                10.4%    11.5%     9.9%    10.6%
C                11.3%    12.7%    10.9%    11.6%
R                10.9%    10.8%     9.8%    10.5%
B                13.2%    13.4%    12.4%    13.0%
Mean             11.6%    12.2%    10.9%    11.5%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
