
Adaptive Rendering: Using eye tracking to boost system performance of head mounted display devices

Ref-Nr: TA-WuT_2016/33


Abstract

Since the spatial and temporal resolution of head-mounted displays is steadily increasing, using eye trackers to adapt rendering to the user is becoming important for handling the growing rendering workload. We propose to render only the user's effective visual field, determined by the current eye gaze. Going beyond well-known foveated rendering, we exploit the interplay of two effects to optimize rendering performance: lens defects and the inherent limits of the individual visual field. Applied to a renderer, our method yields speed-ups of up to 2x.


Background

Virtual reality (VR) head-mounted displays (HMDs) are becoming popular in the consumer space. To further increase immersion, higher temporal and spatial resolutions are demanded. Even with the expected progress in future GPUs, rendering in real time at a 16K retina resolution is challenging, so the rendering workload has to be reduced. To this end, "foveated" rendering methods have been developed in recent years: the rendering quality of individual pixels or regions of the image is adapted to the user's actual gaze direction and the corresponding quality of perception, which – due to the physiology of the human eye – is inferior for objects in the periphery of the visual field compared to those at its center.
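The general foveated-rendering idea can be sketched as a shading-rate falloff with angular distance from the gaze point. The fovea size, periphery angle, and minimum rate below are illustrative assumptions, not values from this invention:

```python
def shading_rate(eccentricity_deg, fovea_deg=5.0, periphery_deg=60.0, min_rate=0.1):
    """Illustrative gaze-dependent quality falloff.

    Returns a relative shading rate in [min_rate, 1.0]: full quality inside
    the fovea, a linear falloff toward the periphery, and a fixed minimum
    rate beyond it. The angle thresholds are example assumptions.
    """
    if eccentricity_deg <= fovea_deg:
        return 1.0  # foveal region: render at full quality
    if eccentricity_deg >= periphery_deg:
        return min_rate  # far periphery: coarsest quality
    # Linear interpolation between full quality and the minimum rate.
    t = (eccentricity_deg - fovea_deg) / (periphery_deg - fovea_deg)
    return 1.0 - t * (1.0 - min_rate)
```

A renderer would map such a rate to, for example, a coarser sampling or a lower-resolution layer for the affected screen region.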




Solution

Our inventive method uses the user's effective visual field, determined by their actual eye gaze, to skip entirely the rendering of peripheral pixels/areas of the screen that cannot be perceived. We incorporate two effects that define which pixels within the visual field have to be rendered. First, lens defects cause certain parts of the screen toward the edges, which are visible when looking through the center of the optical system, to become invisible as the gaze moves away from that center. Second, when looking toward one side of the periphery, the user cannot see the opposite peripheral parts of the screen. For these invisible areas, which can be calibrated as a function of eye gaze, we propose to skip rendering and reuse the pixel colors from the previous frame. Applying the resulting adapted visual field to a renderer more than doubles system performance.
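The skip-and-reuse step can be sketched as follows. This is a minimal illustration, not the patented implementation: `visibility_mask_for` (the calibrated per-gaze visibility lookup) and `shade_pixel` (the renderer's per-pixel shading) are hypothetical placeholders.

```python
import numpy as np

def render_frame(gaze, visibility_mask_for, shade_pixel, prev_frame):
    """Render only the pixels visible for the current gaze.

    visibility_mask_for(gaze) is assumed to return a boolean array
    (True = perceivable at this gaze, from a one-time calibration).
    Invisible pixels are skipped and simply keep the color they had
    in the previous frame.
    """
    mask = visibility_mask_for(gaze)
    frame = prev_frame.copy()          # reuse last frame's colors everywhere
    ys, xs = np.nonzero(mask)          # indices of pixels that must be shaded
    for y, x in zip(ys, xs):
        frame[y, x] = shade_pixel(y, x)
    return frame
```

In a real renderer the mask would instead drive an early discard on the GPU (e.g. a stencil test), so the skipped pixels cost essentially nothing.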


Advantages

  • massive speed-up of rendering compared to other state-of-the-art foveated rendering methods
  • method can be combined with other foveated rendering methods to further reduce rendering workload
  • easy to implement after one-time calibration

Fields of Application

head-mounted display devices that use eye tracking


Universität des Saarlandes Wissens- und Technologietransfer GmbH

Dr. Christof Schäfer
0681 302-6383
christof.schaefer@uni-saarland.de
www.wut-uni-saarland.de
Address
Universität des Saarlandes Wissens- und Technologietransfer GmbH, Campus, Gebäude A1 1
66123 Saarbrücken



Development Status

Prototype


Patent Status

  • EP 17751691 pending
  • US 16/321,922 pending

Keywords

virtual reality, eye tracking, rendering, head-mounted display

Contact | Office

TransferAllianz e. V.
Christiane Bach-Kaienburg
(Head of Office)

c/o PROvendis GmbH
Schloßstr. 11-15
D-45468 Mülheim an der Ruhr