Scientists develop eye-tracking technology for VR systems

IMAGE: Eye movement monitoring is one of the key elements of virtual reality and augmented reality (VR/AR) technologies. Credit: RUDN University

Eye movement monitoring is one of the key elements of virtual reality and augmented reality (VR/AR) technologies. A team from MSU, together with a professor from RUDN University, developed a mathematical model that helps accurately predict the next gaze fixation point and reduce the inaccuracy caused by blinking. The model could make VR/AR systems more realistic and more responsive to user actions. The results of the study were published in the SID Symposium Digest of Technical Papers.

Foveated rendering is a core technology of VR systems. When a person looks at something, their gaze focuses on the so-called foveated region, and everything else is covered by peripheral vision. A computer therefore has to render the image in the foveated region at the highest level of detail, while other parts require less computing power. This approach improves rendering performance and eases the gap between the limited capabilities of graphics processors and growing display resolutions. However, foveated rendering is limited by the speed and accuracy of predicting the next gaze fixation point, because human eye movement is a complex and largely random process. To address this, the team of researchers from MSU, together with a professor from RUDN University, developed a mathematical modeling method that helps predict the next gaze fixation point in advance.
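The core idea of foveated rendering described above can be sketched in a few lines: render quality is allocated as a function of a pixel's angular distance from the current gaze point. The function below is a minimal illustration; the fovea size, falloff shape, and quality floor are assumptions for the sketch, not values from the study.

```python
def detail_level(pixel_angle_deg, fovea_deg=5.0, periphery_deg=30.0):
    """Relative render quality in [0, 1] for a point at a given angular
    distance from the gaze fixation point: full detail inside the assumed
    foveal region, linear falloff toward the periphery, with a small
    quality floor so far peripheral content is still drawn."""
    if pixel_angle_deg <= fovea_deg:
        return 1.0          # foveated region: maximum detail
    if pixel_angle_deg >= periphery_deg:
        return 0.1          # far periphery: minimum detail floor
    # linear falloff between the foveal edge and the periphery
    t = (pixel_angle_deg - fovea_deg) / (periphery_deg - fovea_deg)
    return 1.0 - 0.9 * t
```

A renderer would evaluate such a function per tile or per shading rate zone; the savings come from everything outside the foveal cone being shaded at reduced resolution.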

“One of the issues with foveated rendering is timely prediction of the next gaze fixation point, because vision is a complex stochastic process. We proposed a mathematical model that predicts changes of the gaze fixation point,” said Viktor Belyaev, Ph.D. in Technical Sciences, from the Department of Mechanics and Mechatronics of RUDN University.

The model's predictions are based on the study of so-called saccadic movements (fast, coordinated movements of the eye). They accompany the shifts of our gaze from one object to another and can suggest the next fixation point. The ratio between the duration, amplitude, and maximum speed of saccadic eye movements follows certain empirical rules. However, these empirical models cannot be used by eye trackers to predict eye movements because they are not accurate enough. The researchers therefore focused on a mathematical model that helped them obtain saccadic movement parameters. This data was then used to calculate the foveated region of the image.

The new method was experimentally tested using a VR helmet and AR glasses. An eye tracker based on the mathematical model was able to detect small eye movements of 3.4 arc minutes (about 0.05 degrees), and the prediction accuracy reached 6.7 arc minutes (0.11 degrees). In addition, the team was able to eliminate the calculation error caused by blinking: the filters included in the model reduced the error tenfold. The results of the work could be used in VR modeling, video games, and in medicine for surgery and the treatment of vision disorders.
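The article does not specify which filters the model uses, but the general idea of suppressing blink-induced errors can be illustrated with a simple sliding-window median filter over the gaze signal: a blink produces a short spike of invalid samples, which a median rejects while genuine fixation shifts pass through. This is a generic sketch, not the authors' filter.

```python
def median_filter(samples, window=5):
    """Sliding-window median over a 1-D gaze signal. Short spikes
    (e.g. samples corrupted during a blink) are suppressed, while
    sustained changes such as a real gaze shift are preserved."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        win = sorted(samples[max(0, i - half):i + half + 1])
        out.append(win[len(win) // 2])  # median of the local window
    return out
```

For example, a single spurious sample in an otherwise steady fixation (`[1, 1, 1, 99, 1, 1, 1]`) is flattened back to the fixation value, whereas a step from one fixation to another survives the filter.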

“We have effectively resolved the issue with foveated rendering technology that has been slowing its adoption in mass-produced VR systems. In the future, we plan to calibrate our eye tracker to minimize the impact of movements of the display or helmet relative to the user's head,” said Viktor Belyaev of RUDN University.

###

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.
