Such technologies are particularly important for virtual reality and for users with disabilities. Until now, gaze-based control has faced a core difficulty: systems could not always tell when the user actually intended to act. This often led to false activations and required extra confirmation steps that slowed interaction.

Researchers from the MEG Center at MGPPU applied machine-learning methods to analyze eye movements while also taking into account the context of what is shown on the screen. One algorithm tracks gaze behavior, while another assesses the positions of on-screen objects and the likely intended action. The final decision is made from a joint evaluation of these two models.
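The article does not publish the researchers' code, but the two-model scheme it describes can be sketched in a minimal, hypothetical form: one score for how deliberate the gaze behavior looks, one score for whether the gaze sits on an actionable object, and a fused decision. All function names, features, and thresholds below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a two-model gaze-intent decision.
# Features, weights, and thresholds are invented for illustration.

def gaze_intent_score(fixation_duration_ms: float, dispersion_px: float) -> float:
    """Score how deliberate a fixation looks: longer, steadier gaze -> higher."""
    duration_term = min(fixation_duration_ms / 1000.0, 1.0)   # saturate at 1 s
    stability_term = max(0.0, 1.0 - dispersion_px / 50.0)     # steadier -> closer to 1
    return 0.5 * duration_term + 0.5 * stability_term

def context_score(gaze_xy, target_xy, target_radius_px: float) -> float:
    """Score whether the gaze point lands on an actionable on-screen object."""
    dx = gaze_xy[0] - target_xy[0]
    dy = gaze_xy[1] - target_xy[1]
    distance = (dx * dx + dy * dy) ** 0.5
    return max(0.0, 1.0 - distance / target_radius_px)

def should_trigger(gaze_s: float, ctx_s: float, threshold: float = 0.6) -> bool:
    """Joint decision: trigger only when both models agree (geometric mean)."""
    return (gaze_s * ctx_s) ** 0.5 >= threshold

# A long, steady fixation directly on a target should trigger an action;
# a brief glance far from any target should not.
g = gaze_intent_score(fixation_duration_ms=900, dispersion_px=10)
c = context_score(gaze_xy=(400, 300), target_xy=(405, 300), target_radius_px=40)
print(should_trigger(g, c))  # True for this deliberate, on-target fixation
```

Fusing the two scores (here via a geometric mean, so either score near zero vetoes the action) is one plausible way to cut false activations: a lingering but off-target gaze, or an on-target but fleeting glance, both stay below threshold.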

The new approach was tested on 15 participants using a custom game that required controlling the movement of colored balls with the gaze. According to the results of the experiment, the number of false activations decreased threefold, and participants could play longer without failures. At the same time, the speed at which objects moved stayed the same, but fewer actions were required.

Source: Ferra
