Scientists Learn to Reconstruct 3D Scenes from Eye Reflections in Photos and Video
Researchers at the University of Maryland have turned eye reflections into 3D scenes.
The work is based on Neural Radiance Fields (NeRF), an artificial intelligence technique that can reconstruct a 3D environment from 2D photographs.
Neural Radiance Field (NeRF) is a deep learning technique whose superpower is the ability to interpolate between the original images, producing a continuous representation of a scene that can be viewed from new angles. The researchers used this ability to build 3D models of objects from their 2D reflections in human eyes, with the shape of the cornea serving as the basis for the reconstruction.
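To make the idea concrete, here is a minimal sketch of how a NeRF-style model works: a small neural network maps 3D points to color and density, and a pixel is rendered by integrating those values along a camera ray. The network sizes, function names, and parameters below are illustrative assumptions, not the architecture used by the Maryland team; the full method also conditions color on viewing direction, and in the eye-reflection setting each ray would first be bounced off the cornea geometry before querying the field.

```python
# Toy NeRF-style radiance field and volume rendering (illustrative sketch only).
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    """Map coordinates to sin/cos features so the MLP can fit high-frequency detail."""
    feats = [x]
    for i in range(num_freqs):
        feats += [torch.sin((2.0 ** i) * x), torch.cos((2.0 ** i) * x)]
    return torch.cat(feats, dim=-1)

class TinyNeRF(nn.Module):
    """Maps an encoded 3D point to RGB color and volume density."""
    def __init__(self, num_freqs=6):
        super().__init__()
        in_dim = 3 + 3 * 2 * num_freqs          # raw xyz + sin/cos features
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 4),                   # RGB + sigma
        )

    def forward(self, points):
        out = self.mlp(positional_encoding(points))
        rgb = torch.sigmoid(out[..., :3])        # colors in [0, 1]
        sigma = torch.relu(out[..., 3])          # non-negative density
        return rgb, sigma

def render_ray(model, origin, direction, near=0.1, far=4.0, n_samples=64):
    """Composite color along one ray with standard volume-rendering weights."""
    t = torch.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction     # sample points along the ray
    rgb, sigma = model(points)
    delta = t[1] - t[0]
    alpha = 1.0 - torch.exp(-sigma * delta)      # opacity of each ray segment
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), dim=0
    )[:-1]                                       # accumulated transmittance
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)   # composited pixel color

# Example: render one pixel from a hypothetical camera ray.
model = TinyNeRF()
color = render_ray(model, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
```

Training such a model amounts to comparing rendered pixel colors against the observed photographs and backpropagating the error, which is what lets the continuous scene representation emerge from a handful of 2D views.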
Despite some limitations, such as the reliance on low-quality video recordings and the restriction to a single location and eye position, the results of the study are impressive.
Although the resulting 3D models are low-resolution, they are still detailed enough to identify objects.
The scientists plan to keep refining the algorithm. The work is somewhat reminiscent of MIT researchers' attempts to recover the sound inside a sealed, soundproofed room using a "visual microphone."