Google has created a light field camera for capturing immersive video

Researchers at Google have developed a camera and accompanying algorithms for capturing immersive video. The camera consists of many individual cameras mounted on a transparent hemisphere. Algorithms convert the footage from the camera array into a set of hemispherical layers, each of which carries color, transparency, and depth information. This representation makes it possible to re-render the layers from a new viewpoint, correctly reproducing parallax, reflections, and other optical effects. In addition, the developers created a compression scheme that allows this huge volume of data to be streamed over the Internet in real time. A paper describing the work will be presented at SIGGRAPH 2020, and example videos have been published on the authors' website.
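The core operation in such a layered representation is compositing the color-plus-transparency layers back to front to form the final image for a chosen viewpoint. The sketch below is only a toy illustration of that idea, not Google's actual layered-mesh renderer: the `composite_layers` helper and the toy data are assumptions made up for this example.

```python
import numpy as np

def composite_layers(layers):
    """Composite RGBA layers back to front with the standard 'over' operator.

    layers: list of (H, W, 4) float arrays ordered far -> near,
            RGB in [0, 1] and alpha (opacity) in [0, 1].
    Returns an (H, W, 3) image.
    """
    h, w, _ = layers[0].shape
    out = np.zeros((h, w, 3))
    for layer in layers:                      # far to near
        rgb, alpha = layer[..., :3], layer[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)
    return out

# Two toy 2x2 layers: a distant opaque red background and a nearer,
# half-transparent green square covering the left column.
far = np.zeros((2, 2, 4))
far[..., 0] = 1.0        # red
far[..., 3] = 1.0        # fully opaque
near = np.zeros((2, 2, 4))
near[:, 0, 1] = 1.0      # green, left column only
near[:, 0, 3] = 0.5      # half transparent
print(composite_layers([far, near]))
```

Because each layer stores transparency as well as color, blending the layers in depth order lets semi-transparent and partially occluding content combine plausibly when the viewpoint changes.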

We perceive the world as three-dimensional mainly for two reasons. First, because our two eyes sit at slightly different points in space, the brain receives two images taken from slightly different angles. This alone lets us see the world in depth and judge how far away objects are (provided the person's binocular vision is unimpaired). Second, we can also change the viewpoint by turning and moving the head, exploiting motion parallax to judge the shape of objects and their distances relative to one another.
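Both cues weaken with distance, which is why parallax is so informative about depth: in a simple pinhole-camera model, a point at depth Z shifts in the image by roughly f·B/Z when the viewpoint moves sideways by a baseline B. The numbers below are illustrative assumptions, not measured values.

```python
# Toy parallax arithmetic: nearer objects shift more in the image
# when the viewpoint moves sideways. All values are made-up examples.
f = 0.017          # focal length in metres (roughly eye-like, assumption)
B = 0.065          # interocular distance / sideways head motion in metres
for Z in (0.5, 2.0, 10.0):            # object depths in metres
    d = f * B / Z                     # image-plane shift in metres
    print(f"depth {Z:4.1f} m -> shift {d * 1000:.2f} mm")
```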

There are commercially available VR headsets and stereo cameras that reproduce binocular vision by showing each eye footage shot from a slightly different angle. But this approach cannot reproduce motion parallax: at the moment of shooting the camera was at a single point, and the viewing angle cannot be changed afterwards. Researchers at Google have spent the last few years working toward a solution to this problem. Their latest work rests on two main ideas, one primarily software and the other hardware: they have learned to split images and video footage into many flat layers ordered by their distance from the camera, and they capture the scene with an array of many closely spaced cameras.
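A crude way to picture the "flat layers ordered by distance" idea is to bin a per-pixel depth map into a handful of fronto-parallel slices. The real system infers its layers with far more sophisticated algorithms and soft transparency, so the function below and its hard, mask-based layer assignment are purely illustrative assumptions.

```python
import numpy as np

def split_into_depth_layers(rgb, depth, num_layers=4):
    """Split an image into fronto-parallel RGBA layers by binning its depth map.

    rgb:   (H, W, 3) colour image
    depth: (H, W) per-pixel distance from the camera
    Returns a list of (H, W, 4) layers ordered far -> near; each pixel is
    fully opaque in exactly one layer (a crude stand-in for the soft,
    learned layer assignment used in real systems).
    """
    edges = np.linspace(depth.min(), depth.max() + 1e-6, num_layers + 1)
    layers = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = ((depth >= lo) & (depth < hi)).astype(float)[..., None]
        layers.append(np.concatenate([rgb * mask, mask], axis=-1))
    return layers[::-1]  # reverse so the farthest bin comes first

# Toy 2x2 example: left column is near the camera, right column is far.
rgb = np.ones((2, 2, 3))
depth = np.array([[1.0, 5.0], [1.0, 5.0]])
for i, layer in enumerate(split_into_depth_layers(rgb, depth, num_layers=2)):
    print("layer", i, "alpha:\n", layer[..., 3])
```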
