American developers have created an algorithm that turns ordinary photographs into three-dimensional ones, realistically painting in the hidden fragments of the objects in the picture. The algorithm accurately determines the boundaries between objects lying at different distances from the camera, splits the image into depth layers, and fills in the blank areas of the far layers using a neural network. The paper will be presented at the CVPR conference, and the code and example results are available on the authors' page.
Three-dimensional images and videos give a much greater sense of immersion and realism than two-dimensional ones, but creating them requires either computer graphics or, when shooting the real world, complex rigs of cameras and algorithms that are practically inaccessible to ordinary users. In recent years, imaging algorithms have advanced significantly: Facebook, for example, offers a feature that turns an ordinary photo into a pseudo-3D one. It separates the main subject from the rest of the picture and, as it were, brings it to the foreground.
Other algorithms work in a similar way, but this approach has a fundamental limitation: from a two-dimensional image it is impossible to recover fragments that were occluded by other objects at the moment of shooting. Usually this is handled by simply filling the missing parts with similar colors, but the result is blurry and only vaguely resembles the rest of the photo. In addition, it is difficult to accurately separate objects by their distance from the camera.
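The two ideas mentioned so far, splitting pixels into depth layers and filling occluded regions with similar colors, can be sketched in a deliberately simplified form. The snippet below is illustrative only (the function names and thresholding scheme are assumptions, not the authors' code): the diffusion-style fill reproduces exactly the kind of blurry color averaging the article criticizes, which the paper replaces with a learned, context-aware inpainting network.

```python
import numpy as np

def split_into_depth_layers(depth, thresholds):
    """Assign each pixel an integer layer index by thresholding its depth.

    Illustrative stand-in for the paper's depth-edge-based layering.
    """
    layers = np.zeros(depth.shape, dtype=int)
    for i, t in enumerate(thresholds):
        layers[depth > t] = i + 1
    return layers

def naive_inpaint(image, mask, iterations=200):
    """Fill masked (occluded) pixels by repeatedly averaging their neighbors.

    This produces the smeared, blurry patches described in the text; it is
    NOT the paper's method, which uses a neural network instead.
    """
    img = image.astype(float).copy()
    img[mask] = img[~mask].mean()        # crude initialization of unknown pixels
    for _ in range(iterations):
        padded = np.pad(img, 1, mode="edge")
        # Average of the four axis-aligned neighbors (Jacobi-style update).
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        img[mask] = avg[mask]            # known pixels are never modified
    return img
```

Run on a toy grayscale image whose left half is dark and right half is bright, with a masked strip down the middle, the fill converges to a smooth gradient between the two sides rather than reconstructing any real hidden detail, which is precisely the limitation the new algorithm addresses.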
Developers from Virginia Tech and Facebook, led by Jia-Bin Huang, have created an algorithm that achieves high-quality in-painting of the background behind the main objects in the frame.