American developers recently open-sourced a neural network algorithm that turns ordinary photos into three-dimensional scenes with realistically filled-in backgrounds. French developer Cyril Diagne used the code of this algorithm to create a Google Chrome extension that converts standard Instagram posts into three-dimensional, animated ones. The main computation runs on a free public machine-learning server, so the extension can be used even on a fairly low-powered computer.
Algorithms that turn two-dimensional images into three-dimensional ones already exist; once purely exploratory, they can now be found in mainstream applications such as Facebook. But most websites and apps still lack such a feature, and where it is available, it is usually not implemented very well. This is especially true of how the background behind objects is rendered and how objects and background are separated from each other.
In mid-April, a group of American developers from Virginia Tech and Facebook created a new algorithm that efficiently separates foreground objects from the background and then fills in the blank areas of the background using data from neighboring regions. You can learn more about how the original algorithm works in our article.
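The general idea of filling occluded background regions from their surroundings can be illustrated with a toy example. The sketch below is a simple diffusion-style fill, not the authors' learned inpainting model: missing pixels are repeatedly assigned the average of their known neighbors until everything is filled.

```python
def inpaint(grid, mask):
    """Toy background fill: grid is a 2D list of floats, mask is a 2D list
    of booleans where True marks a missing (occluded) pixel. Missing pixels
    are filled with the average of their already-known 4-neighbors."""
    h, w = len(grid), len(grid[0])
    grid = [row[:] for row in grid]
    mask = [row[:] for row in mask]
    changed = True
    while changed:
        changed = False
        for y in range(h):
            for x in range(w):
                if not mask[y][x]:
                    continue
                # Collect values of neighbors that are not themselves missing.
                neighbors = [grid[ny][nx]
                             for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                             if 0 <= ny < h and 0 <= nx < w and not mask[ny][nx]]
                if neighbors:
                    grid[y][x] = sum(neighbors) / len(neighbors)
                    mask[y][x] = False
                    changed = True
    return grid
```

A learned model such as the one described above produces far more plausible textures than this local averaging, but the inputs and outputs have the same shape: a masked image in, a completed background out.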
As with many machine learning algorithms, the authors published not only a paper about it but also the code and documentation. Cyril Diagne of Google Arts & Culture used this code to create a browser extension that animates photos on Instagram, which itself has no such feature.
Just pushed the code of a chrome extension that turns every Instagram posts into 3d images using #3DPhotoInpainting. No GPU needed thanks to @GoogleColab but a bit of patience to set it up 😉
Demo: @parrstudio’s amazing work
Code: https://t.co/59yJUvRHxE #AIUX #Interaction #ML pic.twitter.com/86mMBWdm7V
— Cyril Diagne (@cyrildiagne) April 19, 2020
The algorithm is based on neural networks, and even with an already trained model, processing the several images in the posts on screen requires substantial computing resources. Diagne therefore used a hybrid architecture: the user-facing part runs as a browser extension, while the image processing runs in the cloud on Google Colab, which gives users free access to a powerful GPU and the ability to run arbitrary Python code.
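The server half of such a hybrid setup can be sketched as a small HTTP endpoint: the extension sends the image to process, and the cloud side runs the model and replies with the result. The handler below is a minimal illustration with hypothetical field names (`image_url`, `video_url`); in the real extension the heavy lifting happens in a Colab notebook running the authors' model, and the placeholder line marked in the comment would invoke the GPU-backed pipeline.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class ConvertHandler(BaseHTTPRequestHandler):
    """Accepts a JSON POST like {"image_url": "..."} and returns JSON."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length))
        # Placeholder: in the real pipeline, the GPU-backed 3D-photo model
        # would process request["image_url"] here and upload the result.
        result = {"video_url": request["image_url"] + ".processed.mp4"}
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet


def serve(port=8000):
    """Run the endpoint; on Colab one would expose it via a tunnel."""
    HTTPServer(("", port), ConvertHandler).serve_forever()
```

The browser extension then only needs a single `fetch`-style request per post, which is why the client machine can stay low-powered: all GPU work happens on the far side of this endpoint.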
This is not the first neural network algorithm that works directly in the browser. Earlier we covered a browser-based algorithm for face swapping and for turning sketches into photos, as well as a set of algorithms from Google for face tracking and automatic cropping.