AI has been filling in the gaps for illustrators and photographers for years; quite literally, it intelligently fills gaps with visual content. But the latest tools aim to let an AI help artists from the earliest, blank-canvas stages of a piece. Nvidia's new Canvas tool lets a creator rough in a landscape as paint-by-numbers blobs, then fills it out with convincingly photorealistic (if not exactly gallery-ready) content.
Each color represents a different type of feature: mountains, water, grass, ruins and so on. When the blobs are painted onto the canvas, the rough sketch is passed to a generative adversarial network. GANs essentially pit two AIs against each other: a creator that tries to produce, in this case, a realistic image, and a detector that assesses how realistic that image is. Working together, they produce what they consider a reasonably realistic rendering of what the blobs suggest.
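To make the "each color is a feature type" idea concrete, here is a minimal sketch of the preprocessing step a tool like this would need: converting a painted RGB canvas into a per-pixel class-label map, which is the kind of semantic input a conditional GAN consumes. The palette colors and class names below are illustrative placeholders, not Nvidia's actual mapping.

```python
import numpy as np

# Hypothetical palette: each paint color stands for a semantic class.
# These RGB triples and names are invented for illustration only.
PALETTE = {
    (134, 193, 46): "grass",
    (61, 90, 254): "water",
    (110, 110, 110): "mountain",
    (180, 120, 80): "ruins",
}

def canvas_to_label_map(canvas: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) RGB painting into an (H, W) array of class
    indices -- the per-pixel semantic map a conditional GAN would take."""
    labels = np.zeros(canvas.shape[:2], dtype=np.int64)
    for idx, rgb in enumerate(PALETTE):
        # Mark every pixel whose color exactly matches this palette entry.
        mask = np.all(canvas == np.array(rgb, dtype=canvas.dtype), axis=-1)
        labels[mask] = idx
    return labels

# Paint a tiny 2x2 canvas: grass across the top, water below.
canvas = np.array([
    [[134, 193, 46], [134, 193, 46]],
    [[61, 90, 254], [61, 90, 254]],
], dtype=np.uint8)

label_map = canvas_to_label_map(canvas)
# label_map is [[0, 0], [1, 1]]: class 0 = grass, class 1 = water.
```

In the real pipeline this label map (usually one-hot encoded) conditions the generator, while the discriminator judges whether the generated photo plausibly matches both reality and the layout.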
It’s a far more user-friendly version of the GauGAN prototype (get it?) that was shown at CVPR in 2019. This version is much more polished, produces better images, and can run on any Windows computer with a decent Nvidia graphics card.
This method has been used to create very realistic faces, animals, and landscapes, though there is usually some sort of “tell” that a human can spot. But the Canvas app isn’t trying to make anything indistinguishable from reality; as concept artist Jama Jurabaev explains in the video below, it’s more about being able to experiment freely with imagery that’s more detailed than a scribble.
For example, if you want a crumbling ruin in a field with a river to one side, a quick pencil sketch can tell you little about what the final piece might look like. What if you imagine it facing one direction, and only after two hours of painting and shading discover that the foreground shadows turn awkward because the sun sets on the left side of the image?
If instead you had simply roughed in those features in Canvas, you might have seen the problem immediately and moved on to the next idea. There are even ways to quickly change the time of day, the palette, and other high-level parameters, so different options can be evaluated at a glance.
“I’m no longer afraid of a blank canvas,” said Jurabaev. “I’m not afraid to make very big changes, because I know AI will always help me with the details … I can put all my strength into the creative side of things and leave the rest to Canvas.”
It’s very similar to Google’s Chimera Painter, if you remember that particular nightmare fuel, which used an almost identical process to create fantastical creatures. Instead of snow, rock, and bushes, its blobs stood for hind legs, fur, teeth and so on, which made things a bit more complicated and easy to get wrong.
Still, it beats the alternative; an amateur like me certainly couldn’t even draw the strange tubular animals that emerged from simple blob paintings.
Unlike Chimera Painter, however, this app runs locally, and it requires a powerful Nvidia graphics card to do so. GPUs have long been the hardware of choice for machine learning applications, and something like a real-time GAN definitely needs a beefy one. You can download the app for free here.