
What Is DragGAN AI Editing Tool? Here's All You Need To Know

"DragGAN allows you to reshape images interactively, much like pulling on points in the image to move them exactly where you want"

Source: DragGAN

A group of researchers has published a paper on a new photo-editing tool called 'DragGAN', which lets users reshape images interactively using artificial intelligence.

"Through DragGAN, anyone can deform an image with precise control over where pixels go, thus manipulating the pose, shape, expression, and layout of diverse categories such as animals, cars, humans, landscapes, etc," the research paper states.

The research paper was co-authored by researchers from Google, the Max Planck Institute for Informatics, and MIT CSAIL.

"DragGAN allows you to reshape images interactively, much like pulling on points in the image to move them exactly where you want," Twitter user Bilawal Sidhu said while summarizing the research paper.

"Our approach can hallucinate occluded content, like the teeth inside a lion’s mouth, and can deform following the object’s rigidity, like the bending of a horse leg. We also develop a GUI (Graphical User Interface) for users to interactively perform the manipulation by simply clicking on the image," the research paper said.

Take a look at a few examples of DragGAN shared by Twitter users online:

The research paper states that synthesizing visual content that meets users’ needs often requires flexible and precise controllability of the pose, shape, expression, and layout of the generated objects.

"Existing approaches gain controllability of generative adversarial networks (GANs) via manually annotated training data or a prior 3D model, which often lack flexibility, precision, and generality," it says.

The researchers said that they studied a powerful yet much less explored way of controlling GANs, that is, to "drag" any points of the image to precisely reach target points in a user-interactive manner.

To achieve this, they proposed DragGAN, which consists of two main components: 1) feature-based motion supervision that drives the handle point to move towards the target position, and 2) a new point-tracking approach that leverages the discriminative generator features to keep localizing the position of the handle points.
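To make the two-step loop concrete, here is a heavily simplified toy sketch. The real DragGAN optimizes a GAN's latent code and searches real generator feature maps; in this sketch a synthetic feature map stands in for the generator's features, and all function names (`motion_supervision_step`, `make_features`, `track_point`) are illustrative inventions, not the paper's actual API. It only shows the structure of the iteration: nudge the handle toward the target, regenerate features, then re-localize the handle by a local feature search.

```python
import numpy as np

# Hypothetical toy sketch of DragGAN's drag loop (assumption: a synthetic
# feature map replaces real generator features, so the loop is runnable
# on its own). Names here are illustrative, not the paper's API.

def motion_supervision_step(handle, target, step=1.0):
    """Nudge the handle point a small step toward the target -- the role
    the motion-supervision loss plays in the paper."""
    direction = target - handle
    dist = np.linalg.norm(direction)
    if dist < step:  # close enough: snap onto the target
        return target.astype(float)
    return handle + step * direction / dist

def make_features(point, size=32):
    """Stand-in for the generator's updated feature map: one distinctive
    activation at the (rounded) supervised position."""
    feats = np.zeros((size, size))
    feats[int(round(point[0])), int(round(point[1]))] = 1.0
    return feats

def track_point(feats, guess, radius=2):
    """Re-localize the handle by searching the feature map in a small
    window around its expected position (the point-tracking step)."""
    h, w = feats.shape
    best, best_val = guess, -np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = int(guess[0]) + dy, int(guess[1]) + dx
            if 0 <= y < h and 0 <= x < w and feats[y, x] > best_val:
                best, best_val = np.array([y, x], float), feats[y, x]
    return best

# Drag the handle point (5, 5) onto the target (20, 12).
handle = np.array([5.0, 5.0])
target = np.array([20.0, 12.0])
for _ in range(40):
    supervised = motion_supervision_step(handle, target)  # step 1
    feats = make_features(supervised)                     # "edited" features
    handle = track_point(feats, supervised)               # step 2

print(handle)  # the handle ends up on the target
```

In the actual system, step 1 is a loss that is backpropagated into the GAN's latent code so the image itself deforms, and step 2 searches the generator's own feature maps; the alternation shown here is the part the paper's description corresponds to.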