Research Guide: Advanced Loss Functions for Machine Learning Models

The discriminator loss aims at maximizing the probability assigned to real images while minimizing the probability assigned to fake ones. Minimax loss is the loss used in the paper that introduced GANs (Goodfellow et al., 2014). Minimax is a strategy aimed at reducing the worst possible loss: it simply minimizes the maximum loss. It is also used in two-player games, where each player tries to reduce the maximum loss their opponent can inflict.
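
In the notation of the original paper, the two networks play a minimax game over the value function V(D, G), which can be written as:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\left[\log D(x)\right] +
  \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]
```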

In the case of GANs, the two players are the generator and the discriminator: the generator minimizes the value function while the discriminator maximizes it. Modifying the generator’s loss forms the non-saturating GAN loss, whose aim is to tackle the saturation problem that occurs when the discriminator confidently rejects generated samples early in training. Instead of minimizing the log of the inverted discriminator probabilities, the generator maximizes the log of the discriminator probabilities for the generated images.
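
As a concrete illustration, here is a minimal PyTorch sketch of these losses. The function names are ours, and `d_real`/`d_fake` are assumed to be raw discriminator logits on real and generated batches:

```python
import torch
import torch.nn.functional as F

def discriminator_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Minimax discriminator loss: maximize log D(x) + log(1 - D(G(z)))
    by minimizing its negation, expressed via binary cross-entropy."""
    real_loss = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
    fake_loss = F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    return real_loss + fake_loss

def saturating_generator_loss(d_fake: torch.Tensor) -> torch.Tensor:
    """Original minimax generator loss: minimize log(1 - D(G(z))).
    Its gradient vanishes when the discriminator confidently rejects
    generated samples, which is the saturation problem."""
    return -F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))

def nonsaturating_generator_loss(d_fake: torch.Tensor) -> torch.Tensor:
    """Non-saturating alternative: maximize log D(G(z)) instead,
    which keeps gradients strong while the generator is still weak."""
    return F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
```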

Least squares GAN (LSGAN) loss was developed to counter a challenge of binary cross-entropy loss: generated samples that sit on the correct side of the decision boundary but are still very different from the real images produce almost no gradient. LSGAN replaces the cross-entropy loss with a least squares loss for both the discriminator and the generator. As a result, GANs using this loss function are able to generate higher-quality images than regular GANs.
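
A minimal sketch of the least squares objectives, assuming the 0/1 fake/real labeling scheme from the LSGAN paper and raw (non-sigmoid) discriminator outputs `d_real` and `d_fake` (names are ours):

```python
import torch

def lsgan_discriminator_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    # Push real outputs toward the real label (1) and fake outputs
    # toward the fake label (0); distance is penalized quadratically.
    return 0.5 * (((d_real - 1.0) ** 2).mean() + (d_fake ** 2).mean())

def lsgan_generator_loss(d_fake: torch.Tensor) -> torch.Tensor:
    # The generator pushes its samples toward the real label (1), so
    # even correctly classified fakes that are far from the decision
    # boundary still yield a useful gradient.
    return 0.5 * ((d_fake - 1.0) ** 2).mean()
```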


The Wasserstein loss requires a modification of the GAN architecture in which the discriminator, usually called a critic in this setting, doesn’t perform instance classification. Instead, the critic outputs an unbounded score for each instance and attempts to make that score bigger for real instances than for fake ones.

In this loss function, the critic attempts to maximize the difference between its output on real instances and its output on fake instances, while the generator attempts to maximize the critic’s output on its fake instances. For this difference to approximate the Wasserstein distance, the critic must be kept approximately 1-Lipschitz, which the original paper enforces with weight clipping.
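
A minimal sketch of both objectives, assuming `c_real` and `c_fake` are the critic’s unbounded scores on real and generated batches (names are ours):

```python
import torch

def wgan_critic_loss(c_real: torch.Tensor, c_fake: torch.Tensor) -> torch.Tensor:
    # The critic maximizes E[C(x)] - E[C(G(z))]; we return the negation
    # so a standard optimizer can minimize it. The Lipschitz constraint
    # (e.g. weight clipping, per the original paper) is applied separately.
    return c_fake.mean() - c_real.mean()

def wgan_generator_loss(c_fake: torch.Tensor) -> torch.Tensor:
    # The generator maximizes the critic's score on its samples,
    # i.e. minimizes -E[C(G(z))].
    return -c_fake.mean()
```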

[Figure: performance of GANs trained with the Wasserstein loss]
