losses¶
Custom Keras losses, used by the AshPy executors.

Classes

DHingeLoss: Discriminator Hinge Loss as tf.keras.losses.Loss.
DLeastSquare: Discriminator Least Square Loss as tf.keras.losses.Loss.
DMinMax: Implementation of MinMax Discriminator loss as tf.keras.losses.Loss.
GHingeLoss: Generator Hinge Loss as tf.keras.losses.Loss.
L1: L1 Loss implementation as tf.keras.losses.Loss.
class ashpy.keras.losses.DHingeLoss[source]¶
Bases: tensorflow.python.keras.losses.Loss

Discriminator Hinge Loss as tf.keras.losses.Loss.
See Geometric GAN [1]_ for more details.
The Discriminator Hinge loss is the hinge version of the adversarial loss. The Hinge loss is defined as:
\[L_{\text{hinge}} = \max(0, 1 - t y)\]
where y is the Discriminator output and t is the target class (+1 or -1 in the case of binary classification).
For the case of GANs:
\[L_{D_{\text{hinge}}} = - \mathbb{E}_{(x,y) \sim p_{\text{data}}} [ \min(0, -1 + D(x,y)) ] - \mathbb{E}_{z \sim p_z, y \sim p_{\text{data}}} [ \min(0, -1 - D(G(z),y)) ]\]

[1] Geometric GAN https://arxiv.org/abs/1705.02894
reduction¶ Return the current reduction for this type of loss.
Return type: ReductionV2
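The discriminator hinge formula above can be sketched in plain NumPy. This is an illustrative sketch of the math, not the AshPy API; the function name and the arrays standing in for discriminator outputs are hypothetical:

```python
import numpy as np

def d_hinge_loss(d_real, d_fake):
    # L_D = -E[min(0, -1 + D(x, y))] - E[min(0, -1 - D(G(z), y))]
    real_term = -np.mean(np.minimum(0.0, -1.0 + d_real))
    fake_term = -np.mean(np.minimum(0.0, -1.0 - d_fake))
    return real_term + fake_term

# Confident outputs (real >= +1, fake <= -1) incur zero loss.
print(d_hinge_loss(np.array([2.0, 1.5]), np.array([-2.0, -1.0])))  # 0.0
```

Note how the loss only penalizes the discriminator while its margin is smaller than 1 on either side, which is what distinguishes the hinge formulation from the saturating MinMax loss.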
class ashpy.keras.losses.DLeastSquare[source]¶
Bases: tensorflow.python.keras.losses.Loss

Discriminator Least Square Loss as tf.keras.losses.Loss.
__init__()[source]¶ Least square Loss for Discriminator.
Reference: Least Squares Generative Adversarial Networks [1]_ .
The loss is the Mean Squared Error between the discriminator output on fake samples and 0, plus the Mean Squared Error between the discriminator output on real samples and 1. For the unconditioned case this is:
\[L_{D} = \frac{1}{2} \mathbb{E}[(D(x) - 1)^2 + (0 - D(G(z)))^2]\]
where x are real samples and z is the latent vector.
For the conditioned case this is:
\[L_{D} = \frac{1}{2} \mathbb{E}[(D(x, c) - 1)^2 + (0 - D(G(c), c))^2]\]
where c is the condition and x are real samples.
[1] Least Squares Generative Adversarial Networks https://arxiv.org/abs/1611.04076

Return type: None
call(d_real, d_fake)[source]¶ Compute the Least Square Loss.
Parameters:
d_real – the discriminator output on real samples.
d_fake – the discriminator output on generated (fake) samples.
Return type: Tensor
Returns: tf.Tensor – Loss.
reduction¶ Return the reduction type for this loss.
Return type: ReductionV2
Returns: tf.keras.losses.Reduction – Reduction.
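The least-square formula can be sketched in NumPy as follows (a hypothetical helper illustrating the math, not the AshPy `call` implementation):

```python
import numpy as np

def d_least_square_loss(d_real, d_fake):
    # L_D = 1/2 * E[(D(x) - 1)^2 + (0 - D(G(z)))^2]
    return 0.5 * (np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2))

# A perfect discriminator (1 on real, 0 on fake) has zero loss.
print(d_least_square_loss(np.array([1.0, 1.0]), np.array([0.0])))  # 0.0
```

Because the penalty is quadratic rather than logarithmic, gradients stay informative even for samples the discriminator classifies confidently but wrongly.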
class ashpy.keras.losses.DMinMax(from_logits=True, label_smoothing=0.0)[source]¶
Bases: tensorflow.python.keras.losses.Loss

Implementation of MinMax Discriminator loss as tf.keras.losses.Loss.
\[L_{D} = - \frac{1}{2} \mathbb{E} [\log(D(x)) + \log(1 - D(G(z)))]\]
call(d_real, d_fake)[source]¶ Compute the MinMax Loss.
Play the DiscriminatorMinMax game between the discriminator computed on real inputs and the discriminator computed on fake inputs.
Parameters:
d_real – the discriminator output on real samples.
d_fake – the discriminator output on generated (fake) samples.
Return type: Tensor
Returns: tf.Tensor – Output Tensor.
reduction¶ Return the reduction type of this loss.
Return type: ReductionV2
Returns: tf.keras.losses.Reduction – Reduction.
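A NumPy sketch of the MinMax formula above, including the `from_logits` behavior suggested by the constructor signature (the function name is hypothetical, and `label_smoothing` is omitted for brevity):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_minmax_loss(d_real, d_fake, from_logits=True):
    # L_D = -1/2 * E[log(D(x)) + log(1 - D(G(z)))]
    if from_logits:
        d_real, d_fake = sigmoid(d_real), sigmoid(d_fake)
    return -0.5 * (np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))

# An undecided discriminator (D = 0.5 everywhere) yields log(2) ≈ 0.693.
print(d_minmax_loss(np.array([0.5]), np.array([0.5]), from_logits=False))
```

Working from logits and applying the sigmoid inside the loss is the numerically safer choice, which is presumably why `from_logits` defaults to True.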
class ashpy.keras.losses.GHingeLoss[source]¶
Bases: tensorflow.python.keras.losses.Loss

Generator Hinge Loss as tf.keras.losses.Loss.
See Geometric GAN [1]_ for more details. The Generator Hinge loss is the hinge version of the adversarial loss. The Hinge loss is defined as:
\[L_{\text{hinge}} = \max(0, 1 - t y)\]
where y is the Discriminator output and t is the target class (+1 or -1 in the case of binary classification). The target class of the generated images is +1.
For the case of GANs:
\[L_{G_{\text{hinge}}} = - \mathbb{E}_{z \sim p_z, y \sim p_{\text{data}}} [ \min(0, -1 + D(G(z),y)) ]\]
This can be simply approximated as:
\[L_{G_{\text{hinge}}} = - \mathbb{E}_{z \sim p_z, y \sim p_{\text{data}}} [ D(G(z),y) ]\]

[1] Geometric GAN https://arxiv.org/abs/1705.02894
reduction¶ Return the current reduction for this type of loss.
Return type: ReductionV2
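The approximated generator hinge loss reduces to the negated mean of the discriminator output on fake samples. A minimal NumPy sketch (illustrative, not the AshPy API):

```python
import numpy as np

def g_hinge_loss(d_fake):
    # Approximated form: L_G = -E[D(G(z), y)]
    return -np.mean(d_fake)

# The loss decreases as the discriminator scores fakes more highly.
print(g_hinge_loss(np.array([1.0, 3.0])))  # -2.0
```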
class ashpy.keras.losses.L1[source]¶
Bases: tensorflow.python.keras.losses.Loss

L1 Loss implementation as tf.keras.losses.Loss.
reduction¶ Return the current reduction for this type of loss.
Return type: ReductionV2
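Assuming the L1 loss reduces to the mean absolute error between targets and predictions (the documentation above does not spell out the reduction), a NumPy sketch with a hypothetical function name:

```python
import numpy as np

def l1_loss(y_true, y_pred):
    # Mean absolute error: mean over all elements of |y_true - y_pred|
    return np.mean(np.abs(y_true - y_pred))

print(l1_loss(np.array([1.0, 2.0]), np.array([0.0, 4.0])))  # 1.5
```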