EncoderTrainer

Inheritance Diagram

class ashpy.trainers.gan.EncoderTrainer(generator, discriminator, encoder, generator_optimizer, discriminator_optimizer, encoder_optimizer, generator_loss, discriminator_loss, encoder_loss, epochs, metrics, logdir='/home/docs/checkouts/readthedocs.org/user_builds/ashpy/checkouts/v0.1.3/docs/source/log', post_process_callback=None, log_eval_mode=<LogEvalMode.TEST: 0>, global_step=<tf.Variable 'global_step:0' shape=() dtype=int64, numpy=0>)[source]

Bases: ashpy.trainers.gan.AdversarialTrainer
Primitive Trainer for GANs using an Encoder sub-network. The implementation is designed to be used with BCE losses; to use another loss function, consider subclassing the trainer and overriding the train_step method.
Examples
import shutil
import operator
from ashpy.metrics import EncodingAccuracy
from ashpy.losses.gan import DiscriminatorMinMax, GeneratorBCE, EncoderBCE

def real_gen():
    label = 0
    for _ in tf.range(100):
        yield ((10.0,), (label,))

latent_dim = 100

generator = tf.keras.Sequential([tf.keras.layers.Dense(1)])

left_input = tf.keras.layers.Input(shape=(1,))
left = tf.keras.layers.Dense(10, activation=tf.nn.elu)(left_input)
right_input = tf.keras.layers.Input(shape=(latent_dim,))
right = tf.keras.layers.Dense(10, activation=tf.nn.elu)(right_input)
net = tf.keras.layers.Concatenate()([left, right])
out = tf.keras.layers.Dense(1)(net)
discriminator = tf.keras.Model(inputs=[left_input, right_input], outputs=[out])

encoder = tf.keras.Sequential([tf.keras.layers.Dense(latent_dim)])

# Losses
generator_bce = GeneratorBCE()
encoder_bce = EncoderBCE()
minmax = DiscriminatorMinMax()

epochs = 2

# Fake pre-trained classifier
num_classes = 1
classifier = tf.keras.Sequential(
    [tf.keras.layers.Dense(10), tf.keras.layers.Dense(num_classes)]
)

logdir = "testlog/adversarial/encoder"
metrics = [
    EncodingAccuracy(
        classifier,
        # model_selection_operator=operator.gt,
        logdir=logdir,
    )
]

trainer = EncoderTrainer(
    generator=generator,
    discriminator=discriminator,
    encoder=encoder,
    discriminator_optimizer=tf.optimizers.Adam(1e-4),
    generator_optimizer=tf.optimizers.Adam(1e-5),
    encoder_optimizer=tf.optimizers.Adam(1e-6),
    generator_loss=generator_bce,
    discriminator_loss=minmax,
    encoder_loss=encoder_bce,
    epochs=epochs,
    metrics=metrics,
    logdir=logdir,
)

batch_size = 10
discriminator_input = tf.data.Dataset.from_generator(
    real_gen, (tf.float32, tf.int64), ((1), (1))
).batch(batch_size)
dataset = discriminator_input.map(
    lambda x, y: ((x, y), tf.random.normal(shape=(batch_size, latent_dim)))
)

trainer(dataset)
shutil.rmtree(logdir)
Initializing checkpoint.
[10] Saved checkpoint: testlog/adversarial/encoder/ckpts/ckpt-1
Epoch 1 completed.
[20] Saved checkpoint: testlog/adversarial/encoder/ckpts/ckpt-2
Epoch 2 completed.
Methods

__init__(generator, discriminator, encoder, …)
    Instantiate an EncoderTrainer.

call(dataset)
    Perform the adversarial training.

train_step(real_xy, g_inputs)
    Adversarial training step.
__init__(generator, discriminator, encoder, generator_optimizer, discriminator_optimizer, encoder_optimizer, generator_loss, discriminator_loss, encoder_loss, epochs, metrics, logdir='/home/docs/checkouts/readthedocs.org/user_builds/ashpy/checkouts/v0.1.3/docs/source/log', post_process_callback=None, log_eval_mode=<LogEvalMode.TEST: 0>, global_step=<tf.Variable 'global_step:0' shape=() dtype=int64, numpy=0>)[source]

Instantiate an EncoderTrainer.

Parameters

    generator (tf.keras.Model) – A tf.keras.Model describing the Generator part of a GAN.
    discriminator (tf.keras.Model) – A tf.keras.Model describing the Discriminator part of a GAN.
    encoder (tf.keras.Model) – A tf.keras.Model describing the Encoder part of a GAN.
    generator_optimizer (tf.optimizers.Optimizer) – A tf.optimizers.Optimizer to use for the Generator.
    discriminator_optimizer (tf.optimizers.Optimizer) – A tf.optimizers.Optimizer to use for the Discriminator.
    encoder_optimizer (tf.optimizers.Optimizer) – A tf.optimizers.Optimizer to use for the Encoder.
    generator_loss (ashpy.losses.executor.Executor) – An AshPy Executor to compute the loss of the Generator.
    discriminator_loss (ashpy.losses.executor.Executor) – An AshPy Executor to compute the loss of the Discriminator.
    encoder_loss (ashpy.losses.executor.Executor) – An AshPy Executor to compute the loss of the Encoder.
    epochs (int) – Number of training epochs.
    metrics (List) – List of tf.metrics to measure on training and validation data.
    logdir – Checkpoint and log directory.
    post_process_callback (callable) – A function to post-process the output.
    log_eval_mode – Models' mode to use when evaluating and logging.
    global_step – tf.Variable that keeps track of the training steps.
_measure_performance(dataset)[source]

Measure performance on dataset.

Parameters

    dataset (tf.data.Dataset) –
call(dataset)[source]

Perform the adversarial training.

Parameters

    dataset (tf.data.Dataset) – The adversarial training dataset.
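The dataset passed to the trainer is expected to pair each real (features, label) batch with a batch of latent noise for the generator, so that every element has the structure ((x, y), g_inputs) consumed by train_step. A minimal sketch of building such a dataset with toy shapes, mirroring the example above:

```python
import tensorflow as tf

batch_size = 10
latent_dim = 100

# Toy source of (features, label) pairs, as in the example above.
def real_gen():
    for _ in range(100):
        yield ((10.0,), (0,))

# Batched real data for the discriminator: elements are (x, y) pairs.
discriminator_input = tf.data.Dataset.from_generator(
    real_gen, (tf.float32, tf.int64), ((1,), (1,))
).batch(batch_size)

# Attach a batch of latent noise to every real batch:
# each element becomes ((x, y), g_inputs), matching train_step's signature.
dataset = discriminator_input.map(
    lambda x, y: ((x, y), tf.random.normal(shape=(batch_size, latent_dim)))
)

(x, y), noise = next(iter(dataset))
# x: (10, 1), y: (10, 1), noise: (10, 100)
```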
train_step(real_xy, g_inputs)[source]

Adversarial training step.

Parameters

    real_xy – Input batch as extracted from the discriminator input dataset: a (features, label) pair.
    g_inputs – Batch of noise as generated by the generator input dataset.

Returns

    d_loss, g_loss, e_loss – Discriminator, Generator, and Encoder loss values.
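For readers who want to override train_step with a custom loss, the following is a minimal plain-TensorFlow sketch of the kind of BiGAN-style step the trainer performs: the discriminator scores (x, E(x)) pairs as real and (G(z), z) pairs as fake, and the three networks are updated from the three returned losses. The toy networks, optimizers, and the simplification of real_xy to just the features tensor are assumptions for illustration; this is not the library's exact implementation.

```python
import tensorflow as tf

latent_dim = 8

# Hypothetical toy networks, mirroring the trainer's attributes.
generator = tf.keras.Sequential([tf.keras.layers.Dense(1)])
encoder = tf.keras.Sequential([tf.keras.layers.Dense(latent_dim)])

x_in = tf.keras.layers.Input(shape=(1,))
z_in = tf.keras.layers.Input(shape=(latent_dim,))
h = tf.keras.layers.Concatenate()(
    [tf.keras.layers.Dense(4)(x_in), tf.keras.layers.Dense(4)(z_in)]
)
discriminator = tf.keras.Model([x_in, z_in], tf.keras.layers.Dense(1)(h))

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
opt_d = tf.optimizers.Adam(1e-4)
opt_g = tf.optimizers.Adam(1e-4)
opt_e = tf.optimizers.Adam(1e-4)

def train_step(real_x, g_inputs):
    # Persistent tape: three separate gradients from one forward pass.
    with tf.GradientTape(persistent=True) as tape:
        fake_x = generator(g_inputs)                 # G(z)
        encoded_z = encoder(real_x)                  # E(x)
        d_real = discriminator([real_x, encoded_z])  # D(x, E(x))
        d_fake = discriminator([fake_x, g_inputs])   # D(G(z), z)
        # Discriminator: reals toward 1, fakes toward 0.
        d_loss = bce(tf.ones_like(d_real), d_real) + bce(
            tf.zeros_like(d_fake), d_fake
        )
        g_loss = bce(tf.ones_like(d_fake), d_fake)   # G fools D on fakes
        e_loss = bce(tf.zeros_like(d_real), d_real)  # E fools D on reals
    opt_d.apply_gradients(
        zip(tape.gradient(d_loss, discriminator.trainable_variables),
            discriminator.trainable_variables))
    opt_g.apply_gradients(
        zip(tape.gradient(g_loss, generator.trainable_variables),
            generator.trainable_variables))
    opt_e.apply_gradients(
        zip(tape.gradient(e_loss, encoder.trainable_variables),
            encoder.trainable_variables))
    del tape
    return d_loss, g_loss, e_loss

d, g, e = train_step(tf.random.normal((4, 1)), tf.random.normal((4, latent_dim)))
```

A real subclass would keep the trainer's (features, label) input contract and use AshPy Executors instead of raw Keras losses; the sketch only shows where a custom loss would plug in.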