decoders

Collection of Decoder (i.e., GAN Generator) models.

Classes

Decoder

Primitive Model for all decoder-based (i.e., transpose convolution) architectures.

FCNNDecoder

Fully Convolutional Decoder.

class ashpy.models.convolutional.decoders.Decoder(layer_spec_input_res, layer_spec_target_res, kernel_size, initial_filters, filters_cap, channels, use_dropout=True, dropout_prob=0.3, non_linearity=<class 'tensorflow.python.keras.layers.advanced_activations.LeakyReLU'>)[source]

Bases: ashpy.models.convolutional.interfaces.Conv2DInterface

Primitive Model for all decoder-based (i.e., transpose convolution) architectures.

Notes

Defaults to the DCGAN Generator architecture.

Examples

  • Direct Usage:

    from ashpy.models.convolutional.decoders import Decoder

    dummy_generator = Decoder(
        layer_spec_input_res=(8, 8),
        layer_spec_target_res=(64, 64),
        kernel_size=(5, 5),
        initial_filters=1024,
        filters_cap=16,
        channels=3,
    )
    
  • Subclassing

    import tensorflow as tf

    from ashpy.models.convolutional.decoders import Decoder

    class DummyGenerator(Decoder):
        def call(self, input, training=True):
            print("Dummy Generator!")
            return input
    
    dummy_generator = DummyGenerator(
        layer_spec_input_res=(8, 8),
        layer_spec_target_res=(32, 32),
        kernel_size=(5, 5),
        initial_filters=1024,
        filters_cap=16,
        channels=3,
    )
    dummy_generator(tf.random.normal((1, 100)))
    
    Dummy Generator!
    
__init__(layer_spec_input_res, layer_spec_target_res, kernel_size, initial_filters, filters_cap, channels, use_dropout=True, dropout_prob=0.3, non_linearity=<class 'tensorflow.python.keras.layers.advanced_activations.LeakyReLU'>)[source]

Instantiate the Decoder.

Model Assembly:

1. _add_initial_block(): Ingest the tf.keras.Model inputs and prepare them for _add_building_block().

2. _add_building_block(): Core of the model; the layers specified here are added to the tf.keras.Model multiple times, consuming the hyperparameters generated by _get_layer_spec().

3. _add_final_block(): Final block of the tf.keras.Model; takes the output of _add_building_block() and prepares it for the final output (a sketch of the full flow follows).
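
To make the three-step assembly concrete, here is a minimal, self-contained sketch of the same pattern built with plain Keras layers. The specific layer choices (Dense + Reshape ingestion, stride-2 transpose convolutions, LeakyReLU activations, tanh output) are illustrative assumptions, not the actual ashpy internals.

    import tensorflow as tf

    def build_toy_decoder(input_res=(8, 8), target_res=(64, 64), initial_filters=1024,
                          filters_cap=16, kernel_size=(5, 5), channels=3):
        """Toy decoder mirroring the three assembly steps (illustrative only)."""
        model = tf.keras.Sequential(name="toy_decoder")
        # 1. initial block: ingest a flat latent vector and reshape it to a feature map
        model.add(tf.keras.layers.Dense(input_res[0] * input_res[1] * initial_filters))
        model.add(tf.keras.layers.Reshape((input_res[0], input_res[1], initial_filters)))
        # 2. building blocks: one stride-2 upsampling per resolution doubling,
        #    halving the filters each time but never dropping below filters_cap
        filters, resolution = initial_filters, input_res[0]
        while resolution < target_res[0]:
            filters = max(filters // 2, filters_cap)
            model.add(tf.keras.layers.Conv2DTranspose(filters, kernel_size,
                                                      strides=(2, 2), padding="same"))
            model.add(tf.keras.layers.LeakyReLU())
            resolution *= 2
        # 3. final block: map the last feature maps to the requested output channels
        model.add(tf.keras.layers.Conv2DTranspose(channels, kernel_size,
                                                  strides=(1, 1), padding="same",
                                                  activation="tanh"))
        return model

    toy = build_toy_decoder()
    print(toy(tf.random.normal((1, 100))).shape)  # (1, 64, 64, 3) with these defaults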

Parameters
  • layer_spec_input_res (tuple of (int, int)) – Shape of the _get_layer_spec() input tensors.

  • layer_spec_target_res (tuple of (int, int)) – Shape of the tensor desired as output of _get_layer_spec().

  • kernel_size (tuple of (int, int)) – Kernel used by the convolution layers.

  • initial_filters (int) – Number of filters at the end of the first block.

  • filters_cap (int) – Cap filters to a set amount; in the case of the Decoder this is a floor value, i.e., the minimum number of filters.

  • channels (int) – Channels of the output images (1 for Grayscale, 3 for RGB).

Returns

None

Raises

ValueError – If filters_cap > initial_filters.
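
For instance, a configuration whose cap exceeds the initial filter count should be rejected (hedged example; the exact error message is defined by ashpy):

    from ashpy.models.convolutional.decoders import Decoder

    try:
        Decoder(
            layer_spec_input_res=(8, 8),
            layer_spec_target_res=(64, 64),
            kernel_size=(5, 5),
            initial_filters=16,
            filters_cap=1024,  # cap above initial_filters: invalid for a Decoder
            channels=3,
        )
    except ValueError as err:
        print(f"Rejected: {err}")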

_add_building_block(filters)[source]

Construct the core of the tf.keras.Model.

The layers specified here are added to the tf.keras.Model multiple times, consuming the hyperparameters generated by _get_layer_spec().

Parameters

filters (int) – Number of filters to use for this iteration of the Building Block.
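
The concrete layers are internal to ashpy; as a rough mental model only (an assumption, not the actual implementation), one iteration of the building block can be pictured as an upsampling transpose convolution followed by the configured non-linearity and optional dropout:

    import tensorflow as tf

    def sketch_building_block(filters, kernel_size=(5, 5), use_dropout=True,
                              dropout_prob=0.3, non_linearity=tf.keras.layers.LeakyReLU):
        """Hypothetical stand-in for one _add_building_block() iteration."""
        layers = [
            tf.keras.layers.Conv2DTranspose(filters, kernel_size,
                                            strides=(2, 2), padding="same"),
            non_linearity(),
        ]
        if use_dropout:
            layers.append(tf.keras.layers.Dropout(dropout_prob))
        return layers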

_add_final_block(channels)[source]

Prepare the results of _add_building_block() for the final output.

Parameters

channels (int) – Channels of the output images (1 for Grayscale, 3 for RGB).
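
As a hedged sketch (the real layer choice and output activation are ashpy's), the final block amounts to projecting the last feature maps down to channels while keeping the spatial resolution:

    import tensorflow as tf

    def sketch_final_block(channels, kernel_size=(5, 5)):
        """Hypothetical stand-in for _add_final_block()."""
        # Keep the spatial resolution, only change the channel count.
        return tf.keras.layers.Conv2DTranspose(channels, kernel_size,
                                               strides=(1, 1), padding="same")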

_add_initial_block(initial_filters, input_res)[source]

Ingest the tf.keras.Model inputs and prepare them for _add_building_block().

Parameters
  • initial_filters (int) – Number of filters to use as a base value.

  • input_res (tuple of (int, int)) – Shape of the _get_layer_spec() input tensors.

class ashpy.models.convolutional.decoders.FCNNDecoder(layer_spec_input_res, layer_spec_target_res, kernel_size, initial_filters, filters_cap, channels, use_dropout=True, dropout_prob=0.3, non_linearity=<class 'tensorflow.python.keras.layers.advanced_activations.LeakyReLU'>)[source]

Bases: ashpy.models.convolutional.decoders.Decoder

Fully Convolutional Decoder. Expected input is a feature map.

Examples

  • Direct Usage:

    import tensorflow as tf

    from ashpy.models.convolutional.decoders import FCNNDecoder

    dummy_generator = FCNNDecoder(
        layer_spec_input_res=(8, 8),
        layer_spec_target_res=(64, 64),
        kernel_size=(5, 5),
        initial_filters=1024,
        filters_cap=16,
        channels=3,
    )
    
    print(dummy_generator(tf.zeros((1, 1, 1, 100))).shape)
    
    (1, 64, 64, 3)
    
__init__(layer_spec_input_res, layer_spec_target_res, kernel_size, initial_filters, filters_cap, channels, use_dropout=True, dropout_prob=0.3, non_linearity=<class 'tensorflow.python.keras.layers.advanced_activations.LeakyReLU'>)[source]

Build a Fully Convolutional Decoder.

_add_initial_block(initial_filters, input_res)[source]

Ingest the tf.keras.Model inputs and prepare them for _add_building_block().

Parameters
  • initial_filters (int) – Number of filters to use as a base value.

  • input_res (tuple of (int, int)) – Shape of the _get_layer_spec() input tensors.
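
The FCNN variant differs from Decoder mainly in how it ingests its input: the example above feeds a (1, 1, 1, 100) feature map rather than a flat latent vector. As a hedged sketch (an assumption about the mechanism, not the actual ashpy code), the initial block can be pictured as a transpose convolution that expands the 1x1 feature map to layer_spec_input_res before the building blocks take over:

    import tensorflow as tf

    def sketch_fcnn_initial_block(initial_filters=1024, input_res=(8, 8)):
        """Hypothetical stand-in for FCNNDecoder._add_initial_block()."""
        # A kernel as large as input_res with "valid" padding maps a 1x1xC feature
        # map directly to an input_res feature map with initial_filters channels.
        return tf.keras.layers.Conv2DTranspose(initial_filters, input_res,
                                               strides=(1, 1), padding="valid")

    block = sketch_fcnn_initial_block()
    print(block(tf.zeros((1, 1, 1, 100))).shape)  # (1, 8, 8, 1024)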