encoders

Collection of Encoder (i.e., GAN Discriminator) models.

Classes

Encoder Primitive Model for all encoder (i.e., convolution) based architectures.
FCNNEncoder Fully Convolutional Encoder.
class ashpy.models.convolutional.encoders.Encoder(layer_spec_input_res, layer_spec_target_res, kernel_size, initial_filters, filters_cap, output_shape, use_dropout=True, dropout_prob=0.3, non_linearity=<class 'tensorflow.python.keras.layers.advanced_activations.LeakyReLU'>)[source]

Bases: ashpy.models.convolutional.interfaces.Conv2DInterface

Primitive Model for all encoder (i.e., convolution) based architectures.

Notes

Defaults to the DCGAN Discriminator architecture.
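The constructor's layer-spec and filter parameters drive an internal layer specification built by _get_layer_spec(). The following is a hypothetical, simplified sketch of that logic (the function name, semantics, and progression are assumptions for illustration, not the actual ashpy implementation): the resolution is halved at each block while the filter count doubles from initial_filters, clipped at filters_cap.

```python
# Hypothetical sketch of the layer-spec logic (not the actual ashpy
# implementation): halve the resolution until the target is reached,
# doubling the filters each step and capping them at filters_cap.
def layer_spec(input_res, target_res, initial_filters, filters_cap):
    res, filters = input_res, initial_filters
    while res > target_res:
        yield min(filters, filters_cap)
        res //= 2
        filters *= 2

# From 64x64 down to 8x8, starting at 16 filters, capped at 128.
print(list(layer_spec(64, 8, 16, 128)))  # [16, 32, 64]
# With a lower target resolution, the filter count saturates at the cap.
print(list(layer_spec(64, 4, 32, 128)))  # [32, 64, 128, 128]
```

This also illustrates why filters_cap < initial_filters is rejected with a ValueError: the cap is meant as an upper bound on a sequence that starts at initial_filters.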

Examples

  • Direct Usage:

    dummy_encoder = Encoder(
        layer_spec_input_res=(64, 64),
        layer_spec_target_res=(8, 8),
        kernel_size=5,
        initial_filters=4,
        filters_cap=128,
        output_shape=1,
    )
    
  • Subclassing

    import tensorflow as tf

    class DummyDiscriminator(Encoder):
        def call(self, inputs, training=True):
            print("Dummy Discriminator!")
            # build the model using
            # self._layers and inputs
            return inputs
    
    dummy_discriminator = DummyDiscriminator(
        layer_spec_input_res=(64, 64),
        layer_spec_target_res=(8, 8),
        kernel_size=5,
        initial_filters=16,
        filters_cap=128,
        output_shape=1,
    )
    dummy_discriminator(tf.zeros((1, 64, 64, 3)))
    
    Dummy Discriminator!
    
__init__(layer_spec_input_res, layer_spec_target_res, kernel_size, initial_filters, filters_cap, output_shape, use_dropout=True, dropout_prob=0.3, non_linearity=<class 'tensorflow.python.keras.layers.advanced_activations.LeakyReLU'>)[source]

Instantiate the Encoder.

Parameters:
  • layer_spec_input_res (tuple of (int, int)) – Shape of the input tensors.
  • layer_spec_target_res (tuple of (int, int)) – Shape of the tensor desired as output of _get_layer_spec().
  • kernel_size (int) – Kernel used by the convolution layers.
  • initial_filters (int) – Number of filters to use as a base value.
  • filters_cap (int) – Cap filters to a set amount; for an Encoder this is a ceiling value, i.e., the maximum number of filters.
  • output_shape (int) – Number of units of the last tf.keras.layers.Dense layer.
Returns:

None

Raises:

ValueError – If filters_cap < initial_filters

_add_building_block(filters)[source]

Construct the core of the tf.keras.Model.

The layers specified here get added to the tf.keras.Model multiple times consuming the hyper-parameters generated in the _get_layer_spec().

Parameters: filters (int) – Number of filters to use for this iteration of the Building Block.
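Given the constructor parameters (kernel_size, dropout_prob, non_linearity) and the DCGAN-style default, one building block plausibly pairs a strided convolution with the non-linearity and Dropout. The following is an assumed sketch for illustration, not the actual ashpy implementation:

```python
import tensorflow as tf

# Hypothetical sketch of one encoder building block (assumed, not the
# actual ashpy code): a strided Conv2D halves the spatial resolution,
# followed by the non-linearity and optional Dropout.
def building_block(filters, kernel_size=5, dropout_prob=0.3):
    return [
        tf.keras.layers.Conv2D(filters, kernel_size, strides=2, padding="same"),
        tf.keras.layers.LeakyReLU(),
        tf.keras.layers.Dropout(dropout_prob),
    ]

x = tf.zeros((1, 64, 64, 3))
for layer in building_block(16):
    x = layer(x)
print(x.shape)  # (1, 32, 32, 16)
```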
_add_final_block(output_shape)[source]

Prepare the results of _add_building_block() for the final output.

Parameters: output_shape (int) – Number of units of the last tf.keras.layers.Dense layer.
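Since the parameter is described as the unit count of a final Dense layer, the base Encoder's final block plausibly flattens the last feature map and projects it to output_shape units. A minimal sketch under that assumption (not the actual ashpy code):

```python
import tensorflow as tf

# Hypothetical sketch of the Encoder final block (assumed, not the
# actual ashpy code): Flatten the last feature map, then project it
# to output_shape units, e.g. a single logit for a GAN discriminator.
output_shape = 1
x = tf.zeros((1, 8, 8, 64))
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(output_shape)(x)
print(x.shape)  # (1, 1)
```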
class ashpy.models.convolutional.encoders.FCNNEncoder(layer_spec_input_res, layer_spec_target_res, kernel_size, initial_filters, filters_cap, encoding_dimension)[source]

Bases: ashpy.models.convolutional.encoders.Encoder

Fully Convolutional Encoder.

Outputs a 1x1xencoding_dimension tensor. The output neurons are linear.

Examples

  • Direct Usage:

    import tensorflow as tf

    dummy_encoder = FCNNEncoder(
        layer_spec_input_res=(64, 64),
        layer_spec_target_res=(8, 8),
        kernel_size=5,
        initial_filters=4,
        filters_cap=128,
        encoding_dimension=100,
    )
    print(dummy_encoder(tf.zeros((1, 64, 64, 3))).shape)
    
    (1, 1, 1, 100)
    
__init__(layer_spec_input_res, layer_spec_target_res, kernel_size, initial_filters, filters_cap, encoding_dimension)[source]

Instantiate the FCNNEncoder.

Parameters:
  • layer_spec_input_res (tuple of (int, int)) – Shape of the input tensors.
  • layer_spec_target_res (tuple of (int, int)) – Shape of the tensor desired as output of _get_layer_spec().
  • kernel_size (int) – Kernel used by the convolution layers.
  • initial_filters (int) – Number of filters to use as a base value.
  • filters_cap (int) – Cap filters to a set amount; for an Encoder this is a ceiling value, i.e., the maximum number of filters.
  • encoding_dimension (int) – Dimension of the output encoding.
Returns:

None

Raises:

ValueError – If filters_cap < initial_filters

_add_final_block(output_shape)[source]

Prepare the results of _add_building_block() for the final output.

Parameters: output_shape (int) – Number of units of the last tf.keras.layers.Dense layer.
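The example output shape (1, 1, 1, 100) suggests that FCNNEncoder's overridden final block may replace Flatten + Dense with a convolution spanning the whole remaining spatial extent. A hedged sketch of that idea (an assumption for illustration, not the actual ashpy code):

```python
import tensorflow as tf

# Hypothetical sketch (not the actual ashpy code) of a fully
# convolutional final block: a valid-padding Conv2D whose kernel spans
# the entire remaining 8x8 feature map yields a 1x1 spatial output
# with encoding_dimension channels.
encoding_dimension = 100
x = tf.zeros((1, 8, 8, 64))
x = tf.keras.layers.Conv2D(
    encoding_dimension, kernel_size=8, strides=1, padding="valid"
)(x)
print(x.shape)  # (1, 1, 1, 100)
```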