unet

UNet implementations.

Functions

FUNet

Functional UNet implementation.

Classes

SUNet

Semantic UNet.

UNet

UNet Architecture.

ashpy.models.convolutional.unet.FUNet(input_res, min_res, kernel_size, initial_filters, filters_cap, channels, input_channels=3, use_dropout_encoder=True, use_dropout_decoder=True, dropout_prob=0.3, encoder_non_linearity=tf.keras.layers.LeakyReLU, decoder_non_linearity=tf.keras.layers.ReLU, last_activation=tf.keras.activations.tanh, use_attention=False)[source]

Functional UNet implementation.
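A minimal usage sketch, assuming FUNet returns a callable tf.keras.Model built with the Keras functional API (the return type is not stated in this reference):

    import tensorflow as tf
    from ashpy.models.convolutional.unet import FUNet

    # Assumption: FUNet returns a callable tf.keras.Model.
    model = FUNet(input_res=512,
                  min_res=4,
                  kernel_size=4,
                  initial_filters=64,
                  filters_cap=512,
                  channels=3)
    y = model(tf.ones((1, 512, 512, 3)))
    print(y.shape)  # expected: (1, 512, 512, 3)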

class ashpy.models.convolutional.unet.SUNet(input_res, min_res, kernel_size, initial_filters, filters_cap, channels, use_dropout_encoder=True, use_dropout_decoder=True, dropout_prob=0.3, encoder_non_linearity=tf.keras.layers.LeakyReLU, decoder_non_linearity=tf.keras.layers.ReLU, use_attention=False)[source]

Bases: ashpy.models.convolutional.unet.UNet

Semantic UNet.

__init__(input_res, min_res, kernel_size, initial_filters, filters_cap, channels, use_dropout_encoder=True, use_dropout_decoder=True, dropout_prob=0.3, encoder_non_linearity=tf.keras.layers.LeakyReLU, decoder_non_linearity=tf.keras.layers.ReLU, use_attention=False)[source]

Build the Semantic UNet model.
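A minimal sketch, assuming channels plays the role of the number of semantic classes and the output is softmax-normalized per pixel (consistent with the tanh-or-softmax note on last_activation below); the class count 10 is hypothetical:

    import tensorflow as tf
    from ashpy.models.convolutional.unet import SUNet

    # 10 is a hypothetical number of semantic classes.
    sunet = SUNet(input_res=256,
                  min_res=4,
                  kernel_size=4,
                  initial_filters=64,
                  filters_cap=512,
                  channels=10)
    y = sunet(tf.ones((1, 256, 256, 3)))
    print(y.shape)  # expected: (1, 256, 256, 10)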

class ashpy.models.convolutional.unet.UNet(input_res, min_res, kernel_size, initial_filters, filters_cap, channels, use_dropout_encoder=True, use_dropout_decoder=True, dropout_prob=0.3, encoder_non_linearity=tf.keras.layers.LeakyReLU, decoder_non_linearity=tf.keras.layers.ReLU, normalization_layer=ashpy.layers.instance_normalization.InstanceNormalization, last_activation=tf.keras.activations.tanh, use_attention=False)[source]

Bases: ashpy.models.convolutional.interfaces.Conv2DInterface

UNet Architecture.

Used in Image-to-Image Translation with Conditional Adversarial Nets [1].

Examples

  • Direct Usage:

    import tensorflow as tf
    from ashpy.models.convolutional.unet import UNet

    x = tf.ones((1, 512, 512, 3))
    u_net = UNet(input_res=512,
                 min_res=4,
                 kernel_size=4,
                 initial_filters=64,
                 filters_cap=512,
                 channels=3)
    y = u_net(x)
    print(y.shape)
    print(len(u_net.trainable_variables) > 0)
    
    (1, 512, 512, 3)
    True
    
[1] Image-to-Image Translation with Conditional Adversarial Nets, https://arxiv.org/abs/1611.07004

__init__(input_res, min_res, kernel_size, initial_filters, filters_cap, channels, use_dropout_encoder=True, use_dropout_decoder=True, dropout_prob=0.3, encoder_non_linearity=tf.keras.layers.LeakyReLU, decoder_non_linearity=tf.keras.layers.ReLU, normalization_layer=ashpy.layers.instance_normalization.InstanceNormalization, last_activation=tf.keras.activations.tanh, use_attention=False)[source]

Initialize the UNet.

Parameters
  • input_res (int) – input resolution.

  • min_res (int) – minimum resolution reached at the end of the encoder (the network bottleneck).

  • kernel_size (int) – kernel size used in the network.

  • initial_filters (int) – number of filters of the initial convolution.

  • filters_cap (int) – maximum number of filters.

  • channels (int) – number of output channels.

  • use_dropout_encoder (bool) – whether to use dropout in the encoder module.

  • use_dropout_decoder (bool) – whether to use dropout in the decoder module.

  • dropout_prob (float) – probability of dropout.

  • encoder_non_linearity (Type[Layer]) – non-linearity used in the encoder.

  • decoder_non_linearity (Type[Layer]) – non-linearity used in the decoder.

  • normalization_layer (Type[Layer]) – normalization layer to use (default: InstanceNormalization).

  • last_activation (keras.activations) – last activation function, tanh or softmax (for semantic images).

  • use_attention (bool) – whether to use attention.

call(inputs, training=False)[source]

Forward pass of the UNet model.
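For example, reusing u_net and x from the Direct Usage example above; dropout (when enabled) is only active when training=True is passed:

    y_train = u_net(x, training=True)  # training mode: dropout layers active
    y_infer = u_net(x)                 # training defaults to False (inference)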

get_decoder_block(filters, use_bn=True, use_dropout=False, use_attention=False)[source]

Return a block to be used in the decoder part of the UNet.

Parameters
  • filters – number of filters.

  • use_bn – whether to use batch normalization.

  • use_dropout – whether to use dropout.

  • use_attention – whether to use attention.

Returns

A block to be used in the decoder part (see the usage sketch after get_encoder_block below).

get_encoder_block(filters, use_bn=True, use_attention=False)[source]

Return a block to be used in the encoder part of the UNet.

Parameters
  • filters – number of filters.

  • use_bn – whether to use batch normalization.

  • use_attention – whether to use attention.

Returns

A block to be used in the encoder part.
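A minimal usage sketch for the two block helpers, under the assumption that each returned block is an ordered collection of Keras layers to be applied in sequence (this reference does not state the concrete return type); u_net and x are reused from the Direct Usage example above:

    # Assumption: a block is an iterable of Keras layers.
    encoder_block = u_net.get_encoder_block(filters=128, use_bn=True)
    decoder_block = u_net.get_decoder_block(filters=128, use_dropout=True)

    h = x
    for layer in encoder_block:  # apply the encoder block layer by layer
        h = layer(h)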