autoencoders
Collection of Fully Convolutional Autoencoders.

Classes

Autoencoder      Primitive Model for all convolutional autoencoders.
FCNNAutoencoder  Primitive Model for all fully convolutional autoencoders.
class ashpy.models.convolutional.autoencoders.Autoencoder(layer_spec_input_res, layer_spec_target_res, kernel_size, initial_filters, filters_cap, encoding_dimension, channels)

Bases: tensorflow.python.keras.engine.training.Model
Primitive Model for all convolutional autoencoders.
Examples

Direct Usage:

import tensorflow as tf  # the example assumes TensorFlow is imported

autoencoder = Autoencoder(
    layer_spec_input_res=(64, 64),
    layer_spec_target_res=(8, 8),
    kernel_size=5,
    initial_filters=32,
    filters_cap=128,
    encoding_dimension=100,
    channels=3,
)
encoding, reconstruction = autoencoder(tf.zeros((1, 64, 64, 3)))
print(encoding.shape)
print(reconstruction.shape)
__init__(layer_spec_input_res, layer_spec_target_res, kernel_size, initial_filters, filters_cap, encoding_dimension, channels)

Instantiate the BaseAutoEncoder.

Parameters:
- layer_spec_input_res (tuple of (int, int)) – Shape of the input tensors.
- layer_spec_target_res (tuple of (int, int)) – Shape of the tensor desired as output of _get_layer_spec().
- kernel_size (int) – Kernel size used by the convolution layers.
- initial_filters (int) – Number of filters to use as a base value.
- filters_cap (int) – Cap on the number of filters; for an Encoder this is a ceiling value, i.e. the maximum number of filters.
- encoding_dimension (int) – Dimension of the encoding.
- channels (int) – Number of channels of the reconstructed image.
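Because calling the model returns both the encoding and the reconstruction, training it means applying a reconstruction objective to the second output. The following is a minimal training-step sketch, not part of the ashpy API: the mean-squared-error loss, the Adam optimizer settings, and the dummy batch are assumptions made for illustration.

import tensorflow as tf

from ashpy.models.convolutional.autoencoders import Autoencoder

# Assumed hyperparameters, mirroring the Direct Usage example above.
autoencoder = Autoencoder(
    layer_spec_input_res=(64, 64),
    layer_spec_target_res=(8, 8),
    kernel_size=5,
    initial_filters=32,
    filters_cap=128,
    encoding_dimension=100,
    channels=3,
)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
mse = tf.keras.losses.MeanSquaredError()


@tf.function
def train_step(images):
    """Run one optimization step minimizing an assumed MSE reconstruction loss."""
    with tf.GradientTape() as tape:
        _, reconstruction = autoencoder(images)
        loss = mse(images, reconstruction)
    gradients = tape.gradient(loss, autoencoder.trainable_variables)
    optimizer.apply_gradients(zip(gradients, autoencoder.trainable_variables))
    return loss


# One step on a dummy batch of eight 64x64 RGB images.
print(train_step(tf.zeros((8, 64, 64, 3))))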
class ashpy.models.convolutional.autoencoders.FCNNAutoencoder(layer_spec_input_res, layer_spec_target_res, kernel_size, initial_filters, filters_cap, encoding_dimension, channels)

Bases: tensorflow.python.keras.engine.training.Model
Primitive Model for all fully convolutional autoencoders.
Examples

Direct Usage:

import tensorflow as tf  # the example assumes TensorFlow is imported

autoencoder = FCNNAutoencoder(
    layer_spec_input_res=(64, 64),
    layer_spec_target_res=(8, 8),
    kernel_size=5,
    initial_filters=32,
    filters_cap=128,
    encoding_dimension=100,
    channels=3,
)
encoding, reconstruction = autoencoder(tf.zeros((1, 64, 64, 3)))
print(encoding.shape)
print(reconstruction.shape)
__init__(layer_spec_input_res, layer_spec_target_res, kernel_size, initial_filters, filters_cap, encoding_dimension, channels)

Instantiate the FCNNBaseAutoEncoder.

Parameters:
- layer_spec_input_res (tuple of (int, int)) – Shape of the input tensors.
- layer_spec_target_res (tuple of (int, int)) – Shape of the tensor desired as output of _get_layer_spec().
- kernel_size (int) – Kernel size used by the convolution layers.
- initial_filters (int) – Number of filters to use as a base value.
- filters_cap (int) – Cap on the number of filters; for an Encoder this is a ceiling value, i.e. the maximum number of filters.
- encoding_dimension (int) – Dimension of the encoding.
- channels (int) – Number of channels of the reconstructed image.
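The call signature mirrors Autoencoder; the practical difference is the encoding, which in the fully convolutional variant is expected to keep spatial dimensions rather than being a flat vector. The sketch below simply instantiates both models with the same assumed hyperparameters from the examples above and prints the shapes they return, so the difference can be inspected directly.

import tensorflow as tf

from ashpy.models.convolutional.autoencoders import Autoencoder, FCNNAutoencoder

# Assumed shared hyperparameters, mirroring the Direct Usage examples.
common = dict(
    layer_spec_input_res=(64, 64),
    layer_spec_target_res=(8, 8),
    kernel_size=5,
    initial_filters=32,
    filters_cap=128,
    encoding_dimension=100,
    channels=3,
)
base_autoencoder = Autoencoder(**common)
fcnn_autoencoder = FCNNAutoencoder(**common)

batch = tf.zeros((1, 64, 64, 3))
base_encoding, base_reconstruction = base_autoencoder(batch)
fcnn_encoding, fcnn_reconstruction = fcnn_autoencoder(batch)

# Compare how the two models shape their encodings; both reconstructions
# should match the input resolution and channel count.
print(base_encoding.shape, base_reconstruction.shape)
print(fcnn_encoding.shape, fcnn_reconstruction.shape)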