Available architectures

class koogu.model.architectures.ConvNet(filters_per_layer, **kwargs)

Boilerplate ConvNet-building logic that can be used, with appropriate customization of parameters, to build networks like LeNet, AlexNet, etc.

Parameters:
  • filters_per_layer – (list/tuple of ints) The length of the list/tuple defines the depth of the network and each value in it specifies the number of filters at the corresponding level.

  • pool_sizes – (optional) Must be a list of 2-element tuples (of ints) specifying the factors by which to downscale (vertical, horizontal) following each convolution. The length of the list must match that of filters_per_layer. By default, a pool size of (2, 2) is used throughout.

  • pool_strides – (optional; defaults to the value of pool_sizes) Must have the same structure as pool_sizes, and defines the strides that the pooling operation takes along the vertical and horizontal directions. See the example below.
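
A minimal construction sketch (the filter counts and pool sizes below are illustrative, not prescriptive):

    from koogu.model.architectures import ConvNet

    # A 3-level ConvNet with 16, 32 and 64 filters at successive levels.
    # pool_sizes carries one (vertical, horizontal) tuple per level;
    # pool_strides, left unspecified here, defaults to pool_sizes.
    arch = ConvNet(
        [16, 32, 64],
        pool_sizes=[(2, 2), (2, 2), (1, 2)]
    )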

Other helpful customizations

Parameters:
  • add_batchnorm – (bool; default: False) If True, batch normalization layers will be added following each convolution layer.

  • pooling_type – (optional) By default, average pooling is performed. Set to ‘max’ to use max pooling instead.
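
These can be combined with the core parameters above; a sketch (values illustrative):

    # Same 3-level network, now with batch normalization after each
    # convolution and max pooling in place of the default average pooling.
    arch = ConvNet(
        [16, 32, 64],
        add_batchnorm=True,
        pooling_type='max'
    )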

Koogu-style model customizations

Parameters:
  • preproc

    (optional) Use this to add pre-convolution operations to the model. If specified, must be a list of 2-element tuples, with each tuple containing:

    • the name of the operation (either a compatible Keras layer or a transformation from koogu.data.tf_transformations).

    • a Python dictionary specifying parameters to the operation.

  • dense_layers – (optional) Use this to add fully-connected (dense) layers to the end of the model network. Specify a single integer (the added layer will have that many nodes) or a list of integers to add multiple dense layers connected in sequence (see the example below).

  • add_dense_layer_nonlinearity – (bool; default: False) If True, ReLU activation will be applied to the outputs of the BatchNormalization layer following each added dense layer (as per dense_layers).

  • data_format – One of ‘channels_last’ (default) or ‘channels_first’.
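
A sketch combining these options. The preproc entry shown is hypothetical: it assumes operations may be named by strings (here, the Keras layer 'BatchNormalization'), since the exact naming convention is not spelled out above:

    # preproc: a single (name, parameters) tuple applied before the first
    # convolution ('BatchNormalization' is an assumed/illustrative choice).
    arch = ConvNet(
        [16, 32, 64],
        preproc=[('BatchNormalization', {})],
        dense_layers=[64, 32],               # two dense layers at the end
        add_dense_layer_nonlinearity=True    # ReLU after each BatchNorm
    )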

class koogu.model.architectures.DenseNet(layers_per_block, **kwargs)

DenseNet (Huang et al., 2016). This implementation supports variants both with and without bottleneck layers.

Parameters:
  • layers_per_block – (list/tuple of ints) The length of the list/tuple defines the number of dense-blocks in the network and each value in the list specifies the number of composite function layers (made up of BatchNorm-Conv-ReLU) in the corresponding dense-block.

  • growth_rate – (optional; default: 12) Growth rate of the network, i.e., the number of feature maps produced by each composite function layer within a dense-block.

  • compression – (optional; default: 1.0) Specifies the rate of compression applied in transition blocks. A value of 1.0 means no compression; specify a value below 1.0 to reduce the number of feature maps carried between dense-blocks.

  • with_bottleneck – (bool; default: False) Whether to include bottleneck layers.
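
A minimal construction sketch (block sizes and other values are illustrative):

    from koogu.model.architectures import DenseNet

    # Three dense-blocks with 4, 8 and 6 composite function layers each.
    # Enabling bottleneck layers together with compression < 1.0 yields a
    # DenseNet-BC-style network (Huang et al., 2016).
    arch = DenseNet(
        [4, 8, 6],
        growth_rate=12,
        compression=0.5,
        with_bottleneck=True
    )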

Other helpful customizations

Parameters:
  • quasi_dense – (bool; default: False) If True, feed-forward connections within a dense-block will be reduced, as described in Madhusudhana et al. (2021).

  • pooling_type – (optional) By default, average pooling is performed. Set to ‘max’ to use max pooling instead.

  • pool_sizes – (optional) Must be a list of 2-element tuples (of ints) specifying the factors by which to downscale (vertical, horizontal) in each transition block. The length of the list must be one less than that of layers_per_block. By default, a pool size of (3, 3) is used throughout.

  • pool_strides – (optional; defaults to the value of pool_sizes) Must have the same structure as pool_sizes, and defines the strides that the pooling operation takes along the vertical and horizontal directions. See the example below.
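
A sketch using these options (values illustrative):

    # quasi_dense reduces within-block feed-forward connections
    # (Madhusudhana et al., 2021). Note that pool_sizes takes one entry
    # per transition block, i.e., one less than len(layers_per_block).
    arch = DenseNet(
        [4, 8, 6],
        quasi_dense=True,
        pooling_type='max',
        pool_sizes=[(2, 2), (2, 2)]
    )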

Koogu-style model customizations

Parameters:
  • preproc

    (optional) Use this to add pre-convolution operations to the model. If specified, must be a list of 2-element tuples, with each tuple containing:

    • the name of the operation (either a compatible Keras layer or a transformation from koogu.data.tf_transformations).

    • a Python dictionary specifying parameters to the operation.

  • dense_layers – (optional) Use this to add fully-connected (dense) layers to the end of the model network. Specify a single integer (the added layer will have that many nodes) or a list of integers to add multiple dense layers connected in sequence (see the example below).

  • add_dense_layer_nonlinearity – (bool; default: False) If True, ReLU activation will be applied to the outputs of the BatchNormalization layer following each added dense layer (as per dense_layers).

  • data_format – One of ‘channels_last’ (default) or ‘channels_first’.
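
A sketch combining these options. As with ConvNet, the preproc entry is hypothetical and assumes string-named operations:

    # A hypothetical pre-convolution step, plus a single 32-node dense
    # layer appended ahead of the network's output.
    arch = DenseNet(
        [4, 8, 6],
        preproc=[('BatchNormalization', {})],   # assumed/illustrative op
        dense_layers=32,
        data_format='channels_last'             # the default, shown explicitly
    )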