Common Model Architectures

The following are common model “architectures”, i.e. generic, non-application-specific models that may be applied to a variety of ML problems.

NOTE: You may find additional model architectures at tf.keras.applications

ARM DepthwiseConv2D

mltk.models.shared.dsconv_arm.DepthwiseSeparableConv2D_ARM(input_shape=(50, 10, 1), num_classes=12, filters=64, regularizer=<keras.regularizers.L2 object>)[source]

ARM DepthwiseConv2D for Keyword Spotting

Return type

keras.Model

Fully Connected Auto-encoder

mltk.models.shared.fully_connected_autoencoder.FullyConnectedAutoEncoder(input_shape=(5, 128, 1), dense_units=128, latent_units=8)[source]

Fully Connected Auto-encoder

Return type

keras.Model

MobileNet v1

mltk.models.shared.mobilenet_v1.MobileNetV1(input_shape=(96, 96, 3), num_classes=2, num_filters=8)[source]

MobileNet v1

Return type

keras.Model

MobileNet v2

mltk.models.shared.mobilenet_v2.MobileNetV2(input_shape, alpha=0.35, include_top=True, pooling=None, classes=1000, classifier_activation='softmax', last_block_filters=None, **kwargs)[source]

Instantiates the MobileNetV2 architecture.

Optionally loads weights pre-trained on ImageNet.

Note: each Keras Application expects a specific kind of input preprocessing. For MobileNetV2, call tf.keras.applications.mobilenet_v2.preprocess_input on your inputs before passing them to the model.

  • input_shape – shape tuple, to be specified if you would like to use a model with an input image resolution that is not (224, 224, 3). It should have exactly 3 input channels.

  • alpha

    Float. Controls the width of the network. This is known as the width multiplier in the MobileNetV2 paper, but the name is kept for consistency with the applications.MobileNetV1 model in Keras.

    • If alpha < 1.0, proportionally decreases the number of filters in each layer.

    • If alpha > 1.0, proportionally increases the number of filters in each layer.

    • If alpha = 1, the default number of filters from the paper is used at each layer.

  • include_top – Boolean, whether to include the fully-connected layer at the top of the network. Defaults to True.

  • pooling

    String, optional pooling mode for feature extraction when include_top is False.

    • None means that the output of the model will be the 4D tensor output of the last convolutional block.

    • avg means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor.

    • max means that global max pooling will be applied.

  • classes – Integer, optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified.

  • classifier_activation – A str or callable. The activation function to use on the “top” layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the “top” layer.

  • last_block_filters – The number of filters to use in the last block of the model. If omitted, defaults to 1280, which the standard model uses. Due to hardware constraints, this value must be decreased (< 1024) to be fully optimized by the MVP hardware.

  • **kwargs – For backwards compatibility only.


Returns

A keras.Model instance.

Raises

  • ValueError – in case of invalid argument for weights, or invalid input shape or invalid alpha when weights='imagenet'

  • ValueError – if classifier_activation is not softmax or None when using a pretrained top layer.


ResNet v1

mltk.models.shared.resnet_v1.ResNet10V1(input_shape=(32, 32, 3), num_classes=10, num_filters=16)[source]

ResNet10 v1

Return type

keras.Model