Common Model Architectures

The following are common model “architectures”, i.e. generic, non-application-specific models that may be applied to a variety of ML problems.

NOTE: You may find additional model architectures in tf.keras.applications.

ARM DepthwiseConv2D

DepthwiseSeparableConv2D_ARM(input_shape=(50, 10, 1), num_classes=12, filters=64, regularizer=<L2 regularizer instance>)[source]

ARM DepthwiseConv2D for Keyword Spotting

Return type:

Model

Parameters:
  • input_shape (tuple) – Shape of the input spectrogram, e.g. (50, 10, 1)

  • num_classes (int) – Number of keyword classes to predict

  • filters (int) – Number of filters in the depthwise-separable convolution blocks

  • regularizer – Kernel regularizer applied to the convolution layers
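The appeal of depthwise-separable convolutions on constrained hardware is their parameter count: a standard KxK convolution is factored into a depthwise convolution plus a 1x1 pointwise convolution. A quick arithmetic sketch of the savings (the layer sizes below are illustrative, not taken from this model):

```python
# Parameter counts for a standard vs. a depthwise-separable convolution.
# Standard KxK conv with C_in inputs and C_out outputs: K*K*C_in*C_out weights.
# Depthwise-separable: KxK depthwise (K*K*C_in) + 1x1 pointwise (C_in*C_out).

def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 64, 64                        # illustrative layer sizes
std = standard_conv_params(k, c_in, c_out)        # 3*3*64*64 = 36864
sep = depthwise_separable_params(k, c_in, c_out)  # 3*3*64 + 64*64 = 4672
print(std, sep, round(std / sep, 1))              # ~7.9x fewer parameters
```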

Fully Connected Auto-encoder

FullyConnectedAutoEncoder(input_shape=(5, 128, 1), dense_units=128, latent_units=8)[source]

Fully Connected Auto-encoder

Return type:

Model

Parameters:
  • input_shape (tuple) – Shape of the input tensor, e.g. (5, 128, 1)

  • dense_units (int) – Number of units in the fully connected (dense) layers

  • latent_units (int) – Dimension of the latent (bottleneck) encoding
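A fully connected auto-encoder flattens the input, compresses it through dense layers down to the latent dimension, then expands it back to the input size. The NumPy sketch below is a shape-only stand-in (not the library implementation) using the signature's defaults:

```python
import numpy as np

# Shape-only sketch of a fully connected auto-encoder's data flow
# (NumPy stand-in, not the library implementation). The (5, 128, 1)
# input is flattened, squeezed to latent_units, then reconstructed.
rng = np.random.default_rng(0)

x = rng.normal(size=(1, 5 * 128 * 1))     # flattened input: 640 features
w_enc = rng.normal(size=(640, 128))       # dense_units=128
w_lat = rng.normal(size=(128, 8))         # latent_units=8
w_dec1 = rng.normal(size=(8, 128))
w_dec2 = rng.normal(size=(128, 640))      # reconstruct the original size

latent = np.tanh(x @ w_enc) @ w_lat       # bottleneck: shape (1, 8)
recon = np.tanh(latent @ w_dec1) @ w_dec2 # reconstruction: shape (1, 640)
print(latent.shape, recon.shape)
```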

MobileNet v1

MobileNetV1(input_shape=(96, 96, 3), num_classes=2, num_filters=8)[source]

MobileNet v1

Return type:

Model

Parameters:
  • input_shape (tuple) – Shape of the input image tensor, e.g. (96, 96, 3)

  • num_classes (int) – Number of output classes

  • num_filters (int) – Base number of convolution filters

MobileNet v2

MobileNetV2(input_shape, alpha=0.35, include_top=True, pooling=None, classes=1000, classifier_activation='softmax', last_block_filters=None, **kwargs)[source]

Instantiates the MobileNetV2 architecture.

Optionally loads weights pre-trained on ImageNet.

Note: each Keras Application expects a specific kind of input preprocessing. For MobileNetV2, call tf.keras.applications.mobilenet_v2.preprocess_input on your inputs before passing them to the model.
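For MobileNetV2, preprocess_input rescales pixel values from [0, 255] to [-1, 1]. The equivalent arithmetic, shown without TensorFlow so it runs anywhere:

```python
# tf.keras.applications.mobilenet_v2.preprocess_input scales pixels from
# [0, 255] to [-1, 1]; this is the same arithmetic in plain Python.
def preprocess_pixel(x):
    return x / 127.5 - 1.0

print(preprocess_pixel(0))      # -1.0
print(preprocess_pixel(127.5))  # 0.0
print(preprocess_pixel(255))    # 1.0
```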

Parameters:
  • input_shape – shape tuple, to be specified if you would like to use a model with an input image resolution that is not (224, 224, 3). It should have exactly 3 input channels, e.g. (224, 224, 3).

  • alpha

    Float between 0 and 1, controls the width of the network. This is known as the width multiplier in the MobileNetV2 paper, but the name is kept for consistency with the applications.MobileNetV1 model in Keras.

    • If alpha < 1.0, proportionally decreases the number of filters in each layer.

    • If alpha > 1.0, proportionally increases the number of filters in each layer.

    • If alpha = 1, the default number of filters from the paper is used at each layer.

  • include_top – Boolean, whether to include the fully-connected layer at the top of the network. Defaults to True.

  • pooling

    String, optional pooling mode for feature extraction when include_top is False.

    • None means that the output of the model will be the 4D tensor output of the last convolutional block.

    • avg means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor.

    • max means that global max pooling will be applied.

  • classes – Integer, optional number of classes to classify images into, only to be specified if include_top is True, and if no weights argument is specified.

  • classifier_activation – A str or callable. The activation function to use on the “top” layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the “top” layer.

  • last_block_filters – The number of filters to use in the last block of the model. If omitted, defaults to 1280, which the standard model uses. Due to hardware constraints, this value must be decreased (< 1024) to be fully optimized by the MVP hardware.

  • **kwargs – For backwards compatibility only.
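The alpha width multiplier does not simply scale filter counts; the Keras reference implementation also rounds the scaled value to a hardware-friendly multiple of 8 (via a helper conventionally named _make_divisible). The sketch below mirrors that rounding logic for illustration:

```python
# How the alpha width multiplier scales a layer's filter count. Keras'
# MobileNetV2 rounds the scaled value to a multiple of 8 and never shrinks
# it by more than ~10%; this sketch mirrors that logic for illustration.
def make_divisible(v, divisor=8, min_value=None):
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    if new_v < 0.9 * v:  # rounding down lost >10%: bump up one step
        new_v += divisor
    return new_v

base_filters = 32  # filter count of the first conv block
for alpha in (0.35, 1.0, 1.4):
    print(alpha, make_divisible(base_filters * alpha))
# 0.35 -> 16, 1.0 -> 32, 1.4 -> 48
```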

Returns:

A keras.Model instance.

Raises:
  • ValueError – in case of an invalid argument for weights, or an invalid input shape, alpha, or input resolution (rows) when weights='imagenet'

  • ValueError – if classifier_activation is not 'softmax' or None when using a pretrained top layer.

ResNetv1-10

ResNet10V1(input_shape=(32, 32, 3), num_classes=10, num_filters=16)[source]

ResNetv1-10

Return type:

Model

Parameters:
  • input_shape (tuple) – Shape of the input image tensor, e.g. (32, 32, 3)

  • num_classes (int) – Number of output classes

  • num_filters (int) – Base number of convolution filters
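The defining operation of a ResNet-style model is the residual (identity skip) connection: each block's output is the transformed input added back to the input itself. A minimal stand-in sketch, with a toy transform in place of the conv/batch-norm/ReLU stack:

```python
import numpy as np

# The core of a ResNet block: output = transform(x) + x.
# The identity shortcut lets gradients flow past the transform,
# which is what makes deeper networks trainable.
def residual_block(x, transform):
    return transform(x) + x  # identity shortcut

x = np.ones((4,))
out = residual_block(x, lambda v: 0.5 * v)  # toy transform, not a conv stack
print(out)  # [1.5 1.5 1.5 1.5]
```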

TENet

TENet(input_shape, classes, channels=32, blocks=3, block_depth=4, scales=[9], channel_increase=0.0, include_head=True, return_model=True, input_layer=None, dropout=0.1, *args, **kwargs)[source]

Temporal efficient neural network (TENet)

A network for processing spectrogram data using temporal and depthwise convolutions. The network treats the [T, F] spectrogram as a timeseries shaped [T, 1, F].

Note

When building the model, make sure that the input shape is concrete, i.e. explicitly reshape the samples to [T, 1, F] in the preprocessing pipeline.
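In a NumPy-based preprocessing pipeline (illustrative; your pipeline may differ), making the shape concrete is a single reshape:

```python
import numpy as np

# TENet expects the [T, F] spectrogram reshaped to [T, 1, F] so the
# model can be built with a concrete input shape.
T, F = 49, 40                          # example frame count and mel-bin count
spectrogram = np.zeros((T, F))
sample = spectrogram.reshape(T, 1, F)  # concrete [T, 1, F] shape
print(sample.shape)                    # (49, 1, 40)
```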

Parameters:
  • classes (int) – Number of classes the network is built to categorize

  • channels (int) – Base number of channels in the network

  • blocks (int) – Number of (StridedIBB -> IBB -> …) blocks in the network

  • block_depth (int) – Number of IBBs inside each (StridedIBB -> IBB -> …) block, including the strided IBB

  • scales (List[int]) – The multitemporal convolution filter widths. Should be odd numbers >= 3.

  • channel_increase (float) – If nonzero, the network increases the channel size each time there is a strided IBB block. The increase (each time) is given by channels * channel_increase.

  • include_head (bool) – If true, add a classifier head to the model

  • return_model (bool) – If true, return a Keras model; otherwise return the output layer

  • input_layer (Input) – Use the given layer as the input to the model. If None, create an input layer from the given input_shape

  • dropout (float) – The dropout rate to use when include_head=True

  • input_shape (Union[List[int], Tuple[int]]) – Shape of the input spectrogram, i.e. [T, 1, F]
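Per the channel_increase description above, each strided IBB adds channels * channel_increase channels. A small arithmetic sketch of the resulting per-block channel schedule (the exact accumulation rule is an assumption for illustration):

```python
# Channel count per (StridedIBB -> IBB -> ...) block when channel_increase
# is nonzero: each strided block adds a fixed channels * channel_increase
# increment. Illustrative arithmetic, not the library implementation.
def channel_schedule(channels, blocks, channel_increase):
    step = channels * channel_increase  # fixed increment per strided block
    return [int(channels + i * step) for i in range(blocks)]

print(channel_schedule(32, 3, 0.5))  # [32, 48, 64]
print(channel_schedule(32, 3, 0.0))  # [32, 32, 32] (no growth)
```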

Return type:

Union[Model, Layer]
