Model Operations

The following model operation APIs are available:

  • profile_model
  • train_model
  • evaluate_model
  • evaluate_classifier
  • evaluate_autoencoder
  • update_model_parameters
  • quantize_model
  • view_model
  • summarize_model

The source code for these APIs may be found on GitHub at https://github.com/siliconlabs/mltk/tree/master/mltk/core.

profile_model

mltk.core.profile_model(model, accelerator=None, port=None, use_device=False, build=False, **kwargs)[source]

Profile a model for the given accelerator

This will profile the given model in either a hardware simulator or on a physical device.

Refer to the Model Profiler guide for more details.

Parameters
  • model (Union[MltkModel, TfliteModel, str]) – The model to profile as either a mltk.core.MltkModel or mltk.core.TfliteModel instance, or the path to a .tflite or .mltk.zip file

  • accelerator (Optional[str]) – The name of the hardware accelerator to profile for. If omitted, the reference kernels are used

  • use_device (bool) – Profile on a locally connected embedded device. If omitted, then profile in simulator

  • port (Optional[str]) – Serial port of physical platform. If omitted, attempt to discover automatically

  • build (bool) – If true, build the MLTK Model as a .tflite before profiling

Return type

ProfilingModelResults

Returns

The results of model profiling
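
For example, a minimal profiling call might look like the following sketch (the .tflite path and accelerator name are placeholder values):

from mltk.core import profile_model

# "my_model.tflite" and "mvp" are placeholders for illustration only
profiling_results = profile_model('my_model.tflite', accelerator='mvp')

# Print a human-readable summary of the profiling results
print(profiling_results.to_string())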

class mltk.core.profiling_results.ProfilingModelResults(model, accelerator=None, cpu_clock_rate=0, runtime_memory_bytes=0, layers=None, is_simulated=True)[source]

Results from profiling a model for a specific accelerator

property name: str

Name of the profiled model

Return type

str

property tflite_model: mltk.core.tflite_model.tflite_model.TfliteModel

Associated TfliteModel

Return type

TfliteModel

property accelerator: str

Name of accelerator used for profiling

Return type

str

property is_simulated: bool

True if the simulator was used to generate the results, else False if an embedded device was used

Return type

bool

property cpu_clock_rate: int

Clock rate in hertz

Return type

int

property runtime_memory_bytes: int

Total SRAM in bytes required by the ML library. NOTE: This only includes the ML run-time memory; it does NOT include the memory required by the user application or external pre-processing libraries (e.g. DSP)

Return type

int

property flatbuffer_size: int

Total size in bytes required by the ML model. This is the size of the .tflite flatbuffer file

Return type

int

property layers: List[mltk.core.profiling_results.ProfilingLayerResult]

Profiling details of each model layer

Return type

List[ProfilingLayerResult]

property n_layers: int

Number of layers in model

Return type

int

property input_shape_str: str

Model input shape(s) as a string

Return type

str

property input_dtype_str: str

Model input data type(s) as a string

Return type

str

property output_shape_str: str

Model output shape(s) as a string

Return type

str

property output_dtype_str: str

Model output data type(s) as a string

Return type

str

property ops: int

The total number of ops to execute one model inference

Return type

int

property macs: int

The total number of multiply-accumulate operations to execute one model inference

Return type

int

property accelerator_cycles: int

The total number of accelerator cycles to execute one model inference

Return type

int

property cpu_cycles: int

The total number of CPU cycles to execute one model inference

Return type

int

property time: float

The total time in seconds required to execute one model inference

Return type

float

property energy: float

The total energy required to execute one model inference

Return type

float

property cpu_utilization: float

Percentage of the CPU used to execute the model

Return type

float

property n_unsupported_layers: int

The number of layers not supported by the accelerator

Return type

int

property unsupported_layers: List[mltk.core.profiling_results.ProfilingLayerResult]

Return layers not supported by accelerator

Return type

List[ProfilingLayerResult]

get_summary(include_labels=False, format_units=False, exclude_null=True)[source]

Return a summary of the profiling results as a dictionary

Return type

dict

generate_report(output_dir, format_units)[source]

Generate a profiling report in the given directory

to_dict(format_units=False, exclude_null=True)[source]

Return profiling results as dictionary

Arguments

  • format_units: Format number values to a string with associated units, e.g. 0.0234 -> 23.4m

  • exclude_null: Exclude columns whose values are all null (e.g. don't include energy if no energy numbers were provided)

Return type

dict

to_json(indent=2, format_units=False, exclude_null=True)[source]

Return profiling results as JSON

JSON Format:

{
"summary": { key/value summary of profiling },
"summary_labels": { key/value of printable labeles for each summary field }
"layers": [ {<model layer results>},  ... ]
"layers_labels": { key/value of printable labeles for each layer field }
}

Where the “summary” member contains:

"summary": {
    "name"                : "<Name of model>",
    "accelerator"         : "<Accelerator used>",
    "input_shape"         : "<Model input shapes>",
    "input_dtype"         : "<Model input data types>",
    "output_shape"        : "<Model output shapes>",
    "output_dtype"        : "<Model output data types>",
    "tflite_size"         : <.tflite file size>,
    "runtime_memory_size" : <Estimated TFLM arena size>,
    "ops"                 : <Total # operations>,
    "macs"                : <Total # multiply-accumulate ops>,
    "accelerator_cycles"  : <Total # accelerator cycles>,
    "cpu_cycles"          : <Total estimated CPU cycles>,
    "cpu_utilization"     : <Percentage of CPU required to run an inference>,
    "cpu_clock_rate"      : <CPU clock rate hz>,
    "energy"              : <Total estimated energy in Joules>,
    "time"                : <Total estimated inference time>,
    "n_layers"            : <# of layers in model>,
    "n_unsupported_layers": <# layers unsupported by accelerator>,
    "j_per_op"            : <Joules per operation>,
    "j_per_mac"           : <Joules per multiply-accumulate>,
    "op_per_s"            : <Operations per second>,
    "mac_per_s"           : <Multiply-accumulates per second>,
    "inf_per_s"           : <Inference per second>
}

Where the “layers” member contains:

"layers": [ {
    "index"       : <layer index>,
    "opcode"      : "<kernel opcode>",
    "options"     : "<layer options>",
    "ops"         : <# operations>,
    "macs"        : <# of multiple-accumulate operations>,
    "accelerator_cycles" : <# accelerator cycles>,
    "cpu_cycles"  : <estimated CPU cycles>,
    "energy"      : <estimated energy in Joules>,
    "time"        : <estimated layer execution time>,
    "supported"   : <true/false>,
    "err_msg"     : "<error msg if not supported by accelerator>"
    },
    ...
]
Arguments

  • indent: Amount of indentation to use in the JSON formatting

  • format_units: Format number values to a string with associated units, e.g. 0.0234 -> 23.4m

  • exclude_null: Exclude columns whose values are all null (e.g. don't include energy if no energy numbers were provided)

Returns

JSON formatted string

Return type

str

to_string(format_units=True, exclude_null=True)[source]

Return the profiling results as a string

Arguments

  • format_units: Format number values to a string with associated units, e.g. 0.0234 -> 23.4m

  • exclude_null: Exclude columns whose values are all null (e.g. don't include energy if no energy numbers were provided)

Return type

str
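
As an illustration, the export methods above might be used as follows (assuming profiling_results is a ProfilingModelResults instance returned by profile_model):

# Key/value summary as a Python dictionary
summary = profiling_results.get_summary(format_units=True)

# JSON string containing the "summary" and "layers" members described above
json_str = profiling_results.to_json(indent=2)

# Human-readable text summary
print(profiling_results.to_string())

# Write a profiling report to a directory ("profiling_report" is a placeholder path)
profiling_results.generate_report('profiling_report', True)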

class mltk.core.profiling_results.ProfilingLayerResult(tflite_layer, ops=0, macs=0, cpu_cycles=0, accelerator_cycles=0, accelerator_loads=0, accelerator_optimized_loads=0, accelerator_parallel_loads=0, time=0.0, energy=0.0, error_msg=None, **kwargs)[source]

Profiling results for an individual layer of a model

property is_accelerated: bool

Return true if this layer was executed on the accelerator

Return type

bool

property is_unsupported: bool

Return true if this layer should have been accelerated but exceeds the limits of the accelerator

Return type

bool

property error_msg: str

Error message generated by accelerator if layer was not supported

Return type

str

property tflite_layer: mltk.core.tflite_model.tflite_layer.TfliteLayer

Associated TF-Lite layer

Return type

TfliteLayer

property index: int

Index of this layer in the model

Return type

int

property name: str

Name of the current layer, formatted as: Op<index>-<OpCodeStr>

Return type

str

property opcode_str: str

OpCode as a string

Return type

str

property opcode: tensorflow_lite_support.metadata.schema_py_generated.BuiltinOperator

OpCode

Return type

BuiltinOperator

property macs: int

Number of multiply-accumulate operations required by this layer

Return type

int

property ops: int

Number of operations required by this layer

Return type

int

property accelerator_cycles: int

Number of accelerator clock cycles required by this layer

Return type

int

property accelerator_loads: int

The number of times the accelerator was loaded

Return type

int

property accelerator_optimized_loads: int

The number of times the accelerator was loaded with an optimized program

Return type

int

property accelerator_parallel_loads: int

The number of times the accelerator was loaded with parallelized optimizations

Return type

int

property cpu_cycles: int

Number of CPU clock cycles required by this layer

Return type

int

property time: float

Time in seconds required by this layer

Return type

float

property energy: float

Energy in Joules required by this layer. The energy is relative to the ‘baseline’ energy (i.e. energy used while the device was idling)

Return type

float

property options_str: str

Layer configuration options as a string

Return type

str

property input_shape_str: str

Layer input shape(s) as a string

Return type

str

property input_dtype_str: str

Layer input data type(s) as a string

Return type

str

property output_shape_str: str

Layer output shape(s) as a string

Return type

str

property output_dtype_str: str

Layer output data type(s) as a string

Return type

str

get_summary(include_labels=False, format_units=False, excluded_columns=None)[source]

Return a summary of the layer profiling results as a dictionary

Return type

dict
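
For example, the per-layer results might be inspected as follows (assuming profiling_results is a ProfilingModelResults instance):

# Iterate each layer's profiling details
for layer in profiling_results.layers:
    print(f'{layer.index}: {layer.opcode_str}, ops={layer.ops}, macs={layer.macs}, time={layer.time}s')
    if layer.is_unsupported:
        # Layers not supported by the accelerator report an error message
        print(f'  Unsupported: {layer.error_msg}')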

train_model

mltk.core.train_model(model, weights=None, epochs=None, resume_epoch=0, verbose=None, clean=False, quantize=True, create_archive=True, show=False)[source]

Train a model using Keras and TensorFlow

Parameters
  • model (Union[MltkModel, str]) – An mltk.core.MltkModel instance, the name of an MLTK model, or the path to a model specification script (.py). Note: If the model is in “test mode”, then the model will train for 1 epoch

  • weights (Optional[str]) – Optional file path of model weights to load before training

  • epochs (Optional[int]) – Optional, number of epochs to train model. This overrides the mltk_model.epochs attribute

  • resume_epoch (int) – Optional, resume training at the given epoch

  • verbose (Optional[bool]) – Optional, verbosely print to the logger while training

  • clean (bool) – Optional, Clean the log directory before training

  • quantize (bool) – Optional, quantize the model after training successfully completes

  • create_archive (bool) – Optional, create an archive (.mltk.zip) of the training results and generated model files

  • show (bool) – Optional, show the training results diagram

Return type

TrainingResults

Returns

The model TrainingResults
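
A minimal training sketch ("my_model" is a placeholder for an MLTK model name or specification script):

from mltk.core import train_model

# Train, then quantize and archive the results (the defaults)
training_results = train_model('my_model', epochs=10, clean=True)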

class mltk.core.train_model.TrainingResults(mltk_model, keras_model, training_history)[source]

Container for the model training results

mltk_model

The MltkModel used for training

keras_model: KerasModel

The trained KerasModel

epochs: List[int]

List of integers corresponding to each epoch

params: dict

Dictionary of parameters used for training

history

Dictionary of metrics recorded for each epoch

asdict()[source]

Return the results as a dictionary

Return type

dict

get_best_metric()[source]

Return the best metric from training

Return type

Tuple[str, float]

Returns

Tuple(Name of metric, best metric value)
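
For example, the TrainingResults returned by train_model might be queried as follows:

# Summarize the training session as a dictionary
print(training_results.asdict())

# Retrieve the best metric recorded during training
metric_name, metric_value = training_results.get_best_metric()
print(f'Best {metric_name}: {metric_value}')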

evaluate_model

mltk.core.evaluate_model(model, tflite=False, weights=None, max_samples_per_class=-1, classes=None, dump=False, show=False, verbose=None, callbacks=None, update_archive=True)[source]

Evaluate a trained model

This internally calls mltk.core.evaluate_classifier or mltk.core.evaluate_autoencoder based on the given mltk.core.MltkModel instance.

Refer to the Model Evaluation guide for more details.

Parameters
  • model (Union[MltkModel, str]) – mltk.core.MltkModel instance, name of MLTK model, path to model archive .mltk.zip or model specification script .py

  • tflite (bool) – If True, evaluate the .tflite (i.e. quantized) model file. If False, evaluate the Keras .h5 model (i.e. float)

  • weights (Optional[str]) –

    Optional, load weights from previous training session. May be one of the following:

    • If option omitted then evaluate using output .h5 or .tflite from training

    • Absolute path to a generated weights .h5 file generated by Keras during training

    • The keyword best; find the best weights in <model log dir>/train/weights

    • Filename of .h5 in <model log dir>/train/weights

    Note: This option may only be used if the "--tflite" option is not used

  • max_samples_per_class (int) – By default, all validation samples are used. This option places an upper limit on the number of samples per class that are used for evaluation

  • classes (Optional[List[str]]) – If evaluating a model with the mltk.core.EvaluateAutoEncoderMixin, then this should be a comma-separated list of classes in the dataset. The first element should be considered the “normal” class; every other class is considered abnormal and compared independently. If not provided, then the classes default to: [normal, abnormal]

  • dump (bool) – If evaluating a model with the mltk.core.EvaluateAutoEncoderMixin, then, for each sample, an image will be generated comparing the sample to the decoded sample

  • show (bool) – Display the generated performance diagrams

  • verbose (Optional[bool]) – Enable verbose console logs

  • callbacks (Optional[List]) – List of Keras callbacks to use for evaluation

  • update_archive (bool) – Update the model archive with the evaluation results

Return type

EvaluationResults

Returns

Dictionary of evaluation results
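
A minimal evaluation sketch ("my_model" is a placeholder MLTK model name or archive path):

from mltk.core import evaluate_model

# Evaluate the quantized .tflite generated by training
eval_results = evaluate_model('my_model', tflite=True)
print(eval_results.generate_summary())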

class mltk.core.EvaluationResults(name, model_type='generic', **kwargs)[source]

Holds model evaluation results

Note

The implementation details are specific to the model type

property name: str

The name of the evaluated model

Return type

str

property model_type: str

The type of the evaluated model (e.g. classification, autoencoder, etc.)

Return type

str

generate_summary(include_all=True)[source]

Generate and return a summary of the results as a string

Return type

str

generate_plots(show=True, output_dir=None, logger=None)[source]

Generate plots of the evaluation results

Parameters
  • show – Display the generated plots

  • output_dir (Optional[str]) – Generate the plots in the specified directory. If omitted, they are generated in the model’s logging directory

  • logger (Optional[Logger]) – Optional logger

evaluate_classifier

mltk.core.evaluate_classifier(mltk_model, tflite=False, weights=None, max_samples_per_class=-1, classes=None, verbose=False, show=False, callbacks=None, update_archive=True)[source]

Evaluate a trained classification model

Parameters
  • mltk_model (MltkModel) – MltkModel instance

  • tflite (bool) – If true, then evaluate the .tflite (i.e. quantized) model; otherwise evaluate the Keras model

  • weights (Optional[str]) – Optional weights to load before evaluating (only valid for a Keras model)

  • max_samples_per_class (int) – Maximum number of samples per class to evaluate. This is useful for large datasets

  • classes (Optional[List[str]]) – Specific classes to evaluate

  • verbose (bool) – Enable verbose log messages

  • show (bool) – Show the evaluation results diagrams

  • callbacks (Optional[list]) – Optional callbacks to invoke while evaluating

  • update_archive (bool) – Update the model archive with the eval results

Return type

ClassifierEvaluationResults

Returns

Dictionary containing evaluation results
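
A sketch of calling the classifier evaluation directly; loading the MltkModel instance via mltk.core.load_mltk_model is an assumption here, as that helper is not documented in this section:

from mltk.core import evaluate_classifier, load_mltk_model  # load_mltk_model assumed available

# "my_classifier" is a placeholder MLTK model name
mltk_model = load_mltk_model('my_classifier')
clf_results = evaluate_classifier(mltk_model, tflite=True)
print(clf_results.generate_summary())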

class mltk.core.ClassifierEvaluationResults(*args, **kwargs)[source]

Classifier evaluation results

property classes: List[str]

List of class labels used by evaluated model

Return type

List[str]

property overall_accuracy: float

The overall model accuracy

Return type

float

property class_accuracies: List[float]

List of each class's accuracy

Return type

List[float]

property false_positive_rate: float

The false positive rate

Return type

float

property fpr: float

The false positive rate

Return type

float

property tpr: float

The true positive rate

Return type

float

property roc_auc: List[float]

The area under the curve of the Receiver operating characteristic for each class

Return type

List[float]

property roc_thresholds: List[float]

The list of thresholds used to calculate the Receiver operating characteristic

Return type

List[float]

property roc_auc_avg: List[float]

The average of each class's area under the curve of the Receiver operating characteristic

Return type

List[float]

property precision: List[List[float]]

List of each class's precision at various thresholds

Return type

List[List[float]]

property recall: List[List[float]]

List of each class's recall at various thresholds

Return type

List[List[float]]

property confusion_matrix: List[List[float]]

Calculated confusion matrix

Return type

List[List[float]]

calculate(y, y_pred)[source]

Calculate the evaluation results

Given the expected y values and corresponding predictions, calculate the various evaluation results

Parameters
  • y (Union[ndarray, list]) – 1D array with shape [n_samples] where each entry is the expected class label (aka id) for the corresponding sample e.g. 0 = cat, 1 = dog, 2 = goat, 3 = other

  • y_pred (Union[ndarray, list]) – 2D array as shape [n_samples, n_classes] for categorical or 1D array as [n_samples] for binary, where each entry contains the model output for the given sample. For binary, the values must be between 0 and 1 where < 0.5 maps to class 0 and >= 0.5 maps to class 1
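
A small illustrative sketch of populating the results directly from predictions; the sample data is made up, and supplying the class labels through the constructor keyword is an assumption not confirmed by this reference:

import numpy as np
from mltk.core import ClassifierEvaluationResults

# Illustrative data: 4 samples, 2 classes ("cat" and "dog" are placeholder labels)
y = np.array([0, 1, 1, 0])      # expected class ids
y_pred = np.array([             # per-class model outputs, shape [n_samples, n_classes]
    [0.9, 0.1],
    [0.2, 0.8],
    [0.4, 0.6],
    [0.7, 0.3],
])

# NOTE: passing "classes" via the constructor is assumed, not documented above
results = ClassifierEvaluationResults(name='my_model', classes=['cat', 'dog'])
results.calculate(y, y_pred)
print(results.overall_accuracy)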

generate_summary()[source]

Generate and return a summary of the results as a string

Return type

str

generate_plots(show=True, output_dir=None, logger=None)[source]

Generate plots of the evaluation results

Parameters
  • show – Display the generated plots

  • output_dir (Optional[str]) – Generate the plots in the specified directory. If omitted, they are generated in the model’s logging directory

  • logger (Optional[Logger]) – Optional logger

evaluate_autoencoder

mltk.core.evaluate_autoencoder(mltk_model, tflite=False, weights=None, max_samples_per_class=-1, classes=None, dump=False, verbose=None, show=False, callbacks=None, update_archive=True)[source]

Evaluate a trained auto-encoder model

Parameters
  • mltk_model (MltkModel) – MltkModel instance

  • tflite (bool) – If true, then evaluate the .tflite (i.e. quantized) model; otherwise evaluate the Keras model

  • weights (Optional[str]) – Optional weights to load before evaluating (only valid for a Keras model)

  • max_samples_per_class (int) – Maximum number of samples per class to evaluate. This is useful for large datasets

  • classes (Optional[List[str]]) – Specific classes to evaluate; if omitted, use the ones defined in the given MltkModel (i.e. the model specification)

  • dump (bool) – If true, dump the model output of each sample with a side-by-side comparison to the input sample

  • verbose (Optional[bool]) – Enable verbose log messages

  • show (bool) – Show the evaluation results diagrams

  • callbacks (Optional[list]) – Optional callbacks to invoke while evaluating

  • update_archive (bool) – Update the model archive with the eval results

Return type

AutoEncoderEvaluationResults

Returns

Dictionary containing evaluation results
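
A sketch of evaluating an auto-encoder directly; loading the MltkModel instance via mltk.core.load_mltk_model is an assumption here, as that helper is not documented in this section:

from mltk.core import evaluate_autoencoder, load_mltk_model  # load_mltk_model assumed available

# "my_autoencoder" is a placeholder MLTK model name
mltk_model = load_mltk_model('my_autoencoder')
ae_results = evaluate_autoencoder(mltk_model, tflite=True, classes=['normal', 'abnormal'])
print(ae_results.generate_summary())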

class mltk.core.AutoEncoderEvaluationResults(*args, **kwargs)[source]

Auto-encoder evaluation results

property classes: List[str]

List of class labels used by evaluated model

Return type

List[str]

property overall_accuracy: float

The overall model accuracy

Return type

float

property overall_precision: List[float]

The overall model precision at various thresholds

Return type

List[float]

property overall_recall: List[float]

The overall model recall at various thresholds

Return type

List[float]

property overall_pr_accuracy: float

The overall precision vs. recall

Return type

float

property overall_tpr: List[float]

The overall true positive rate at various thresholds

Return type

List[float]

property overall_fpr: List[float]

The overall false positive rate at various thresholds

Return type

List[float]

property overall_roc_auc: List[float]

The overall area under the curve of the receiver operating characteristic

Return type

List[float]

property overall_thresholds: List[float]

List of thresholds used to calculate the overall stats

Return type

List[float]

property class_stats: dict

Dictionary of per class statistics

Return type

dict

calculate(y, y_pred, all_scores, thresholds=None)[source]

Calculate the evaluation results

Given the list of expected values and corresponding predicted values with scores, calculate the evaluation metrics.

Parameters
  • y (ndarray) – 1D array of expected class ids

  • y_pred (ndarray) – 1D array of scoring results, e.g. y_pred[i] = scoring_function(x[i], y[i])

  • all_scores (ndarray) – 2D array of shape [n_samples, n_classes] of scores comparing the input vs the auto-encoder generated output for each class type (normal, and all abnormal cases)

  • thresholds (Optional[List[float]]) – Optional, list of thresholds to use for calculating the TPR, FPR and AUC

generate_summary()[source]

Generate and return a summary of the results as a string

Return type

str

generate_plots(show=True, output_dir=None, logger=None)[source]

Generate plots of the evaluation results

Parameters
  • show – Display the generated plots

  • output_dir (Optional[str]) – Generate the plots in the specified directory. If omitted, they are generated in the model’s logging directory

  • logger (Optional[Logger]) – Optional logger

update_model_parameters

mltk.core.update_model_parameters(model, params=None, description=None, output=None, accelerator=None)[source]

Update the parameters of a previously trained model

This updates the metadata of a previously trained .tflite model. The parameters are taken from either the given mltk.core.MltkModel’s python script or the given “params” dictionary and added to the .tflite model file.

Note

Only the .tflite metadata is modified. The weights and model structure of the .tflite file are NOT modified.

Refer to the Model Parameters guide for more details.

Parameters
  • model (Union[MltkModel, TfliteModel, str]) – Either the name of a model, an mltk.core.MltkModel or mltk.core.TfliteModel instance, or the path to a .tflite model file or .mltk.zip model archive

  • params (Optional[dict]) – Optional dictionary of parameters to add to the .tflite. If omitted, then the model argument must be an mltk.core.MltkModel instance or model name

  • description (Optional[str]) – Optional description to add to the .tflite

  • output (Optional[str]) – Optional, directory path or file path to the generated .tflite file. If none, then generate in the model log directory. If output='tflite_model', then return the mltk.core.TfliteModel object instead of the .tflite file path

  • accelerator (Optional[str]) – Optional hardware accelerator to use when determining the runtime_memory_size parameter. If None then default to the CMSIS kernels for calculating the required tensor arena size.

Return type

Union[str, TfliteModel]

Returns

The file path to the generated .tflite, OR the TfliteModel object if output='tflite_model'
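
For example (the file path, parameter values, and description are placeholders):

from mltk.core import update_model_parameters

# Embed additional parameters into the .tflite's metadata
tflite_path = update_model_parameters(
    'my_model.tflite',
    params={'volume': 10, 'log_level': 'info'},
    description='Example model description'
)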

quantize_model

mltk.core.quantize_model(model, keras_model=None, output=None, weights=None, build=False, update_archive=None, tflite_converter_override=None)[source]

Generate a quantized .tflite model file

This uses the TensorFlow TfliteConverter internally. This will also add any metadata to the generated .tflite model file.

Refer to the Model Quantization guide for more details.

Parameters
  • model (Union[MltkModel, str]) – mltk.core.MltkModel instance, name of MLTK model, path to model archive (.mltk.zip) or specification script (.py)

  • keras_model (Optional[Model]) –

    Optional, keras_model previously built from given mltk_model

    • If none, then load the Keras model from the MLTK model archive's .h5 file

    • If none and build=True, then build the Keras model rather than loading the archive's .h5

  • output (Optional[str]) –

    Optional, directory path or file path to generated .tflite file.

    • If none then generate in model log directory and update the model’s archive.

    • If output=’tflite_model’, then return the mltk.core.TfliteModel object instead of .tflite file path

    NOTE: The model archive is NOT updated if this argument is supplied

  • weights (Optional[str]) – Optional, path to model weights file. This is only used if no keras_model argument is given.

  • build (bool) – If true and keras_model is None, then first build the Keras model by training for 1 epoch. This is useful for visualizing the .tflite without fully training the model first. NOTE: The model archive is NOT updated if this argument is supplied

  • update_archive (Optional[bool]) – Update the model archive .mltk.zip with the generated .tflite file. If None (default), then determine automatically if the model archive should be updated

  • tflite_converter_override (Optional[dict]) – Dictionary of zero or more mltk.core.TrainMixin.tflite_converter settings used to override the mltk.core.TrainMixin.tflite_converter in the model specification. NOTE: The model archive is NOT updated if this argument is supplied

Return type

Union[str, TfliteModel]

Returns

The file path to the generated .tflite, OR the TfliteModel object if output='tflite_model'
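
For example ("my_model" is a placeholder MLTK model name or archive path):

from mltk.core import quantize_model

# Generate a .tflite in the model's log directory and update its archive
tflite_path = quantize_model('my_model')

# Or return the TfliteModel object instead of the file path
tflite_model = quantize_model('my_model', output='tflite_model')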

view_model

mltk.core.view_model(model, host=None, port=None, test=False, build=False, tflite=False, timeout=7.0)[source]

View an interactive graph of the given model in a web browser

Refer to the Model Visualization guide for more details.

Parameters
  • model (Union[str, MltkModel, Model, TfliteModel]) – The model to view; either an mltk.core.MltkModel, Keras Model, or mltk.core.TfliteModel instance, the name of an MLTK model, or the path to a model file or model archive

  • host (Optional[str]) – Optional, host name of local HTTP server

  • port (Optional[int]) – Optional, listening port of local HTTP server

  • test (bool) – Optional, if true, load the previously generated test model

  • build (bool) – Optional, if true, build the MLTK model rather than loading the previously trained model

  • tflite (bool) – If true, view the .tflite model; otherwise view the Keras model

  • timeout (float) – Amount of time to wait before terminating the HTTP server
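
For example ("my_model" is a placeholder MLTK model name; the port is arbitrary):

from mltk.core import view_model

# Open an interactive graph of the quantized model in a web browser
view_model('my_model', tflite=True, port=8080)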

summarize_model

mltk.core.summarize_model(model, tflite=False, build=False, test=False, built_model=None)[source]

Generate a summary of the given model and return the summary as a string

Refer to the Model Summary guide for more details.

Parameters
Return type

str

Returns

A summary of the given model as a string
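
For example ("my_model" is a placeholder MLTK model name or archive path):

from mltk.core import summarize_model

# Print a text summary of the quantized .tflite model
print(summarize_model('my_model', tflite=True))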