How do I run my model on an embedded device?

After training a model, a .tflite model file is generated (see the FAQ "Where is my trained model?" for more details).

This file is programmed to the embedded device, where it is loaded and executed by the TensorFlow Lite Micro (TFLM) interpreter.

The .tflite file can be thought of as a binary blob. It simply needs to be converted to a uint8_t C array, which can then be given directly to the TFLM interpreter.
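For illustration, below is a minimal sketch of handing such an array to the TFLM interpreter. The array name, arena size, and registered ops are assumptions for the example (the array could be produced with a tool such as xxd -i), and the exact TFLM API varies slightly between versions:

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// The .tflite file converted to a C array (e.g., xxd -i my_model.tflite)
extern const uint8_t my_model_tflite[];

// Scratch memory for the model's tensors; the required size is model-dependent
constexpr int kTensorArenaSize = 10 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];

void run_inference() {
  // Map the raw bytes onto the flatbuffer schema
  const tflite::Model* model = tflite::GetModel(my_model_tflite);

  // Register only the ops the model actually uses
  static tflite::MicroMutableOpResolver<2> resolver;
  resolver.AddFullyConnected();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena, kTensorArenaSize);
  interpreter.AllocateTensors();

  // Fill interpreter.input(0) with sensor data, then run the model
  interpreter.Invoke();
  // Read results from interpreter.output(0)
}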

There are several different ways to deploy your model to an embedded device, including:

Simplicity Studio

From your Simplicity Studio project, replace the default model by renaming your .tflite file to 1_<your model name>.tflite and copying it into the project's config/tflite folder. (Simplicity Studio sorts the models alphabetically in ascending order; the 1_ prefix forces your model to come first.) After a new .tflite file is added to the project, Simplicity Studio automatically uses the flatbuffer converter tool to convert it into a C file, which is then added to the project.
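The generated file is essentially the raw .tflite bytes expressed as a C array. A hypothetical excerpt for illustration (the actual symbol names depend on the SDK version):

#include <stdint.h>

// First bytes of the flatbuffer only; a real model continues for many kilobytes
const uint8_t sl_tflite_model_array[] = {
  0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33, // "TFL3" identifier at byte offset 4
};
const uint32_t sl_tflite_model_len = sizeof(sl_tflite_model_array);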

Refer to the online documentation for more details.

CMake

The MLTK features several example applications. These applications can be built with VS Code or the CMake command line.

Supported applications, such as the Model Profiler and the Audio Classifier, allow for defining build options that specify the path to the .tflite model file.

For example, add the following to <mltk repo root>/user_options.cmake:

mltk_set(MODEL_PROFILER_MODEL "~/workspace/my_model.tflite")
mltk_set(AUDIO_CLASSIFIER_MODEL "~/workspace/my_model.tflite")

Command line

The MLTK features several example applications.

Supported applications, such as the Model Profiler and the Audio Classifier, allow for overriding the default .tflite model built into the application. When the application starts, it checks the end of flash memory for a .tflite model file. If one is found, the model at the end of flash is used instead of the default model.
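For intuition, here is a sketch of how such a startup check might look. The flash addresses, slot size, and names below are assumptions for illustration, not the MLTK's actual implementation; the grounded detail is that a valid .tflite flatbuffer carries the "TFL3" identifier at byte offset 4:

#include <cstdint>
#include <cstring>

// Assumed flash layout, for illustration only
constexpr uint32_t kFlashEnd      = 0x00100000; // e.g., a 1 MB part
constexpr uint32_t kModelSlotSize = 0x00020000; // e.g., 128 kB reserved at the end of flash
static const uint8_t* const kModelSlot =
    reinterpret_cast<const uint8_t*>(kFlashEnd - kModelSlotSize);

// Return the model the application should load: the one programmed to the
// end of flash if present, otherwise the default model built into the binary
const uint8_t* select_model(const uint8_t* default_model) {
  if (std::memcmp(kModelSlot + 4, "TFL3", 4) == 0) {
    return kModelSlot;
  }
  return default_model;
}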

To write the model to flash, use the command:

mltk update_params <model name> --device
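For example, assuming a model named my_model:

mltk update_params my_model --device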

Refer to the command’s help for more details:

mltk update_params --help