TensorFlow-Lite Micro Python Wrapper

This wrapper provides access to the TensorFlow-Lite Micro (TFLM) C++ interpreter from a Python script.

This is useful because it allows .tflite model files to be executed from a Python script running on Windows or Linux (i.e. without requiring an embedded device).

It also provides useful information about the .tflite model, such as:

  • Required working memory (i.e. RAM)

  • Whether any layers of the model are unsupported by TFLM

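As a rough illustration of the kind of information reported, the summary might be modeled as follows. Note that the class and field names below are purely hypothetical and are not the actual MLTK API:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical container for the information listed above;
# names are illustrative only, NOT the actual MLTK API.
@dataclass
class TflmModelReport:
    # Required working memory (tensor arena RAM), in bytes
    runtime_memory_bytes: int
    # Layers that TFLM cannot execute (empty if all are supported)
    unsupported_layers: List[str] = field(default_factory=list)

    @property
    def is_supported(self) -> bool:
        return not self.unsupported_layers

report = TflmModelReport(runtime_memory_bytes=48 * 1024)
print(report.is_supported)          # → True
print(report.runtime_memory_bytes)  # → 49152
```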
This wrapper is made accessible to a Python script via the TfliteMicro Python API, which loads the C++ wrapper shared library into the Python runtime.

Source Code

  • Python wrapper - This makes the TensorFlow-Lite Micro C++ library accessible to Python

  • TensorFlow-Lite Micro - This is the TensorFlow-Lite Micro C++ library plus some additional helpers to aid development (NOTE: The actual TFLM library is downloaded by the build scripts)

  • TfliteMicroModel - This is a C++ helper library that makes it easier for applications to interface with the TFLM library

  • Python API - Python package that loads this C++ wrapper

Building the Wrapper

This wrapper comes pre-built when installing the MLTK Python package, e.g.:

pip install silabs-mltk

Automatic Build

This wrapper is automatically built when installing from source, e.g.:

git clone https://github.com/siliconlabs/mltk.git
cd mltk
pip install -e .

Manual build via MLTK command

To manually build this wrapper, issue the MLTK command:

mltk build tflite_micro_wrapper

Manual build via CMake

This wrapper can also be built via CMake using Visual Studio Code or the Command Line.

To build the wrapper, the user_options.cmake file needs to be modified.

Create the file <mltk repo root>/user_options.cmake and add:

mltk_set(MLTK_TARGET mltk_tflite_micro_wrapper)


NOTE: You must remove this option and clean the build directory before building the example applications.

Then configure the CMake project using the Windows/Linux GCC toolchain and build the target: mltk_tflite_micro_wrapper.