# Why MLTK?

The MLTK can be thought of as a collection of Python scripts that simplify the development of embedded machine learning models
using [Google Tensorflow](https://www.tensorflow.org/) and [Tensorflow-Lite Micro](https://github.com/tensorflow/tflite-micro).

So, why use the MLTK instead of directly using Tensorflow?


## Only a single Python script and a few commands are needed

To create an ML model using the MLTK, all you need is a single Python script and a few commands.

Everything needed to [train](./guides/model_training.md), [evaluate](./guides/model_evaluation.md), [quantize](./guides/model_quantization.md), [profile](./guides/model_profiler.md), [summarize](./guides/model_summary.md), and [visualize](./guides/model_visualizer.md) a model
is defined in a single Python script called the [model specification](./guides/model_specification.md) file. This file is then invoked by the various MLTK [commands](./command_line/index.md).
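
For reference, a minimal model specification might look something like the sketch below. It follows the pattern used by the MLTK model specification guide (an `MltkModel` subclass combined with mixins such as `TrainMixin`), but the dataset configuration is omitted and the layer choices and attribute values are purely illustrative; consult the guide for the exact, complete API.

```python
# my_model.py - illustrative model specification sketch.
# The mixin names follow the MLTK docs; the layers/values are made up
# and the dataset configuration is omitted for brevity.
import mltk.core as mltk_core
from tensorflow.keras import layers, models


class MyModel(
    mltk_core.MltkModel,                # base model container
    mltk_core.TrainMixin,               # training settings/APIs
    mltk_core.DatasetMixin,             # dataset settings/APIs
    mltk_core.EvaluateClassifierMixin,  # classifier evaluation settings/APIs
):
    pass


my_model = MyModel()
my_model.version = 1
my_model.description = 'Example classifier'
my_model.epochs = 50
my_model.batch_size = 32


def my_model_builder(model: MyModel):
    # Standard Keras model definition and compilation
    keras_model = models.Sequential([
        layers.Conv2D(8, 3, activation='relu', input_shape=(32, 32, 1)),
        layers.GlobalAveragePooling2D(),
        layers.Dense(4, activation='softmax'),
    ])
    keras_model.compile(
        optimizer='adam',
        loss='categorical_crossentropy',
        metrics=['accuracy'],
    )
    return keras_model


my_model.build_model_function = my_model_builder
```

This single file is then referenced by name from the various MLTK commands, e.g. `mltk train my_model`.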


### Comparison with other solutions

Other projects that directly use the [Tensorflow / Keras API](https://www.tensorflow.org/api_docs/python/tf/keras) can be much more convoluted.

Consider the [Tensorflow-Lite Micro Examples](https://github.com/tensorflow/tflite-micro/tree/main/tensorflow/lite/micro/examples). Training the models for these examples requires numerous Python scripts and advanced knowledge of Tensorflow. The following table compares the MLTK and TFLM solutions:

| Example Name | MLTK Solution                                                                                                                     | Tensorflow-Lite Micro Solution                                                                                               |
| ------------ | --------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------- |
| micro_speech | [tflite_micro_speech.py](https://github.com/siliconlabs/mltk/blob/master/mltk/models/tflite_micro/tflite_micro_speech.py)         | [micro_speech/train](https://github.com/tensorflow/tflite-micro/tree/main/tensorflow/lite/micro/examples/micro_speech/train) |
| magic_wand   | [tflite_micro_magic_wand.py](https://github.com/siliconlabs/mltk/blob/master/mltk/models/tflite_micro/tflite_micro_magic_wand.py) | [magic_wand/train](https://github.com/tensorflow/tflite-micro/tree/main/tensorflow/lite/micro/examples)     |



Another example is the [TinyML Benchmark](https://github.com/mlcommons/tiny/tree/master/benchmark). These benchmarks also require numerous Python scripts and advanced Tensorflow knowledge to train the models:


| Example Name         | MLTK Solution                                                                                                         | TinyML Benchmark Solution                                                                                     |
| -------------------- | --------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- |
| anomaly_detection    | [anomaly_detection.py](https://github.com/siliconlabs/mltk/blob/master/mltk/models/tinyml/anomaly_detection.py)       | [anomaly_detection](https://github.com/mlcommons/tiny/tree/master/benchmark/training/anomaly_detection)       |
| image_classification | [image_classification.py](https://github.com/siliconlabs/mltk/blob/master/mltk/models/tinyml/image_classification.py) | [image_classification](https://github.com/mlcommons/tiny/tree/master/benchmark/training/image_classification) |
| keyword_spotting     | [keyword_spotting.py](https://github.com/siliconlabs/mltk/blob/master/mltk/models/tinyml/keyword_spotting.py)         | [keyword_spotting](https://github.com/mlcommons/tiny/tree/master/benchmark/training/keyword_spotting)         |
| visual_wake_words    | [visual_wake_words.py](https://github.com/siliconlabs/mltk/blob/master/mltk/models/tinyml/visual_wake_words.py)       | [visual_wake_words](https://github.com/mlcommons/tiny/tree/master/benchmark/training/visual_wake_words)       |



## Lots of tools, all optional

The MLTK offers several tools and utilities for analyzing and modifying model files. Each of these tools is optional and independent of the others.
Additionally, many of the tools support `.tflite` model files that were generated _outside_ of the MLTK (i.e. the `.tflite` model file need not be generated by the MLTK).


| Tool Name                                                                   | External .tflite Supported? | Description                                                      |
| --------------------------------------------------------------------------- | --------------------------- | ---------------------------------------------------------------- |
| [Model Profiler](https://siliconlabs.github.io/mltk/docs/guides/model_profiler.html)                                | Yes                         | Calculate model run-time statistics such as latency, RAM usage, etc. |
| [Model Summary](https://siliconlabs.github.io/mltk/docs/guides/model_summary.html)                                  | Yes                         | Generate text summary of model                                   |
| [Model Visualizer](https://siliconlabs.github.io/mltk/docs/guides/model_visualizer.html)                            | Yes                         | View the model in an interactive graph                            |
| [Model Parameters](https://siliconlabs.github.io/mltk/docs/guides/model_parameters.html)                            | Yes                         | Embed parameters into `.tflite` model file                       |
| [Model Trainer](https://siliconlabs.github.io/mltk/docs/guides/model_training.html)                                 | No                          | Train model using Tensorflow                                     |
| [Model Evaluator](https://siliconlabs.github.io/mltk/docs/guides/model_evaluation.html)                             | No                          | Evaluate the accuracy of a `.tflite` model                        |
| [Model Quantizer](https://siliconlabs.github.io/mltk/docs/guides/model_quantization.html)                           | No                          | Quantize model using Tensorflow-Lite Converter                   |
| [Audio Visualizer](https://siliconlabs.github.io/mltk/docs/audio/audio_utilities.html#audio-visualization-utility)  | N/A                         | View generated spectrograms in real-time                         |
| [Audio Classifier](https://siliconlabs.github.io/mltk/docs/audio/audio_utilities.html#audio-classification-utility) | No                          | Classify real-time audio from a microphone                       |
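
As a rough example, analyzing an externally generated `.tflite` file from Python might look like the sketch below. It uses the `profile_model` and `summarize_model` functions from `mltk.core`; the exact arguments and return types may differ between MLTK versions, so refer to the linked guides for details.

```python
# Sketch: analyze a .tflite that was generated outside of the MLTK.
# Function names come from the mltk.core API; arguments may vary by version.
from mltk.core import profile_model, summarize_model

# Profile the model and print run-time estimates (latency, RAM, etc.)
results = profile_model('my_model.tflite')
print(results)

# Generate a text summary of the model's layers
print(summarize_model('my_model.tflite'))
```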

## C++ Python wrappers

The MLTK allows for sharing C/C++ code between the embedded device and host Python scripts.  
This is useful as it allows the _exact_ same data pre-processing algorithms to be shared between the embedded device and the model training scripts.
This ensures the embedded ML model "sees" the same data samples that were used to train the model, which should lead to more accurate predictions.
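
Conceptually, the training-side data pipeline calls into the same compiled front-end that runs on the device. The sketch below is purely illustrative: the `frontend` object and its `process()` method are placeholders for whatever the shared C++ wrapper actually exposes (see the Audio Feature Generator guide below for the real API).

```python
# Illustrative sketch only: 'frontend' is assumed to wrap the same C++
# feature-generation code that runs on the embedded target, so training
# samples are pre-processed exactly as they will be at run-time.
import numpy as np


def audio_to_features(frontend, audio: np.ndarray) -> np.ndarray:
    """Convert a raw audio buffer into the spectrogram the model consumes."""
    return frontend.process(audio)
```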

See the [Audio Feature Generator](./audio/audio_feature_generator.md) for details on how this is done to create spectrograms from streaming audio.

Also see the [C++ Development Guide](./cpp_development/index.md) for details on how to build wrappers.



## Embedded model parameters

The MLTK allows for embedding custom parameters into the generated `.tflite` model file.  
This is useful because it enables the ML model developer to distribute training settings
to the embedded device using a single file. It also ensures that the settings used to train
the model are in lock-step with the `.tflite` model file.
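
For instance, continuing the model specification sketch shown earlier, a few application-level settings might be attached like this. The `model_parameters` dictionary is part of the MLTK model object, but the specific keys and values below are arbitrary examples:

```python
# Sketch: embed custom key/value settings into the generated .tflite file.
# (Continues the 'my_model' specification sketch from earlier; the keys and
# values here are arbitrary examples.)
my_model.model_parameters['volume_db'] = 5.0
my_model.model_parameters['average_window_duration_ms'] = 1000
my_model.model_parameters['detection_threshold'] = 165
```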

See the [Model Parameters](./guides/model_parameters.md) guide for more details on how this works.

## Integration with Tensorflow

All model layout/architecture design and training is done using stock-standard [Tensorflow](https://www.tensorflow.org/).  
The details are contained within a [single Python script](./guides/model_specification.md) and exposed via a simple API, [TrainMixin](mltk.core.TrainMixin).
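
Training can then be started from the command line or programmatically. A minimal sketch using the `mltk.core.train_model` API, assuming a model specification named `my_model` is available on the model search path, might be:

```python
# Sketch: train the model defined in the 'my_model' specification.
# train_model() is the programmatic counterpart of the 'mltk train' command;
# the returned results object summarizes the training session.
from mltk.core import train_model

results = train_model('my_model')
print(results)
```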


## Integration with Tensorflow-Lite

[Model quantization](./guides/model_quantization.md) is automatically done at the end of training using the standard [Tensorflow-Lite Converter](https://www.tensorflow.org/lite/convert).  
The MLTK also features a Python API, [TfliteModel](mltk.core.TfliteModel), to access the contents of a generated `.tflite` model file.
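
For example, inspecting a quantized model from Python might look like the following sketch. The method and property names follow the TfliteModel API reference, but check the reference for the exact signatures in your MLTK version.

```python
# Sketch: load a .tflite flatbuffer and inspect its contents.
# Method/property names follow the TfliteModel API docs; verify against
# the reference for your MLTK version.
from mltk.core import TfliteModel

tflite_model = TfliteModel.load_flatbuffer_file('my_model.tflite')
for layer in tflite_model.layers:
    print(layer)  # e.g. op code, input/output tensor shapes
```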


## Integration with Tensorflow-Lite Micro

The MLTK features a [C++ Python wrapper](https://github.com/siliconlabs/mltk/tree/master/cpp/tflite_micro_wrapper) to execute `.tflite` model files in the [Tensorflow-Lite Micro Interpreter](https://github.com/tensorflow/tflite-micro) from a Python script.  
The Python wrapper is accessible via the Python API, [TfliteMicro](mltk.core.tflite_micro.TfliteMicro).

This is useful as it allows for determining whether a given `.tflite` model is supported by TFLM and how much RAM it will consume.
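
A rough sketch of how this might be used from Python follows. The `TfliteMicro` entry point is documented in the API reference, but the method names below (`load_tflite_model`, `unload_model`) and the `details` attribute are assumptions used to illustrate the flow; check the reference for the real signatures.

```python
# Illustrative sketch only: the method names below are assumptions, not the
# confirmed TfliteMicro API. The intent is to show loading a .tflite into the
# TFLM interpreter from Python to check support and RAM usage.
from mltk.core.tflite_micro import TfliteMicro

tflm_model = TfliteMicro.load_tflite_model('my_model.tflite')
try:
    print(tflm_model.details)  # e.g. required tensor-arena (RAM) size
finally:
    TfliteMicro.unload_model(tflm_model)
```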


## Integration with the Gecko SDK

The [Gecko SDK](https://docs.silabs.com/gecko-platform/latest/machine-learning/tensorflow/overview) is able to parse the [embedded parameters](./guides/model_parameters.md) in
a `.tflite` model file and generate the corresponding source code.

The MLTK [example applications](./cpp_development/examples/index.md) can also be loaded into Silicon Labs [Simplicity Studio](./cpp_development/simplicity_studio.md).


## Support for cloud development

MLTK model development can be done locally or in the cloud using the following features:

### Training via SSH

The MLTK features the option to train models via [SSH](./guides/model_training_via_ssh.md). This can greatly reduce model training times by leveraging cloud machines.  
See the [Cloud Training with vast.ai](https://siliconlabs.github.io/mltk/mltk/tutorials/cloud_training_with_vast_ai.html) tutorial for more details.


### Logging to the cloud

The MLTK allows for logging to the cloud during model development using [Weights & Biases](https://wandb.ai).  
See the [Cloud Logging with Weights & Biases](https://siliconlabs.github.io/mltk/mltk/tutorials/cloud_logging_with_wandb.html) tutorial for more details.