profile¶
Profile a model to determine how efficiently it may run on hardware. This will profile an MLTK model or .tflite model file in a simulator or on a locally connected embedded device.
NOTE: Any .tflite model file supported by TensorFlow-Lite Micro will work with this command (i.e. the .tflite does NOT need to be generated by the MLTK).
Additional Documentation¶
Model Profiler Guide: https://siliconlabs.github.io/mltk/docs/guides/model_profiler
Usage¶
Usage: mltk profile [OPTIONS] <model>
Profile a model to determine how efficiently it may run on hardware
This will profile an MLTK model or .tflite model file
in a simulator or on a locally connected embedded device.
NOTE: *Any* .tflite model file supported by TensorFlow-Lite Micro will
work with this command (i.e. the .tflite does NOT need to be generated by the
MLTK).
For more details see:
https://siliconlabs.github.io/mltk/docs/guides/model_profiler
----------
Examples
----------
# Profile the MLTK model in the MVP accelerator simulator
mltk profile image_example1 --accelerator MVP --estimates
# Profile a .tflite without any hardware acceleration
mltk profile ~/workspace/some_model.tflite --estimates
# Profile the model on the connected development board
# using the MVP accelerator
mltk profile audio_example1 --accelerator MVP --device
Arguments
* model  <model>    One of the following:
                    - Name of previously trained MLTK model
                    - Path to .tflite model file
                    - Path to .mltk.zip model archive file
                    [default: None]
                    [required]
Options
--accelerator  -a  <name>       Name of the accelerator for which to compile and then profile
                                the model. If omitted, then profile using the reference kernels
                                [default: None]
--build    --no-build           Build and quantize the model before profiling rather than
                                loading from a pre-trained .tflite file in the MLTK model's
                                archive
                                [default: no-build]
--verbose  -v                   Enable verbose console logs
--device   -d                   Profile the model on an embedded device instead of the
                                simulator. If this option is provided, then the device must be
                                locally connected
--port  <port>                  Serial COM port of the embedded device. This is only used with
                                the --device option. If omitted, then attempt to automatically
                                determine the serial COM port
                                [default: None]
--output   -o  <output>         Generate the profiling report in the given output directory
                                [default: None]
--no-format                     By default, the number units are formatted for easier reading.
                                Use this option to return the unformatted values
--estimates    --no-estimates   If profiling in the simulator, this will estimate additional
                                metrics such as CPU cycles and energy. Disabling this option
                                can reduce profiling time
                                [default: no-estimates]
--post                          Allows post-processing of the profiling results (e.g. uploading
                                to a cloud) if supported by the given MltkModel
--full-summary                  Generate a full summary from the profiling report. This
                                includes any extra info logged by the selected accelerator
--app  <path>                   By default, the model_profiler app is automatically downloaded.
                                This option allows overriding it with a custom-built app.
                                Alternatively, if using the --device option, set this option to
                                "none" to NOT program the model_profiler app to the device. In
                                this case, ONLY the .tflite will be programmed and the existing
                                model_profiler app will be re-used.
                                [default: None]
--help                          Show this message and exit.
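The invocations below sketch how several of the options above can be combined; they are illustrative only. The model name image_example1 is taken from the examples above, while the COM port (COM4) and the output directory are placeholder values to adapt to the local setup.
# Profile on a locally connected device with the MVP accelerator,
# explicitly selecting the serial COM port and writing the
# profiling report to an output directory
mltk profile image_example1 --accelerator MVP --device --port COM4 --output ~/profiling_results
# Re-use the model_profiler app already programmed on the device:
# only the .tflite is programmed and the existing app is kept
mltk profile image_example1 --device --app none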