train¶
This trains a model using the configuration specified in the given MLTK model.
Additional Documentation¶
Model Training Guide: https://siliconlabs.github.io/mltk/docs/guides/model_training
Usage¶
Usage: mltk train [OPTIONS] <model>
Train an ML model
This trains a model using the configuration specified in the given MLTK
model.
All training results are stored in <model log dir>/train.
After training completes, an archive file is generated at:
<MLTK model directory>/<MLTK model name>.mltk.zip
which contains:
- MLTK model specification python script
- Keras model in .h5 format
- Quantized .tflite model
- Model summary
- Training log
For more details see:
https://siliconlabs.github.io/mltk/docs/guides/model_training
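For reference, the contents of the generated archive can be listed with a standard zip tool. The directory placeholder and model name below are illustrative:
# List the contents of the training archive
# (<MLTK model directory> and the model name are illustrative)
unzip -l <MLTK model directory>/image_example1.mltk.zip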
----------
Examples
----------
# Execute model training dryrun
mltk train image_example1-test
# Clean any previous training logs & checkpoints,
# then train for 100 epochs
mltk train image_example1 --clean --epochs 100
# Resume model from previous session at the last save checkpoint
mltk train image_example1 --resume
# Resume model from previous session at epoch 43
mltk train image_example1 --resume-epoch 43
# Do NOT evaluate the model after training
mltk train image_example1 --no-evaluate
Arguments
* model <model> Name of an MLTK model OR path to a model
specification Python script
(see the example after this list).
If an MLTK model name is given, optionally append
"-test" to execute a training "dryrun". This
verifies that everything works by quickly training
the model; dryrun training limits the number of
epochs and the dataset size, and stores all
results in a test directory and archive.
[default: None]
[required]
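For illustration, a model may also be trained directly from its specification script by passing a path instead of a model name. The path below is hypothetical:
# Train from a model specification script given by path
# (~/my_models/my_model.py is a hypothetical path)
mltk train ~/my_models/my_model.py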
Options
--epochs -e <epochs> Override the number of
training epochs defined by
the MLTK model
[default: None]
--weights -w <weights> Optionally load weights from
a previous training session
(see the examples after the
options). May be one of the
following:
- If this option is omitted,
the model is trained with
its initial weights
- Absolute path to a weights
.h5 file generated by Keras
during training
- The keyword 'best'; use
the best weights from the
previous training session
- Filename of a .h5 file in
<model log dir>/train/weights
[default: None]
--verbose -v Enable verbose console logs
--resume --no-resume Resume the training from a
previous training session
using the last stored
checkpoint weights
[default: no-resume]
--resume-epoch <epoch> Specify the epoch at which
to resume training
[default: 0]
--clean --no-clean Clean all previous training
log files, weights, and
checkpoints
[default: no-clean]
--show --no-show Display the generated model
training history diagram
[default: no-show]
--evaluate --no-evaluate After training, evaluate
the .h5 and .tflite models
[default: evaluate]
--test --no-test Do a dryrun of training the
model. This does the same
thing as: mltk train
my_model-test
[default: no-test]
--post Enable post-processing of
the training results (e.g.
uploading to the cloud), if
supported by the given
MltkModel (see the example
after the options)
--help Show this message and exit.
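As a further illustration, the documented options above can be combined. The model name reuses the illustrative image_example1 from the earlier examples:
# Load the best weights from the previous training session,
# then skip the post-training evaluation
mltk train image_example1 --weights best --no-evaluate
# Train, then post-process the training results
# (only if the given MltkModel supports post-processing)
mltk train image_example1 --post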