mltk.core.EvaluateClassifierMixin¶
- class EvaluateClassifierMixin[source]¶
Provides evaluation properties and methods to the base
MltkModel
Note
This mixin is specific to “classification” models
Refer to the Model Evaluation guide for more details.
Properties
Enable random augmentations during evaluation. Default: False. Note: this is only used if the DataGeneratorDatasetMixin (or a sub-class) is used by the MltkModel
Custom evaluation callback
The maximum number of samples for a given class to use during evaluation.
Shuffle the data during evaluation. Default: False
Total number of steps (batches of samples) before declaring the prediction round finished.
Methods
__init__
- property eval_shuffle¶
Shuffle the data during evaluation. Default: False
- property eval_augment¶
Enable random augmentations during evaluation. Default: False. Note: this is only used if the DataGeneratorDatasetMixin (or a sub-class) is used by the MltkModel
- property eval_custom_function¶
Custom evaluation callback
This is invoked during the
mltk.core.evaluate_model()
API. The given function should have the following signature:

    def my_custom_eval_function(
        my_model: MyModel,
        built_model: Union[KerasModel, TfliteModel]
    ) -> EvaluationResults:
        results = EvaluationResults(name=my_model.name)
        if isinstance(built_model, KerasModel):
            results['overall_accuracy'] = calculate_accuracy(built_model)
        return results
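To make the shape of such a callback concrete without depending on MLTK itself, here is a minimal, self-contained sketch. The `make_results` helper and `overall_accuracy` metric are hypothetical stand-ins (a plain dict replaces EvaluationResults); only the overall pattern — build a results object, attach metrics, return it — mirrors the signature above.

```python
from typing import Dict, List


def make_results(name: str) -> Dict[str, object]:
    # Hypothetical stand-in for mltk.core.EvaluationResults:
    # a simple dict keyed by metric name.
    return {'name': name}


def overall_accuracy(predictions: List[int], labels: List[int]) -> float:
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    return correct / len(labels)


def my_custom_eval_function(model_name: str,
                            predictions: List[int],
                            labels: List[int]) -> Dict[str, object]:
    # Mirrors the eval_custom_function pattern: create a results
    # object, compute and attach metrics, then return it.
    results = make_results(model_name)
    results['overall_accuracy'] = overall_accuracy(predictions, labels)
    return results


results = my_custom_eval_function('my_model', [0, 1, 1, 2], [0, 1, 2, 2])
print(results['overall_accuracy'])  # 3 of 4 predictions correct -> 0.75
```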
- property eval_steps_per_epoch¶
Total number of steps (batches of samples) to run before declaring the prediction round finished. Ignored with the default value of None. If the input x is a tf.data dataset and steps is None, predict() will run until the input dataset is exhausted.
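When steps is left at None, the effective number of prediction batches is simply the dataset size divided by the batch size, rounded up. The helper below is an illustrative sketch of that arithmetic, not an MLTK API:

```python
import math


def default_eval_steps(num_samples: int, batch_size: int) -> int:
    # When eval_steps_per_epoch is None, prediction runs until the
    # dataset is exhausted; this is the equivalent number of batches
    # (the final batch may be smaller than batch_size).
    return math.ceil(num_samples / batch_size)


print(default_eval_steps(1050, 32))  # ceil(1050 / 32) -> 33 batches
```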
- property eval_max_samples_per_class¶
The maximum number of samples for a given class to use during evaluation. If -1, then all available samples are used. Default: -1
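The semantics of this cap can be sketched as follows. This is not MLTK's implementation — `cap_samples_per_class` is a hypothetical helper showing how a per-class limit of -1 keeps everything while a positive limit keeps at most that many samples from each class:

```python
import random
from collections import defaultdict
from typing import List, Sequence, Tuple


def cap_samples_per_class(samples: Sequence, labels: Sequence,
                          max_per_class: int, seed: int = 42
                          ) -> Tuple[List, List]:
    """Return at most `max_per_class` samples for each class label.

    A value of -1 keeps all samples (the documented default).
    """
    if max_per_class == -1:
        return list(samples), list(labels)
    by_class = defaultdict(list)
    for s, y in zip(samples, labels):
        by_class[y].append(s)
    rng = random.Random(seed)  # deterministic for the example
    out_samples, out_labels = [], []
    for y, group in by_class.items():
        rng.shuffle(group)  # pick a random subset of each class
        for s in group[:max_per_class]:
            out_samples.append(s)
            out_labels.append(y)
    return out_samples, out_labels


# 3 samples of class 0 and 3 of class 1, capped at 2 per class -> 4 total
capped, capped_labels = cap_samples_per_class(
    [10, 11, 12, 20, 21, 22], [0, 0, 0, 1, 1, 1], max_per_class=2)
print(len(capped))  # 4
```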