mltk.core.EvaluateAutoEncoderMixin¶
- class EvaluateAutoEncoderMixin[source]¶
Provides evaluation properties and methods to the base MltkModel.
Note
This mixin is specific to “auto-encoder” models
Refer to the Model Evaluation guide for more details.
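For example, a model specification might combine this mixin with the base MltkModel roughly as follows. This is a minimal sketch; the other mixins shown (TrainMixin and DataGeneratorDatasetMixin) are just one plausible combination and should be replaced with whatever mixins your model actually uses:

import mltk.core as mltk_core

# Sketch of a model specification that mixes in auto-encoder evaluation.
# The mixin combination below is illustrative, not prescriptive.
class MyModel(
    mltk_core.MltkModel,
    mltk_core.TrainMixin,
    mltk_core.DataGeneratorDatasetMixin,
    mltk_core.EvaluateAutoEncoderMixin
):
    pass

my_model = MyModel()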
Properties
eval_augment: Enable random augmentations during evaluation. Default: False. Note: this is only used if the DataGeneratorDatasetMixin (or a subclass) is used by the MltkModel.
eval_classes: List of classes to use for evaluation.
eval_custom_function: Custom evaluation callback.
eval_max_samples_per_class: The maximum number of samples for a given class to use during evaluation.
eval_shuffle: Shuffle the data during evaluation. Default: False.
eval_steps_per_epoch: Total number of steps (batches of samples) before declaring the prediction round finished.
scoring_function: The auto-encoder scoring function to use during evaluation.
Methods
__init__
Return the scoring function used during evaluation
- property scoring_function¶
The auto-encoder scoring function to use during evaluation
If None, then use the mltk_model.loss function
Default: None
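A minimal sketch of supplying a custom scoring function. It assumes my_model is the model object from the specification above, and that the scoring function receives the original samples and the auto-encoder's reconstructions and returns one score per sample; the exact signature MLTK expects may differ, so treat this as illustrative:

import numpy as np

def mse_scoring_function(y_true: np.ndarray, y_pred: np.ndarray) -> np.ndarray:
    # Mean squared reconstruction error per sample; higher scores indicate
    # samples the auto-encoder reconstructs poorly (i.e. likely abnormal)
    axes = tuple(range(1, y_true.ndim))
    return np.mean(np.square(y_true - y_pred), axis=axes)

my_model.scoring_function = mse_scoring_function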
- property eval_classes¶
List of classes to use for evaluation. The first element should be considered the 'normal' class; every other class is considered abnormal and compared independently. This is used if the --classes argument is not supplied to the eval command.
Default: [normal, abnormal]
- property eval_augment¶
Enable random augmentations during evaluation. Default: False. Note: this is only used if the DataGeneratorDatasetMixin (or a subclass) is used by the MltkModel.
- property eval_custom_function¶
Custom evaluation callback
This is invoked during the mltk.core.evaluate_model() API. The given function should have the following signature:

from typing import Union
# EvaluationResults, KerasModel and TfliteModel are assumed to be importable from mltk.core
from mltk.core import EvaluationResults, KerasModel, TfliteModel

def my_custom_eval_function(my_model: MyModel, built_model: Union[KerasModel, TfliteModel]) -> EvaluationResults:
    results = EvaluationResults(name=my_model.name)
    if isinstance(built_model, KerasModel):
        # calculate_accuracy() is a placeholder for user-defined logic
        results['overall_accuracy'] = calculate_accuracy(built_model)
    return results
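The callback would then be registered on the model object in the specification, e.g. (assuming my_model as defined earlier):

my_model.eval_custom_function = my_custom_eval_function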
- property eval_max_samples_per_class¶
The maximum number of samples for a given class to use during evaluation. If -1, then all available samples are used. Default: -1
- property eval_shuffle¶
Shuffle the data during evaluation. Default: False
- property eval_steps_per_epoch¶
Total number of steps (batches of samples) before declaring the prediction round finished. Ignored with the default value of None. If the evaluation data is a tf.data dataset and this value is None, prediction will run until the input dataset is exhausted.
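Taken together, a model specification might configure these evaluation settings roughly as follows; the values are illustrative only, and my_model is the model object defined in the specification:

# Illustrative evaluation settings -- adjust for your dataset
my_model.eval_classes = ['normal', 'abnormal']
my_model.eval_max_samples_per_class = 1000    # -1 to use all available samples
my_model.eval_shuffle = True
my_model.eval_augment = False
my_model.eval_steps_per_epoch = None          # run until the dataset is exhausted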