mltk.core.TfliteTransposeConvParams¶
- class TfliteTransposeConvParams[source]¶
Calculated Transpose Convolution Parameters
Properties
input_offset – Input quantization offset (i.e. zero point)
output_offset – Output quantization offset (i.e. zero point)
quantized_activation_max – Fused activation max value
quantized_activation_min – Fused activation min value
stride_height – Kernel stride height
stride_width – Kernel stride width
weights_offset – Weights (aka filters) quantization offset (i.e. zero point)
padding – Kernel padding
per_channel_output_multiplier – Per-channel multipliers for the output scalers
per_channel_output_shift – Per-channel shifts for the output scalers
Methods
Calculate the parameters for the given layer
- padding: TflitePadding¶
Kernel padding
- __init__(padding=<factory>, stride_width=0, stride_height=0, input_offset=0, weights_offset=0, output_offset=0, per_channel_output_multiplier=<factory>, per_channel_output_shift=<factory>, quantized_activation_min=0, quantized_activation_max=0)¶
- Parameters:
padding (TflitePadding) – Kernel padding
stride_width (int) – Kernel stride width
stride_height (int) – Kernel stride height
input_offset (int) – Input quantization offset (i.e. zero point)
weights_offset (int) – Weights (aka filters) quantization offset (i.e. zero point)
output_offset (int) – Output quantization offset (i.e. zero point)
per_channel_output_multiplier (List[int]) – Per-channel multipliers for the output scalers
per_channel_output_shift (List[int]) – Per-channel shifts for the output scalers
quantized_activation_min (int) – Fused activation min value
quantized_activation_max (int) – Fused activation max value
- Return type:
None
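A minimal construction sketch, assuming only the signature shown above; every value below is a made-up placeholder rather than something read from a real model (in practice these parameters are calculated from a quantized .tflite layer):

```python
from mltk.core import TfliteTransposeConvParams

# All defaults come from the signature above; the assignments below are
# hypothetical placeholder values, not values taken from an actual model.
params = TfliteTransposeConvParams()
params.stride_width = 2             # kernel stride along the width axis
params.stride_height = 2            # kernel stride along the height axis
params.input_offset = -128          # input zero point (placeholder)
params.weights_offset = 0           # weights zero point (placeholder)
params.output_offset = -128         # output zero point (placeholder)
params.quantized_activation_min = -128  # int8 clamp range for the fused activation
params.quantized_activation_max = 127

print(params.stride_width, params.stride_height, params.padding)
```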
- stride_width: int = 0¶
Kernel stride width
- stride_height: int = 0¶
Kernel stride height
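For orientation, the sketch below shows how the stride typically determines the spatial output size of a transpose convolution, following the standard TensorFlow conv2d_transpose sizing rules; the helper is hypothetical and not part of the mltk API:

```python
def transpose_conv_output_size(in_size: int, kernel_size: int, stride: int, padding: str) -> int:
    """Hypothetical helper: typical transpose-convolution output size per padding mode."""
    if padding.upper() == 'SAME':
        return in_size * stride
    # 'VALID' padding
    return in_size * stride + max(kernel_size - stride, 0)

# e.g. a 7x7 feature map upsampled with a 3x3 kernel and stride 2:
print(transpose_conv_output_size(7, 3, 2, 'SAME'))   # 14
print(transpose_conv_output_size(7, 3, 2, 'VALID'))  # 15
```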
- input_offset: int = 0¶
Input quantization offset (i.e. zero point)
- weights_offset: int = 0¶
Weights (aka filters) quantization offset (i.e. zero point)
- output_offset: int = 0¶
Output quantization offset (i.e. zero point)
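As a reminder of what these offsets mean, a quantized value is mapped back to a real value using its scale and zero point; a small sketch with hypothetical numbers:

```python
def dequantize(quantized_value: int, scale: float, zero_point: int) -> float:
    """Standard affine dequantization: real = scale * (quantized - zero_point)."""
    return scale * (quantized_value - zero_point)

# Hypothetical int8 activation with scale 0.02 and zero point -128:
print(dequantize(-100, 0.02, -128))  # 0.56
```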
- per_channel_output_multiplier: List[int]¶
Per-channel multipliers for the output scalers
- per_channel_output_shift: List[int]¶
Per-channel shifts for the output scalers
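These two lists are commonly interpreted with TFLite's fixed-point convention, where each real-valued output scale is approximated by a 32-bit multiplier and a power-of-two shift; the sketch below assumes that convention (real_scale ≈ multiplier * 2**(shift - 31)) and uses made-up numbers:

```python
def real_output_scale(multiplier: int, shift: int) -> float:
    """Assumed TFLite convention: real scale ~= multiplier * 2**(shift - 31)."""
    return multiplier * 2.0 ** (shift - 31)

# Hypothetical per-channel pair approximating a real scale of ~1e-4:
print(real_output_scale(1759218605, -13))  # ~0.0001
```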
- quantized_activation_min: int = 0¶
Fused activation min value
- quantized_activation_max: int = 0¶
Fused activation max value
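These bounds implement the fused activation as a simple clamp on the quantized output: with no activation they span the full quantized range, while a fused ReLU raises the minimum toward the output zero point. A minimal sketch with illustrative values:

```python
def apply_fused_activation(value: int, act_min: int, act_max: int) -> int:
    """Clamp the (re-quantized) output value to the fused activation range."""
    return min(max(value, act_min), act_max)

print(apply_fused_activation(200, -128, 127))  # 127 (saturates at int8 max)
print(apply_fused_activation(-60, -128, 127))  # -60 (passes through, no activation)
```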