{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Model Summary API Examples\n", "\n", "This demonstrates how to use the [summarize_model](../../docs/python_api/operations/summarize.md) API.\n", "\n", "Refer to the [Model Summary](../../docs/guides/model_summary.md) guide for more details.\n", "\n", "__NOTES:__ \n", "- Click here: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/siliconlabs/mltk/blob/master/mltk/examples/summarize_model.ipynb) to run this example interactively in your browser \n", "- Refer to the [Notebook Examples Guide](../../docs/guides/notebook_examples_guide.md) for how to run this example locally in VSCode " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Install MLTK Python Package" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# Install the MLTK Python package (if necessary)\n", "!pip install --upgrade silabs-mltk" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Import Python Packages" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# Import the necessary MLTK APIs\n", "from mltk.core import summarize_model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Example 1: Summarize Keras model\n", "\n", "In this example, we generate a summary of the trained `.h5` model file in the [image_example1](../../docs/python_api/models/examples/image_example1.md) model's [model archive](../../docs/guides/model_archive.md)." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Model: \"image_example1\"\n", "_________________________________________________________________\n", " Layer (type) Output Shape Param # \n", "=================================================================\n", " conv2d (Conv2D) (None, 48, 48, 24) 240 \n", " \n", " average_pooling2d (AverageP (None, 24, 24, 24) 0 \n", " ooling2D) \n", " \n", " conv2d_1 (Conv2D) (None, 11, 11, 16) 3472 \n", " \n", " conv2d_2 (Conv2D) (None, 9, 9, 24) 3480 \n", " \n", " batch_normalization (BatchN (None, 9, 9, 24) 96 \n", " ormalization) \n", " \n", " activation (Activation) (None, 9, 9, 24) 0 \n", " \n", " average_pooling2d_1 (Averag (None, 4, 4, 24) 0 \n", " ePooling2D) \n", " \n", " flatten (Flatten) (None, 384) 0 \n", " \n", " dense (Dense) (None, 3) 1155 \n", " \n", " activation_1 (Activation) (None, 3) 0 \n", " \n", "=================================================================\n", "Total params: 8,443\n", "Trainable params: 8,395\n", "Non-trainable params: 48\n", "_________________________________________________________________\n", "\n", "Total MACs: 1.197 M\n", "Total OPs: 2.528 M\n", "Name: image_example1\n", "Version: 1\n", "Description: Image classifier example for detecting Rock/Paper/Scissors hand gestures in images\n", "Classes: rock, paper, scissor\n" ] } ], "source": [ "summary = summarize_model('image_example1')\n", "print(summary)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Example 2: Summarize Tensorflow-Lite model\n", "\n", "In this example, we generate a summary of the trained `.tflite` model file in the [image_example1](../../docs/python_api/models/examples/image_example1.md) model's [model archive](../../docs/guides/model_archive.md)." 
] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "+-------+-----------------+-------------------+-----------------+-----------------------------------------------------+\n", "| Index | OpCode | Input(s) | Output(s) | Config |\n", "+-------+-----------------+-------------------+-----------------+-----------------------------------------------------+\n", "| 0 | quantize | 96x96x1 (float32) | 96x96x1 (int8) | Type=none |\n", "| 1 | conv_2d | 96x96x1 (int8) | 48x48x24 (int8) | Padding:same stride:2x2 activation:relu |\n", "| | | 3x3x1 (int8) | | |\n", "| | | 24 (int32) | | |\n", "| 2 | average_pool_2d | 48x48x24 (int8) | 24x24x24 (int8) | Padding:valid stride:2x2 filter:2x2 activation:none |\n", "| 3 | conv_2d | 24x24x24 (int8) | 11x11x16 (int8) | Padding:valid stride:2x2 activation:relu |\n", "| | | 3x3x24 (int8) | | |\n", "| | | 16 (int32) | | |\n", "| 4 | conv_2d | 11x11x16 (int8) | 9x9x24 (int8) | Padding:valid stride:1x1 activation:relu |\n", "| | | 3x3x16 (int8) | | |\n", "| | | 24 (int32) | | |\n", "| 5 | average_pool_2d | 9x9x24 (int8) | 4x4x24 (int8) | Padding:valid stride:2x2 filter:2x2 activation:none |\n", "| 6 | reshape | 4x4x24 (int8) | 384 (int8) | Type=none |\n", "| | | 2 (int32) | | |\n", "| 7 | fully_connected | 384 (int8) | 3 (int8) | Activation:none |\n", "| | | 384 (int8) | | |\n", "| | | 3 (int32) | | |\n", "| 8 | softmax | 3 (int8) | 3 (int8) | Type=softmaxoptions |\n", "| 9 | dequantize | 3 (int8) | 3 (float32) | Type=none |\n", "+-------+-----------------+-------------------+-----------------+-----------------------------------------------------+\n", "Total MACs: 1.197 M\n", "Total OPs: 2.561 M\n", "Name: image_example1\n", "Version: 1\n", "Description: Image classifier example for detecting Rock/Paper/Scissors hand gestures in images\n", "Classes: rock, paper, scissor\n", "Runtime memory size (RAM): 71.512 k\n", "hash: 0242764704ebb6643ae7df4a6536bb83\n", "date: 2022-10-06T16:32:48.472Z\n", "samplewise_norm.rescale: 0.0\n", "samplewise_norm.mean_and_std: True\n", ".tflite file size: 15.4kB\n" ] } ], "source": [ "summary = summarize_model('image_example1', tflite=True)\n", "print(summary)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Example 3: Summarize external Tensorflow-Lite model\n", "\n", "The given model need _not_ be generated by the MLTK. \n", "External `.tflite` or `.h5` model files are also supported by the summarize API." 
] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "import os \n", "import tempfile\n", "import urllib\n", "import shutil\n", "\n", "# Use .tflite mode found here:\n", "# https://github.com/mlcommons/tiny/tree/master/benchmark/training/keyword_spotting/trained_models\n", "# NOTE: Update this URL to point to your model if necessary\n", "TFLITE_MODEL_URL = 'https://github.com/mlcommons/tiny/raw/master/benchmark/training/keyword_spotting/trained_models/kws_ref_model.tflite'\n", "\n", "# Download the .tflite file and save to the temp dir\n", "external_tflite_path = os.path.normpath(f'{tempfile.gettempdir()}/kws_ref_model.tflite')\n", "with open(external_tflite_path, 'wb') as dst:\n", " with urllib.request.urlopen(TFLITE_MODEL_URL) as src:\n", " shutil.copyfileobj(src, dst)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "+-------+-------------------+----------------+----------------+-------------------------------------------------------+\n", "| Index | OpCode | Input(s) | Output(s) | Config |\n", "+-------+-------------------+----------------+----------------+-------------------------------------------------------+\n", "| 0 | conv_2d | 49x10x1 (int8) | 25x5x64 (int8) | Padding:same stride:2x2 activation:relu |\n", "| | | 10x4x1 (int8) | | |\n", "| | | 64 (int32) | | |\n", "| 1 | depthwise_conv_2d | 25x5x64 (int8) | 25x5x64 (int8) | Multiplier:1 padding:same stride:1x1 activation:relu |\n", "| | | 3x3x64 (int8) | | |\n", "| | | 64 (int32) | | |\n", "| 2 | conv_2d | 25x5x64 (int8) | 25x5x64 (int8) | Padding:same stride:1x1 activation:relu |\n", "| | | 1x1x64 (int8) | | |\n", "| | | 64 (int32) | | |\n", "| 3 | depthwise_conv_2d | 25x5x64 (int8) | 25x5x64 (int8) | Multiplier:1 padding:same stride:1x1 activation:relu |\n", "| | | 3x3x64 (int8) | | |\n", "| | | 64 (int32) | | |\n", "| 4 | conv_2d | 25x5x64 (int8) | 25x5x64 (int8) | Padding:same stride:1x1 activation:relu |\n", "| | | 1x1x64 (int8) | | |\n", "| | | 64 (int32) | | |\n", "| 5 | depthwise_conv_2d | 25x5x64 (int8) | 25x5x64 (int8) | Multiplier:1 padding:same stride:1x1 activation:relu |\n", "| | | 3x3x64 (int8) | | |\n", "| | | 64 (int32) | | |\n", "| 6 | conv_2d | 25x5x64 (int8) | 25x5x64 (int8) | Padding:same stride:1x1 activation:relu |\n", "| | | 1x1x64 (int8) | | |\n", "| | | 64 (int32) | | |\n", "| 7 | depthwise_conv_2d | 25x5x64 (int8) | 25x5x64 (int8) | Multiplier:1 padding:same stride:1x1 activation:relu |\n", "| | | 3x3x64 (int8) | | |\n", "| | | 64 (int32) | | |\n", "| 8 | conv_2d | 25x5x64 (int8) | 25x5x64 (int8) | Padding:same stride:1x1 activation:relu |\n", "| | | 1x1x64 (int8) | | |\n", "| | | 64 (int32) | | |\n", "| 9 | average_pool_2d | 25x5x64 (int8) | 1x1x64 (int8) | Padding:valid stride:5x25 filter:5x25 activation:none |\n", "| 10 | reshape | 1x1x64 (int8) | 64 (int8) | Type=none |\n", "| | | 2 (int32) | | |\n", "| 11 | fully_connected | 64 (int8) | 12 (int8) | Activation:none |\n", "| | | 64 (int8) | | |\n", "| | | 12 (int32) | | |\n", "| 12 | softmax | 12 (int8) | 12 (int8) | Type=softmaxoptions |\n", "+-------+-------------------+----------------+----------------+-------------------------------------------------------+\n", "Total MACs: 2.657 M\n", "Total OPs: 5.394 M\n", "Name: summarize_model\n", "Version: 1\n", "Description: Generated by Silicon Lab's MLTK Python package\n", ".tflite file size: 53.9kB\n" ] } ], "source": [ "summary = summarize_model(external_tflite_path)\n", 
"print(summary)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Example 4: Summarize model before training\n", "Training a model can be very time-consuming, and it is useful to know basic information \n", "about a model before investing time and energy into training it. \n", "For this reason, the MLTK `summarize` API features a `build` argument to build a model\n", "and summarize it _before_ the model is fully trained.\n", "\n", "In this example, the [image_example1](../../docs/python_api/models/examples/image_example1.md) model is built\n", "at API execution time and a summary is generated. \n", "Note that _only_ the [model specification](../../docs/guides/model_specification.md) script is required, it does _not_ need to be trained first." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Enabling test mode\n", "training is using 1 subprocesses\n", "validation is using 1 subprocesses\n", "Model: \"image_example1\"\n", "_________________________________________________________________\n", " Layer (type) Output Shape Param # \n", "=================================================================\n", " conv2d (Conv2D) (None, 48, 48, 24) 240 \n", " \n", " average_pooling2d (AverageP (None, 24, 24, 24) 0 \n", " ooling2D) \n", " \n", " conv2d_1 (Conv2D) (None, 11, 11, 16) 3472 \n", " \n", " conv2d_2 (Conv2D) (None, 9, 9, 24) 3480 \n", " \n", " batch_normalization (BatchN (None, 9, 9, 24) 96 \n", " ormalization) \n", " \n", " activation (Activation) (None, 9, 9, 24) 0 \n", " \n", " average_pooling2d_1 (Averag (None, 4, 4, 24) 0 \n", " ePooling2D) \n", " \n", " flatten (Flatten) (None, 384) 0 \n", " \n", " dense (Dense) (None, 3) 1155 \n", " \n", " activation_1 (Activation) (None, 3) 0 \n", " \n", "=================================================================\n", "Total params: 8,443\n", "Trainable params: 8,395\n", "Non-trainable params: 48\n", "_________________________________________________________________\n", "\n", "Total MACs: 1.197 M\n", "Total OPs: 2.528 M\n", "Name: image_example1\n", "Version: 1\n", "Description: Image classifier example for detecting Rock/Paper/Scissors hand gestures in images\n", "Classes: rock, paper, scissor\n", "Training dataset: Found 9 samples belonging to 3 classes:\n", " rock = 3\n", " paper = 3\n", " scissor = 3\n", "Validation dataset: Found 9 samples belonging to 3 classes:\n", " rock = 3\n", " paper = 3\n", " scissor = 3\n", "Running cmd: c:\\Users\\reed\\workspace\\silabs\\mltk\\.venv\\Scripts\\python.exe -m pip install -U tensorflow-addons\n", "(This may take awhile, please be patient ...)\n", "Requirement already satisfied: tensorflow-addons in c:\\users\\reed\\workspace\\silabs\\mltk\\.venv\\lib\\site-packages (0.18.0)\n", "\n", "Requirement already satisfied: packaging in c:\\users\\reed\\workspace\\silabs\\mltk\\.venv\\lib\\site-packages (from tensorflow-addons) (21.3)\n", "\n", "Requirement already satisfied: typeguard>=2.7 in c:\\users\\reed\\workspace\\silabs\\mltk\\.venv\\lib\\site-packages (from tensorflow-addons) (2.13.3)\n", "\n", "Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in c:\\users\\reed\\workspace\\silabs\\mltk\\.venv\\lib\\site-packages (from packaging->tensorflow-addons) (3.0.9)\n", "\n", "WARNING: You are using pip version 21.2.3; however, version 22.2.2 is available.\n", "\n", "You should consider upgrading via the 'c:\\Users\\reed\\workspace\\silabs\\mltk\\.venv\\Scripts\\python.exe -m pip install --upgrade pip' 
command.\n", "\n", "Forcing epochs=3 since test=true\n", "Class weights:\n", " rock = 1.00\n", " paper = 1.00\n", "scissor = 1.00\n", "Starting model training ...\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "d4c15c1c90e6402799c4227d73419f19", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Training: 0%| 0/3 ETA: ?s, ?epochs/s" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "Epoch 1/3\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "3fa4aae73aba44ea9f9e116e0ac0393c", "version_major": 2, "version_minor": 0 }, "text/plain": [ "0/3 ETA: ?s - " ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "Epoch 2/3\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "ca5a852b93584fa0b58220173099f75e", "version_major": 2, "version_minor": 0 }, "text/plain": [ "0/3 ETA: ?s - " ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "Epoch 3/3\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "56500ee1c4f546558431091e36e52938", "version_major": 2, "version_minor": 0 }, "text/plain": [ "0/3 ETA: ?s - " ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "Generating C:/Users/reed/.mltk/models/image_example1-test/image_example1.test.h5\n", "\n", "\n", "*** Best training val_accuracy = 0.333\n", "\n", "\n", "Training complete\n", "Training logs here: C:/Users/reed/.mltk/models/image_example1-test\n", "validation is using 1 subprocesses\n", "Generating tflite_model\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:absl:Found untraced functions such as _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op while saving (showing 3 of 3). 
These functions will not be directly callable after loading.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Assets written to: E:\\tmpv8h9r1ze\\assets\n", "Using Tensorflow-Lite Micro version: b13b48c (2022-06-08)\n", "Searching for optimal runtime memory size ...\n", "Determined optimal runtime memory size to be 72320\n", "+-------+-----------------+-------------------+-----------------+-----------------------------------------------------+\n", "| Index | OpCode | Input(s) | Output(s) | Config |\n", "+-------+-----------------+-------------------+-----------------+-----------------------------------------------------+\n", "| 0 | quantize | 96x96x1 (float32) | 96x96x1 (int8) | Type=none |\n", "| 1 | conv_2d | 96x96x1 (int8) | 48x48x24 (int8) | Padding:same stride:2x2 activation:relu |\n", "| | | 3x3x1 (int8) | | |\n", "| | | 24 (int32) | | |\n", "| 2 | average_pool_2d | 48x48x24 (int8) | 24x24x24 (int8) | Padding:valid stride:2x2 filter:2x2 activation:none |\n", "| 3 | conv_2d | 24x24x24 (int8) | 11x11x16 (int8) | Padding:valid stride:2x2 activation:relu |\n", "| | | 3x3x24 (int8) | | |\n", "| | | 16 (int32) | | |\n", "| 4 | conv_2d | 11x11x16 (int8) | 9x9x24 (int8) | Padding:valid stride:1x1 activation:relu |\n", "| | | 3x3x16 (int8) | | |\n", "| | | 24 (int32) | | |\n", "| 5 | average_pool_2d | 9x9x24 (int8) | 4x4x24 (int8) | Padding:valid stride:2x2 filter:2x2 activation:none |\n", "| 6 | reshape | 4x4x24 (int8) | 384 (int8) | Type=none |\n", "| | | 2 (int32) | | |\n", "| 7 | fully_connected | 384 (int8) | 3 (int8) | Activation:none |\n", "| | | 384 (int8) | | |\n", "| | | 3 (int32) | | |\n", "| 8 | softmax | 3 (int8) | 3 (int8) | Type=softmaxoptions |\n", "| 9 | dequantize | 3 (int8) | 3 (float32) | Type=none |\n", "+-------+-----------------+-------------------+-----------------+-----------------------------------------------------+\n", "Total MACs: 1.197 M\n", "Total OPs: 2.561 M\n", "Name: image_example1\n", "Version: 1\n", "Description: Image classifier example for detecting Rock/Paper/Scissors hand gestures in images\n", "Classes: rock, paper, scissor\n", "Runtime memory size (RAM): 71.512 k\n", "hash: f84b0517005c8392d9746f6c6dae1f50\n", "date: 2022-10-06T16:34:05.327Z\n", "samplewise_norm.rescale: 0.0\n", "samplewise_norm.mean_and_std: True\n", ".tflite file size: 15.3kB\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "c:\\Users\\reed\\workspace\\silabs\\mltk\\.venv\\lib\\site-packages\\tensorflow\\lite\\python\\convert.py:766: UserWarning: Statistics for quantized inputs were expected, but not specified; continuing anyway.\n", " warnings.warn(\"Statistics for quantized inputs were expected, but not \"\n" ] } ], "source": [ "summary = summarize_model('image_example1', tflite=True, build=True)\n", "print(summary)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3.9.7 ('.venv': venv)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.7" }, "orig_nbformat": 4, "vscode": { "interpreter": { "hash": "600e22ae316f8c315f552eaf99bb679bc9438a443c93affde9ac001991b79c8f" } } }, "nbformat": 4, "nbformat_minor": 2 }