Deploy Classification Application Using Mobilenet-V3 TensorFlow Lite Model on Host and Raspberry Pi

This example shows how to simulate and generate code for a classification application that performs inference using a TensorFlow™ Lite model. This example uses a pretrained TensorFlow Lite model for the image classification network Mobilenet-V3 that is available on the TensorFlow webpage for pretrained models: https://www.tensorflow.org/lite/models/trained. This workflow can be used for both int8 and float TensorFlow Lite models.

This example also shows how to import data from Python®.

This example is supported on Windows® and Linux® host platforms.

Download Model

Run this script to download the Mobilenet-V3 image classification network from the URL specified in the script.

if ~exist("mobilenetv3.tflite","file")
    disp('Downloading MobilenetV3 model file...');
    url = "https://tfhub.dev/google/lite-model/imagenet/mobilenet_v3_small_100_224/classification/5/metadata/1?lite-format=tflite";
    websave("mobilenetv3.tflite",url);
end
Downloading MobilenetV3 model file...

The tflite_classification_predict Entry-Point Function

The loadTFLiteModel function loads the Mobilenet-V3 model into a TFLiteModel object. The properties of this object contain information about the model, such as the number and size of its inputs and outputs.

net = loadTFLiteModel('mobilenetv3.tflite');
disp(net.InputSize);
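
You can inspect other properties of the TFLiteModel object in the same way. The following is a minimal sketch; it assumes the object exposes NumInputs, NumOutputs, and OutputSize properties, as described in the loadTFLiteModel documentation.

disp(net.NumInputs);   % Number of model inputs
disp(net.NumOutputs);  % Number of model outputs
disp(net.OutputSize);  % Size of each model output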

In this example, you generate code for the entry-point function tflite_classification_predict.m. This function loads the Mobilenet-V3 model into a persistent network object by using the loadTFLiteModel function.

To optimize performance, after creating the network object, set the NumThreads property based on the number of threads available on your target hardware.

The tflite_classification_predict function performs prediction by passing the network object to the predict function. Subsequent calls to this function reuse this persistent object.

type tflite_classification_predict.m
function out = tflite_classification_predict(in)
persistent net;
if isempty(net)
    net = loadTFLiteModel('mobilenetv3.tflite');
    % To optimize performance, set NumThreads property based on the number 
    % of threads available on the hardware board
    net.NumThreads = 4;
end
% Normalize the input: the model expects (input - Mean)/StandardDeviation
net.Mean = 0;
net.StandardDeviation = 255;
out = net.predict(in);
end
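
When you run the function in MATLAB on the host, one starting point for NumThreads is the number of computational threads available to MATLAB, which you can query with maxNumCompThreads. This is only a sketch; on a hardware board, set NumThreads from the core count of the target rather than from the host value.

% Sketch: query the host thread count as a starting point for NumThreads
hostThreads = maxNumCompThreads;
fprintf('Computational threads available on the host: %d\n', hostThreads);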

Read Labels Map

Read the labels file associated with the TFLite model.

labelsFile = importdata('labels.txt');
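
As a quick sanity check, you can confirm that the number of labels matches the number of classes that the model predicts. This sketch assumes that labels.txt contains one label per line and that the TFLiteModel object exposes an OutputSize property.

% Sketch: compare the label count with the model output size
fprintf('Number of labels: %d\n', numel(labelsFile));
disp(net.OutputSize); % the label count is expected to match this size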

Read and Preprocess Input Image

Read the image that you intend to classify.

I = imread('peppers.png');
imshow(I);

Alternatively, you can import the input data from Python. In the supporting files for this example, a Python input image is saved as the pythoninputImage.mat file.

Python supports only TFLite models that are in the NHWC format (for non-RNN models), or in the NTC and NC formats (for RNN models). By contrast, MATLAB accepts the HWCN format for non-RNN models, and the CNT and CN formats for RNN models. Here, N is the batch size, H the height, W the width, C the number of channels, and T the sequence length.

So, if you import the input from Python, you must convert it to the shape that MATLAB accepts.
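
The supporting function ConvertPythonTFLiteInput used below performs this conversion. For reference, converting a single NHWC array to HWCN amounts to moving the batch dimension N from first to last, as in this minimal sketch:

% Sketch: permute an NHWC array (Python layout) into HWCN (MATLAB layout)
nhwcInput = rand(1,224,224,3,'single');    % example input of size [1 224 224 3]
hwcnInput = permute(nhwcInput,[2 3 4 1]);  % result has size [224 224 3 1]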

Load the input that Python read from the pythoninputImage.mat file. The input is stored in the pythoninputformatInput variable.

load('pythoninputImage.mat'); % The input size is [1,224,224,3]

The Python input is in the NHWC format and its size is [1 224 224 3]. Convert the input to the HWCN format that MATLAB accepts.

I1 = ConvertPythonTFLiteInput(net, pythoninputformatInput);

If the input image is not imported from Python, resize it to match the input size of the TFLite model.

I1 = imresize(I,[224 224]);

Workflow 1: Perform Classification by Using Simulation on Host

Run the simulation by passing the input image I1 to the entry-point function.

output = tflite_classification_predict(I1);

Workflow 2: Perform Classification by Running Generated MEX on Host

Additional Prerequisites

  • MATLAB® Coder™

This example uses the codegen command to generate a MEX function that runs on the host platform.

Generate MEX Function

To generate a MEX function for a specified entry-point function, create a code generation configuration object for a MEX function. Set the target language to C++.

cfg = coder.config('mex');
cfg.TargetLang = 'C++';

Run the codegen command to generate the MEX function tflite_classification_predict_mex on the host platform.

codegen -config cfg tflite_classification_predict -args ones(224,224,3,'single')

Run Generated MEX

Run the generated MEX by passing the input image I1.

output = tflite_classification_predict_mex(single(I1));
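
To confirm that the generated MEX function matches the simulation, you can compare the two outputs. A minimal sketch, assuming you run the simulation on the same input:

% Sketch: compare simulation and MEX results on the same input
outputSim = tflite_classification_predict(single(I1));
outputMex = tflite_classification_predict_mex(single(I1));
fprintf('Maximum absolute difference: %g\n', max(abs(outputSim(:) - outputMex(:))));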

Workflow 3: Generate Code for Classification Application, Deploy and Run on Raspberry Pi

Additional Prerequisites

  • MATLAB® Coder™

  • Embedded Coder®

  • MATLAB Support Package for Raspberry Pi Hardware. To install this support package, use the Add-On Explorer.

Third-Party Prerequisites

  • Raspberry Pi hardware

  • TFLite library (on the target ARM® hardware)

On the Raspberry Pi hardware, set the environment variable TFLITE_PATH to the location of the TFLite library. For more information on how to build the TFLite library and set the environment variables, see Prerequisites for Deep Learning with TensorFlow Lite Models (Deep Learning Toolbox).

Set Up Connection with Raspberry Pi

Use the MATLAB Support Package for Raspberry Pi Hardware function raspi to create a connection to the Raspberry Pi.

In the following code, replace:

  • raspiname with the name of your Raspberry Pi board

  • username with your user name

  • password with your password

r = raspi('raspiname','username','password');
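
To verify the connection, you can run a simple shell command on the board through the raspi object. A minimal sketch using the support package system function:

% Sketch: confirm that the board is reachable
system(r,'uname -a')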

Copy TFLite Model to Target Hardware

Copy the TFLite model to the Raspberry Pi board. On the hardware board, set the environment variable TFLITE_MODEL_PATH to the location of the TFLite model. For more information on setting environment variables, see Prerequisites for Deep Learning with TensorFlow Lite Models (Deep Learning Toolbox).

In the following command, replace targetDir with the destination folder for the TFLite model on the Raspberry Pi board.

r.putFile('mobilenetv3.tflite',targetDir)
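
For reference, one way to set the environment variable from MATLAB is to append an export line to ~/.bashrc on the board through the same connection. This is only a sketch; /home/pi is a hypothetical destination folder, and the linked documentation describes the recommended setup.

% Sketch: set TFLITE_MODEL_PATH on the board (assumes targetDir is '/home/pi')
system(r,'echo "export TFLITE_MODEL_PATH=/home/pi" >> ~/.bashrc');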

Generate PIL MEX Function

To generate a PIL MEX function for a specified entry-point function, create a code generation configuration object for a static library and set the verification mode to 'PIL'. Set the target language to C++.

cfg = coder.config('lib','ecoder',true);
cfg.TargetLang = 'C++';
cfg.VerificationMode = 'PIL';

Create a coder.hardware object for the Raspberry Pi and attach it to the code generation configuration object.

hw = coder.hardware('Raspberry Pi');
cfg.Hardware = hw;

In MATLAB, run the codegen command to generate the PIL MEX function tflite_classification_predict_pil.

codegen -config cfg tflite_classification_predict -args ones(224,224,3,'single')

Run Generated PIL MEX

Run the generated PIL MEX function by passing the input image I1.

output = tflite_classification_predict_pil(single(I1));
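
As with the host MEX function, you can compare the PIL result against the simulation to confirm numerical agreement. A minimal sketch:

% Sketch: verify the PIL output against the host simulation
outputSim = tflite_classification_predict(single(I1));
fprintf('Maximum absolute difference: %g\n', max(abs(output(:) - outputSim(:))));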

Get Top Five Labels

[~, top5] = maxk(output,5);
disp(labelsFile(top5));
    {'bell pepper'     }
    {'cucumber'        }
    {'spaghetti squash'}
    {'grocery store'   }
    {'acorn squash'    }

Get Prediction Scores

This network does not contain a softmax layer, so run the softmax function to convert the raw output into prediction scores. If your network has a softmax layer, you can skip this step.

% If network does not have softmax
predictionScores = softmax(output);
% If network has softmax
% predictionScores = output;

Display Prediction Scores on Image

DisplayPredictionsOnImage(predictionScores, I);
