This example shows you how to recognize images of handwritten digits captured on your Android™ device using Simulink® Support Package for Android Devices. On deployment, the Simulink model in this example builds an Android application on the device. Using the camera of the device to capture an image of any digit from 0 to 9, the application recognizes the digit and then outputs a label for the digit along with the prediction probability. This example uses the pretrained network,
originalMNIST.mat, for prediction. The network has been trained using the Modified National Institute of Standards and Technology database (MNIST) data set.
MNIST is a commonly used data set in the field of neural networks. This data set comprises 60,000 training images and 10,000 testing images, in grayscale, for machine learning models. Each image is 28-by-28 pixels.
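The layout of one such sample can be sketched in plain Python. This is an illustrative sketch, not part of the example; the variable names are assumptions:

```python
# A 28-by-28 grayscale MNIST sample is a 2-D grid of intensity
# values in [0, 255]. Here, a blank (all-black) placeholder image.
rows, cols = 28, 28
image = [[0] * cols for _ in range(rows)]

# Models often consume the image as a flat 784-element feature vector.
features = [px for row in image for px in row]
print(len(features))  # 784
```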
Complete the Getting Started with Android™ Devices example.
1. Open the Android Digit Classification model.
2. In the Modeling tab, select Model Settings to open the Configuration Parameters dialog box.
3. In the Configuration Parameters dialog box, select Hardware Implementation. Verify that the Hardware board parameter is set to
4. Go to Hardware board settings > Target hardware resources > Groups and select Device options.
5. From the Device list, select your Android device. If your device is not listed, click Refresh.
Note: If your device is not listed even after clicking Refresh, ensure that you have enabled the USB debugging option on your device. To enable USB debugging, enter
androidhwsetup in the MATLAB® Command Window and follow the onscreen instructions.
1. On the Hardware tab, click the Build, Deploy, & Start button. This action builds, downloads, and runs the model as a standalone application on the Android device. The application continues to run even if the device is disconnected from the computer.
2. The application opens the device camera. You will see a region of interest (ROI) marked as a red box inside the camera frame. Only the image inside the ROI is used for prediction.
3. Draw a digit on a white board.
4. Capture the digit in the camera frame of your device. Ensure that the digit is enclosed inside the ROI. When you capture the digit, the algorithm processes the image as follows.
a. The Camera block accepts the digit captured using the camera of your Android device. The captured image is 640-by-480 pixels. The image is passed to the Concatenate block to perform multidimensional concatenation of the R, G, and B color planes. The Draw Region of Interest and Digit Predictor subsystems accept the image and ROI as inputs.
b. The Draw Region of Interest subsystem draws the ROI starting from (120,240) to (200,240) pixels. To draw the ROI, the image is converted to the single data type and then converted back to RGB.
c. In the Digit Predictor subsystem, the RGB2bin block converts the image into its binary equivalent and then extracts the ROI from the input image. The block complements the image and resizes it to 28-by-28 pixels. The 28-by-28 image is then passed to the Extract Image Features block to extract the Histogram of Oriented Gradients (HOG) features. The extracted features are passed to the Predict Digit block. The block loads the compact trained model,
originalMNIST.mat, to predict the digit from the extracted features. For information on how originalMNIST.mat is trained, see Digit Classification Using HOG Features on MNIST Database. The predicted output is then passed to the Data Display, Predicted Digit, and Confidence(0-1) blocks to display the predicted digit along with the prediction probability.
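The preprocessing chain in steps a–c can be sketched in plain Python. This is a hedged illustration, not the generated Simulink code: the threshold value, the nearest-neighbour resizing, and the single whole-image orientation histogram (real HOG aggregates histograms over cells and blocks) are simplifying assumptions introduced here for illustration only.

```python
import math

def binarize_and_complement(gray, threshold=128):
    # RGB2bin-style step (assumed threshold): binarize, then complement
    # so the digit stroke becomes 1 on a 0 background.
    return [[0 if px >= threshold else 1 for px in row] for row in gray]

def resize_nearest(img, out_h=28, out_w=28):
    # Resize to 28-by-28 with nearest-neighbour sampling (an assumption;
    # the block may use a different interpolation).
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)] for r in range(out_h)]

def hog_like_features(img, bins=9):
    # Simplified stand-in for HOG: one orientation histogram of gradient
    # angles over the whole image, weighted by gradient magnitude.
    h, w = len(img), len(img[0])
    hist = [0.0] * bins
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = img[r][c + 1] - img[r][c - 1]
            gy = img[r + 1][c] - img[r - 1][c]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            ang = math.degrees(math.atan2(gy, gx)) % 180  # unsigned angle
            hist[min(int(ang / 180 * bins), bins - 1)] += mag
    return hist

# Toy ROI: a bright background with a dark vertical stroke (a crude "1").
roi = [[255] * 80 for _ in range(80)]
for r in range(10, 70):
    for c in range(38, 43):
        roi[r][c] = 0

binary = binarize_and_complement(roi)
small = resize_nearest(binary)
features = hog_like_features(small)
print(len(small), len(small[0]), len(features))  # 28 28 9
```

The feature vector produced by the real Extract Image Features block is longer (one histogram per cell, concatenated across blocks); only the overall flow of binarize, complement, resize, and orientation-histogram extraction is shown here.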