Main Content

Digit Classification Using HOG Features on MNIST Database

This example shows how to classify digits using HOG features and a multiclass SVM classifier.

Object classification is an important task in many computer vision applications, including surveillance, automotive safety, and image retrieval. For example, in an automotive safety application, you may need to classify nearby objects as pedestrians or vehicles. Regardless of the type of object being classified, the basic procedure for creating an object classifier is:

  • Acquire a labeled data set with images of the desired object.

  • Partition the data set into a training set and a test set.

  • Train the classifier using features extracted from the training set.

  • Test the classifier using features extracted from the test set.

To illustrate, this example shows how to classify numerical digits using HOG (Histogram of Oriented Gradient) features [1] and a multiclass SVM (Support Vector Machine) classifier. This type of classification is often used in many Optical Character Recognition (OCR) applications.

The example uses the fitcecoc function from the Statistics and Machine Learning Toolbox™ and the extractHOGFeatures function from the Computer Vision Toolbox™.

Digit Data Set

The classifier is trained using the Modified National Institute of Standards and Technology (MNIST) database [2], a dataset commonly used to benchmark machine learning models. MNIST comprises 60,000 training and 10,000 test grayscale images of handwritten digits, each 28-by-28 pixels. For testing, scans of handwritten digits shipped with the Computer Vision Toolbox are used to validate how well the classifier performs on data that differs from the training data. Although this is not the most representative test set, there is enough data to train and test a classifier and to show the feasibility of the approach.

Get MNIST Images for Training

Download the set of MNIST training images and labels, then execute these commands at the MATLAB command prompt.

% Create a folder named synthetic in your current MATLAB directory and extract the images to the folder.
mkdir synthetic;
% Load training data using |imageDatastore|.
syntheticDir   = fullfile(pwd,'synthetic');
% |imageDatastore| recursively scans the directory tree containing the
% images. Folder names are automatically used as labels for each image.
trainingSet = imageDatastore(syntheticDir,   'IncludeSubfolders', true, 'LabelSource', 'foldernames');

Get Handwritten Images for Testing

% Create a folder named handwritten in your current MATLAB directory and copy the test images from vision toolbox to the folder.
mkdir handwritten;
copyfile(fullfile(toolboxdir('vision'),'visiondata','digits','handwritten'), fullfile(pwd,'handwritten'));
% Make the handwritten image files writable
for i = 0:9
    for j = 1:12
        fileattrib(fullfile(pwd,'handwritten',int2str(i), ...
            strcat('digit_',int2str(i),'_',int2str(j),'.png')),'+w');
    end
end

handwrittenDir = fullfile(pwd,'handwritten');

% |imageDatastore| recursively scans the directory tree containing the
% images. Folder names are automatically used as labels for each image.
testSet     = imageDatastore(handwrittenDir, 'IncludeSubfolders', true, 'LabelSource', 'foldernames');

Use countEachLabel to tabulate the number of images associated with each label. In this example, the training set consists of approximately 6,000 images for each of the 10 digits (60,000 in total). The test set consists of 12 images per digit.
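The label counts shown below are returned by calling countEachLabel on each datastore:

countEachLabel(trainingSet)
countEachLabel(testSet)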

ans=10×2 table
    Label    Count
    _____    _____

      0      5923 
      1      6742 
      2      5958 
      3      6131 
      4      5842 
      5      5421 
      6      5918 
      7      6265 
      8      5851 
      9      5949 

ans=10×2 table
    Label    Count
    _____    _____

      0       12  
      1       12  
      2       12  
      3       12  
      4       12  
      5       12  
      6       12  
      7       12  
      8       12  
      9       12  

Show a few of the training and test images
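One way to display a sample from each set is shown below; the image indices here are arbitrary picks for illustration, not taken from the original example.

% Display a few training images (indices chosen arbitrarily)
figure;
subplot(2,3,1); imshow(readimage(trainingSet, 102));
subplot(2,3,2); imshow(readimage(trainingSet, 5023));
subplot(2,3,3); imshow(readimage(trainingSet, 7800));
% Display a few test images
subplot(2,3,4); imshow(readimage(testSet, 13));
subplot(2,3,5); imshow(readimage(testSet, 37));
subplot(2,3,6); imshow(readimage(testSet, 78));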

Using HOG Features

The data used to train the classifier are HOG feature vectors extracted from the training images. Therefore, it is important to make sure the HOG feature vector encodes the right amount of information about the object. The extractHOGFeatures function returns a visualization output that can help form some intuition about just what the "right amount of information" means. By varying the HOG cell size parameter and visualizing the result, you can see the effect the cell size parameter has on the amount of shape information encoded in the feature vector:

img = readimage(trainingSet, 206);

% Extract HOG features and HOG visualization
[hog_2x2, vis2x2] = extractHOGFeatures(img,'CellSize',[2 2]);
[hog_4x4, vis4x4] = extractHOGFeatures(img,'CellSize',[4 4]);
[hog_8x8, vis8x8] = extractHOGFeatures(img,'CellSize',[8 8]);

% Show the original image
subplot(2,3,1:3); imshow(img);

% Visualize the HOG features
subplot(2,3,4);
plot(vis2x2);
title({'CellSize = [2 2]'; ['Length = ' num2str(length(hog_2x2))]});

subplot(2,3,5);
plot(vis4x4);
title({'CellSize = [4 4]'; ['Length = ' num2str(length(hog_4x4))]});

subplot(2,3,6);
plot(vis8x8);
title({'CellSize = [8 8]'; ['Length = ' num2str(length(hog_8x8))]});

The visualization shows that a cell size of [8 8] does not encode much shape information, while a cell size of [2 2] encodes a lot of shape information but increases the dimensionality of the HOG feature vector significantly. A good compromise is a 4-by-4 cell size. This size setting encodes enough spatial information to visually identify a digit shape while limiting the number of dimensions in the HOG feature vector, which helps speed up training. In practice, the HOG parameters should be varied with repeated classifier training and testing to identify the optimal parameter settings.

cellSize = [4 4];
hogFeatureSize = length(hog_4x4);

Train a Digit Classifier

Digit classification is a multiclass classification problem, where you have to classify an image into one out of the ten possible digit classes. In this example, the fitcecoc function from the Statistics and Machine Learning Toolbox™ is used to create a multiclass classifier using binary SVMs.

Start by extracting HOG features from the training set. These features will be used to train the classifier.

% Loop over the trainingSet and extract HOG features from each image. A
% similar procedure will be used to extract features from the testSet.

numImages = numel(trainingSet.Files);
trainingFeatures = zeros(numImages, hogFeatureSize, 'single');

for i = 1:numImages
    img = readimage(trainingSet, i);
    % Apply pre-processing steps
    img = imbinarize(img);
    trainingFeatures(i, :) = extractHOGFeatures(img, 'CellSize', cellSize);
end

% Get labels for each image.
trainingLabels = trainingSet.Labels;

Next, train a classifier using the extracted features. The output of the trained classifier is stored as a compact trained model in the originalMNIST.mat file.

% fitcecoc uses SVM learners and a 'One-vs-One' encoding scheme.
%classifier = fitcecoc(trainingFeatures, trainingLabels);
trainingLabelsGrps = grp2idx(trainingLabels);   % group indices 1 through 10
trainingLabelsGrps = trainingLabelsGrps - 1;    % actual digit values 0 through 9

classifier_to_deploy = fitcecoc(trainingFeatures, trainingLabelsGrps);
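The text above states that the compact trained model is stored in the originalMNIST.mat file; one way to do this is with saveLearnerForCoder from the Statistics and Machine Learning Toolbox, which saves a compact version of the model suitable for code generation (the file name is taken from the text above):

% Save the classifier as a compact model in originalMNIST.mat
saveLearnerForCoder(classifier_to_deploy, 'originalMNIST');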



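To complete the last step of the procedure (testing on the held-out set), HOG features can be extracted from the test images in the same way as the training loop above and passed to predict. A minimal sketch follows; it assumes the handwritten test images are resized to 28-by-28 pixels so their HOG feature vectors match the length of the training features.

% Extract HOG features from the test set, mirroring the training loop
numTestImages = numel(testSet.Files);
testFeatures  = zeros(numTestImages, hogFeatureSize, 'single');
for i = 1:numTestImages
    img = readimage(testSet, i);
    % Resize to the 28-by-28 training image size (assumption), then binarize
    img = imbinarize(imresize(img, [28 28]));
    testFeatures(i, :) = extractHOGFeatures(img, 'CellSize', cellSize);
end

% Predict digit values and compare against the true labels
testLabels      = grp2idx(testSet.Labels) - 1;   % digit values 0 through 9
predictedLabels = predict(classifier_to_deploy, testFeatures);
accuracy        = mean(predictedLabels == testLabels)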
This example illustrated the basic procedure for creating a multiclass object classifier using the extractHOGFeatures function from the Computer Vision Toolbox™ and the fitcecoc function from the Statistics and Machine Learning Toolbox™. Although HOG features and an ECOC classifier were used here, other features and machine learning algorithms can be used in the same way. For instance, you can explore using different feature types for training the classifier, or you can see the effect of using other machine learning algorithms available in the Statistics and Machine Learning Toolbox™, such as k-nearest neighbors.
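As a sketch of that k-nearest-neighbors variant, fitcknn from the Statistics and Machine Learning Toolbox can be trained on the same HOG feature matrix; the choice of five neighbors here is an arbitrary starting point, not a tuned value.

% Train a k-nearest-neighbor classifier on the same HOG features
knnClassifier = fitcknn(trainingFeatures, trainingLabelsGrps, 'NumNeighbors', 5);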


[1] N. Dalal and B. Triggs, "Histograms of Oriented Gradients for Human Detection", Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 1, pp. 886-893, 2005.

[2] LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86, 2278-2324.

[3] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, A.Y. Ng, Reading Digits in Natural Images with Unsupervised Feature Learning NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011.

Copyright 2020 The MathWorks, Inc.