classifyRegions

Classify objects in image regions using R-CNN object detector

Syntax

[labels,scores] = classifyRegions(detector,I,rois)
[labels,scores,allScores] = classifyRegions(detector,I,rois)
[___] = classifyRegions(___,Name,Value)

Description


[labels,scores] = classifyRegions(detector,I,rois) classifies objects within the regions of interest of image I, using an R-CNN (regions with convolutional neural networks) object detector. For each region, classifyRegions returns the class label with the corresponding highest classification score.

When using this function, use of a CUDA®-enabled NVIDIA® GPU with a compute capability of 3.0 or higher is highly recommended. The GPU significantly reduces computation time. Using a GPU requires Parallel Computing Toolbox™.

[labels,scores,allScores] = classifyRegions(detector,I,rois) also returns all the classification scores of each region. The scores are returned in an M-by-N matrix of M regions and N class labels.

[___] = classifyRegions(___,Name,Value) specifies options using one or more Name,Value pair arguments. For example, classifyRegions(detector,I,rois,'ExecutionEnvironment','cpu') classifies objects within image regions using only the CPU hardware.
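As a minimal sketch of the basic syntax, the call below classifies two candidate regions. The variable names rcnn and I, and the region coordinates, are placeholders for a trained detector and an image of your own:

```matlab
% Assumes rcnn is a trained rcnnObjectDetector and I is an image.
% Each row of rois has the form [x y width height], in pixels.
rois = [ 20  20 100 100;
        150  40  80 120];
[labels, scores] = classifyRegions(rcnn, I, rois);
% labels is a 2-by-1 categorical array; scores holds the highest
% classification score for each of the two regions.
```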

Examples


Load training data and network layers.

load('rcnnStopSigns.mat', 'stopSigns', 'layers')

Add the image directory to the MATLAB path.

imDir = fullfile(matlabroot, 'toolbox', 'vision', 'visiondata',...
  'stopSignImages');
addpath(imDir);

Set the network training options to use a mini-batch size of 32 to reduce GPU memory usage. Lower the InitialLearnRate to reduce the rate at which the network parameters change. This is beneficial when fine-tuning a pretrained network and prevents the network from changing too rapidly.

options = trainingOptions('sgdm', ...
  'MiniBatchSize', 32, ...
  'InitialLearnRate', 1e-6, ...
  'MaxEpochs', 10);

Train the R-CNN detector. Training can take a few minutes to complete.

rcnn = trainRCNNObjectDetector(stopSigns, layers, options, 'NegativeOverlapRange', [0 0.3]);
*******************************************************************
Training an R-CNN Object Detector for the following object classes:

* stopSign

Step 1 of 3: Extracting region proposals from 27 training images...done.

Step 2 of 3: Training a neural network to classify objects in training data...

|=========================================================================================|
|     Epoch    |   Iteration  | Time Elapsed |  Mini-batch  |  Mini-batch  | Base Learning|
|              |              |  (seconds)   |     Loss     |   Accuracy   |     Rate     |
|=========================================================================================|
|            3 |           50 |         9.27 |       0.2895 |       96.88% |     0.000001 |
|            5 |          100 |        14.77 |       0.2443 |       93.75% |     0.000001 |
|            8 |          150 |        20.29 |       0.0013 |      100.00% |     0.000001 |
|           10 |          200 |        25.94 |       0.1524 |       96.88% |     0.000001 |
|=========================================================================================|

Network training complete.

Step 3 of 3: Training bounding box regression models for each object class...100.00%...done.

R-CNN training complete.
*******************************************************************

Test the R-CNN detector on a test image.

img = imread('stopSignTest.jpg');

[bbox, score, label] = detect(rcnn, img, 'MiniBatchSize', 32);

Display the strongest detection result.

[score, idx] = max(score);

bbox = bbox(idx, :);
annotation = sprintf('%s: (Confidence = %f)', label(idx), score);

detectedImg = insertObjectAnnotation(img, 'rectangle', bbox, annotation);

figure
imshow(detectedImg)

Remove the image directory from the path.

rmpath(imDir);

Input Arguments


detector
R-CNN object detector, specified as an rcnnObjectDetector object. To create this object, call the trainRCNNObjectDetector function with training data as input.

I
Input image, specified as a real, nonsparse, grayscale or RGB image.

Data Types: uint8 | uint16 | int16 | double | single | logical

rois
Regions of interest within the image, specified as an M-by-4 matrix defining M rectangular regions. Each row contains a four-element vector of the form [x y width height]. This vector specifies the upper-left corner and size of a region in pixels.
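For illustration, a two-region ROI matrix could be built as follows; the coordinates are arbitrary placeholders:

```matlab
% Two rectangular regions, one per row, in [x y width height] form.
rois = [  50  60 120  90;   % region 1: upper-left at (50,60), 120-by-90 pixels
         200 150  64  64];  % region 2: upper-left at (200,150), 64-by-64 pixels
```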

Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: 'MiniBatchSize',64
Example: 'ExecutionEnvironment','cpu'

Size of the mini-batches used for R-CNN data processing, specified as the comma-separated pair consisting of 'MiniBatchSize' and an integer. Larger batch sizes lead to faster processing but take up more memory.

Hardware resource used to classify image regions, specified as the comma-separated pair consisting of 'ExecutionEnvironment' and 'auto', 'gpu', or 'cpu'.

  • 'auto' — Use a GPU if it is available. Otherwise, use the CPU.

  • 'gpu' — Use the GPU. To use a GPU, you must have Parallel Computing Toolbox and a CUDA-enabled NVIDIA GPU with a compute capability of 3.0 or higher. If a suitable GPU is not available, the function returns an error.

  • 'cpu' — Use the CPU.
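For example, to force CPU execution (useful when no suitable GPU is present), pass the name-value pair explicitly. The variables rcnn, I, and rois are placeholders for your own detector, image, and regions:

```matlab
% Classify regions on the CPU only.
[labels, scores] = classifyRegions(rcnn, I, rois, ...
    'ExecutionEnvironment', 'cpu');
```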

Output Arguments


labels
Classification labels of regions, returned as an M-by-1 categorical array. M is the number of regions of interest in rois. Each class name in labels corresponds to a classification score in scores and a region of interest in rois. classifyRegions obtains the class names from the input detector.

scores
Highest classification score per region, returned as an M-by-1 vector of values in the range [0, 1]. M is the number of regions of interest in rois. Each classification score in scores corresponds to a class name in labels and a region of interest in rois. A higher score indicates higher confidence in the classification.

allScores
All classification scores per region, returned as an M-by-N matrix of values in the range [0, 1]. M is the number of regions in rois. N is the number of class names stored in the input detector. Each row of classification scores in allScores corresponds to a region of interest in rois. A higher score indicates higher confidence in the classification.
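The labels and scores outputs correspond to the row-wise maximum of allScores, so you can recover them from the full score matrix yourself. A brief sketch, assuming rcnn, I, and rois are defined as in the earlier example:

```matlab
[labels, scores, allScores] = classifyRegions(rcnn, I, rois);
% Per-region top score and the column index of the winning class:
[topScore, classIdx] = max(allScores, [], 2);
% topScore matches scores; classIdx indexes into the class names
% stored in the detector, matching labels.
```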

Introduced in R2016b