Pretrained SqueezeNet convolutional neural network
SqueezeNet is a convolutional neural network that is trained on more than a million images from the ImageNet database. The network is 18 layers deep and can classify images into 1000 object categories, such as keyboard, mouse, pencil, and many animals. As a result, the network has learned rich feature representations for a wide range of images. This function returns a SqueezeNet v1.1 network, which has similar accuracy to SqueezeNet v1.0 but requires fewer floating-point operations per prediction. The network has an image input size of 227-by-227. For more pretrained networks in MATLAB®, see Pretrained Convolutional Neural Networks.
To retrain the network on a new classification task, follow the steps of Train Deep Learning Network to Classify New Images. Load the SqueezeNet network instead of GoogLeNet, and change the name of the classification layer that you replace to 'ClassificationLayer_predictions'. Because SqueezeNet does not have a fully connected layer at the end of the network, replace the 'conv10' layer, instead of a fully connected layer, with a new convolutional layer with numClasses filters and a filter size of 1.
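The layer replacement described above can be sketched as follows. This is a minimal sketch, not code from this page: the new layer names ('new_conv', 'new_classoutput') and the value of numClasses are assumptions for illustration; the layer names 'conv10' and 'ClassificationLayer_predictions' come from the SqueezeNet network itself.

```matlab
% Load the pretrained network and convert it to a layer graph for editing.
net = squeezenet;
lgraph = layerGraph(net);

% Number of classes in the new task (assumed value for illustration).
numClasses = 5;

% SqueezeNet has no final fully connected layer, so replace the 'conv10'
% convolutional layer with a new 1-by-1 convolutional layer that has
% numClasses filters.
newConvLayer = convolution2dLayer(1,numClasses,'Name','new_conv', ...
    'WeightLearnRateFactor',10,'BiasLearnRateFactor',10);
lgraph = replaceLayer(lgraph,'conv10',newConvLayer);

% Replace the classification output layer with a new one for the new classes.
newClassLayer = classificationLayer('Name','new_classoutput');
lgraph = replaceLayer(lgraph,'ClassificationLayer_predictions',newClassLayer);
```

Increasing the learning rate factors on the new layer makes it learn faster than the transferred layers during retraining, as in the Train Deep Learning Network to Classify New Images example.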
net = squeezenet
Download and install the Deep Learning Toolbox Model for SqueezeNet Network support package.

Type squeezenet at the command line. If the Deep Learning Toolbox Model for SqueezeNet Network support package is not installed, then the function provides a link to the required support package in the Add-On Explorer. To install the support package, click the link, and then click Install. Check that the installation is successful by typing squeezenet at the command line. If the required support package is installed, then the function returns a DAGNetwork object.

net = squeezenet

net =
  DAGNetwork with properties:

         Layers: [68×1 nnet.cnn.layer.Layer]
    Connections: [75×2 table]
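Once the support package is installed, the returned network can be used directly for classification. The following is a hedged sketch, not code from this page: 'peppers.png' is a sample image shipped with MATLAB and stands in for any RGB image; the resize step uses the 227-by-227 input size stated above.

```matlab
% Load the pretrained SqueezeNet network (requires the support package).
net = squeezenet;

% Read a sample image and resize it to the network's 227-by-227 input size.
I = imread('peppers.png');
I = imresize(I,net.Layers(1).InputSize(1:2));

% Classify the image into one of the 1000 ImageNet categories and
% display the predicted label.
label = classify(net,I);
imshow(I)
title(string(label))
```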
References

[1] ImageNet. http://www.image-net.org

[2] Iandola, Forrest N., Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, and Kurt Keutzer. "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size." arXiv preprint arXiv:1602.07360 (2016).