# unitGenerator

Create unsupervised image-to-image translation (UNIT) generator network

## Syntax

```
net = unitGenerator(inputSizeSource)
net = unitGenerator(inputSizeSource,Name,Value)
```

## Description


`net = unitGenerator(inputSizeSource)` creates a UNIT generator network, `net`, for input of size `inputSizeSource`. For more information about the network architecture, see UNIT Generator Network.

The network has two inputs and four outputs. The two network inputs are images in the source and target domains. By default, the target image size is the same as the source image size; you can change the number of channels in the target image by specifying the `NumTargetInputChannels` name-value argument. Two of the network outputs are self-reconstructed images, in other words, source-to-source and target-to-target translated images. The other two network outputs are the source-to-target and target-to-source translated images.

This function requires Deep Learning Toolbox™.


`net = unitGenerator(inputSizeSource,Name,Value)` modifies aspects of the UNIT generator network using one or more name-value arguments.

## Examples


### Create UNIT Generator Network

Specify the network input size for RGB images of size 128-by-128.

`inputSize = [128 128 3];`

Create a UNIT generator that generates RGB images of the input size.

`net = unitGenerator(inputSize)`
```
net = 
  dlnetwork with properties:

         Layers: [9x1 nnet.cnn.layer.Layer]
    Connections: [8x2 table]
     Learnables: [168x3 table]
          State: [0x3 table]
     InputNames: {'inputSource'  'inputTarget'}
    OutputNames: {1x4 cell}
    Initialized: 1

  View summary with summary.
```

Display the network.

`analyzeNetwork(net)`

### Create UNIT Generator Network with Shared Residual Blocks

Specify the network input size for RGB images of size 128-by-128.

`inputSize = [128 128 3];`

Create a UNIT generator with five residual blocks, three of which are shared between the encoder and decoder modules.

```
net = unitGenerator(inputSize,"NumResidualBlocks",5, ...
    "NumSharedBlocks",3)
```
```
net = 
  dlnetwork with properties:

         Layers: [9x1 nnet.cnn.layer.Layer]
    Connections: [8x2 table]
     Learnables: [152x3 table]
          State: [0x3 table]
     InputNames: {'inputSource'  'inputTarget'}
    OutputNames: {1x4 cell}
    Initialized: 1

  View summary with summary.
```

Display the network.

`analyzeNetwork(net)`

## Input Arguments


### `inputSizeSource`

Input size of the source image, specified as a 3-element vector of positive integers of the form [H W C], where H is the height, W is the width, and C is the number of channels. The height and width must each be evenly divisible by 2^`NumDownsamplingBlocks`.
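For example, a network with two downsampling blocks (an assumed value, used here only for illustration) requires height and width divisible by 4. A quick compatibility check might look like this:

```matlab
% Sketch: verify that the spatial dimensions of inputSizeSource are
% compatible with the total downsampling factor. The value of
% numDownsamplingBlocks is an assumption for this illustration.
inputSizeSource = [128 128 3];
numDownsamplingBlocks = 2;
factor = 2^numDownsamplingBlocks;   % total downsampling factor
assert(all(mod(inputSizeSource(1:2),factor) == 0), ...
    "Height and width must be divisible by %d.",factor)
```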

### Name-Value Arguments

Specify optional pairs of arguments as `Name1=Value1,...,NameN=ValueN`, where `Name` is the argument name and `Value` is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose `Name` in quotes.

Example: `'NumDownsamplingBlocks',3` creates a network with 3 downsampling blocks

### `NumDownsamplingBlocks`

Number of downsampling blocks in the source encoder and target encoder subnetworks, specified as a positive integer. In total, the encoder module downsamples the source and target input by a factor of 2^`NumDownsamplingBlocks`. The source decoder and target decoder subnetworks have the same number of upsampling blocks.

### `NumResidualBlocks`

Number of residual blocks in the encoder module, specified as a positive integer. The decoder module has the same number of residual blocks.

### `NumSharedBlocks`

Number of residual blocks in the shared encoder subnetwork, specified as a positive integer. The shared decoder subnetwork has the same number of residual blocks. The network must contain at least one shared residual block.

### `NumTargetInputChannels`

Number of channels in the target image, specified as a positive integer. By default, `NumTargetInputChannels` is the same as the number of channels in the source image, given by `inputSizeSource`.
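For example, to translate between RGB source images and single-channel target images, you might create the generator as follows (a sketch; requires Deep Learning Toolbox):

```matlab
% Sketch: RGB source domain, grayscale (single-channel) target domain
inputSizeSource = [128 128 3];
net = unitGenerator(inputSizeSource,"NumTargetInputChannels",1);
```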

### `NumFiltersInFirstBlock`

Number of filters in the first convolution layer, specified as a positive even integer.

### `FilterSizeInFirstAndLastBlocks`

Filter size in the first and last convolution layers of the network, specified as a positive odd integer or a 2-element vector of positive odd integers of the form [height width]. When you specify the filter size as a scalar, the filter has equal height and width.

### `FilterSizeInIntermediateBlocks`

Filter size in intermediate convolution layers, specified as a positive odd integer or a 2-element vector of positive odd integers of the form [height width]. The intermediate convolution layers are all convolution layers except the first and last. When you specify the filter size as a scalar, the filter has equal height and width.

### `PaddingValue`

Style of padding used in the network, specified as one of these values.

| Value | Description | Example |
| --- | --- | --- |
| Numeric scalar | Pad with the specified numeric value | $\left[\begin{array}{ccc}3& 1& 4\\ 1& 5& 9\\ 2& 6& 5\end{array}\right]\to \left[\begin{array}{ccccccc}2& 2& 2& 2& 2& 2& 2\\ 2& 2& 2& 2& 2& 2& 2\\ 2& 2& 3& 1& 4& 2& 2\\ 2& 2& 1& 5& 9& 2& 2\\ 2& 2& 2& 6& 5& 2& 2\\ 2& 2& 2& 2& 2& 2& 2\\ 2& 2& 2& 2& 2& 2& 2\end{array}\right]$ |
| `'symmetric-include-edge'` | Pad using mirrored values of the input, including the edge values | $\left[\begin{array}{ccc}3& 1& 4\\ 1& 5& 9\\ 2& 6& 5\end{array}\right]\to \left[\begin{array}{ccccccc}5& 1& 1& 5& 9& 9& 5\\ 1& 3& 3& 1& 4& 4& 1\\ 1& 3& 3& 1& 4& 4& 1\\ 5& 1& 1& 5& 9& 9& 5\\ 6& 2& 2& 6& 5& 5& 6\\ 6& 2& 2& 6& 5& 5& 6\\ 5& 1& 1& 5& 9& 9& 5\end{array}\right]$ |
| `'symmetric-exclude-edge'` | Pad using mirrored values of the input, excluding the edge values | $\left[\begin{array}{ccc}3& 1& 4\\ 1& 5& 9\\ 2& 6& 5\end{array}\right]\to \left[\begin{array}{ccccccc}5& 6& 2& 6& 5& 6& 2\\ 9& 5& 1& 5& 9& 5& 1\\ 4& 1& 3& 1& 4& 1& 3\\ 9& 5& 1& 5& 9& 5& 1\\ 5& 6& 2& 6& 5& 6& 2\\ 9& 5& 1& 5& 9& 5& 1\\ 4& 1& 3& 1& 4& 1& 3\end{array}\right]$ |
| `'replicate'` | Pad using repeated border elements of the input | $\left[\begin{array}{ccc}3& 1& 4\\ 1& 5& 9\\ 2& 6& 5\end{array}\right]\to \left[\begin{array}{ccccccc}3& 3& 3& 1& 4& 4& 4\\ 3& 3& 3& 1& 4& 4& 4\\ 3& 3& 3& 1& 4& 4& 4\\ 1& 1& 1& 5& 9& 9& 9\\ 2& 2& 2& 6& 5& 5& 5\\ 2& 2& 2& 6& 5& 5& 5\\ 2& 2& 2& 6& 5& 5& 5\end{array}\right]$ |
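Most of these padding styles behave like the corresponding options of the Image Processing Toolbox function `padarray`, which you can use to preview the effect on a small matrix (note that `padarray` has no direct equivalent of `'symmetric-exclude-edge'`):

```matlab
% Preview padding behavior on the 3-by-3 matrix used in the table above
A = [3 1 4; 1 5 9; 2 6 5];
padarray(A,[2 2],2)            % pad with the numeric scalar 2
padarray(A,[2 2],'symmetric')  % mirror, including edge values
padarray(A,[2 2],'replicate')  % repeat border elements
```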

### `UpsampleMethod`

Method used to upsample activations, specified as one of these values:

Data Types: `char` | `string`

### `WeightInitializer`

Weight initialization used in convolution layers, specified as `"glorot"`, `"he"`, `"narrow-normal"`, or a function handle. For more information, see Specify Custom Weight Initialization Function (Deep Learning Toolbox).

### `ActivationLayer`

Activation function to use in the network, except after the first and final convolution layers, specified as one of these values. The `unitGenerator` function automatically adds a leaky ReLU layer after the first convolution layer. For more information and a list of available layers, see Activation Layers (Deep Learning Toolbox).

### `SourceFinalActivationLayer`

Activation function after the final convolution layer in the source decoder, specified as one of these values. For more information and a list of available layers, see Output Layers (Deep Learning Toolbox).

### `TargetFinalActivationLayer`

Activation function after the final convolution layer in the target decoder, specified as one of these values. For more information and a list of available layers, see Output Layers (Deep Learning Toolbox).

## Output Arguments


### `net`

UNIT generator network, returned as a `dlnetwork` (Deep Learning Toolbox) object.

## More About

### UNIT Generator Network

A UNIT generator network consists of three subnetworks in an encoder module followed by three subnetworks in a decoder module. The default network follows the architecture proposed by Liu, Breuel, and Kautz [1].

The encoder module downsamples the input by a factor of 2^`NumDownsamplingBlocks`. The encoder module consists of three subnetworks.

• The source encoder subnetwork, called 'encoderSourceBlock', has an initial block of layers that accepts data in the source domain, XS. The subnetwork then has `NumDownsamplingBlocks` downsampling blocks that downsample the data, and `NumResidualBlocks` − `NumSharedBlocks` residual blocks.

• The target encoder subnetwork, called 'encoderTargetBlock', has an initial block of layers that accepts data in the target domain, XT. The subnetwork then has `NumDownsamplingBlocks` downsampling blocks that downsample the data, and `NumResidualBlocks` − `NumSharedBlocks` residual blocks.

• The outputs of the source encoder and target encoder are combined by a `concatenationLayer` (Deep Learning Toolbox).

• The shared residual encoder subnetwork, called 'encoderSharedBlock', accepts the concatenated data and has `NumSharedBlocks` residual blocks.

The decoder module consists of three subnetworks that perform a total of `NumDownsamplingBlocks` upsampling operations on the data.

• The shared residual decoder subnetwork, called 'decoderSharedBlock', accepts data from the encoder and has `NumSharedBlocks` residual blocks.

• The source decoder subnetwork, called 'decoderSourceBlock', has `NumResidualBlocks` − `NumSharedBlocks` residual blocks, `NumDownsamplingBlocks` upsampling blocks that upsample the data, and a final block of layers that returns the output. This subnetwork returns two outputs in the source domain: XTS and XSS. The output XTS is an image translated from the target domain to the source domain. The output XSS is a self-reconstructed image from the source domain to the source domain.

• The target decoder subnetwork, called 'decoderTargetBlock', has `NumResidualBlocks` − `NumSharedBlocks` residual blocks, `NumDownsamplingBlocks` upsampling blocks that upsample the data, and a final block of layers that returns the output. This subnetwork returns two outputs in the target domain: XST and XTT. The output XST is an image translated from the source domain to the target domain. The output XTT is a self-reconstructed image from the target domain to the target domain.
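Putting the modules together, one forward pass takes an image from each domain and returns all four translations. This sketch avoids assuming a particular output order by sizing the output cell array from `net.OutputNames` (requires Deep Learning Toolbox):

```matlab
% Sketch: run one source image and one target image through the generator
net = unitGenerator([128 128 3]);
XS = dlarray(rand([128 128 3 1],'single'),'SSCB');  % source-domain image
XT = dlarray(rand([128 128 3 1],'single'),'SSCB');  % target-domain image
out = cell(1,numel(net.OutputNames));
[out{:}] = forward(net,XS,XT);  % XSS, XST, XTS, XTT, in the order given by net.OutputNames
```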

The subnetworks are built from five block types: an initial block, downsampling blocks, residual blocks, upsampling blocks, and a final block.

An upsampling block consists of:

• An upsampling layer that upsamples by a factor of 2 according to the `UpsampleMethod` name-value argument. The convolution layer has a filter size of `FilterSizeInIntermediateBlocks`.

• An `instanceNormalizationLayer` (Deep Learning Toolbox).

• An activation layer specified by the `ActivationLayer` name-value argument.

A final block consists of:

• A `convolution2dLayer` (Deep Learning Toolbox) with a stride of [1 1] and a filter size of `FilterSizeInFirstAndLastBlocks`.

• An optional activation layer specified by the `SourceFinalActivationLayer` and `TargetFinalActivationLayer` name-value arguments.

## References

[1] Liu, Ming-Yu, Thomas Breuel, and Jan Kautz. "Unsupervised Image-to-Image Translation Networks." Advances in Neural Information Processing Systems 30 (NIPS 2017). Long Beach, CA: 2017. https://arxiv.org/abs/1703.00848.

## Version History

Introduced in R2021a