conv – Ops for convolutional neural nets
Note
Two similar implementations exist for conv2d: signal.conv2d and nnet.conv2d. The former implements a traditional 2D convolution, while the latter implements the convolutional layers present in convolutional neural networks (where filters are 3D and pool over several input channels).
Note
As of December 2015, a new conv2d interface has been introduced. nnet.conv2d defines an abstract Theano graph convolution operation (nnet.abstract_conv.AbstractConv2d) that will be replaced by an actual convolution implementation during the optimization phase.
Since the abstract Op does not have any implementation, it will prevent computations in the unoptimized graph, and cause problems with DebugMode, test values, and when compiling with optimizer=None.
By default, if cuDNN is available, we will use it; otherwise we will fall back to the gemm version (slower than cuDNN in most cases, and using more memory).
Both cuDNN and the gemm version can be disabled using the Theano flags optimizer_excluding=conv_dnn and optimizer_excluding=conv_gemm, respectively. In that case, we will fall back to the legacy convolution code, which is slower but does not require extra memory.
To verify that cuDNN is used, you can supply the Theano flag optimizer_including=cudnn. This will raise an error if cuDNN is unavailable.
It is not advised to ever disable cuDNN, as it is usually the fastest option. Disabling the gemm version is only useful if cuDNN is unavailable and you run out of GPU memory.
There are two other implementations: an FFT-based convolution integrated into Theano, and an implementation by Alex Krizhevsky available via Pylearn2. See the documentation below on how to use them.
The old conv2d interface is still accessible through nnet.conv.conv2d.
TODO: Give examples on how to use these things! They are pretty complicated.
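Until fuller examples are written, the key semantic point made repeatedly below (some implementations flip the kernel, others do not) can be illustrated with a tiny pure-Python 2D 'valid' convolution. This is an independent sketch of what the flip means, not Theano code; the function name is made up for illustration:

```python
def valid_conv2d(image, kernel, flip=True):
    """2D 'valid' convolution of one single-channel image with one kernel.

    flip=True  -> true convolution (the conv2d default behaviour)
    flip=False -> cross-correlation (what the correlation-based ops compute)
    """
    if flip:  # flip the kernel in both dimensions
        kernel = [row[::-1] for row in kernel[::-1]]
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    return [
        [
            sum(image[i + u][j + v] * kernel[u][v]
                for u in range(kh) for v in range(kw))
            for j in range(ow)
        ]
        for i in range(oh)
    ]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
k = [[1, 0],
     [0, 2]]
print(valid_conv2d(img, k, flip=True))   # convolution
print(valid_conv2d(img, k, flip=False))  # cross-correlation: differs for asymmetric kernels
```

For a symmetric kernel, the two results coincide, which is why the distinction is often glossed over in practice.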
 Implemented operators for neural network 2D / image convolution:
nnet.conv.conv2d
CPU convolution implementation, previously used as the convolution interface. This is the standard operator for convolutional neural networks working with batches of multi-channel 2D images. It computes a convolution, i.e., it flips the kernel. Most of the more efficient GPU implementations listed below can be inserted automatically as a replacement for nnet.conv.conv2d via graph optimizations. Some of these graph optimizations are enabled by default, others can be enabled via Theano flags. Since November 24th, 2014, you can also use a meta-optimizer to automatically choose the fastest implementation for each specific convolution in your graph using the old interface. For each instance, it will compile and benchmark each applicable implementation of the ones listed below and choose the fastest one. As performance is dependent on input and filter shapes, this only works for operations introduced via nnet.conv.conv2d with fully specified shape information. Enable it via the Theano flag optimizer_including=conv_meta, and optionally set it to verbose mode via the flag metaopt.verbose=1.
conv2d_fft
This is a GPU-only version of nnet.conv2d that uses an FFT transform to perform the work. It flips the kernel just like conv2d. conv2d_fft should not be used directly as it does not provide a gradient. Instead, use nnet.conv2d and allow Theano's graph optimizer to replace it by the FFT version by setting THEANO_FLAGS=optimizer_including=conv_fft in your environment. If enabled, it will take precedence over cuDNN and the gemm version. It is not enabled by default because it has some restrictions on input and uses a lot more memory. Also note that it requires CUDA >= 5.0, scikits.cuda >= 0.5.0 and PyCUDA to run. To deactivate the FFT optimization on a specific nnet.conv2d while the optimization flag is active, you can set its version parameter to 'no_fft'. To enable it for just one Theano function:

mode = theano.compile.get_default_mode()
mode = mode.including('conv_fft')
f = theano.function(..., mode=mode)
cuda-convnet wrapper for 2d correlation
Wrapper for an open-source GPU-only implementation of conv2d by Alex Krizhevsky, very fast, but with several restrictions on input and kernel shapes, and with a different memory layout for the input. It does not flip the kernel.
This is in Pylearn2, where it is normally called from the linear transform implementation, but it can also be used directly from within Theano as a manual replacement for nnet.conv2d.
GpuCorrMM
This is a GPU-only 2d correlation implementation taken from caffe's CUDA implementation and also used by Torch. It does not flip the kernel. For each element in a batch, it first creates a Toeplitz matrix in a CUDA kernel. Then, it performs a gemm call to multiply this Toeplitz matrix and the filters (hence the name: MM is for matrix multiplication). It needs extra memory for the Toeplitz matrix, which is a 2D matrix of shape (no of channels * filter width * filter height, output width * output height). As it provides a gradient, you can use it as a replacement for nnet.conv2d. But usually, you will just use nnet.conv2d and allow Theano's graph optimizer to automatically replace it by the GEMM version if cuDNN is not available. To explicitly disable the graph optimizer, set THEANO_FLAGS=optimizer_excluding=conv_gemm in your environment. If using it, please see the warning about a bug in CUDA 5.0 to 6.0 below.
CorrMM
This is a CPU-only 2d correlation implementation taken from caffe's cpp implementation and also used by Torch. It does not flip the kernel. As it provides a gradient, you can use it as a replacement for nnet.conv2d. For convolutions done on CPU, nnet.conv2d will be replaced by CorrMM. To explicitly disable it, set THEANO_FLAGS=optimizer_excluding=conv_gemm in your environment.
dnn_conv
GPU-only convolution using NVIDIA's cuDNN library. This requires that you have cuDNN installed and available, which in turn requires CUDA 6.5 and a GPU with compute capability 3.0 or higher. If cuDNN is available, by default, Theano will replace all nnet.conv2d operations with dnn_conv. To explicitly disable it, set THEANO_FLAGS=optimizer_excluding=conv_dnn in your environment. As dnn_conv has a gradient defined, you can also use it manually.
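As a usage reminder, the optimizer flags described above are ordinary Theano flags: several exclusions can be combined with a colon, and they are typically set in the environment before launching a script (the script name below is hypothetical):

```shell
# Disable both the cuDNN and the gemm convolution optimizations,
# falling back to the legacy convolution code:
THEANO_FLAGS=optimizer_excluding=conv_dnn:conv_gemm python train.py

# Fail loudly if cuDNN cannot be used:
THEANO_FLAGS=optimizer_including=cudnn python train.py
```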
 Implemented operators for neural network 3D / video convolution:
conv3D
3D convolution applying multi-channel 3D filters to batches of multi-channel 3D images. It does not flip the kernel.
conv3d_fft
GPU-only version of conv3D using FFT transform. conv3d_fft should not be called directly as it does not provide a gradient. Instead, use conv3D and allow Theano's graph optimizer to replace it by the FFT version by setting THEANO_FLAGS=optimizer_including=conv3d_fft:convgrad3d_fft:convtransp3d_fft in your environment. This is not enabled by default because it does not support strides and uses more memory. Also note that it requires CUDA >= 5.0, scikits.cuda >= 0.5.0 and PyCUDA to run. To enable it for just one Theano function:

mode = theano.compile.get_default_mode()
mode = mode.including('conv3d_fft', 'convgrad3d_fft', 'convtransp3d_fft')
f = theano.function(..., mode=mode)
GpuCorr3dMM
This is a GPU-only 3d correlation relying on a Toeplitz matrix and gemm implementation (see GpuCorrMM). It needs extra memory for the Toeplitz matrix, which is a 2D matrix of shape (no of channels * filter width * filter height * filter depth, output width * output height * output depth). As it provides a gradient, you can use it as a replacement for nnet.conv3d. Alternatively, you can use nnet.conv3d and allow Theano's graph optimizer to replace it by the GEMM version by setting THEANO_FLAGS=optimizer_including=conv3d_gemm:convgrad3d_gemm:convtransp3d_gemm in your environment. This is not enabled by default because it uses some extra memory, but the overhead is small compared to conv3d_fft; there are no restrictions on input or kernel shapes, and strides are supported. If using it, please see the warning about a bug in CUDA 5.0 to 6.0 in GpuCorrMM.
conv3d2d
Another conv3d implementation that uses conv2d with data reshaping. It is faster in some cases than conv3D, and works on the GPU. It flips the kernel.

theano.tensor.nnet.conv2d(input, filters, input_shape=None, filter_shape=None, border_mode='valid', subsample=(1, 1), filter_flip=True, image_shape=None, **kwargs)[source]
This function will build the symbolic graph for convolving a mini-batch of a stack of 2D inputs with a set of 2D filters. The implementation is modelled after Convolutional Neural Networks (CNN).
Parameters:
- input (symbolic 4D tensor) – Mini-batch of feature map stacks, of shape (batch size, input channels, input rows, input columns). See the optional parameter input_shape.
- filters (symbolic 4D tensor) – Set of filters used in CNN layer of shape (output channels, input channels, filter rows, filter columns). See the optional parameter filter_shape.
- input_shape (None, tuple/list of len 4 of int or Constant variable) – The shape of the input parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
- filter_shape (None, tuple/list of len 4 of int or Constant variable) – The shape of the filters parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
- border_mode (str, int or tuple of two int) – Either of the following:
  - 'valid': apply filter wherever it completely overlaps with the input. Generates output of shape: input shape - filter shape + 1.
  - 'full': apply filter wherever it partly overlaps with the input. Generates output of shape: input shape + filter shape - 1.
  - 'half': pad input with a symmetric border of filter rows // 2 rows and filter columns // 2 columns, then perform a valid convolution. For filters with an odd number of rows and columns, this leads to the output shape being equal to the input shape.
  - int: pad input with a symmetric border of zeros of the given width, then perform a valid convolution.
  - (int1, int2): pad input with a symmetric border of int1 rows and int2 columns, then perform a valid convolution.
- subsample (tuple of len 2) – Factor by which to subsample the output. Also called strides elsewhere.
- filter_flip (bool) – If True, will flip the filter rows and columns before sliding them over the input. This operation is normally referred to as a convolution, and this is the default. If False, the filters are not flipped and the operation is referred to as a cross-correlation.
- image_shape (None, tuple/list of len 4 of int or Constant variable) – Deprecated alias for input_shape.
- kwargs – Any other keyword arguments are accepted for backwards compatibility, but will be ignored.
Returns: Set of feature maps generated by convolutional layer. Tensor is of shape (batch size, output channels, output rows, output columns)
Return type: Symbolic 4D tensor
Notes
If cuDNN is available, it will be used on the GPU. Otherwise, the CorrMM convolution ("caffe-style" convolution) will be used.
This is only supported in Theano 0.8 or the development version until it is released.
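To make the border_mode shape rules concrete, here is a small worked example as plain Python arithmetic, independent of Theano; the helper name is made up for illustration and a (1, 1) stride is assumed:

```python
def conv2d_output_shape(input_shape, filter_shape, border_mode):
    """4D output shape of conv2d for border_mode in {'valid', 'full', 'half'}.

    input_shape:  (batch, in_channels, rows, cols)
    filter_shape: (out_channels, in_channels, filter_rows, filter_cols)
    Stride is assumed to be (1, 1).
    """
    b, _, r, c = input_shape
    oc, _, fr, fc = filter_shape
    if border_mode == 'valid':
        return (b, oc, r - fr + 1, c - fc + 1)
    if border_mode == 'full':
        return (b, oc, r + fr - 1, c + fc - 1)
    if border_mode == 'half':  # pad fr//2 rows and fc//2 cols, then 'valid'
        return (b, oc, r + 2 * (fr // 2) - fr + 1, c + 2 * (fc // 2) - fc + 1)
    raise ValueError(border_mode)

shapes = (1, 3, 32, 32), (8, 3, 5, 5)
print(conv2d_output_shape(*shapes, 'valid'))  # (1, 8, 28, 28)
print(conv2d_output_shape(*shapes, 'full'))   # (1, 8, 36, 36)
print(conv2d_output_shape(*shapes, 'half'))   # (1, 8, 32, 32)
```

Note how 'half' reproduces the input spatial size here because the 5x5 filter has odd dimensions.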

theano.sandbox.cuda.fftconv.conv2d_fft(input, filters, image_shape=None, filter_shape=None, border_mode='valid', pad_last_dim=False)[source]
Perform a convolution through fft.
Only supports input whose size is even on the last dimension (width). All other dimensions can be anything and the filters can have an even or odd width.
If you must use input which has an odd width, you can either pad it or use the pad_last_dim argument which will do it for you and take care to strip the padding before returning. Don’t use this argument if you are not sure the input is odd since the padding is unconditional and will make even input odd, thus leading to problems.
In valid mode, the filters must be smaller than the input.
Parameters:
- input – (b, ic, i0, i1).
- filters – (oc, ic, f0, f1).
- border_mode ({'valid', 'full'}) –
- pad_last_dim – Unconditionally pad the last dimension of the input to turn it from odd to even. Will strip the padding before returning the result.

theano.tensor.nnet.Conv3D.conv3D(V, W, b, d)[source]
3D "convolution" of multiple filters on a minibatch. (Does not flip the kernel; moves the kernel with a user-specified stride.)
Parameters:
- V – Visible unit, input. Dimensions: (batch, row, column, time, in channel).
- W – Weights, filter. Dimensions: (out channel, row, column, time, in channel).
- b – Bias, shape == (W.shape[0],).
- d – Strides when moving the filter over the input: (dx, dy, dt).
Notes
The order of dimensions does not correspond to the one in conv2d. This is for optimization.
The GPU implementation is very slow. You should use conv3d2d or conv3d_fft for a GPU graph instead.

theano.sandbox.cuda.fftconv.conv3d_fft(input, filters, image_shape=None, filter_shape=None, border_mode='valid', pad_last_dim=False)[source]
Perform a convolution through fft.
Only supports input whose shape is even on the last dimension. All other dimensions can be anything and the filters can have an even or odd last dimension.
The semantics associated with the last three dimensions are not important as long as they are in the same order between the inputs and the filters. For example, when the convolution is done on a sequence of images, they could be either (duration, height, width) or (height, width, duration).
If you must use input which has an odd width, you can either pad it or use the pad_last_dim argument, which will do it for you and take care to strip the padding before returning. pad_last_dim checks that the last dimension is odd before the actual padding.
In valid mode, the filters must be smaller than the input.
Parameters:
- input – (b, ic, i0, i1, i2).
- filters – (oc, ic, f0, f1, i2).
- border_mode ({'valid', 'full'}) –
- pad_last_dim – Unconditionally pad the last dimension of the input to turn it from odd to even. Will strip the padding before returning the result.

theano.tensor.nnet.conv3d2d.conv3d(signals, filters, signals_shape=None, filters_shape=None, border_mode='valid')[source]
Convolve spatiotemporal filters with a movie. It flips the filters.
Parameters:
- signals – Timeseries of images whose pixels have color channels. Shape: [Ns, Ts, C, Hs, Ws].
- filters – Spatiotemporal filters. Shape: [Nf, Tf, C, Hf, Wf].
- signals_shape – None or a tuple/list with the shape of signals.
- filters_shape – None or a tuple/list with the shape of filters.
- border_mode – The only one tested is 'valid'.
Notes
Another way to define signals: (batch, time, in channel, row, column). Another way to define filters: (out channel, time, in channel, row, column).
For the GPU, you can use this implementation or conv3d_fft.

See also
Someone made a script that shows how to swap the axes between both 3d convolution implementations in Theano. See the last attachment.
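The axis swap mentioned above follows directly from the two documented layouts: conv3D takes (batch, row, column, time, in channel), while conv3d2d takes (batch, time, in channel, row, column). A minimal pure-Python sketch of the permutation (on real arrays you would apply the same index tuple with dimshuffle or transpose; the helper names below are illustrative):

```python
# conv3D layout:    (batch, row, column, time, in channel)
# conv3d2d layout:  (batch, time, in channel, row, column)
# Going from the conv3D layout to the conv3d2d layout is the axis
# permutation (0, 3, 4, 1, 2); this particular permutation happens
# to be its own inverse.

CONV3D_TO_CONV3D2D = (0, 3, 4, 1, 2)

def permute(shape, perm):
    """Apply an axis permutation to a shape tuple."""
    return tuple(shape[p] for p in perm)

def invert(perm):
    """Invert an axis permutation."""
    inv = [0] * len(perm)
    for i, p in enumerate(perm):
        inv[p] = i
    return tuple(inv)

# A conv3D-style batch: 2 examples, 32x32 frames, 8 time steps, 3 channels.
v_shape = (2, 32, 32, 8, 3)
s_shape = permute(v_shape, CONV3D_TO_CONV3D2D)
print(s_shape)  # (2, 8, 3, 32, 32) -- conv3d2d layout
print(permute(s_shape, invert(CONV3D_TO_CONV3D2D)))  # back to the conv3D layout
```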

theano.tensor.nnet.conv.conv2d(input, filters, image_shape=None, filter_shape=None, border_mode='valid', subsample=(1, 1), **kargs)[source]
Deprecated, old conv2d interface. This function will build the symbolic graph for convolving a stack of input images with a set of filters. The implementation is modelled after Convolutional Neural Networks (CNN). It is simply a wrapper to the ConvOp but provides a much cleaner interface.
Parameters:
- input (symbolic 4D tensor) – Mini-batch of feature map stacks, of shape (batch size, stack size, nb row, nb col). See the optional parameter image_shape.
- filters (symbolic 4D tensor) – Set of filters used in CNN layer of shape (nb filters, stack size, nb row, nb col). See the optional parameter filter_shape.
- border_mode ({'valid', 'full'}) – 'valid': only apply filter to complete patches of the image. Generates output of shape: image_shape - filter_shape + 1. 'full': zero-pads image to multiple of filter shape to generate output of shape: image_shape + filter_shape - 1.
- subsample (tuple of len 2) – Factor by which to subsample the output. Also called strides elsewhere.
- image_shape (None, tuple/list of len 4 of int or Constant variable) – The shape of the input parameter. Optional, used for optimization like loop unrolling. You can put None for any element of the list to tell that this element is not constant.
- filter_shape (None, tuple/list of len 4 of int or Constant variable) – The shape of the filters parameter. Optional, used for optimization like loop unrolling. You can put None for any element of the list to tell that this element is not constant.
- kwargs – Kwargs are passed onto ConvOp. Can be used to set the following: unroll_batch, unroll_kern, unroll_patch, openmp (see the ConvOp doc). openmp by default has the same value as config.openmp. For small image, filter, batch size, nkern and stack size, it can be faster to manually disable openmp. A fast and incomplete test shows that with image size 6x6, filter size 4x4, batch size == 1, nkern == 1 and stack size == 1, it is faster to disable it in valid mode. But if we grow the batch size to 10, it is faster with openmp on a Core 2 Duo.
Returns: Set of feature maps generated by convolutional layer. Tensor is of shape (batch size, nb filters, output row, output col).
Return type: symbolic 4D tensor
Abstract conv interface

class theano.tensor.nnet.abstract_conv.AbstractConv2d(imshp=None, kshp=None, border_mode='valid', subsample=(1, 1), filter_flip=True)[source]
Abstract Op for the forward convolution. Refer to BaseAbstractConv2d for more detailed documentation.

class theano.tensor.nnet.abstract_conv.AbstractConv2d_gradInputs(imshp=None, kshp=None, border_mode='valid', subsample=(1, 1), filter_flip=True)[source]
Gradient wrt. inputs for AbstractConv2d. Refer to BaseAbstractConv2d for more detailed documentation.
Note: You will not want to use this directly, but rely on Theano's automatic differentiation or graph optimization to use it as needed.

class theano.tensor.nnet.abstract_conv.AbstractConv2d_gradWeights(imshp=None, kshp=None, border_mode='valid', subsample=(1, 1), filter_flip=True)[source]
Gradient wrt. filters for AbstractConv2d. Refer to BaseAbstractConv2d for more detailed documentation.
Note: You will not want to use this directly, but rely on Theano's automatic differentiation or graph optimization to use it as needed.

class theano.tensor.nnet.abstract_conv.BaseAbstractConv2d(imshp=None, kshp=None, border_mode='valid', subsample=(1, 1), filter_flip=True)[source]
Base class for AbstractConv.
Defines an abstract convolution op that will be replaced with the appropriate implementation.
Parameters:
- imshp (None, tuple/list of len 4 of int or Constant variable) – The shape of the input parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time. imshp is defined w.r.t the forward conv.
- kshp (None, tuple/list of len 4 of int or Constant variable) – The shape of the filters parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time. kshp is defined w.r.t the forward conv.
- border_mode (str, int or tuple of two int) – Either of the following:
  - 'valid': apply filter wherever it completely overlaps with the input. Generates output of shape: input shape - filter shape + 1.
  - 'full': apply filter wherever it partly overlaps with the input. Generates output of shape: input shape + filter shape - 1.
  - 'half': pad input with a symmetric border of filter rows // 2 rows and filter columns // 2 columns, then perform a valid convolution. For filters with an odd number of rows and columns, this leads to the output shape being equal to the input shape.
  - int: pad input with a symmetric border of zeros of the given width, then perform a valid convolution.
  - (int1, int2): pad input with a symmetric border of int1 rows and int2 columns, then perform a valid convolution.
- subsample (tuple of len 2) – Factor by which to subsample the output. Also called strides elsewhere.
- filter_flip (bool) – If True, will flip the filter rows and columns before sliding them over the input. This operation is normally referred to as a convolution, and this is the default. If False, the filters are not flipped and the operation is referred to as a cross-correlation.

theano.tensor.nnet.abstract_conv.bilinear_kernel_1D(ratio, normalize=True)[source]
Compute 1D kernel for bilinear upsampling.
This function builds the 1D kernel that can be used to upsample a tensor by the given ratio using bilinear interpolation.
This function builds the 1D kernel that can be used to upsample a tensor by the given ratio using bilinear interpolation.
Parameters:
- ratio (int or Constant/Scalar Theano tensor of int* dtype) – the ratio by which an image will be upsampled by the returned filter in the 2D space.
- normalize (bool) – indicates whether to normalize the kernel or not. Default is True.
Returns: the 1D kernels that can be applied to any given image to upsample it by the indicated ratio using bilinear interpolation in one dimension.
Return type: symbolic 1D tensor

theano.tensor.nnet.abstract_conv.bilinear_kernel_2D(ratio, normalize=True)[source]
Compute 2D kernel for bilinear upsampling.
This function builds the 2D kernel that can be used to upsample a tensor by the given ratio using bilinear interpolation.
This function builds the 2D kernel that can be used to upsample a tensor by the given ratio using bilinear interpolation.
Parameters:
- ratio (int or Constant/Scalar Theano tensor of int* dtype) – the ratio by which an image will be upsampled by the returned filter in the 2D space.
- normalize (bool) – indicates whether to normalize the kernel or not. Default is True.
Returns: the 2D kernels that can be applied to any given image to upsample it by the indicated ratio using bilinear interpolation in two dimensions.
Return type: symbolic 2D tensor

theano.tensor.nnet.abstract_conv.bilinear_upsampling(input, ratio, batch_size=None, num_input_channels=None, use_1D_kernel=True)[source]
Compute bilinear upsampling.
This function will build the symbolic graph for upsampling a tensor by the given ratio using bilinear interpolation.
This function will build the symbolic graph for upsampling a tensor by the given ratio using bilinear interpolation.
Parameters:
- input (symbolic 4D tensor) – mini-batch of feature map stacks, of shape (batch size, input channels, input rows, input columns) that will be upsampled.
- ratio (int or Constant or Scalar Tensor of int* dtype) – the ratio by which the input is upsampled in the 2D space (row and col size).
- batch_size (None, int or Constant variable) – The size of the first dimension of the input variable. Optional, possibly used to choose an optimal implementation. batch_size will be used only if num_input_channels is not None.
- num_input_channels (None, int or Constant variable) – The size of the second dimension of the input variable. Optional, possibly used to choose an optimal implementation. num_input_channels will be used only if batch_size is not None.
- use_1D_kernel (bool) – if set to true, row and column will be upsampled separately by 1D kernels; otherwise they are upsampled together using a 2D kernel. The final result is the same, only the speed can differ, given factors such as upsampling ratio.
Returns: set of feature maps generated by bilinear upsampling. Tensor is of shape (batch size, num_input_channels, input row size * ratio, input column size * ratio).
Return type: symbolic 4D tensor
Notes
The kernel used for bilinear interpolation is fixed (not learned).
When the upsampling ratio is even, the last row and column are repeated one extra time compared to the first row and column, which makes the upsampled tensor asymmetrical on both sides. This does not happen when the upsampling ratio is odd.
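As an illustration of the fixed bilinear kernel, here is a small pure-Python sketch of the triangular 1D kernel that bilinear_kernel_1D describes: it rises from 1 to ratio and back down (length 2 * ratio - 1), and with normalize=True it is divided by the ratio. This is an independent re-derivation for illustration, not the library code:

```python
def bilinear_kernel_1d(ratio, normalize=True):
    """Triangular 1D kernel for bilinear upsampling by `ratio`.

    Rises from 1 to `ratio` and back down, length 2 * ratio - 1.
    With normalize=True the peak becomes 1.0.
    """
    half = list(range(1, ratio + 1))  # 1, 2, ..., ratio
    kern = half + half[-2::-1]        # mirror, without repeating the peak
    if normalize:
        kern = [k / ratio for k in kern]
    return kern

print(bilinear_kernel_1d(2))  # [0.5, 1.0, 0.5]
```

The 2D variant is the corresponding outer product of this kernel with itself, which is why upsampling with two 1D kernels and with one 2D kernel gives the same result.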

theano.tensor.nnet.abstract_conv.conv2d(input, filters, input_shape=None, filter_shape=None, border_mode='valid', subsample=(1, 1), filter_flip=True)[source]
This function will build the symbolic graph for convolving a mini-batch of a stack of 2D inputs with a set of 2D filters. The implementation is modelled after Convolutional Neural Networks (CNN).
Refer to nnet.conv2d for more detailed documentation.

theano.tensor.nnet.abstract_conv.conv2d_grad_wrt_inputs(output_grad, filters, input_shape, filter_shape=None, border_mode='valid', subsample=(1, 1), filter_flip=True)[source]
Compute conv output gradient w.r.t its inputs.
This function builds the symbolic graph for getting the gradient of the output of a convolution (namely output_grad) w.r.t the input of the convolution, given a set of 2D filters used by the convolution, such that the output_grad is upsampled to the input_shape.
Parameters:
- output_grad (symbolic 4D tensor) – mini-batch of feature map stacks, of shape (batch size, input channels, input rows, input columns). This is the tensor that will be upsampled or the output gradient of the convolution whose gradient will be taken with respect to the input of the convolution.
- filters (symbolic 4D tensor) – set of filters used in CNN layer of shape (output channels, input channels, filter rows, filter columns). See the optional parameter filter_shape.
- input_shape ([None/int/Constant] * 2 + [Tensor/int/Constant] * 2) – The shape of the input (upsampled) parameter. A tuple/list of len 4, with the first two dimensions being None or int or Constant and the last two dimensions being Tensor or int or Constant. Not optional, since given the output_grad shape and the subsample values, multiple input_shape may be plausible.
- filter_shape (None or [None/int/Constant] * 4) – The shape of the filters parameter. None or a tuple/list of len 4. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
- border_mode (str, int or tuple of two int) – Either of the following:
  - 'valid': apply filter wherever it completely overlaps with the input. Generates output of shape: input shape - filter shape + 1.
  - 'full': apply filter wherever it partly overlaps with the input. Generates output of shape: input shape + filter shape - 1.
  - 'half': pad input with a symmetric border of filter rows // 2 rows and filter columns // 2 columns, then perform a valid convolution. For filters with an odd number of rows and columns, this leads to the output shape being equal to the input shape. It is known as 'same' elsewhere.
  - int: pad input with a symmetric border of zeros of the given width, then perform a valid convolution.
  - (int1, int2): pad input with a symmetric border of int1 rows and int2 columns, then perform a valid convolution.
- subsample (tuple of len 2) – The subsampling used in the forward pass. Also called strides elsewhere.
- filter_flip (bool) – If True, will flip the filter rows and columns before sliding them over the input. This operation is normally referred to as a convolution, and this is the default. If False, the filters are not flipped and the operation is referred to as a cross-correlation.
Returns: set of feature maps generated by convolutional layer. Tensor is of shape (batch size, output channels, output rows, output columns)
Return type: symbolic 4D tensor
Notes
If cuDNN is available, it will be used on the GPU. Otherwise, the CorrMM convolution ("caffe-style" convolution) will be used.
This is only supported in Theano 0.8 or the development version until it is released.

theano.tensor.nnet.abstract_conv.conv2d_grad_wrt_weights(input, output_grad, filter_shape, input_shape=None, border_mode='valid', subsample=(1, 1), filter_flip=True)[source]
Compute conv output gradient w.r.t its weights.
This function will build the symbolic graph for getting the gradient of the output of a convolution (output_grad) w.r.t its weights.
Parameters:
- input (symbolic 4D tensor) – mini-batch of feature map stacks, of shape (batch size, input channels, input rows, input columns). This is the input of the convolution in the forward pass.
- output_grad (symbolic 4D tensor) – mini-batch of feature map stacks, of shape (batch size, input channels, input rows, input columns). This is the gradient of the output of convolution.
- filter_shape ([None/int/Constant] * 2 + [Tensor/int/Constant] * 2) – The shape of the filter parameter. A tuple/list of len 4, with the first two dimensions being None or int or Constant and the last two dimensions being Tensor or int or Constant. Not optional, since given the output_grad shape and the input_shape, multiple filter_shape may be plausible.
- input_shape (None or [None/int/Constant] * 4) – The shape of the input parameter. None or a tuple/list of len 4. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time.
- border_mode (str, int or tuple of two int) – Either of the following:
  - 'valid': apply filter wherever it completely overlaps with the input. Generates output of shape: input shape - filter shape + 1.
  - 'full': apply filter wherever it partly overlaps with the input. Generates output of shape: input shape + filter shape - 1.
  - 'half': pad input with a symmetric border of filter rows // 2 rows and filter columns // 2 columns, then perform a valid convolution. For filters with an odd number of rows and columns, this leads to the output shape being equal to the input shape. It is known as 'same' elsewhere.
  - int: pad input with a symmetric border of zeros of the given width, then perform a valid convolution.
  - (int1, int2): pad input with a symmetric border of int1 rows and int2 columns, then perform a valid convolution.
- subsample (tuple of len 2) – The subsampling used in the forward pass of the convolutional operation. Also called strides elsewhere.
- filter_flip (bool) – If True, will flip the filter rows and columns before sliding them over the input. This operation is normally referred to as a convolution, and this is the default. If False, the filters are not flipped and the operation is referred to as a cross-correlation.
Returns: set of feature maps generated by convolutional layer. Tensor is of shape (batch size, output channels, output rows, output columns)
Return type: symbolic 4D tensor
Notes
If cuDNN is available, it will be used on the GPU. Otherwise, the CorrMM convolution ("caffe-style" convolution) will be used.
This is only supported in Theano 0.8 or the development version until it is released.

theano.tensor.nnet.abstract_conv.get_conv_output_shape(image_shape, kernel_shape, border_mode, subsample)[source]
This function computes the output shape of a convolution operation.
Parameters:
- image_shape (tuple of int, symbolic or numeric) – corresponds to the input image shape. Its four (or five) elements must correspond respectively to: batch size, number of input channels, height and width (and possibly depth) of the image. None where undefined.
- kernel_shape (tuple of int, symbolic or numeric) – corresponds to the kernel shape. Its four (or five) elements must correspond respectively to: number of output channels, number of input channels, height and width (and possibly depth) of the kernel. None where undefined.
- border_mode (string, int, or tuple of int, symbolic or numeric) – If it is a string, it must be 'valid', 'half' or 'full'. If it is a tuple, its two (or three) elements respectively correspond to the padding on the height and width (and possibly depth) axes.
- subsample (tuple of int, symbolic or numeric) – Its two (or three) elements respectively correspond to the subsampling on the height and width (and possibly depth) axes.
Returns: output_shape – tuple of int corresponding to the output image shape. Its four elements must correspond respectively to: batch size, number of output channels, height and width of the image. None where undefined.

theano.tensor.nnet.abstract_conv.get_conv_shape_1axis(image_shape, kernel_shape, border_mode, subsample)[source]
This function computes the output shape of a convolution operation along one axis.
Parameters:
- image_shape (int or None) – corresponds to the input image shape on a given axis. None if undefined.
- kernel_shape (int or None) – corresponds to the kernel shape on a given axis. None if undefined.
- border_mode (string or int) – If it is a string, it must be 'valid', 'half' or 'full'. If it is an integer, it must correspond to the padding on the considered axis.
- subsample (int) – It must correspond to the subsampling on the considered axis.
Returns: out_shp – int corresponding to the output image shape on the considered axis. None if undefined.
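The per-axis arithmetic can be sketched in a few lines of plain Python, based on the border_mode definitions given earlier on this page. This is an illustrative re-implementation, not the library code:

```python
def conv_shape_1axis(image_shape, kernel_shape, border_mode, subsample):
    """Output size of a convolution along one axis.

    border_mode: 'valid', 'full', 'half', or an int amount of zero padding.
    subsample: the stride along this axis.
    Returns None if either input size is undefined.
    """
    if image_shape is None or kernel_shape is None:
        return None
    if border_mode == 'valid':
        pad = 0
    elif border_mode == 'full':
        pad = kernel_shape - 1
    elif border_mode == 'half':
        pad = kernel_shape // 2
    else:  # explicit int padding
        pad = border_mode
    out = image_shape + 2 * pad - kernel_shape + 1
    # Subsampling keeps every subsample-th position of the valid outputs.
    return (out + subsample - 1) // subsample

print(conv_shape_1axis(5, 3, 'valid', 1))  # 3
print(conv_shape_1axis(5, 3, 'full', 1))   # 7
print(conv_shape_1axis(5, 3, 'half', 1))   # 5
print(conv_shape_1axis(7, 3, 1, 2))        # 4
```

Applying this helper once per spatial axis, together with the batch size and the number of output channels, reproduces the full tuple returned by get_conv_output_shape.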