Tags list

L1   L2   NLP   abs   abs_tanh   activation   binary   binomial   binomial NLP   cost   cross-entropy   even   gauss   gaussian   hyperbolic   hyperbolic tangent   increasing   logistic   noise   non-negative   normalized   odd   pepper   rectifier   regularization   salt   sigmoid   softplus   softsign   tangent   tanh   unary  

pylearn.formulas.activations

Activation functions for artificial neural units.

pylearn.formulas.activations.abs_act(input)

Returns the symbolic variable that represents the absolute value of input.

f(input) = |input|

Parameters:

input : tensor-like

A Theano variable with type theano.Tensor, or a value that can be converted to one \in \mathbb{R}^n

Returns:

ret : a Theano variable with the same shape as the input

where the absolute value function is applied to each element of the input.

Tags: abs, activation

pylearn.formulas.activations.abs_tanh(x)

Return a symbolic variable representing the absolute value of the hyperbolic tangent of x.

\textrm{abs\_tanh}(x) = |\textrm{tanh}(x)|

The image of \textrm{abs\_tanh}(x) is the interval [0, 1), in theory. In practice, due to rounding errors in floating point representations, \textrm{abs\_tanh}(x) will lie in the range [0, 1].

Parameters:

x : tensor-like

A Theano variable with type theano.Tensor, or a value that can be converted to one \in \mathbb{R}^n

Returns:

ret : a Theano variable with the same shape as the input

where the abs_tanh function is mapped to each element of the input x.

Tags: abs, abs_tanh, activation, even, hyperbolic, hyperbolic tangent, non-negative, tangent, tanh, unary
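
A minimal Theano sketch of the same elementwise expression (not the library's own code; the names x and f are illustrative):

    import theano
    import theano.tensor as T

    x = T.matrix('x')
    abs_tanh_expr = T.abs_(T.tanh(x))   # |tanh(x)|, applied elementwise
    f = theano.function([x], abs_tanh_expr)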

pylearn.formulas.activations.abs_tanh_normalized(x)

Return a symbolic variable representing the absolute value of a normalized tanh (hyperbolic tangent) of the input x. TODO: where does 1.759 come from? why is it normalized like that?

\textrm{abs\_tanh\_normalized}(x) = \left|1.759\textrm{ tanh}\left(\frac{2x}{3}\right)\right|

The image of \textrm{abs\_tanh\_normalized}(x) is the range [0, 1.759), in theory. In practice, due to rounding errors in floating point representations, \textrm{abs\_tanh\_normalized}(x) will lie in the approximative closed range [0, 1.759]. The exact upper bound depends on the precision of the floating point representation.

Parameters:

x : tensor-like

A Theano variable with type theano.Tensor, or a value that can be converted to one \in \mathbb{R}^n

Returns:

ret : a Theano variable with the same shape as the input

where the abs_tanh_normalized function is mapped to each element of the input x.

Tags: abs, abs_tanh, activation, even, hyperbolic, hyperbolic tangent, non-negative, normalized, tangent, tanh, unary

pylearn.formulas.activations.abssoftsign_act(input)

Returns a symbolic variable that computes the absolute value of the softsign function on the input tensor input.

f(input) = \left| \frac{input}{1.0 +|input|} \right|

Parameters:

input : tensor-like

A Theano variable with type theano.Tensor, or a value that can be converted to one \in \mathbb{R}^n

Returns:

ret : a Theano variable with the same shape as the input

where the absolute value of the softsign function is applied to each element of the input.

Tags: abs, activation, softsign

pylearn.formulas.activations.rectifier_act(input)

Returns a symbolic variable equal to the input where the input is positive, and 0 otherwise.

f(input) = \begin{cases} input & \text{if } input > 0 \\ 0 & \text{otherwise} \end{cases}

Parameters:

input : tensor-like

A Theano variable with type theano.Tensor, or a value that can be converted to one \in \mathbb{R}^n

Returns:

ret : a Theano variable with the same shape as the input

A non-negative tensor whose elements equal the corresponding elements of the input where they are positive, and 0 otherwise.

Tags: activation, rectifier
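
A minimal sketch of the same piecewise expression using an elementwise maximum (an assumed formulation, not necessarily how the library implements it):

    import theano
    import theano.tensor as T

    x = T.matrix('x')
    rectified = T.maximum(0., x)   # x where x > 0, 0 otherwise
    f = theano.function([x], rectified)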

pylearn.formulas.activations.sigmoid(x)

Return a symbolic variable representing the sigmoid (logistic) function of the input x.

\textrm{sigmoid}(x) = \frac{1}{1 + e^{-x}}

The image of \textrm{sigmoid}(x) is the open interval (0, 1), in theory. In practice, due to rounding errors in floating point representations, \textrm{sigmoid}(x) will lie in the closed range [0, 1].

Parameters:

x : tensor-like

A Theano variable with type theano.Tensor, or a value that can be converted to one \in \mathbb{R}^n

Returns:

ret : a Theano variable with the same shape as the input

where the sigmoid function is mapped to each element of the input x.

Tags: activation, increasing, logistic, non-negative, sigmoid, unary
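
Theano ships this elementwise function as T.nnet.sigmoid; a minimal sketch:

    import theano
    import theano.tensor as T

    x = T.matrix('x')
    sig = T.nnet.sigmoid(x)   # 1 / (1 + exp(-x)), applied elementwise
    f = theano.function([x], sig)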

pylearn.formulas.activations.softplus_act(input)

Returns a symbolic variable that computes the softplus of input. Note: (TODO) rescale in order to have a steady-state regime close to 0 at initialization.

f(input) = \ln \left( 1 + e^{input} \right)

Parameters:

input : tensor-like

A Theano variable with type theano.Tensor, or a value that can be converted to one \in \mathbb{R}^n

Returns:

ret : a Theano variable with the same shape as the input

where the softplus function is applied to each element of the input.

Tags: activation, softplus
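
A minimal sketch using Theano's built-in softplus, which computes log(1 + exp(x)) elementwise; the rescaling mentioned in the TODO above is not included:

    import theano
    import theano.tensor as T

    x = T.matrix('x')
    sp = T.nnet.softplus(x)   # log(1 + exp(x)), applied elementwise
    f = theano.function([x], sp)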

pylearn.formulas.activations.softsign_act(input)

Returns a symbolic variable that computes the softsign of input.

f(input) = \frac{input}{1.0 + |input|}

Parameters:

input : tensor-like

A Theano variable with type theano.Tensor, or a value that can be converted to one \in \mathbb{R}^n

Returns:

ret : a Theano variable with the same shape as the input

where the softsign function is mapped to each element of the input x.

Tags: activation, softsign
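
A minimal sketch of the softsign expression in raw Theano (not necessarily the library's implementation):

    import theano
    import theano.tensor as T

    x = T.matrix('x')
    softsign = x / (1.0 + T.abs_(x))   # x / (1 + |x|), applied elementwise
    f = theano.function([x], softsign)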

pylearn.formulas.activations.tanh(x)

Return a symbolic variable representing the tanh (hyperbolic tangent) of the input x.

\textrm{tanh}(x) = \frac{e^{2x} - 1}{e^{2x} + 1}

The image of \textrm{tanh}(x) is the open interval (-1, 1), in theory. In practice, due to rounding errors in floating point representations, \textrm{tanh}(x) will lie in the closed range [-1, 1].

Parameters:

x : tensor-like

A Theano variable with type theano.Tensor, or a value that can be converted to one \in \mathbb{R}^n

Returns:

ret : a Theano variable with the same shape as the input

where the tanh function is mapped to each element of the input x.

Tags: activation, hyperbolic, hyperbolic tangent, increasing, odd, tangent, tanh, unary

pylearn.formulas.activations.tanh_normalized(x)

Return a symbolic variable representing a normalized tanh (hyperbolic tangent) of the input x. TODO: where does 1.759 come from? why is it normalized like that?

\textrm{tanh\_normalized}(x) = 1.759\textrm{ tanh}\left(\frac{2x}{3}\right)

The image of \textrm{tanh\_normalized}(x) is the open interval (-1.759, 1.759), in theory. In practice, due to rounding errors in floating point representations, \textrm{tanh\_normalized}(x) will lie in the approximative closed range [-1.759, 1.759]. The exact bound depends on the precision of the floating point representation.

Parameters:

x : tensor-like

A Theano variable with type theano.Tensor, or a value that can be converted to one \in \mathbb{R}^n

Returns:

ret : a Theano variable with the same shape as the input

where the tanh_normalized function is mapped to each element of the input x.

Tags: activation, hyperbolic, hyperbolic tangent, increasing, normalized, odd, tangent, tanh, unary
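
A minimal sketch of the normalized tanh as a raw Theano expression, keeping the 1.759 constant exactly as documented. (The constant is close to the 1.7159 scaling LeCun recommends for tanh units in "Efficient BackProp", chosen so that f(±1) ≈ ±1, which may be what the TODO above refers to; this is an editorial guess, not something stated by the library.)

    import theano
    import theano.tensor as T

    x = T.matrix('x')
    tanh_norm = 1.759 * T.tanh(2.0 * x / 3.0)   # 1.759 * tanh(2x/3)
    f = theano.function([x], tanh_norm)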

pylearn.formulas.costs

TODO: make sure stabilization optimizations are done. TODO: add tests. TODO: check that this works for nd tensors.

class pylearn.formulas.costs.MultiHingeMargin(use_c_code='/usr/bin/g++')

This is a hinge loss function for multiclass predictions.

For each vector X[i] and label index yidx[i], output z[i] = 1 - margin, where margin is the difference between X[i, yidx[i]] and the maximum other element of X[i].


pylearn.formulas.costs.absnormtanh_cross_entropy(output, target)

crossentropy of an “absolute normalized” tanh activation

L_{CE} \equiv t\log(\frac{1+\tanh(0.6666*|a|)}2) + (1-t)\log(\frac{1-\tanh(0.6666*|a|)}2)

Parameters:
  • output (Theano variable) – Output before activation
  • target (Theano variable) – Target

Note: no stabilization done.

Tags: abs, binary, cost, cross-entropy, normalized, tanh

pylearn.formulas.costs.abstanh_crossentropy(output, target)

crossentropy of an absolute-value tanh activation

L_{CE} \equiv t\log(\frac{1+\tanh(|a|)}2) + (1-t)\log(\frac{1-\tanh(|a|)}2)

Parameters:
  • output (Theano variable) – Output before activation
  • target (Theano variable) – Target

Note: no stabilization done.

Tags: abs, binary, cost, cross-entropy, tanh

pylearn.formulas.costs.binary_crossentropy(output, target)

Compute the crossentropy of binary output wrt binary target.

L_{CE} \equiv t\log(o) + (1-t)\log(1-o)

Parameters:
  • output (Theano variable) – Binary output or prediction \in [0, 1]
  • target (Theano variable) – Binary target, usually \in \{0, 1\}

Note: no stabilization optimization needed for a generic output variable.

Tags: binary, cost, cross-entropy
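
A minimal sketch of the expression exactly as written above. Note the sign convention: Theano's built-in T.nnet.binary_crossentropy(output, target) returns the negated quantity, i.e. the usual positive loss.

    import theano
    import theano.tensor as T

    o = T.matrix('output')   # predictions in [0, 1]
    t = T.matrix('target')   # binary targets
    L_ce = t * T.log(o) + (1.0 - t) * T.log(1.0 - o)   # as in the formula above
    f = theano.function([o, t], L_ce)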

pylearn.formulas.costs.cross_entropy(output_act, output, target, act=None)

Compute the cross-entropy with a sum over the last dimension and a mean over the first dimension.

If act is one of ‘sigmoid’, ‘tanh’, ‘tanhnorm’, ‘abstanh’ or ‘abstanhnorm’, the corresponding specialized version is called.

mean(sum(sqr(output-target),axis=-1),axis=0)

Parameters:
  • output_act (Theano variable) – Output after activation
  • output (Theano variable) – Output before activation
  • target (Theano variable) – Target
  • act (str or None) – The type of activation used

pylearn.formulas.costs.normtanh_crossentropy(output, target)

crossentropy of a “normalized” tanh activation (LeCun)

L_{CE} \equiv t\log(\frac{1+\tanh(0.6666a)}2) + (1-t)\log(\frac{1-\tanh(0.6666a)}2)

Parameters:
  • output (Theano variable) – Output before activation
  • target (Theano variable) – Target

Note: no stabilization done.

Tags: binary, cost, cross-entropy, normalized, tanh

pylearn.formulas.costs.quadratic_cost(output, target)

The quadratic cost of output against target, with a sum over the last dimension and a mean over the first dimension.

mean(sum(sqr(output-target),axis=-1),axis=0)

Parameters:
  • output (Theano variable) – The value that we want to compare against target
  • target (Theano variable) – The value that we consider correct
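
The pseudo-formula above translates directly into Theano; a minimal sketch with illustrative variable names:

    import theano
    import theano.tensor as T

    output = T.matrix('output')
    target = T.matrix('target')
    cost = T.mean(T.sum(T.sqr(output - target), axis=-1), axis=0)
    f = theano.function([output, target], cost)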

pylearn.formulas.costs.sigmoid_crossentropy(output, target)

crossentropy of a sigmoid activation

L_{CE} \equiv t\log(\sigma(a)) + (1-t)\log(1-\sigma(a))

Parameters:
  • output (Theano variable) – Output before activation
  • target (Theano variable) – Target

Note: no stabilization done.

Tags: binary, cost, cross-entropy, sigmoid

pylearn.formulas.costs.tanh_crossentropy(output, target)

crossentropy of a tanh activation

L_{CE} \equiv t\log(\frac{1+\tanh(a)}2) + (1-t)\log(\frac{1-\tanh(a)}2)

Parameters:
  • output (Theano variable) – Output before activation
  • target (Theano variable) – Target

Note: no stabilization done.

Tags: binary, cost, cross-entropy, tanh
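
A minimal, unstabilized sketch matching the formula above; the (1 + tanh(a)) / 2 term maps the tanh output into (0, 1) so it can be read as a probability:

    import theano
    import theano.tensor as T

    a = T.matrix('a')        # output before activation
    t = T.matrix('target')
    p = (1.0 + T.tanh(a)) / 2.0                        # squash into (0, 1)
    L_ce = t * T.log(p) + (1.0 - t) * T.log(1.0 - p)   # as in the formula above
    f = theano.function([a, t], L_ce)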

pylearn.formulas.noise

Noise functions used to train Denoising Auto-Associators.

Functions in this module often include a noise_lvl argument that controls the amount of noise that the function applies. The noise contract is simple: noise_lvl is a symbolic variable going from 0 to 1. 0: no change. 1: maximum noise.

pylearn.formulas.noise.binomial_noise(theano_rng, input, noise_lvl, noise_value=0)

Return input with randomly-chosen elements set to noise_value (zero by default).

TODO: MATH DEFINITION

Parameters:
  • input (Theano tensor variable) – Input
  • noise_lvl (float) – The probability of setting each element to noise_value
  • noise_value (Theano scalar variable) – The value used where noise is applied

Tags: binomial, noise, salt
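
A plausible reconstruction of the masking step under the documented contract (each element is kept with probability 1 - noise_lvl and replaced by noise_value otherwise); the actual pylearn implementation may differ:

    import theano
    import theano.tensor as T
    from theano.tensor.shared_randomstreams import RandomStreams

    rng = RandomStreams(seed=42)
    inp = T.matrix('inp')
    noise_lvl = 0.3      # probability of corrupting each element
    noise_value = 0.0    # value written into corrupted elements

    keep_mask = rng.binomial(size=inp.shape, n=1, p=1.0 - noise_lvl, dtype=inp.dtype)
    noisy = keep_mask * inp + (1.0 - keep_mask) * noise_value
    f = theano.function([inp], noisy)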

pylearn.formulas.noise.gaussian_noise(theano_rng, inp, noise_lvl)

Adds Gaussian noise to inp.

Parameters:
  • inp (Theano variable) – The input to which we want to add noise
  • noise_lvl (float) – The standard deviation of the Gaussian noise

Tags: gauss, gaussian, noise
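
A minimal sketch of additive Gaussian noise with standard deviation noise_lvl (an assumed formulation; the library's version takes the RandomStreams object as its first argument):

    import theano
    import theano.tensor as T
    from theano.tensor.shared_randomstreams import RandomStreams

    rng = RandomStreams(seed=42)
    inp = T.matrix('inp')
    noise_lvl = 0.1      # standard deviation of the Gaussian noise

    noisy = inp + rng.normal(size=inp.shape, avg=0.0, std=noise_lvl, dtype=inp.dtype)
    f = theano.function([inp], noisy)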

pylearn.formulas.noise.pepper_and_salt_noise(theano_rng, inp, noise_lvl)

Adds pepper-and-salt noise to inp.

Parameters:
  • inp (Theano variable) – The input to which we want to add noise
  • noise_lvl (tuple(float, float)) – The probability of changing each element to zero or one: (prob of salt, prob of pepper)

Note: the sum of the probability of salt and the probability of pepper should be less than 1.

Tags: NLP, binomial, binomial NLP, noise, pepper, salt

pylearn.formulas.regularization

Different symbolic regularization and sparsity functions.

pylearn.formulas.regularization.l1(x, target=0, axis_sum=-1, axis_mean=0)

Construct the L1 regularization penalty \sum|x-target|

Parameters:
  • x (Theano variable) – Weights or other variable to regularize
  • target (Theano variable) – Target of x
  • axis_sum (Scalar) – Axis along which the penalty terms will be summed (e.g. output units)
  • axis_mean (Scalar) – Axis along which the penalty terms will be averaged (e.g. minibatches)

Note: no stabilization required.

Tags: L1, regularization

pylearn.formulas.regularization.l2(x, target=0, axis_sum=-1, axis_mean=0)

Construct the L2 regularization penalty \sum(x-target)^2

Parameters:
  • x (Theano variable) – Weights or other variable to regularize
  • target (Theano variable) – Target of x
  • axis_sum (Scalar) – Axis along which the penalty terms will be summed (e.g. output units)
  • axis_mean (Scalar) – Axis along which the penalty terms will be averaged (e.g. minibatches)

Note: no stabilization required.

Tags: L2, regularization
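
A minimal sketch of both penalties as raw Theano expressions, using the same axis conventions as documented (sum over axis_sum, mean over axis_mean); the defaults shown here mirror the signatures above:

    import theano
    import theano.tensor as T

    x = T.matrix('x')          # e.g. a weight matrix to regularize
    target = 0.0
    axis_sum, axis_mean = -1, 0

    l1_penalty = T.mean(T.sum(T.abs_(x - target), axis=axis_sum), axis=axis_mean)
    l2_penalty = T.mean(T.sum(T.sqr(x - target), axis=axis_sum), axis=axis_mean)
    f = theano.function([x], [l1_penalty, l2_penalty])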