Module cost (pylearn.algorithms.cost)

Cost functions.


Note: All of these functions return one cost per example, so it is your job to perform a tensor.sum over the individual example losses.
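This convention can be illustrated with a NumPy sketch of `quadratic`. This is a hypothetical reconstruction, not the actual implementation: the real function operates on symbolic Theano tensors, and the exact reduction (sum of squared differences along `axis`) is assumed here.

```python
import numpy as np

def quadratic(target, output, axis=1):
    # Hypothetical reconstruction: per-example sum of squared
    # differences along `axis` (one scalar cost per example).
    return np.sum((target - output) ** 2, axis=axis)

target = np.array([[0.0, 1.0], [1.0, 0.0]])
output = np.array([[0.1, 0.9], [0.8, 0.2]])

losses = quadratic(target, output)  # one cost per example
total = losses.sum()                # the caller performs the final sum
```

The same pattern applies to every cost in this module: the function returns a per-example vector, and the caller reduces it.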

Functions
 
quadratic(target, output, axis=1)

cross_entropy(target, output, mean_axis=0, sum_axis=1)
    This is the cross-entropy over a binomial event, in which each dimension is an independent binomial trial.

KL_divergence(target, output)
    This is a KL divergence over a binomial event, in which each dimension is an independent binomial trial.

Imports: T, xlogx


Function Details

cross_entropy(target, output, mean_axis=0, sum_axis=1)


This is the cross-entropy over a binomial event, in which each dimension is an independent binomial trial.

To Do: This is essentially duplicated as nnet_ops.binary_crossentropy

Warning: OUTPUT and TARGET are reversed in nnet_ops.binary_crossentropy
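The formula can be sketched in NumPy. This is a hypothetical reconstruction of the binomial cross-entropy, not the symbolic Theano implementation; the sketch sums only over `sum_axis` to keep one cost per example, and how the real function uses `mean_axis` is not assumed here.

```python
import numpy as np

def binary_cross_entropy(target, output, sum_axis=1):
    # Per-trial binomial cross-entropy:
    #   -(t * log(o) + (1 - t) * log(1 - o))
    # summed over the independent trials along `sum_axis`,
    # giving one cost per example.
    per_trial = -(target * np.log(output)
                  + (1 - target) * np.log(1 - output))
    return per_trial.sum(axis=sum_axis)

ce = binary_cross_entropy(np.array([[1.0, 0.0]]),
                          np.array([[0.9, 0.1]]))
```

Note the argument order: here `target` comes first, whereas nnet_ops.binary_crossentropy takes them reversed, as the warning above says.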

KL_divergence(target, output)


This is a KL divergence over a binomial event, in which each dimension is an independent binomial trial.

Note: We do not compute the mean, because if target and output have different shapes, the result would be garbled.
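A NumPy sketch of the per-dimension binomial KL divergence, KL(t || o) = t*log(t/o) + (1-t)*log((1-t)/(1-o)), with the 0*log(0) = 0 convention. This is a hypothetical reconstruction: the real function is symbolic Theano code, and the `xlogx` helper below is a stand-in for the imported one.

```python
import numpy as np

def xlogx(x):
    # x * log(x) with the convention 0 * log(0) = 0.
    safe = np.where(x == 0, 1.0, x)
    return np.where(x == 0, 0.0, x * np.log(safe))

def KL_divergence(target, output):
    # Elementwise binomial KL divergence. No mean is taken,
    # so the caller is responsible for reducing the result.
    return (xlogx(target) - target * np.log(output)
            + xlogx(1 - target) - (1 - target) * np.log(1 - output))

t = np.array([0.5, 0.0, 1.0])
o = np.array([0.5, 0.1, 0.9])
kl = KL_divergence(t, o)  # zero wherever target == output
```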