ICLR 2013 in conjunction with AISTATS 2013, Scottsdale, Arizona, May 2nd-4th 2013
Submission deadline: January 15th 2013
One can submit a short workshop-style paper (not considered a publication) or a longer paper to be considered for publication in the ICLR proceedings or in a JMLR special issue. A novel reviewing and publication model is introduced, based on open reviews and pre-publication on arXiv, as advocated by Yann LeCun for many years.
It is well understood that the performance of machine learning methods
is heavily dependent on the choice of data representation (or
features) on which they are applied. The rapidly developing field of
representation learning is concerned with questions surrounding how we
can best learn meaningful and useful representations of data. We take
a broad view of the field, and include in it topics such as deep
learning and feature learning, metric learning, kernel learning,
compositional models, non-linear structured prediction, and issues
regarding non-convex optimization.
Despite the importance of representation learning to machine learning
and to application areas such as vision, speech, audio and NLP, there
is currently no dedicated venue for researchers who share an
interest in this topic. The goal of ICLR is to help fill this void.
A non-exhaustive list of relevant topics:
- unsupervised representation learning
- supervised representation learning
- metric learning and kernel learning
- dimensionality expansion, sparse modeling
- hierarchical models
- optimization for representation learning
- implementation issues, parallelization, software platforms, hardware
- applications in vision, audio, speech, and natural language processing
- other applications
For more details visit the conference’s web site: