Call For Papers: ICLR 2015

3rd International Conference on Learning Representations (ICLR2015)

Website:
Submission deadline: December 19, 2014
Location: Hilton San Diego Resort & Spa, May 7-9, 2015


It is well understood that the performance of machine learning methods is heavily dependent on the choice of data representation (or features) on which they are applied. The rapidly developing field of representation learning is concerned with questions surrounding how we can best learn meaningful and useful representations of data. We take a broad view of the field, and include in it topics such as deep learning and feature learning, metric learning, kernel learning, compositional models, non-linear structured prediction, and issues regarding non-convex optimization.

Despite the importance of representation learning to machine learning and to application areas such as vision, speech, audio, and NLP, there was previously no dedicated venue for researchers who share a common interest in this topic. The goal of ICLR has been to help fill this void.

A non-exhaustive list of relevant topics:
– unsupervised, semi-supervised, and supervised representation learning
– metric learning and kernel learning
– dimensionality expansion
– sparse modeling
– hierarchical models
– optimization for representation learning
– learning representations of outputs or states
– implementation issues, parallelization, software platforms, hardware
– applications in vision, audio, speech, natural language processing, robotics, neuroscience, or any other field

The program will include keynote presentations from invited speakers, oral presentations, and posters. This year, the program will also include a joint session with AISTATS.

ICLR’s Two Tracks

ICLR has two publication tracks.

Conference Track: These papers are reviewed as standard conference papers. Papers should be between 6 and 9 pages in length. Accepted papers will be presented at the main conference as either an oral or poster presentation and will be included in the official proceedings. A subset of accepted conference track papers will be selected for a JMLR special topics issue on the subject of representation learning. Authors of the selected papers will be given an opportunity to extend their original submissions with supplementary material.

Workshop Track: Papers submitted to this track are ideally 2-3 pages long and describe late-breaking developments. This track is meant to carry on the tradition of the former Snowbird Learning Workshop. These papers are non-archival workshop papers, and therefore may be published elsewhere.

Note that submitted conference track papers that are not accepted to the conference proceedings are automatically considered for the workshop track.

ICLR Submission Instructions

1. Authors should post their submissions (both conference and workshop tracks) on arXiv:

2. Once the arXiv paper is publicly visible (there can be an approx. 30 hour delay), authors should go to the openreview ICLR2015 website to submit to either the conference track or the workshop track.

To register on the openreview ICLR2015 website, the submitting author must have a Google account.

For more information on paper preparation, including style files and the URL for the openreview ICLR2015 website, please see

Submission deadline: December 19, 2014

Notes:
i. The conference submission's 6-9 page limits are meant as guidelines and will not be strictly enforced. For example, figures should not be shrunk to illegible size to fit within the page limit. However, to ensure a reasonable workload for our reviewers, papers that go beyond 9 pages should be formatted as a 9-page submission plus a separate supplementary material submission that will be optionally reviewed. If the paper is selected for the JMLR special topics issue, this supplementary material can be incorporated into the final journal version.
ii. Workshop track submissions should be formatted as a short paper, with an introduction, problem statement, brief explanation of the solution, figure(s), and references. They should not merely be abstracts.
iii. Paper revisions are permitted, and in fact encouraged, in response to comments from and discussions with the reviewers (see “An Open Reviewing Paradigm” below).
iv. Authors are encouraged to post their papers to arXiv early enough that the paper has an arXiv number and URL by the submission deadline of December 19, 2014. If these are not yet available, authors have up to one week after the submission deadline to provide the arXiv number and URL. At submission time, simply provide the title, authors, abstract, and temporary arXiv number indicating that the paper has been submitted to arXiv.

An Open Reviewing Paradigm

1. Submissions to ICLR are posted on arXiv prior to being submitted to the conference.

2. Authors submit their paper to either the ICLR conference track or workshop track via the openreview ICLR2015 website.

3. After the authors have submitted their papers, the ICLR program committee designates anonymous reviewers as usual.

4. The submitted reviews are published without the reviewers’ names, but with an indication that they are the designated reviews.

5. Anyone can openly (non-anonymously) write and publish comments on the paper. Anyone can ask the program chairs for permission to become an anonymous designated reviewer (open bidding). The program chairs have ultimate control over the publication of each anonymous review. Open commenters will have to use their real names, linked with their Google Scholar profiles.

6. Authors can post comments in response to reviews and comments. They can revise the paper as many times as they want, possibly citing some of the reviews. Reviewers are expected to revise their reviews in light of paper revisions.

7. The review calendar includes a generous amount of time for discussion between the authors, anonymous reviewers, and open commentators. The goal is to improve the quality of the final submissions.

8. The ICLR program committee will consider all submitted papers, comments, and reviews and will decide which papers are to be presented in the conference track, which are to be presented in the workshop track, and which will not appear at ICLR.

9. Papers that are presented in the workshop track or are not accepted will be considered non-archival, and may be submitted elsewhere (modified or not), although the ICLR site will maintain the reviews, the comments, and the links to the arXiv versions.

General Chairs

Yoshua Bengio, Université de Montréal
Yann LeCun, New York University and Facebook

Program Chairs

Brian Kingsbury, IBM Research
Samy Bengio, Google
Nando de Freitas, University of Oxford
Hugo Larochelle, Université de Sherbrooke


The organizers can be contacted at

Google’s Entry to ImageNet 2014 Challenge

The ImageNet 2014 competition is one of the largest and most challenging computer vision challenges. It is held annually, and each year it attracts top machine learning and computer vision researchers. Neural networks, specifically convolutional neural networks, again made a big impact on the results of this year’s challenge [1]. Google’s approach won the classification and detection challenges: Google used a new variant of convolutional neural network called “Inception” for classification, while R-CNN [5] was used for detection. The results and the approach that Google’s team took are summarized in [2, 3]. Google’s team was able to train a much smaller neural network and obtained much better results compared to those obtained with convolutional neural networks in the previous year’s challenges. Andrej Karpathy, one of the organizers of the competition, summarized his experience and the challenge itself in his blog post [4].

[1] ImageNet 2014 LSVRC results, Last retrieved on: 19-09-2014.

[2] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich, “Going Deeper with Convolutions,” arXiv link:

[3] GoogLeNet presentation, Last retrieved on: 19-09-2014.

[4] What I learned from competing against a ConvNet on ImageNet, Last retrieved on: 19-09-2014.

[5] Girshick, Ross, et al. “Rich feature hierarchies for accurate object detection and semantic segmentation.” arXiv preprint arXiv:1311.2524 (2013).

Andrew Ng is hired by Baidu

Andrew Ng, a co-founder of Coursera, a former Google employee, a professor at Stanford University, and an important contributor to machine learning, has just been hired by Baidu [1,2,3]. Andrew Ng will take on the role of Chief Scientist at Baidu in Silicon Valley. Adam Coates, previously a PhD student and postdoc under Andrew Ng, is going to join Baidu as well; his research will be mainly focused on unsupervised learning algorithms.




ICLR 2014 Videos are available online

The International Conference on Learning Representations (ICLR 2014) was held April 14-16 in Banff, with great interest from the deep learning community. The videos of the talks have been made available by the organizers on a YouTube channel [1].

[1] YouTube channel for ICLR 2014,

Google Acquires DeepMind

Google is acquiring an AI startup called DeepMind for more than 500 million dollars [1,2]. DeepMind has recently hired several deep learning experts and recent graduates from Geoffrey Hinton’s, Yann LeCun’s, Yoshua Bengio’s, and Jürgen Schmidhuber’s groups. One of the co-founders of DeepMind, Shane Legg, was a PhD student at IDSIA. According to [2], Google and Facebook were in competition to buy out DeepMind.

[1] TechCrunch

[2] The Information

An Article about History of Deep Learning

Wired has just published a new article about the brief history of deep learning and the role of Hinton in the development of the field. The article also mentions CIFAR and the contributions of its members to deep learning:

Google’s new Deep Learning Algorithm Transcribes House Numbers

During his summer internship at Google, Ian Goodfellow (currently a PhD student at the Université de Montréal LISA Lab) and his collaborators from Google, Yaroslav Bulatov, Julian Ibarz, Sacha Arnoud, and Vinay Shet, submitted a paper to ICLR 2014 that proposes a deep learning method which successfully transcribes house numbers from Google Street View images. This work received wide coverage in the internet media [1, 2, 3].




ICLR 2014 Submissions are Open for Comments

ICLR 2014 submissions are open for comments/reviews. ICLR is the International Conference on Learning Representations; it uses a post-publication open review system. There is a lot of interesting new work on deep learning and feature learning there. Please make comments and contribute to making the submissions better [1].

[1] Yann LeCun’s Google+ and Facebook Post.


Facebook Hires Yann LeCun

Facebook has decided to hire prominent NYU professor Yann LeCun as the new director of its AI lab. Yann LeCun will remain a part-time professor at NYU’s newly established Data Science Institute. Another NYU professor, Rob Fergus, will also join the Facebook AI team. Mark Zuckerberg officially announced the hiring of Yann LeCun at the NIPS 2013 Deep Learning Workshop.

[1] Yann LeCun’s Facebook post about his decision,

[2] Yann LeCun’s announcement about his decision at NIPS 2013,

[3] Facebook hires NYU deep learning expert to run its new AI lab, GigaOM,

Yahoo Acquires Startup LookFlow To Work On Flickr And ‘Deep Learning’

LookFlow, a startup that describes itself as “an entirely new way to explore images you love,” has just announced that it has been acquired by Yahoo and will be joining the Flickr team [1,2,3]. The company was co-founded by Bobby Jaros and Simon Osindero and utilized deep learning techniques for image recognition problems [1,2].

News sources:

[1] The Next Web, Emil Protalinski,

[2] TechCrunch, Anthony Ha,

LookFlow’s website: