ICLR 2014 Videos are available online

The International Conference on Learning Representations (ICLR 2014) was held April 14-16 in Banff and drew great interest from the deep learning community. The organizers have made videos of the talks available on a YouTube channel [1].

[1] YouTube channel for ICLR 2014, https://www.youtube.com/playlist?list=PLhiWXaTdsWB-3O19E0PSR0r9OseIylUM8

Google Acquires DeepMind

Google is acquiring an AI startup called DeepMind for more than 500 million dollars [1, 2]. DeepMind has recently hired several deep learning experts and recent graduates from the groups of Geoffrey Hinton, Yann LeCun, Yoshua Bengio, and Jürgen Schmidhuber. One of DeepMind's co-founders, Shane Legg, was a PhD student at IDSIA. According to [2], Google and Facebook were in competition to buy out DeepMind.

[1] TechCrunch

[2] The Information

An Article about History of Deep Learning

Wired has just published an article on the brief history of deep learning and Geoffrey Hinton's role in the development of the field. The article also mentions CIFAR and the contributions of its members to deep learning:

http://www.wired.com/wiredenterprise/2014/01/geoffrey-hinton-deep-learning

Google’s new Deep Learning Algorithm Transcribes House Numbers

Ian Goodfellow (currently a PhD student at the LISA lab of Université de Montréal) and his collaborators at Google, Yaroslav Bulatov, Julian Ibarz, Sacha Arnoud, and Vinay Shet, submitted a paper to ICLR 2014, based on work done during Goodfellow's summer internship at Google, that proposes a deep learning method for transcribing house numbers in Google Street View images. The work received wide coverage in internet media [1, 2, 3].

[1] http://www.wired.co.uk/news/archive/2014-01/07/google-street-view-house-numbers

[2] http://motherboard.vice.com/blog/how-google-knows-your-house-number

[3] http://www.technologyreview.com/view/523326/how-google-cracked-house-number-identification-in-street-view/

ICLR 2014 Submissions are Open for Comments

ICLR 2014 submissions are open for comments/reviews on OpenReview.net: http://openreview.net/venue/iclr2014. ICLR is the International Conference on Learning Representations. It uses a post-publication open review system. There is a lot of interesting new work on deep learning and feature learning there. Please comment and help make the submissions better [1].

[1] Yann LeCun’s Google+ and Facebook Post.


Facebook Hires Yann LeCun

Facebook has hired prominent NYU professor Yann LeCun as the director of its new AI lab [1, 2, 3]. LeCun will remain a part-time professor at NYU's newly established Data Science Institute. Another NYU professor, Rob Fergus, will also join the Facebook AI team. Mark Zuckerberg officially announced the hire at the NIPS 2013 Deep Learning Workshop.

[1] Yann LeCun's Facebook post about his decision, https://www.facebook.com/yann.lecun/posts/10151728212367143

[2] Yann LeCun's announcement about his decision at NIPS 2013, http://www.youtube.com/watch?feature=player_embedded&v=Eoljz6sK7mo

[3] Facebook hires NYU deep learning expert to run its new AI lab, GigaOM, http://gigaom.com/2013/12/09/facebook-hires-nyu-deep-learning-expert-to-run-its-new-ai-lab/

Yahoo Acquires Startup LookFlow To Work On Flickr And ‘Deep Learning’

LookFlow, a startup that describes itself as "an entirely new way to explore images you love," has announced that it has been acquired by Yahoo and will be joining the Flickr team [1, 2, 3]. The company, co-founded by Bobby Jaros and Simon Osindero, was using deep learning techniques for image recognition problems [1, 2].

News sources:

[1] The Next Web, Emil Protalinski, http://thenextweb.com/insider/2013/10/23/yahoo-acquires-ai-startup-lookflow-improve-discovery-flickr-build-deep-learning-group/

[2] TechCrunch, Anthony Ha, http://techcrunch.com/2013/10/23/yahoo-acquires-startup-lookflow-to-work-on-flickr-and-deep-learning/

LookFlow's website:

[3] https://lookflow.com/

ICLR 2014

The 2nd International Conference on Learning Representations (ICLR 2014) will take place in Banff, Canada, on April 14-16, 2014.

The same open reviewing process pioneered at ICLR 2013 will be used. The process was highly praised by authors and reviewers alike last year (see David Soergel's presentation at the ICML 2013 workshop on peer reviewing and publishing models).

The Call for Papers follows:

——————————————————————————————
2nd International Conference on Learning Representations
(ICLR2014)
——————————————————————————————

Website: representationlearning2014
Submission deadline (for initial arXiv submission): December 20th 2013

Held at the: Rimrock Resort Hotel, Banff, Canada on April 14th-16th 2014

Overview
————-
It is well understood that the performance of machine learning methods is heavily dependent on the choice of data representation (or features) on which they are applied. The rapidly developing field of representation learning is concerned with questions surrounding how we can best learn meaningful and useful representations of data.  We take a broad view of the field, and include in it topics such as deep learning and feature learning, metric learning, kernel learning, compositional models, non-linear structured prediction, and issues regarding non-convex optimization.
Despite the importance of representation learning to machine learning and to application areas such as vision, speech, audio and NLP, there is currently no common venue for researchers who share a common interest in this topic. The goal of ICLR is to help fill this void and following the success of the 1st ICLR conference in May 2013, we present a second edition of the conference.

A non-exhaustive list of relevant topics:
- unsupervised representation learning
- supervised representation learning
- metric learning and kernel learning
- dimensionality expansion, sparse modeling
- hierarchical models
- optimization for representation learning
- implementation issues, parallelization, software platforms, hardware
- applications in vision, audio, speech, and natural language processing, robotics and neuroscience.
- other applications

ICLR2014's Two Submission Tracks

ICLR2014 has two publication tracks:

Conference Track: These papers are reviewed as standard conference papers. Papers should be between 6 and 9 pages in length. Accepted papers will be presented at the main conference as either an oral or poster presentation and will be included in the official ICLR2014 proceedings. A subset of accepted conference track papers will be selected to participate in a JMLR special topics issue on the subject of Representation Learning. Authors of the selected papers will be given an opportunity to extend their original submissions with supplementary material.

Workshop Track: Papers submitted to this track are ideally 2-3 pages long and describe late-breaking developments. This track is meant to carry on the tradition of the former Snowbird Learning Workshop. These papers are considered as workshop papers (and can be published elsewhere). They will be lightly reviewed by ICLR reviewers.

ICLR2014 Submission Instructions:

(1) Authors should post their submissions (both conference and workshop tracks) on arXiv: http://arxiv.org
(2) Once the arXiv paper is publicly visible (there can be an approx. 30 hour delay), authors should go to the openreview ICLR2014 website: http://openreview.net/iclr2014 to submit to either the conference track or the workshop track.
To register on the OpenReview ICLR2014 website, the submitting author needs a Google account.

Both tracks will use the NIPS format (style files available here: http://nips.cc/PaperInformation/StyleFiles) or the ICML format.
Submission deadline (for initial arXiv submission): December 20th 2013

Notes:
i. The conference submission's 6-9 page limit is really meant as a guideline and will not be strictly enforced. For example, figures should not be shrunk to an illegible size to fit within the page limit. However, to ensure a reasonable workload for our reviewers, papers that go beyond 9 pages should be formatted as a 9-page submission plus a separate supplementary-material submission that will be optionally reviewed. If the paper is selected for the JMLR special topic issue, this supplementary material can be incorporated into the final journal version.
ii. Workshop track submissions should be formatted as a short paper, with introduction, problem statement, brief explanation of solution, figure(s) and references. They should not merely be abstracts.
iii. Paper revisions will be permitted in response to reviewer comments (see “An Open Reviewing Paradigm” section below).

An Open Reviewing Paradigm:

Following the success achieved last year with openreview.net, ICLR2014 will use an open publication and reviewing model that proceeds as follows:
- After the authors have posted their submissions on arXiv, the ICLR program committee designates anonymous reviewers as usual.
- The submitted reviews are published without the reviewer's name, but with an indication that they are the designated reviews. Anyone can write and publish comments on a paper, non-anonymously. Anyone can ask the program chairs for permission to become an anonymous designated reviewer (open bidding). The program chairs have ultimate control over the publication of each anonymous review. Open commenters must use their real names, linked to their Google Scholar profiles.
- Authors can post comments in response to reviews and comments. They can revise the paper as many times as they want, possibly citing some of the reviews.
- By Feb 22nd 2014, the ICLR program committee will consider all submitted papers, comments, and reviews and will decide which papers are to be presented at the conference as oral or poster. Although papers can be modified after that date, there is no guarantee that the modifications will be taken into account by the committee.
- Papers that are not accepted for publication in the proceedings will be considered non-archival, and could be submitted elsewhere (modified or not), although the ICLR site will maintain the reviews, the comments, and the links to the arXiv versions.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Invited Speakers
————————
To be announced.

General Chairs
———————
Yoshua Bengio, Université de Montreal
Yann LeCun, New York University

Program Chairs
———————–
Aaron Courville, Université de Montreal
Rob Fergus, New York University
Brian Kingsbury, IBM Research

Contact
———–
The organizers can be contacted at: iclr2014 dot programchairs at gmail dot com

Deep Learning Successes Obtained by IDSIA

IDSIA is one of the largest and oldest labs focusing on deep learning. Its machine learning team is led by Jürgen Schmidhuber. Over the last five years, the team has had several successes in machine learning competitions. Here are some of their recent achievements in competitions and challenges:

  • 1 Sept 2013: A deep neural network from the Swiss AI Lab IDSIA is the best artificial offline recogniser of Chinese characters in the ICDAR 2013 competition (3755 classes), approaching human performance. For more information: http://arxiv.org/abs/1309.0261

A New Machine Learning Startup Uses Maxout and Stochastic Pooling in Its Pipeline

According to a recently published GigaOM article, a Denver-based startup, AlchemyAPI, has started using maxout [1] and stochastic pooling [2] in its object recognition pipeline. Using these deep learning techniques, the company claims to deliver Google-level machine learning services.

[1] Goodfellow, I. J., Warde-Farley, D., Mirza, M., Courville, A., & Bengio, Y. "Maxout Networks." Proceedings of the 30th International Conference on Machine Learning (ICML'13), 2013.

[2] Zeiler, Matthew D., and Rob Fergus. “Stochastic pooling for regularization of deep convolutional neural networks.” ICLR (2013).
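For readers unfamiliar with these two techniques: a maxout unit takes the maximum over several affine feature maps, and stochastic pooling samples one activation per pooling window with probability proportional to its magnitude. The NumPy sketch below is only illustrative (it is not AlchemyAPI's actual pipeline; the shapes, function names, and 1-D pooling are simplifying assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def maxout(x, W, b):
    """Maxout unit (Goodfellow et al., 2013): the max over k affine
    feature maps.  x: (n, d_in), W: (k, d_in, d_out), b: (k, d_out)."""
    z = np.einsum("ni,kio->nko", x, W) + b   # (n, k, d_out)
    return z.max(axis=1)                     # (n, d_out)

def stochastic_pool(a, pool=2):
    """Stochastic pooling (Zeiler & Fergus, 2013) over non-overlapping
    1-D windows: sample one activation per window with probability
    proportional to its (non-negative) value."""
    n = len(a) // pool
    out = np.empty(n)
    for i in range(n):
        window = a[i * pool:(i + 1) * pool]
        total = window.sum()
        # If all activations are zero, fall back to a uniform choice.
        p = window / total if total > 0 else np.full(pool, 1.0 / pool)
        out[i] = rng.choice(window, p=p)
    return out
```

At test time, stochastic pooling is typically replaced by a probability-weighted average of the window, so the deterministic output matches the expected value of the training-time sampling.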