2009-04-30: CBLL Thesis Defense: Marc'Aurelio Ranzato:
Unsupervised Learning of Feature Hierarchies.
Thursday, April 30, 2009, at 1:30 PM, VLG: Room 1221, 715/719 Broadway, New York, NY 10003.
ICML Workshop on Learning Feature Hierarchies. This workshop will
take place on June 18, 2009 in Montreal, Canada, in conjunction with
the 26th International Conference on Machine Learning (ICML 2009).
The organizers are Kai Yu (NEC Labs America), Ruslan Salakhutdinov
(U. of Toronto), Yann LeCun (Courant Institute, NYU), Geoffrey Hinton
(U. of Toronto) and Yoshua Bengio (U. of Montreal).
2009-04-10: New paper on feature learning available:
Learning Invariant Features through Topographic Filter Maps,
to appear in CVPR 2009. The paper describes a method to learn SIFT-like, locally-invariant feature
detectors from data. The method minimizes a reconstruction criterion under a "block sparsity"
constraint, which minimizes the number of "pools" of features that are active.
Filters whose outputs are grouped in a pool end up detecting similar features,
and the pool outputs can be interpreted as complex-cell-like, locally-invariant
feature detectors.
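For concreteness, here is a minimal sketch of one way such a block-sparsity objective
can be written (a toy illustration, not the code from the paper; the linear decoder W,
code vector z, pool partition, and weight lam are all assumptions):

    import numpy as np

    def block_sparsity_loss(x, W, z, pools, lam):
        # Reconstruction term: the code z should reconstruct the input x
        # through a linear decoder W.
        reconstruction = np.sum((x - W @ z) ** 2)
        # Block-sparsity term: L2 norm within each pool, summed (L1-style)
        # across pools, so minimizing it leaves as few pools active as possible.
        penalty = sum(np.sqrt(np.sum(z[pool] ** 2)) for pool in pools)
        return reconstruction + lam * penalty

    # Toy usage: 4 code units grouped into 2 pools of 2.
    x = np.random.randn(8)
    W = np.random.randn(8, 4)
    z = np.random.randn(4)
    pools = [np.array([0, 1]), np.array([2, 3])]
    print(block_sparsity_loss(x, W, z, pools, lam=0.1))

The square root couples the units within a pool, which is what pushes filters
in the same pool toward detecting similar features.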
2008-10-19: Jeff Han on the Daily Show. Our friend and collaborator Jeff Han was on the Daily Show yesterday
showing off his magic touch screen as part of a spoof conspiracy thriller involving John Oliver and CNN's John King.
In case you missed it, here is the video.
2008-10-16: Old Courant Institute Reports at the Internet Archive.
The Internet Archive has scanned the archives of old technical reports from
the Courant Institute of Mathematical Sciences (our home institution) and put
them up on their website. The best part is that the Archive uses our own DjVu
format to distribute the documents (among other, inferior formats ;-).
The collection includes several reports by The Man Himself: Richard Courant.
The "DjVu" links on the archive website actually fire up a Java DjVu viewer (which is kind of slow),
instead of calling the DjVu viewer or DjVu plug-in in your browser. The best approach is to click on the
"http" link, and then click on the DjVu file. This project is funded by a
grant to the Archive from the Sloan Foundation.
2008-10-15: upcoming seminars: 10/28: Kilian Weinberger (Yahoo), 10/29: Ronan Collobert (NEC),
11/26: Alex Berg (Columbia), 12/03: Fei-Fei Li (Princeton), 12/17: Jason Weston (NEC).
Click here for details.
2008-10-02: CBLL is awarded an NSF grant from the Emerging Frontiers in Research and Innovation program,
entitled "Deep Learning in the Mammalian Visual Cortex". Collaborators in this project are Ed Boyden (MIT),
Yang Dan (Berkeley), Yann LeCun (NYU), Andrew Ng (Stanford), and Sebastian Seung (MIT).
There is an NSF announcement, and a
press release from NYU.
2008-04-18: A cool video of our off-road mobile robot is available below.
More details about the project (and more, higher quality videos) are available
from our LAGR project web page.
2008-04-14: RARE SPECIAL EVENT: Talk by Geoff Hinton (U. of Toronto),
Monday, April 14, at 11:30 at the Courant Institute, Warren Weaver Hall,
Room 1302. Title: An efficient way to learn multiplicative interactions.
2008-04-12: upcoming CBLL seminars:
04/16: Michael Littman (Rutgers); 04/23: David Blei (Princeton); 04/30: Tony Jebara (Columbia).
2008-04-11: Video and slides of Yann's talk on deep learning at Google and Stanford.
2008-04-11: A new video describing our LAGR off-road mobile robot system is available.
2008-03-19: [Press] Another article about Jeff Hawkins, this time in Forbes magazine, in which Yann is quoted. He is described
as a "neuroscience professor at MIT". They must have mixed up affiliations and quotes between Yann and Sebastian Seung.
2008-03-08: [Press] There is an article in The Economist this week about
Jeff Hawkins and his new company Numenta, in which Yann and Geoffrey Hinton are quoted.
2008-02-25: [Press] new videos are available of a 15-minute interview with Yann LeCun on machine learning
research, lecturing styles, where NIPS is going, the philosophy of science,
and various other topics (provided by Video Lectures).
2008-02-25: new videos and slides are available on the talk page,
including Yann's talks at the Deep Learning NIPS 2007 satellite session, and at
the NIPS workshop on large-scale and efficient learning.
2008-02-24: slides and videos of all the talks at the
Deep Learning NIPS 2007 satellite session are now available at the
meeting's main web site.
2007-04-18: RARE EVENT: Geoffrey Hinton is giving a talk at NYU on
April 18th at 11:30 AM (see announcement).
2007-04-11: CBLL seminar by Pradeep Ravikumar from CMU.
CBLL, April 11th at 11:30 AM.
2007-03-01: A new paper on the limitations of kernel methods
and the challenges of scaling up learning algorithms is available.
2005-10-20: CBLL seminar by Sebastian Seung. See the announcement for details.
2005-09-07: Yann's 4-hour tutorial at the 2005 IPAM Graduate
Summer School on Intelligent Extraction of Information from Graphs and
High Dimensional Data is available.
The slides and streaming video of the lectures are available here.
The tutorial includes such topics as energy-based models, convolutional
nets, trainable metrics, and graph transformer networks, with
applications (including live demos) to handwriting recognition,
invariant object recognition, face detection, image segmentation,
and vision-based obstacle avoidance for mobile robots.
The summer school's
main web site includes videos and slides of all the talks.
2005-02-28: upcoming CS
colloquia on machine learning: Lawrence Saul (02/28), Tong Zhang (03/09), Alex
Gray (03/11), David Blei (04/04), Gert Lanckriet (04/20), Guy Lebanon (04/22).
2005-02-03: upcoming seminars by Alex
Pouget, John Langford, and Larry Carin.
2004-12-01: CBLL, in partnership with Net-Scale Technologies,
has been selected as one of the 8 teams that will participate
in the US government-funded LAGR project
(Learning Applied to Ground Robots).
2004-08-24: CBLL has an open postdoc position for a research
project on vision-based robot navigation using machine learning methods.
2004-08-23: The NORB dataset for object recognition is available.
The Computational and Biological Learning Lab at the Courant Institute
was founded in September 2003. Our main research goal is to devise new
methods, theories, and algorithms to derive knowledge from data: the
process commonly known as learning. Understanding learning is
a necessary step toward building intelligent systems that can learn
from experience. It may also give us a better understanding of the
mechanisms that underlie human and animal intelligence.
Over the last several years, Machine Learning methods have been
applied with considerable success to such tasks as prediction,
classification, detection, recognition, diagnosis, and to practically
every area of data modeling and decision making. Our interests
at the CBLL include:
"Deep" Learning, Unsupervised Learning: learning "deep"
feature hierarchies: learning high-level internal representation
of data from unlabeled examples.
Large-scale learning: the design of
large-scale learning systems involving tens of thousands of input
variables and millions of training samples;
Biological learning: elucidating the neural
mechanisms of learning in animals and humans. What is
the learning algorithm of the cortex?
Energy-Based Models: non-probabilistic graphical models
without intractable partition functions (see the toy sketch after this list).
Learning, perception, vision: visual perception is one of
the most challenging domains for learning. We are developing learning
systems for such tasks as the detection and recognition of objects
in images.
Computational models of biological vision: what better way to
understand biological vision systems than to build an artificial one?
Mobile Robots: building trainable vision systems
to make mobile robots
navigate and avoid obstacles in rugged terrain.
Relational Graphical Models for Regression: we are building
new types of relational graphical models for regression problems.
The main application is predicting house prices.
End-to-end learning: using learning in every stage and
component of a system, from raw inputs to ultimate output;
Applications: we work on all applications of machine
learning and pattern recognition: data mining, biological data analysis,
mobile robotics, face detection/recognition, object recognition in
images, document understanding...
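To make the energy-based modeling item above concrete, here is a toy sketch of
energy-based inference (an illustration under assumed names, not any particular
model of ours). The prediction is simply the answer y with the lowest energy
E(x, y); since no normalization over y is ever computed, there is no partition
function to worry about.

    import numpy as np

    def ebm_predict(x, labels, energy):
        # Energy-based inference: return the label y that minimizes E(x, y).
        # No partition function is needed because we never normalize over y.
        return min(labels, key=lambda y: energy(x, y))

    # Toy example with a hypothetical linear energy E(x, y) = -w[y] . x
    w = {"cat": np.array([1.0, -0.5]), "dog": np.array([-0.3, 0.8])}
    energy = lambda x, y: -float(w[y] @ x)
    print(ebm_predict(np.array([0.2, 0.9]), ["cat", "dog"], energy))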