
Talks and Seminars at CBLL and Around


CBLL Seminars take place at NYU, 715/719 Broadway (between Waverly Place and Washington Place; subway: 8th Street or Astor Place), 12th floor, Room 1221.

2008-03-05: CBLL Seminar: John Langford (Yahoo! Research)

TITLE: Learning without the Loss

In many natural situations, you can probe the loss (or reward) for one action, but you do not know the losses of the other actions. This problem is simpler and more tractable than reinforcement learning, but still substantially harder than supervised learning because it has an inherent exploration component. I will discuss two algorithms for this setting.

(1) Epoch-greedy, which is a very simple method for trading off between exploration and exploitation.
(2) Offset Tree, which is a method for reducing this problem to binary classification.
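A minimal simulation conveys the flavor of the trade-off in (1): explore one uniformly random action, then exploit the empirically best action for a growing number of rounds. The Bernoulli reward rates and the epoch schedule below are illustrative assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.8])   # hidden Bernoulli reward rates (assumed)
K = len(true_means)
counts = np.zeros(K)
reward_sums = np.zeros(K)
total_reward = 0.0

for epoch in range(1, 200):
    # Exploration step: try one action uniformly at random; only the chosen
    # action's reward is observed (the partial-feedback setting above).
    a = rng.integers(K)
    r = rng.random() < true_means[a]
    counts[a] += 1
    reward_sums[a] += r
    total_reward += r

    # Exploitation steps: commit to the empirically best action for `epoch`
    # rounds; the estimates are updated only on explored rounds.
    best = np.argmax(reward_sums / np.maximum(counts, 1))
    for _ in range(epoch):
        total_reward += rng.random() < true_means[best]

print("empirical means:", np.round(reward_sums / np.maximum(counts, 1), 2))
print("mostly exploited arm:", best)
```

Letting epochs grow means the fraction of exploration rounds shrinks over time, which is the mechanism behind epoch-greedy's regret guarantee.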

2007-04-18: CBLL Special Seminar: Geoffrey Hinton (Toronto)

RARE EVENT!!
2007-04-18: Wed April 18, 2007, 11:30 AM
LOCATION: Room 1302, Warren Weaver Hall, Courant Institute, New York University, 251 Mercer Street, New York, NY
(NOTE: not the usual location)

TITLE: An efficient way to learn deep generative models

Geoffrey Hinton
Canadian Institute for Advanced Research and University of Toronto.

SLIDES OF THE TALK: [PPT (4MB)] [PDF (4MB)] [DjVu (2MB)]

I will describe an efficient, unsupervised learning procedure for deep generative models that contain millions of parameters and many layers of hidden features. The features are learned one layer at a time without any information about the final goal of the system. After the layer-by-layer learning, a subsequent fine-tuning process can be used to significantly improve the generative or discriminative performance of the multilayer network by making very slight changes to the features.

I will demonstrate this approach to learning deep networks on a variety of tasks, including creating generative models of handwritten digits and human motion; finding non-linear, low-dimensional representations of very large datasets; and predicting the next word in a sentence. I will also show how to create hash functions that map similar objects to similar addresses, allowing them to be used for finding similar objects in time that is independent of the size of the database.
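As a toy illustration of the layer-by-layer procedure, the sketch below greedily stacks two small restricted Boltzmann machines trained with one-step contrastive divergence on synthetic binary data. The layer sizes, the data, and the omission of bias terms are simplifications for brevity, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=5, lr=0.1):
    """One-step contrastive divergence (CD-1) for a binary RBM, biases omitted."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    for _ in range(epochs):
        v0 = data
        h0 = sigmoid(v0 @ W)
        v1 = sigmoid(h0 @ W.T)              # one reconstruction step
        h1 = sigmoid(v1 @ W)
        W += lr * (v0.T @ h0 - v1.T @ h1) / len(data)
    return W

# Toy binary "images" (assumed data); the talk's models use far larger inputs.
data = rng.integers(0, 2, size=(100, 16)).astype(float)

# Greedy layer-by-layer learning: each RBM is trained on the hidden
# activities of the layer below, with no information about any final task.
layer_sizes = [8, 4]
weights, layer_input = [], data
for n_hidden in layer_sizes:
    W = train_rbm(layer_input, n_hidden)
    weights.append(W)
    layer_input = sigmoid(layer_input @ W)   # features feed the next layer

print([W.shape for W in weights])
```

After this unsupervised stage, the stacked weights would serve as the starting point for the fine-tuning step described above.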

2007-04-11: CBLL Seminar: Pradeep Ravikumar (CMU)

2007-04-11: Room 1221, 719 Broadway, Wed April 11, 11:30 AM

TITLE: Techniques for approximate inference and structure learning in Discrete Markov Random Fields

Markov random fields (MRFs), or undirected graphical models, are graphical representations of probability distributions. Each graph represents a family of distributions -- the nodes of the graph represent random variables, the edges encode independence assumptions, and weights over the edges and cliques specify a particular member of the family.

Key inference tasks within this framework include estimating the normalization constant (also called the partition function), event probability estimation, and computing the most probable configuration. In addition, a key modeling task is to estimate the graph structure of the underlying MRF from data. In this talk, I'll give a high-level picture of these queries and some of the methods we have developed to answer them.

Joint work with John Lafferty and Martin Wainwright.
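For orientation, the three inference queries above can be computed exactly by brute force on a tiny example, which also shows why approximation is needed at scale: enumeration is exponential in the number of nodes. The chain structure and potential values below are illustrative assumptions.

```python
import itertools
import numpy as np

# A tiny pairwise binary MRF on a 3-node chain: x1 - x2 - x3.
theta_node = np.array([0.5, -0.2, 0.3])           # unary potentials (illustrative)
edges = [(0, 1), (1, 2)]
theta_edge = {(0, 1): 0.8, (1, 2): -0.4}          # pairwise potentials

def score(x):
    """Unnormalized log-probability of a configuration x in {0,1}^3."""
    s = np.dot(theta_node, x)
    for (i, j) in edges:
        s += theta_edge[(i, j)] * x[i] * x[j]
    return s

# Exact answers by enumerating all 2^3 configurations; this is feasible
# only for tiny graphs, which is why approximate inference matters.
configs = list(itertools.product([0, 1], repeat=3))
Z = sum(np.exp(score(np.array(x))) for x in configs)          # partition function
map_config = max(configs, key=lambda x: score(np.array(x)))   # most probable config
p_x1 = sum(np.exp(score(np.array(x))) for x in configs if x[0] == 1) / Z

print(Z, map_config, round(p_x1, 3))
```

The three printed quantities correspond one-to-one to the queries named in the abstract: the partition function, the MAP configuration, and an event probability.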

2007-02-14: CBLL Seminar: Risi Kondor (Columbia)

Wednesday 02/14 at 11:30 AM. 715/719 Broadway, Room 1221 (12th floor).
TITLE: A complete set of rotationally and translationally invariant features based on a generalization of the bispectrum to non-commutative groups

Risi Kondor, Columbia University

Deriving translation and rotation invariant representations is a fundamental problem in computer vision with a substantial literature. I propose a new set of features which

a, are simultaneously invariant to translation and rotation;
b, are sufficient to reconstruct the original image with no loss (up to a bandwidth limit);
c, do not involve matching with a template image or any similar discontinuous operation.

The new features are based on Kakarala's generalization of the bispectrum to compact Lie groups and a projection onto the sphere. I validated the method on a handwritten digit recognition dataset with randomly translated and rotated digits.

2007-01-25: CBLL Seminar: Francis Bach (ENSMP)

Thursday January 25th, 11:30AM Room 1221 (12th floor), NYU, 715/719 Broadway,

TITLE: Image Classification with Segmentation Graph Kernels

Francis Bach, Ecole Nationale Superieure des Mines de Paris

The output of image segmentation is often represented by a labelled graph, each vertex corresponding to a segmented region, with edges joining neighboring regions. However, such rich representations of images have mostly remained underused for learning tasks, partly due to the observed instability of the segmentation process and the inherent difficulty of inexact graph matching and other graph mining problems on uncertain graphs. Recent advances in kernel-based methods have made it possible to handle structured objects such as graphs by defining similarity measures via kernels, which can then be used for many learning tasks, such as classification with a support vector machine.

In this work, we propose a family of kernels between two segmentation graphs, each obtained by watershed transforms from the original images. Our kernels are based on soft matchings of subtree patterns of the respective graphs, leveraging the natural structure of images while remaining robust to the uncertainty of the segmentation process. The family of kernels yields competitive performance on common image classification benchmarks. Moreover, by using kernels to compute similarity measures between images, we can take advantage of recent advances in kernel-based learning: semi-supervised learning reduces the required number of labelled images, while multiple kernel learning algorithms efficiently select the most relevant kernels within the family for a particular learning task.

Joint work with Zaid Harchaoui.
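The talk's kernels soft-match subtree patterns of segmentation graphs; as a simpler illustration of the graph-kernel idea, here is a truncated random-walk kernel, which counts common walks of two graphs through their direct (Kronecker) product. The example graphs, decay parameter, and truncation length are illustrative choices, not details from the talk.

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.1, steps=10):
    """Count common walks of two graphs via their direct product:
    k(G1, G2) = sum_n lam^n * 1' (A1 (x) A2)^n 1, truncated at `steps`."""
    Ax = np.kron(A1, A2)               # adjacency of the product graph
    one = np.ones(Ax.shape[0])
    k, walk = 0.0, one.copy()
    for n in range(1, steps + 1):
        walk = Ax @ walk               # entries now count walks of length n
        k += (lam ** n) * one @ walk
    return k

# Two small unlabeled graphs (illustrative): a triangle and a 3-node path.
triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)

print(random_walk_kernel(triangle, triangle))   # self-similarity
print(random_walk_kernel(triangle, path))       # smaller cross-similarity
```

A positive-definite kernel like this can be fed directly to a support vector machine, which is the use the abstract has in mind for its richer segmentation-graph kernels.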

2006-12-20: CBLL Seminar: Pierre Baldi (UCI)

Wednesday December 20, at 11:00 in room 1221, 715/719 Broadway, New York
TITLE: Charting Chemical Space with Computers: Challenges and Opportunities for AI and Machine Learning

SPEAKER: Pierre Baldi, UC Irvine.

ABSTRACT: Small molecules with at most a few dozen atoms play a fundamental role in organic chemistry and biology. They can be used as combinatorial building blocks for chemical synthesis, as molecular probes for perturbing and analyzing biological systems, and for the screening/design/discovery of new drugs. As datasets of small molecules become increasingly available, it becomes important to develop computational methods for the classification and analysis of small molecules and in particular for the prediction of their physical, chemical, and biological properties.

We will describe datasets and machine learning methods, in particular kernel methods, for chemical molecules represented by 1D strings, 2D graphs of bonds, and 3D structures. We will demonstrate state-of-the-art results for the prediction of physical, chemical, or biological properties including the prediction of toxicity and anti-cancer activity and the applications of these methods to the discovery of new drug leads. More broadly, we will discuss some of the challenges and opportunities for computer science, AI, and machine learning in chemistry.

2006-06-23: CBLL Seminar: Wolf Kienzle

June 23rd, 2:30PM, 715 Broadway 12th floor conference room
TITLE: Learning an interest point detector from human eye movements

W. Kienzle, F.A. Wichmann, B. Schoelkopf, and M.O. Franz

The talk is about learning an interest point detector (saliency map) from human eye movement statistics. Instead of modelling biologically plausible image features (edge, blob, corner filters, etc.), we simply train a classifier on pixel values of fixated vs. randomly selected image patches. Thus, the learned function provides a measure of interestingness, but without being biased towards plausible but possibly misleading biological assumptions. We describe the data collection, training, and evaluation process, and show that our learned saliency measure significantly accounts for human eye movements. Furthermore, we illustrate connections to existing interest operators, and present a multi-scale interest point detector based on the learned function.
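In the same spirit as the abstract (a classifier trained on raw pixel values of fixated vs. randomly selected patches), here is a toy sketch with synthetic patches standing in for eye-movement data. The contrast-based synthetic data and the squared-pixel features are assumptions for illustration only; the actual training data and classifier in the talk differ.

```python
import numpy as np

rng = np.random.default_rng(0)
P = 25   # 5x5 patches, flattened to pixel vectors (illustrative size)

# Synthetic stand-ins: "fixated" patches get higher contrast than random ones.
fixated = rng.normal(0, 2.0, size=(200, P))
random_ = rng.normal(0, 1.0, size=(200, P))
X = np.vstack([fixated, random_])
y = np.concatenate([np.ones(200), np.zeros(200)])

# The learned function sees only raw pixel values, with no hand-built
# edge/blob/corner filters, mirroring the approach described above.
feats = np.column_stack([X, X ** 2])   # squares let a linear model use contrast
w = np.zeros(feats.shape[1]); b = 0.0
for _ in range(500):                   # plain logistic regression, gradient descent
    p = 1 / (1 + np.exp(-(feats @ w + b)))
    g = p - y
    w -= 0.01 * feats.T @ g / len(y)
    b -= 0.01 * g.mean()

saliency = 1 / (1 + np.exp(-(feats @ w + b)))   # "interestingness" per patch
acc = ((saliency > 0.5) == y).mean()
print("train accuracy:", acc)
```

Sliding such a patch scorer over an image would produce a saliency map, which is the sense in which a classifier of this kind acts as an interest point detector.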

2006-04-20: CBLL Seminar: Brendan Frey

Time: Thursday, April 20, 2006 at 11:00AM, Place: 719 Broadway, Room 1221

TITLE: Affinity propagation for combined bottom-up and top-down clustering

Brendan J. Frey, University of Toronto

Clustering is a critical task in the analysis of scientific data and in natural or artificial sensory processing. Existing techniques either are bottom-up and make pair-wise decisions when linking together training cases, or are top-down and represent each cluster using a parametric model, while alternately assigning training cases to clusters and updating parameters. I'll describe an algorithm that we call "affinity propagation", which for the first time combines complementary advantages of these distinct approaches. Affinity propagation can use sophisticated cluster models, but operates by propagating real-valued messages between pairs of training cases. Because affinity propagation replaces the estimation of model parameters with a step that considers many potential models and many possible cluster assignments, it can find better solutions than strictly bottom-up or top-down methods.

Work done in collaboration with Delbert Dueck, University of Toronto.
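The real-valued messages mentioned above come in two kinds, responsibilities and availabilities, exchanged between pairs of points. The sketch below implements those standard updates on illustrative 1-D data; the damping factor, the median-of-similarities preference, and the fixed iteration count are common defaults, not details from the talk.

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=100):
    """Message-passing clustering on a similarity matrix S.
    S[k, k] (the "preference") controls how many exemplars emerge."""
    n = S.shape[0]
    R = np.zeros((n, n))   # responsibility: how well k suits as exemplar for i
    A = np.zeros((n, n))   # availability: how appropriate it is for i to pick k
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        M = A + S
        idx = np.argmax(M, axis=1)
        first = M[np.arange(n), idx]
        M[np.arange(n), idx] = -np.inf
        second = M.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew

        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        Anew = Rp.sum(axis=0)[None, :] - Rp
        dA = Anew.diagonal().copy()
        Anew = np.minimum(Anew, 0)
        np.fill_diagonal(Anew, dA)
        A = damping * A + (1 - damping) * Anew
    exemplars = np.flatnonzero((A + R).diagonal() > 0)
    labels = np.argmax(S[:, exemplars], axis=1) if len(exemplars) else None
    return exemplars, labels

# Two well-separated 1-D blobs (illustrative data).
x = np.concatenate([np.zeros(5), 10 + np.zeros(5)]) + np.arange(10) * 0.01
S = -np.abs(x[:, None] - x[None, :]) ** 2          # negative squared distance
np.fill_diagonal(S, np.median(S))                  # common preference choice
exemplars, labels = affinity_propagation(S)
print("number of exemplars found:", len(exemplars))
```

Note that the number of clusters is not fixed in advance: it falls out of the preference values on the diagonal, one of the practical attractions of the method.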

2006-03-15: CBLL Seminar: Boris Epshtein

Time: Wednesday, March 15, 2006 at 3:00PM, 719 Broadway, Room 1221

TITLE: Visual classification by a hierarchy of semantic fragments

Boris Epshtein, Weizmann Institute

We describe visual classification by a hierarchy of semantic fragments. In fragment-based classification, objects within a class are represented by common sub-structures selected during training. Here we propose two extensions to the basic fragment-based scheme. The first extension is the extraction and use of feature hierarchies. We describe a method that automatically constructs complete feature hierarchies from image examples, and show that features constructed hierarchically are significantly more informative and better for classification compared with similar non-hierarchical features. The second extension is the use of so-called semantic fragments to represent object parts. The goal of a semantic fragment is to represent the different possible appearances of a given object part. The visual appearance of such object parts can differ substantially, and therefore traditional image similarity-based methods are inappropriate for the task. We show how the method can automatically learn the part structure of a new domain, identify the main parts, and how their appearance changes across objects in the class. We discuss the implications of these extensions for object classification and recognition.

Joint work with Prof. Shimon Ullman.

2005-10-20: CBLL Seminar: Sebastian Seung

2005-10-20: Room 1221, 719 Broadway, Thursday Oct 20, 12:00PM
Sebastian Seung, Brain and Cognitive Sciences Dept, MIT

TITLE: Representing part-whole relationships in recurrent networks

There is much debate about the computational function of top-down synaptic connections in the visual system. Here we explore the hypothesis that top-down connections, like bottom-up connections, reflect part-whole relationships. We analyze a recurrent network with bidirectional synaptic interactions between a layer of neurons representing parts and a layer of neurons representing wholes. Within each layer, there is lateral inhibition. When the network detects a whole, it can rigorously enforce part-whole relationships by ignoring parts that do not belong. The network can complete the whole by filling in missing parts. The network can refuse to recognize a whole, if the activated parts do not conform to a stored part-whole relationship. Parameter regimes in which these behaviors happen are identified using the theory of permitted and forbidden sets. The network behaviors are illustrated by recreating Rumelhart and McClelland's "interactive activation" model. (joint work with Viren Jain and Valentin Zhigulin)

2005-05-02: CBLL Seminar: Jean Ponce

2005-05-02: Room 1221, 719 Broadway, 2:00PM
Jean Ponce, Beckman Institute, UIUC

TITLE: 3D Photography

This talk addresses the problem of automatically acquiring three-dimensional object and scene models from multiple pictures, a process known as 3D photography. I will introduce a relative of Chasles' absolute conic, the absolute quadratic complex, and discuss its applications to the calibration of cameras with rectangular or square pixels without the use of calibration charts. I will also present a novel algorithm that uses the geometric and photometric constraints associated with multiple calibrated photographs to construct high-quality solid models of complex 3D objects in the form of carved visual hulls. If time permits, I will also briefly discuss our most recent results on category-level object recognition.

Joint work with Yasutaka Furukawa, Svetlana Lazebnik, Kenton McHenry, Theo Papadopoulo, Cordelia Schmid, Monique Teillaud and Bill Triggs.

2005-02-10: CBLL Seminar: Larry Carin

2005-02-10: Room 1221, 719 Broadway, Thursday Feb 10, 11:00AM
Larry Carin, Duke University

TITLE: Application of Active Learning and Semi-Supervised Techniques in Adaptive Sensing

In sensing problems one typically has a small quantity of labeled data and a large quantity of unlabeled data that must be characterized. In addition, when sensing we often have access to much of the unlabeled data simultaneously. This affords the opportunity to employ semi-supervised classification algorithms, designed using all available information, i.e., all labeled and unlabeled data. To augment the small quantity of labeled data, with the goal of reducing classification risk, one may also employ active learning. In this context, active learning may be manifested by acquiring labels on a small subset of the unlabeled data, with the examples chosen for labeling based on information-theoretic metrics. Active learning may also be employed in a multi-sensor setting, in which, rather than acquiring labels, we acquire new multi-sensor data with properly tailored sensors and sensor waveforms.

In this talk the basic ideas of active and semi-supervised learning are discussed in the context of sensing. We also discuss the utility of new machine learning technology for the sensing problem, such as variational Bayes inference. The ideas are demonstrated using several examples of measured multi-sensor data.
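The label-querying form of active learning described above can be sketched with the simplest information-based criterion, uncertainty sampling: repeatedly query the unlabeled example about which the current classifier is least certain (maximum predictive entropy). The 1-D toy problem, the logistic model, and the query budget are illustrative assumptions, not the talk's actual sensing setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D toy sensing problem: the class depends on whether x > 0 (illustrative).
x_pool = rng.uniform(-3, 3, 300)       # plentiful unlabeled data
y_pool = (x_pool > 0).astype(float)    # labels exist but start hidden
labeled = list(rng.choice(300, 4, replace=False))   # tiny initial labeled set

def fit(idx):
    """Logistic regression on the labeled subset (gradient descent)."""
    w, b = 0.0, 0.0
    for _ in range(300):
        p = 1 / (1 + np.exp(-(w * x_pool[idx] + b)))
        g = p - y_pool[idx]
        w -= 0.5 * np.mean(g * x_pool[idx])
        b -= 0.5 * np.mean(g)
    return w, b

for _ in range(10):                    # active-learning rounds
    w, b = fit(labeled)
    p = 1 / (1 + np.exp(-(w * x_pool + b)))
    entropy = -p * np.log(p + 1e-12) - (1 - p) * np.log(1 - p + 1e-12)
    entropy[labeled] = -1              # never re-query a labeled example
    labeled.append(int(np.argmax(entropy)))   # query the most uncertain point

w, b = fit(labeled)
acc = (((1 / (1 + np.exp(-(w * x_pool + b)))) > 0.5) == y_pool).mean()
print("accuracy after 10 active queries:", acc)
```

The queried points concentrate near the decision boundary, which is why a handful of actively chosen labels can match the accuracy of a much larger random sample.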

2005-02-09: CBLL Seminar: John Langford

2005-02-09: Room 1221, 719 Broadway, Wednesday, Feb. 9th, 3:30pm
John Langford, Toyota Technological Institute, Chicago

TITLE: Cost Sensitive Classification with Binary Classification

Cost sensitive classification is the problem of making a choice from an arbitrary set so as to minimize the cost of the choice. Binary classification is the problem of making a single correct binary prediction.

Cost sensitive classification can be reduced to binary classification in such a way that a small regret (= error rate above the minimum error rate) on the created binary classification problems implies a small regret on the cost sensitive classification problems.

This implies that a binary classifier can hope to solve (essentially) any learning problem with any bounded loss function. It also implies that any consistent binary classifier can be made into a consistent multiclass classifier.

John Langford will explain how this reduction works.
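To convey the flavor of such a reduction, the sketch below converts a toy cost-sensitive problem into importance-weighted binary problems, one per pair of choices, with the weight equal to the cost difference, then decodes by a pairwise tournament. This weighted-pairwise construction and the threshold "stump" learner are illustrative stand-ins, not the specific reduction or regret analysis presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cost-sensitive task: 2-D inputs, 3 choices, cost depends on x (assumed).
n, K = 400, 3
X = rng.normal(size=(n, 2))
costs = np.stack([np.abs(X[:, 0] - c) for c in (-1.0, 0.0, 1.0)], axis=1)

def train_weighted_stump(X, y, w):
    """Pick the single threshold rule minimizing weighted error (a weak
    stand-in for any importance-weighted binary classifier)."""
    best = (np.inf, 0, 0.0, 1)
    for f in range(X.shape[1]):
        for t in np.quantile(X[:, f], np.linspace(0.05, 0.95, 19)):
            for sign in (1, -1):
                pred = (sign * (X[:, f] - t) > 0).astype(int)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, f, t, sign)
    return best[1:]

# Reduction: one weighted binary problem per pair of choices (a, b).
# The binary label says which choice is cheaper; the importance weight is
# the cost difference, so mistakes with large regret cost more.
classifiers = {}
for a in range(K):
    for b in range(a + 1, K):
        y = (costs[:, b] < costs[:, a]).astype(int)   # 1 means "b is cheaper"
        w = np.abs(costs[:, a] - costs[:, b])
        classifiers[(a, b)] = train_weighted_stump(X, y, w)

# Decode by pairwise tournament: each choice gets a vote per won comparison.
votes = np.zeros((n, K))
for (a, b), (f, t, sign) in classifiers.items():
    pick_b = sign * (X[:, f] - t) > 0
    votes[np.arange(n), np.where(pick_b, b, a)] += 1
chosen = votes.argmax(axis=1)
avg_cost = costs[np.arange(n), chosen].mean()
print("average cost:", avg_cost, "vs best possible:", costs.min(axis=1).mean())
```

The importance weights are the point of the construction: they make small binary regret on the induced problems translate into small cost-sensitive regret, which is the property the talk's reduction makes precise.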

2005-02-04: CNS Seminar: Alex Pouget

2005-02-04: CNS building, Rm 815, 1:00 PM (special presentation at the usual Vision Journal Club time)
Alex Pouget, Department of Brain and Cognitive Sciences, University of Rochester

Recent psychophysical experiments indicate that humans use approximate Bayesian inference in a wide variety of tasks, ranging from cue integration to decision making to motor control. This implies that neurons both represent probability distributions and combine those distributions according to a close approximation to Bayes rule. We will demonstrate how such Bayesian inference can be implemented in the dynamics of recurrent analog circuits, using cue integration as an example. We will also present recent recordings showing that the receptive fields of multisensory neurons in area VIP are consistent with the predictions of our model. We will end by discussing our recent attempt to generalize this approach to networks of spiking neurons.
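The cue-integration case admits a compact worked example for Gaussian cues: under Bayes rule with a flat prior, precisions add, and the combined estimate is the precision-weighted average of the cues. The particular means and standard deviations below are illustrative numbers, not data from the talk.

```python
import numpy as np

# Two noisy cues about the same quantity (e.g., visual and auditory estimates
# of a target position); the numbers are illustrative.
mu_v, sigma_v = 2.0, 1.0     # visual cue: mean and standard deviation
mu_a, sigma_a = 4.0, 2.0     # auditory cue

# Bayes rule for Gaussian likelihoods: precisions add, and the posterior
# mean is the precision-weighted average of the cue means.
prec_v, prec_a = 1 / sigma_v**2, 1 / sigma_a**2
mu_post = (prec_v * mu_v + prec_a * mu_a) / (prec_v + prec_a)
sigma_post = np.sqrt(1 / (prec_v + prec_a))

print(mu_post)       # pulled toward the more reliable (visual) cue
print(sigma_post)    # the combined estimate is more certain than either cue
```

This precision-weighted combination is exactly the behavior observed in the psychophysical experiments the abstract cites, and the quantity a neural circuit implementing Bayesian cue integration must reproduce.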

2004-03-24: Guest Lecture: Lawrence Saul

2004-03-24: WWH 101, 5:00PM:
Guest Lecture by Lawrence Saul, University of Pennsylvania.
TITLE: Unsupervised Learning, Dimensionality Reduction, and Non-Linear Embedding

More information about L. Saul and his work is available on his web page.

2004-03-12: Seminar: Brendan Frey

2004-03-12: WWH 1314, 3:00PM:
Brendan J. Frey, University of Toronto
TITLE: Learning the "Epitome" of an Image

I will describe a new model of image data that we call the "epitome". The epitome of an image is its miniature, condensed version containing the essence of the textural and shape properties of the image. As opposed to previously used simple image models, such as templates or basis functions, the size of the epitome is considerably smaller than the size of the image or object it represents, but the epitome still contains most constitutive elements needed to reconstruct the image. A collection of images often shares an epitome, e.g., when images are a few consecutive frames from a video sequence, or when they are photographs of similar objects. A particular image in a collection is defined by its epitome and a smooth mapping from the epitome to the image pixels. When the epitome model is used within a hierarchical generative model, appropriate inference algorithms can be derived to extract epitomes from a single image or a collection of images and at the same time perform various inference tasks, such as image segmentation, motion estimation, object removal, super-resolution and image denoising.

Go to http://research.microsoft.com/~jojic/epitome.htm for a sneak preview.

Joint work with Nebojsa Jojic and Anitha Kannan.

2004-03-04: Seminar: Jean Ponce

2004-03-04: 575 Broadway, Room 1221, 4:00PM:
Jean Ponce, Beckman Institute and Department of Computer Science, UIUC
TITLE: Toward True 3D Object Recognition

This talk addresses the problem of recognizing three-dimensional (3D) objects in photographs and image sequences, revisiting viewpoint invariants as a local representation of shape and appearance. The key insight is that, although smooth surfaces are almost never planar in the large, and thus do not (in general) admit global invariants, they are always planar in the small---that is, sufficiently small surface patches can always be thought of as being composed of coplanar points---and thus can be represented locally by planar invariants. This is the basis for a new, unified approach to object recognition where object models consist of a collection of small (planar) patches, their invariants, and a description of their 3D spatial relationship. I will illustrate this approach with two fundamental instances of the 3D object recognition problem: (1) modeling rigid 3D objects from a small set of unregistered pictures and recognizing them in cluttered photographs taken from unconstrained viewpoints; and (2) representing, learning, and recognizing non-uniform texture patterns under non-rigid transformations. I will also discuss extensions to the analysis of video sequences and the recognition of object categories. If time permits, I will conclude with a brief presentation of our recent work on 3D photography.

Joint work with Svetlana Lazebnik, Frederick Rothganger, and Cordelia Schmid.
