Predicting Depth, Surface Normals and Semantic Labels
with a Common Multi-Scale Convolutional Architecture

David Eigen      Rob Fergus

{deigen,fergus}@cs.nyu.edu

Paper PDF (ICCV 2015)


We address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks.
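The coarse-to-fine refinement described above can be sketched numerically. This is a hypothetical toy illustration, not the paper's CNN: the learned convolutional networks at each scale are replaced by simple average-pooling, nearest-neighbour upsampling, and a fixed blending step, just to show how a coarse global prediction is progressively upsampled and refined with finer-resolution input.

```python
import numpy as np

def downsample(img, factor):
    """Average-pool by an integer factor (stand-in for a strided conv stack)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(pred, factor):
    """Nearest-neighbour upsampling of the previous scale's prediction."""
    return pred.repeat(factor, axis=0).repeat(factor, axis=1)

def refine(prev_pred, fine_input, weight=0.5):
    """Stand-in refinement: blend the upsampled coarse prediction with
    finer-resolution input (in the paper, a learned network does this)."""
    return weight * prev_pred + (1 - weight) * fine_input

def multiscale_predict(img):
    """Predict over a sequence of scales: coarse global estimate first,
    then two refinement passes at progressively higher resolution."""
    pred = downsample(img, 4)                              # scale 1: 1/4 res
    pred = refine(upsample(pred, 2), downsample(img, 2))   # scale 2: 1/2 res
    pred = refine(upsample(pred, 2), img)                  # scale 3: full res
    return pred
```

A constant input passes through unchanged (each stage preserves constants), while structured input is reconstructed coarse-to-fine, with detail reintroduced at each scale.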

We also recently submitted depth and surface normal predictions from a slightly earlier version of our system to the Reconstruction Meets Recognition Challenge (RMRC) at ECCV 2014.

Code:
Predicted outputs for NYU Depth v2 test set:
UPDATE 14 May 2016: The semantic label prediction files were previously saved using an older version of our model by mistake. We have re-saved them using the model from the paper.
Ground truth normals: