Taskonomy: Disentangling Task Transfer Learning

(page under construction -- release of models and data coming soon)
Amir R. Zamir, Alexander Sax*, William B. Shen*
Leonidas Guibas, Jitendra Malik, Silvio Savarese


The paper and supplementary material describing the methodology and evaluation.



Go to the Supervision API to find a supervision-efficient transfer strategy.

API Page


Download best-in-class pretrained models from the paper.

Pretrained Models


Download the data. Almost 4M multiply-annotated images of indoor spaces.

Dataset

Demo of 20 Tasks

Upload an image and see live results for 20 distinct vision tasks, all produced by our pretrained networks.

Live Demo

Transfer Visualization

Examine any combination of source-to-target transfers via sample videos.

Transfer Visualization Page


Would having surface normals simplify estimating the depth of an image? Do visual tasks have a relationship, or are they unrelated? Common sense suggests that visual tasks are interdependent, implying the existence of structure among them. However, that structure must be properly modeled before it becomes actionable, e.g., before task relationships can be exploited to reduce the required supervision. We therefore ask: which tasks transfer to an arbitrary target task, and how well? And how can a set of tasks be learned collectively with less total supervision?
These are some of the questions that can be answered by a computational model of the space of vision tasks, as proposed in this paper. We explore the task structure using a sampled dictionary of 2D, 2.5D, 3D, and semantic tasks, modeling their (first- and higher-order) transfer behaviors in a latent space. The product can be viewed as a computational task taxonomy (Taskonomy) and a map of the task space. We study the consequences of this structure, e.g., the emerging task relationships, and exploit them to reduce the demand for supervision. For instance, we show that the total number of labeled datapoints needed to solve a set of 10 tasks can be reduced to 1/4 while keeping performance nearly the same, by using features from multiple proxy tasks. Users can employ the provided Binary Integer Programming solver, which leverages the taxonomy to find efficient supervision policies for their own use cases.
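To make the last point concrete, here is a minimal sketch of phrasing supervision-policy selection as a Binary Integer Program, using the open-source `pulp` solver. It illustrates the idea only, not the paper's exact formulation; all task names, labeling costs, transfer qualities, and the budget below are made-up placeholders.

```python
# Sketch only: pick which source tasks to label (paying a supervision cost)
# and which source each target transfers from, maximizing transfer quality
# under a labeling budget. All numbers are illustrative placeholders.
import pulp

sources = ["autoencoding", "normals", "2d_keypoints"]
targets = ["depth", "reshading"]
label_cost = {"autoencoding": 1.0, "normals": 4.0, "2d_keypoints": 2.0}
# quality[(s, t)]: hypothetical score of transferring source s to target t
quality = {("autoencoding", "depth"): 0.3, ("normals", "depth"): 0.9,
           ("2d_keypoints", "depth"): 0.4, ("autoencoding", "reshading"): 0.2,
           ("normals", "reshading"): 0.8, ("2d_keypoints", "reshading"): 0.5}
budget = 5.0  # total supervision we can afford

prob = pulp.LpProblem("supervision_policy", pulp.LpMaximize)
use = {s: pulp.LpVariable(f"use_{s}", cat="Binary") for s in sources}
pick = {(s, t): pulp.LpVariable(f"pick_{s}_{t}", cat="Binary")
        for s in sources for t in targets}

# Objective: total transfer quality over all targets.
prob += pulp.lpSum(quality[s, t] * pick[s, t] for s in sources for t in targets)
# Each target uses exactly one source; a transfer requires its source labeled.
for t in targets:
    prob += pulp.lpSum(pick[s, t] for s in sources) == 1
for s in sources:
    for t in targets:
        prob += pick[s, t] <= use[s]
# Stay within the supervision budget.
prob += pulp.lpSum(label_cost[s] * use[s] for s in sources) <= budget

prob.solve()
for t in targets:
    chosen = [s for s in sources if pulp.value(pick[s, t]) > 0.5]
    print(t, "<-", chosen[0])
```

With these placeholder numbers, the solver labels the expensive "normals" source and transfers both targets from it, which is the intended behavior of a supervision policy: spend the budget on the few sources that transfer well everywhere.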

Process overview. The steps involved in creating the taxonomy.

Supervision API

The provided API uses our results to recommend an effective set of transfers. Using these transfers, one can approach the results of a fully supervised network with substantially less labeled data.
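As an illustration of what querying the API for such a policy might look like (the module `taskonomy_api`, the function `recommend_transfers`, and every argument below are hypothetical placeholders, not the project's actual interface; see the API Page for the real one):

```python
# Hypothetical sketch only: names and arguments are placeholders.
import taskonomy_api

policy = taskonomy_api.recommend_transfers(
    targets=["depth", "semantic_segmentation"],  # tasks we need solved
    budget=4,            # number of source tasks we can afford to label
    transfer_order=2,    # allow up to second-order (two-source) transfers
)
for target, sources in policy.items():
    print(target, "<-", sources)
```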

Example taxonomies. Generated from the API.

Transfer Visualization

To evaluate the quality of the learned transfer functions, we ran the transfer networks on a randomly chosen YouTube video. Visit the Transfer Visualization page to analyze how well different sources transfer to a target, or how well one source transfers to different targets. Compare the results with the task-specific networks, as well as with baselines trained on ImageNet or trained on the same data as the transfer networks but without transfer learning.
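For reference, a visualization like this can be produced by running a transfer network frame by frame, e.g. with OpenCV. In the minimal sketch below, `run_transfer_network` is a dummy stand-in (a Canny edge map) for one of the released transfer networks, and the file paths are illustrative.

```python
# Run a stand-in "transfer network" on every frame of a video and write the
# per-frame predictions out as a new video.
import cv2

def run_transfer_network(frame_rgb):
    """Placeholder for a real transfer network; here, a Canny edge map."""
    gray = cv2.cvtColor(frame_rgb, cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    return cv2.cvtColor(edges, cv2.COLOR_GRAY2RGB)

cap = cv2.VideoCapture("input_video.mp4")  # illustrative path
writer = cv2.VideoWriter("prediction.avi",
                         cv2.VideoWriter_fourcc(*"MJPG"), 30.0, (256, 256))
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(cv2.resize(frame, (256, 256)), cv2.COLOR_BGR2RGB)
    pred = run_transfer_network(rgb)
    writer.write(cv2.cvtColor(pred, cv2.COLOR_RGB2BGR))
cap.release()
writer.release()
```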


Pretrained Models -- A Unified Bank of 25 Vision Tasks

Click on each task to see sample results.
Try the live demo on your query image.
Download pretrained models in the bank.
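Loading one of the downloaded models then looks roughly like the sketch below, assuming TensorFlow 1.x checkpoints. The checkpoint path and tensor names are assumptions; each released model ships with its own loading code.

```python
# Rough sketch: restore a checkpoint from the model bank and run it once.
import numpy as np
import tensorflow as tf

with tf.Session() as sess:
    # Paths below are hypothetical placeholders for a downloaded model.
    saver = tf.train.import_meta_graph("surface_normals/model.ckpt.meta")
    saver.restore(sess, "surface_normals/model.ckpt")
    graph = tf.get_default_graph()
    # Tensor names are placeholders for the model's real input/output names.
    inp = graph.get_tensor_by_name("input:0")
    out = graph.get_tensor_by_name("output:0")
    img = np.random.rand(1, 256, 256, 3).astype(np.float32)  # query image
    print(sess.run(out, feed_dict={inp: img}).shape)
```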

Denoising Autoencoder

Uncorrupted version of a corrupted image.

Surface Normals

Pixel-wise surface normals.

Z-buffer Depth

Range estimation.


Colorization

Coloring for grayscale images.


Reshading

Shading function with new lighting.

Room Layout

Orientation, size, and translation of the current room.

Camera Pose (fixated)

Relative camera pose with matched optical centers.

Camera Pose (nonfix.)

Relative camera pose with distinct optical centers.

Vanishing Points

Three Manhattan-world vanishing points.


Curvature

Magnitude of the principal curvatures.

Unsupervised 2D Segm.

Felzenszwalb/graph-cut oversegmentation on the RGB image.

Unsupervised 2.5D Segm.

Felzenszwalb/graph-cut oversegmentation on an RGB-D-Normals-Curvature image.
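Both unsupervised segmentation targets are produced with Felzenszwalb's graph-based oversegmentation, which is available off the shelf. A minimal sketch using scikit-image (parameter values are illustrative, not necessarily those used to generate our labels):

```python
# Felzenszwalb graph-based oversegmentation on an RGB image.
from skimage import data
from skimage.segmentation import felzenszwalb

img = data.astronaut()  # any RGB image
segments = felzenszwalb(img, scale=100, sigma=0.5, min_size=50)
print("number of segments:", segments.max() + 1)
```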

3D Keypoints

Keypoint estimation from geometric features.

2D Keypoints

Keypoint estimation from texture features.

Occlusion Edges

Edges at which parts of the scene occlude each other.

Texture Edges

Edges computed from the RGB image.


Inpainting

Filling in the masked center of an image.

Semantic Segmentation

Pixel-level semantic classification.

Object Classification

Knowledge distillation from ImageNet.

Scene Classification

Knowledge distillation from MIT Places.
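Both classification networks above were trained by distillation, i.e., by matching the soft outputs of an existing network rather than raw labels. A minimal sketch of such a distillation objective, written in NumPy for clarity; the random logits and the temperature are illustrative, not the paper's training setup.

```python
# Knowledge distillation sketch: train the student to match the teacher's
# temperature-softened class distribution via cross-entropy.
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """Cross-entropy between softened teacher and student distributions."""
    p = softmax(teacher_logits, T)                 # teacher's soft targets
    log_q = np.log(softmax(student_logits, T) + 1e-12)
    return -(p * log_q).sum(axis=-1).mean()

teacher_logits = np.random.randn(8, 1000)  # e.g. an ImageNet network's output
student_logits = np.random.randn(8, 1000)
print(distillation_loss(teacher_logits, student_logits))
```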

Jigsaw Puzzle

Inverse permutation of a scrambled image.


Egomotion

Odometry with three camera poses.


Autoencoding

Image compression and decompression.

Point Matching

Classifying pairs of possibly matching images.


Dataset

3.9 million images, with multiple tags per image.



We provide a large and high-quality dataset of varied indoor scenes.

Complete pixel-level geometric information via aligned meshes.

Globally consistent camera poses. Complete camera intrinsics.

High-definition images.

3x the size of ImageNet.

* If you are interested in using the full dataset (12 TB), then please contact the authors.

Paper & Supplementary Materials

Zamir, Sax*, Shen*, Guibas, Malik, Savarese.
Taskonomy: Disentangling Task Transfer Learning.
CVPR 2018

Please cite the paper if you use the method, models, database, or API.

@INPROCEEDINGS{TaskonomyTaskTransfer2017,
 author = {Amir R. Zamir and Alexander Sax and William B. Shen and Leonidas J. Guibas and Jitendra Malik and Silvio Savarese},
 title = {Taskonomy: Disentangling Task Transfer Learning},
 booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
 year = {2018},
}