Visual Dictionary
Teaching computers to recognize objects
Visual dictionary: visualization of 53,464 English nouns arranged by meaning. Each tile shows the average color of the images corresponding to that term.

Visual dictionary

Click on the map to view the images in that region of the visual dictionary.

We present a visualization of all the nouns in the English language arranged by semantic meaning. Each tile in the mosaic is an arithmetic average of images relating to one of 53,464 nouns. The images for each word were obtained using Google's Image Search and other engines; a total of 7,527,697 images were used, and each tile is the average of 140 images. The average reveals the dominant visual characteristics of each word: for some words the average turns out to be a recognizable image, for others it is a colored blob.

The list of nouns was obtained from WordNet, a database compiled by lexicographers that records the semantic relationships between words. Using this database, we extract a tree-structured semantic hierarchy, which we use to arrange the tiles within the poster. We tessellate the poster using the hierarchy so that the proximity of two tiles reflects their semantic distance. The poster thus explores the relationship between visual and semantic similarity. For a large part of our language the two are closely correlated, as shown by the extent of visual clustering within the poster. The large-scale groupings correspond to broad categories such as plants or people. Within the plant cluster, for example, tighter semantic groupings such as flowers or trees are visible; in turn, each of these clusters contains further groupings, all the way down to individual, highly specific nouns. The averaging within each tile removes the variation between images of a given word, enhancing the similarity between neighbors.

By clicking on the map, you will see the word corresponding to that location, the average image, and the first 12 images returned by the online image search tools.
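To make the two ingredients above concrete, here is a minimal sketch of how an average-image tile and a WordNet-based semantic distance could be computed. It is an illustration only, not the code used to build the poster: the 32x32 tile size, the "images/guitar" folder layout, and the use of NLTK's WordNet interface are assumptions.

```python
# Sketch: pixel-wise averaging of images for a word, plus a semantic distance
# from WordNet. Tile size, folder layout, and library choices are illustrative.
from pathlib import Path

import numpy as np
from PIL import Image
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

TILE_SIZE = (32, 32)  # assumed tile resolution; the poster's actual size may differ


def average_image(image_dir: str, tile_size=TILE_SIZE) -> Image.Image:
    """Resize every image in image_dir to tile_size and return their pixel-wise mean."""
    files = sorted(Path(image_dir).glob("*.jpg"))
    acc = np.zeros((tile_size[1], tile_size[0], 3), dtype=np.float64)
    for f in files:
        img = Image.open(f).convert("RGB").resize(tile_size)
        acc += np.asarray(img, dtype=np.float64)
    acc /= max(len(files), 1)
    return Image.fromarray(acc.astype(np.uint8))


def semantic_distance(word_a: str, word_b: str) -> float:
    """Distance between the first noun senses of two words in the WordNet hierarchy."""
    a = wn.synsets(word_a, pos=wn.NOUN)[0]
    b = wn.synsets(word_b, pos=wn.NOUN)[0]
    return 1.0 - a.path_similarity(b)  # 0 = identical sense, closer to 1 = more distant


if __name__ == "__main__":
    # "images/guitar" is a hypothetical folder of images downloaded for the noun "guitar".
    average_image("images/guitar").save("guitar_average.png")
    print(semantic_distance("flower", "tree"))    # closer: both are plants
    print(semantic_distance("flower", "guitar"))  # farther apart: unrelated categories
```

In the poster, a distance of this kind decides which tiles end up next to each other, so semantically close nouns also sit close on the map.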

Computers currently have difficulty recognizing objects in images. While practical solutions exist for a few simple classes, such as human faces or cars, the more general problem of recognizing all the different classes of objects in the world (e.g. guitars, bottles, telephones) remains unsolved. Computer vision researchers are investigating methods that can recognize and localize thousands of different object categories in complex scenes. A key component of these algorithms is the data used to train the computer's model of each object. Current approaches use collections of images gathered by hand. Our research explores how the billions of images available on the Internet can be used to train models for object recognition. We gathered 79 million images from the web, and we are using this massive dataset to train a computer to recognize objects within an image and to understand the scenes depicted in photographs.

Teaching computers to see

When you view the images for each word, you can click on each image to indicate whether it is a correct example of the associated word (a green frame will appear around the image) or an incorrect one (a red cross will appear). If you are unsure, just click until the frame around the image is black. Each click cycles the selection through correct, incorrect, and "I do not know", and then starts again. You can submit your selection even if there are many images for which you are unsure of the right decision: your selections will be combined with those of other users to obtain a more confident labeling. Once you are satisfied with your selection, press the submit button. We will use your selections to train a computer vision algorithm to recognize images and to re-rank the images for each word, so as more annotations are provided the results will improve. A small sketch of this labeling logic follows.
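The sketch below illustrates the click-cycling and vote-combination behavior described above. The state names, the cycling order, and the majority-vote aggregation are assumptions made for illustration, not the site's actual implementation.

```python
# Sketch of the labeling interaction: clicking an image cycles its label, and
# labels from different users are combined. Details are assumed, not the real code.
from collections import Counter

STATES = ["correct", "incorrect", "unknown"]  # green frame, red cross, black frame


def next_state(current: str) -> str:
    """Each click advances an image's label to the next state, then wraps around."""
    return STATES[(STATES.index(current) + 1) % len(STATES)]


def combine_votes(votes: list[str]) -> str:
    """Combine labels from several users; 'unknown' votes are ignored (assumed rule)."""
    counts = Counter(v for v in votes if v != "unknown")
    if not counts:
        return "unknown"
    return counts.most_common(1)[0][0]


if __name__ == "__main__":
    state = "unknown"                 # image starts unlabeled (black frame)
    state = next_state(state)         # first click  -> "correct"   (green frame)
    state = next_state(state)         # second click -> "incorrect" (red cross)
    print(state)
    print(combine_votes(["correct", "correct", "unknown", "incorrect"]))  # -> "correct"
```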


Funding support came from an NSF CAREER award (ISI 0747120), the ISF, and a Microsoft Research gift.