An image recognition network dreams about every object it knows. Part 2/2: non-animals

The second video from Ville-Matias Heikkilä uses a deep-dream-like technique to visually reveal what the network has learned, this time featuring man-made objects and food:

Network used: VGG CNN-S (pretrained with Imagenet)

There are 1000 output neurons in the network, one for each image recognition category. In this video, the output of each of these neurons is separately amplified using backpropagation (i.e. deep dreaming).
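The core idea can be sketched as gradient ascent on the input image: backpropagate from one chosen output neuron and nudge the input in the direction that increases that neuron's activation. This is only a toy illustration of the principle, not the video's actual pipeline; a random linear layer stands in for VGG CNN-S, and all names and parameters below are assumptions for brevity.

```python
import numpy as np

# Toy "deep dreaming": gradient ascent on the INPUT to maximize one
# output neuron. A random linear layer is a stand-in for the real
# network (assumption); the video uses VGG CNN-S pretrained on Imagenet.
rng = np.random.default_rng(0)
n_inputs, n_classes = 64, 10
W = rng.standard_normal((n_classes, n_inputs))  # stand-in "network"

def amplify(neuron, steps=100, lr=0.1):
    x = rng.standard_normal(n_inputs) * 0.01    # start from faint noise
    for _ in range(steps):
        # For scores = W @ x, the gradient of scores[neuron] w.r.t. x
        # is simply W[neuron]; ascend along it.
        x += lr * W[neuron]
    return x

x = amplify(neuron=3)
print(int(np.argmax(W @ x)))  # the amplified neuron should now dominate
```

With a deep network the gradient is obtained by backpropagation rather than read off a weight row, but the loop is the same: repeatedly adjust the input toward higher activation of the target neuron.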

The middle line shows the category title of the amplified neuron. The bottom line shows the category title of the highest competing neuron. Color coding: green = amplification very successful (the runner-up is far behind), yellow = close competition with the runner-up, red = the amplified neuron is itself only the runner-up.
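The color coding amounts to comparing the amplified neuron's score against the strongest competitor. A minimal sketch of that logic, assuming hypothetical margin thresholds (the video does not state the actual values):

```python
import numpy as np

def status_color(scores, amplified, close_margin=1.0):
    """Color-code how well amplification of `amplified` succeeded.

    `close_margin` is a hypothetical threshold (assumption), not a
    value taken from the video.
    """
    order = np.argsort(scores)[::-1]          # neurons, strongest first
    if order[0] != amplified:
        return "red"                          # lost to a competitor
    margin = scores[order[0]] - scores[order[1]]
    return "green" if margin > close_margin else "yellow"

scores = np.array([0.2, 5.0, 1.5, 0.1])
print(status_color(scores, amplified=1))      # clear win for neuron 1
print(status_color(scores, amplified=2))      # neuron 2 is only runner-up
```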

Some parameters were adjusted mid-rendering, sorry.


The first video (on the animal dataset) can be found here.