neural imaging


Abstract:

Deep neural networks (DNNs) have recently been achieving state-of-the-art performance on a variety of pattern-recognition tasks, most notably visual classification problems. Given that DNNs are now able to classify objects in images with near-human-level performance, questions naturally arise as to what differences remain between computer and human vision. A recent study revealed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g. mislabeling a lion a library). Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion). Specifically, we take convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and then find images with evolutionary algorithms or gradient ascent that DNNs label with high confidence as belonging to each dataset class. It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects. Our results shed light on interesting differences between human vision and current DNNs, and raise questions about the generality of DNN computer vision.
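For intuition, here is a minimal sketch of the gradient-ascent route the abstract mentions (the paper also uses evolutionary algorithms): start from noise and climb the gradient of one class score. This is not the authors' code; torchvision's pretrained VGG16 stands in for the networks they actually used, and ImageNet preprocessing is skipped for brevity.

```python
# Sketch of the gradient-ascent idea from the abstract: start from white
# noise and ascend the gradient of one class score until the network is
# (over)confident, while the image stays noise-like to a human.
# Assumptions: VGG16 as a stand-in model; no ImageNet normalization.
import torch
import torchvision.models as models

model = models.vgg16(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

target_class = 291  # "lion" in the usual ImageNet 1000-class indexing
img = torch.rand(1, 3, 224, 224, requires_grad=True)  # white-noise start

optimizer = torch.optim.Adam([img], lr=0.05)
for step in range(200):
    optimizer.zero_grad()
    logits = model(img)
    loss = -logits[0, target_class]  # negate so ascent becomes descent
    loss.backward()
    optimizer.step()
    img.data.clamp_(0, 1)  # keep pixels in a valid range

confidence = torch.softmax(model(img), dim=1)[0, target_class]
print(f"'lion' confidence: {confidence:.4f}")  # typically climbs toward 1.0
```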

What’s hilarious about my fic, Artificial Doll, is that I was all “ah, yes, this is the time when my computer science course becomes useful”. And I’m really glad you guys all say it’s well done.

Ironically, I was pretty bad in the AI subject (search algorithms and learning); it was mostly my groupmates who did the practical work. So it’s a hodgepodge nightmare of what I remembered.

A tip though:

From what I’ve read, AIs don’t start out instantly knowing how to do the thing (I’ve seen the opposite in a few Iron Man fics, where you just switch the AI on and it knows how to do everything, and… uhhh, no). You have to teach them, usually by giving them data and then checking the number of rights and wrongs they make. Which is why Kaiba is dueling him despite the fact that he always loses: it’s not for the challenge. He’s feeding him data and figuring out what’s missing.

Well, idk, that’s what my AI-ish schoolwork was like. Most of the time, my class was stuck feeding sample data to our programs. A lot of runs took hours.
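If you want to see the “count the rights and wrongs” loop in actual code, here’s a toy perceptron learning AND. Not from the fic or my coursework, just the bare-bones idea:

```python
# Toy version of the "feed it data, count the rights and wrongs" loop:
# a one-neuron perceptron learning the AND function.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 0.1        # learning rate

for epoch in range(20):
    right = wrong = 0
    for (x1, x2), label in samples:
        guess = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
        if guess == label:
            right += 1
        else:
            wrong += 1
            # nudge the weights toward the correct answer
            w[0] += lr * (label - guess) * x1
            w[1] += lr * (label - guess) * x2
            b += lr * (label - guess)
    print(f"epoch {epoch}: {right} right, {wrong} wrong")
```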


We use our AI car, BB8, to develop and test our DriveWorks software. The make and model of the vehicle doesn’t matter; we’ve used cars from Lincoln and Audi so far, and will use others in the future. What makes BB8 an AI car, and showcases the power of deep learning, is the deep neural network that translates images from a forward-facing camera into steering commands. 
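NVIDIA hasn’t released the DriveWorks network itself, but the idea maps onto a small sketch: a convolutional network that takes a camera frame and regresses a single steering value. The layer sizes below follow NVIDIA’s published PilotNet paper (“End to End Learning for Self-Driving Cars”); everything else here is an illustrative assumption, not DriveWorks code.

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Camera image in, steering command out. A sketch in the spirit of
    NVIDIA's published PilotNet architecture (66x200 input), not the
    actual DriveWorks network."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),  # single output: the steering command
        )

    def forward(self, x):
        return self.regressor(self.features(x))

# One forward pass on a dummy 66x200 camera frame
angle = SteeringNet()(torch.randn(1, 3, 66, 200))
print(angle.shape)  # torch.Size([1, 1])
```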

Read our blog

This is your brain on LSD, in one incredible chart

To produce these images, Dr. David Nutt, who led the study at Imperial College London, got 20 people doped up. Using three kinds of neural imaging (arterial spin labeling, resting-state MRI and magnetoencephalography), Nutt’s team found changes in brain blood flow, increased electrical activity and a big communication spike in the parts of your brain that handle vision, motion, hearing and awareness. Here’s what the study shows.

Follow @the-future-now


Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

Computer science paper by Alec Radford, Luke Metz, and Soumith Chintala explores a neural-network method for generating new images from huge image datasets, particularly human faces and interior rooms:

In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations. 
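The architectural constraints the abstract mentions are concrete enough to sketch: fractionally-strided convolutions for upsampling, batch normalization, ReLU in the generator, and a tanh output. Here is a compact generator in that style; a sketch of the architecture family under those guidelines, not the authors’ released code.

```python
import torch
import torch.nn as nn

# A compact DCGAN-style generator: project a noise vector, then upsample
# with fractionally-strided convolutions + batch norm + ReLU, tanh output.
nz, ngf = 100, 64  # noise dimension / base feature-map count (assumptions)

generator = nn.Sequential(
    nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),       # 1x1 -> 4x4
    nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
    nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),  # -> 8x8
    nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
    nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),  # -> 16x16
    nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
    nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),      # -> 32x32
    nn.BatchNorm2d(ngf), nn.ReLU(True),
    nn.ConvTranspose2d(ngf, 3, 4, 2, 1, bias=False),            # -> 64x64 RGB
    nn.Tanh(),
)

fake = generator(torch.randn(16, nz, 1, 1))  # 16 images from pure noise
print(fake.shape)  # torch.Size([16, 3, 64, 64])
```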

More Here


An image recognition network dreams about every object it knows. Part 2/2: non-animals

Second video from Ville-Matias Heikkilä uses a deep-dream-like technique to visualize what the network has learned, this time featuring man-made objects and food:

Network used: VGG CNN-S (pretrained on ImageNet)

There are 1000 output neurons in the network, one for each image recognition category. In this video, the output of each of these neurons is separately amplified using backpropagation (i.e. deep dreaming).

The middle line shows the category title of the amplified neuron. The bottom line shows the category title of the highest competing neuron. Color coding: green = amplification very successful (the runner-up is far behind), yellow = close competition with the runner-up, red = the amplified neuron is itself only in second place.

Some parameters adjusted mid-rendering, sorry.
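For the curious, the amplification step described above boils down to gradient ascent on the input image, once per output neuron. A rough PyTorch sketch, with torchvision’s VGG16 standing in for VGG CNN-S (the video used a Caffe model, and its exact parameters surely differ):

```python
import torch
import torchvision.models as models

# Stand-in model; the video used VGG CNN-S in Caffe, not torchvision's VGG16.
model = models.vgg16(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

def amplify(neuron, steps=100, lr=0.05):
    """Gradient-ascend one output neuron, starting from a flat gray image."""
    img = torch.full((1, 3, 224, 224), 0.5, requires_grad=True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        (-model(img)[0, neuron]).backward()  # negate: ascend the neuron
        opt.step()
        img.data.clamp_(0, 1)
    with torch.no_grad():
        logits = model(img)[0].clone()
    logits[neuron] = float("-inf")      # mask out the amplified neuron
    runner_up = logits.argmax().item()  # the "highest competing neuron"
    return img.detach(), runner_up

# Loop over all 1000 categories like the video does (slow: ~100 steps each)
for neuron in range(1000):
    frame, rival = amplify(neuron)
    # render `frame`, overlaying the category names for `neuron` and `rival`
```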

Link

The first video (on the animal dataset) can be found here