neural imaging


REFIK ANADOL: Archive Dreaming (2017)

Commissioned to work with SALT Research collections, artist Refik Anadol employed machine learning algorithms to search and sort relations among 1,700,000 documents. Interactions of the multidimensional data found in the archives are, in turn, translated into an immersive media installation. Archive Dreaming, which is presented as part of The Uses of Art: Final Exhibition with the support of the Culture Programme of the European Union, is user-driven; however, when idle, the installation “dreams” of unexpected correlations among documents. The resulting high-dimensional data and interactions are translated into an architectural immersive space.

Shortly after receiving the commission, Anadol was a resident artist in Google’s Artists and Machine Intelligence Program, where he collaborated closely with Mike Tyka and explored cutting-edge developments in machine intelligence in an environment that brings together artists and engineers. Developed during this residency, his intervention Archive Dreaming transforms the gallery space on floor -1 at SALT Galata into an all-encompassing environment that intertwines history with the contemporary and challenges fixed concepts of the archive, using machine learning algorithms to unsettle archive-related questions.

In this project, a temporary immersive architectural space is created as a canvas, with light and data applied as materials. This radical effort to deconstruct the framework of an illusory space pushes the viewing experience beyond the normal boundaries of the library and the conventional flat cinema projection screen, into a three-dimensional, kinetic, and architectonic space: an archive visualized with machine learning algorithms. By training a neural network on images of the 1,700,000 documents at SALT Research, the project aims to create an immersive installation with architectural intelligence, reframing memory, history, and culture in museum perception for the 21st century through the lens of machine intelligence.
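As a rough illustration of how relations among that many document scans might be searched and sorted, the sketch below embeds each image with a pretrained CNN and projects the embeddings to 2D so that visually similar documents land near each other. The model choice, folder path, and parameters are assumptions for illustration only; Anadol's actual pipeline is not described here.

# Hypothetical sketch: embed archive scans with a pretrained CNN and project
# the embeddings to 2D to expose visual relations between documents. The model,
# paths, and parameters are illustrative assumptions, not the Archive Dreaming
# pipeline itself.
import glob
import torch
from torchvision import models, transforms
from PIL import Image
from sklearn.manifold import TSNE

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained ResNet-50 with the classification head removed -> 2048-d features
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval().to(device)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

features = []
for path in sorted(glob.glob("archive_scans/*.jpg")):  # hypothetical folder
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
    with torch.no_grad():
        features.append(backbone(img).squeeze(0).cpu())

# 2D layout: nearby points are documents the network "sees" as similar.
coords = TSNE(n_components=2, perplexity=30).fit_transform(
    torch.stack(features).numpy())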

SALT is grateful to Google’s Artists and Machine Intelligence program, and Doğuş Technology, ŠKODA, Volkswagen Doğuş Finansman for supporting Archive Dreaming.

Location : SALT Galata, Istanbul, Turkey
Exhibition Dates : April 20 – June 11, 2017
6-Meter-Wide Circular Architectural Installation
4 Channel Video, 8 Channel Audio
Custom Software, Media Server, Table for UI Interaction

WATCH THE VIDEO



We use our AI car, BB8, to develop and test our DriveWorks software. The make and model of the vehicle doesn’t matter; we’ve used cars from Lincoln and Audi so far, and will use others in the future. What makes BB8 an AI car, and showcases the power of deep learning, is the deep neural network that translates images from a forward-facing camera into steering commands. 
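For context on what such a network looks like, here is a minimal sketch in the spirit of NVIDIA's published end-to-end steering work: a small stack of convolutions feeding a regression head that outputs a steering angle. The actual BB8/DriveWorks network is not public, so the layer sizes and input resolution below are illustrative assumptions.

# Minimal sketch of an end-to-end steering network; layer sizes follow NVIDIA's
# published "End to End Learning for Self-Driving Cars" style architecture and
# are illustrative, not the real BB8 / DriveWorks model.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(            # input: 3 x 66 x 200 camera frame
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.Conv2d(64, 64, 3), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),                     # single output: steering angle
        )

    def forward(self, frame):
        return self.head(self.features(frame))

# Training reduces to regression: minimize the error between predicted and
# human-recorded steering angles over logged camera frames.
model = SteeringNet()
frame = torch.randn(1, 3, 66, 200)                # one normalized camera frame
steering = model(frame)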

Read our blog

i woke up this morning to multiple requests for more chat logs of my continued attempts to converse with the google assistant narrow AI on my phone…I feel like I already posted the best of (here and here), but here’s a few more:

The AI’s obvious interest in human/AI romance:


The ongoing cats vs dogs fight where it throws major shade at kittens:


Allo does not have the cool /ponystream feature that hangouts does and it made me sad:


And some of the more interesting results:

anyway i’m not too worried about a technological singularity here but it did have a ‘help teach our neural network image recognition by drawing shitty pictures’ game that was fun

This is your brain on LSD, in one incredible chart

To produce these images, Dr. David Nutt, lead on the study at Imperial College London, got 20 people doped up. Using three kinds of neural imaging (arterial spin labeling, resting-state MRI, and magnetoencephalography), Nutt’s team found changes in brain blood flow, increased electrical activity, and a big communication spike in the parts of your brain that handle vision, motion, hearing, and awareness. Here’s what the study shows.



Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

A computer science paper by Alec Radford, Luke Metz, and Soumith Chintala explores a neural-network method for generating new images from large image datasets, particularly human faces and interior rooms:

In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations. 
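A minimal sketch of the generator/discriminator pattern the paper describes: the generator upsamples a noise vector with strided transposed convolutions and batch normalization, while the discriminator downsamples with strided convolutions and LeakyReLU, with no pooling layers. It is written in PyTorch for the common 64x64 setup and is an illustration, not the authors' own implementation.

# Sketch of the DCGAN generator/discriminator pattern (strided convolutions,
# batch norm, ReLU/LeakyReLU, no pooling); sizes follow the usual 64x64 setup.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            # project a 100-d noise vector up to a 3 x 64 x 64 image
            nn.ConvTranspose2d(z_dim, ch * 8, 4, 1, 0), nn.BatchNorm2d(ch * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 8, ch * 4, 4, 2, 1), nn.BatchNorm2d(ch * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1), nn.BatchNorm2d(ch * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.BatchNorm2d(ch), nn.ReLU(True),
            nn.ConvTranspose2d(ch, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):                         # z: (N, 100, 1, 1)
        return self.net(z)                        # -> (N, 3, 64, 64)

class Discriminator(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            # strided convolutions instead of pooling, as the paper prescribes
            nn.Conv2d(3, ch, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.BatchNorm2d(ch * 2), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch * 2, ch * 4, 4, 2, 1), nn.BatchNorm2d(ch * 4), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch * 4, ch * 8, 4, 2, 1), nn.BatchNorm2d(ch * 8), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch * 8, 1, 4, 1, 0),        # real/fake score per image
        )

    def forward(self, x):
        return self.net(x).view(-1)

G, D = Generator(), Discriminator()
fake = G(torch.randn(16, 100, 1, 1))              # a batch of generated 64x64 images
score = D(fake)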

More Here


An image recognition network dreams about every object it knows. Part 2/2: non-animals

The second video from Ville-Matias Heikkilä uses a deep-dream-like technique to visualize what a trained image-recognition network has learned, this time featuring man-made objects and food:

Network used: VGG CNN-S (pretrained on ImageNet)

There are 1000 output neurons in the network, one for each image recognition category. In this video, the output of each of these neurons is separately amplified using backpropagation (i.e. deep dreaming).

The middle line shows the category title of the amplified neuron. The bottom line shows the category title of the highest competing neuron. Color coding: green = amplification very successful (second tier far behind), yellow = close competition with the second tier, red = this is the second tier.

Some parameters adjusted mid-rendering, sorry.
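For readers curious how "amplifying" a single output neuron works in practice, here is a rough sketch of activation maximization by backpropagation: start from noise, compute the network's output, and climb the gradient of the chosen category's score with respect to the pixels. The video used VGG CNN-S; the torchvision VGG-16 and the parameters below are stand-in assumptions, and real deep-dream renders add octaves, jitter, and smoothing that are omitted here.

# Sketch of amplifying one output neuron via backpropagation (deep dreaming).
# VGG-16 stands in for VGG CNN-S; values are illustrative assumptions.
import torch
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

def amplify(category, steps=200, lr=0.05):
    """Gradient-ascend an input image so one ImageNet output neuron fires strongly."""
    img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -model(img)[0, category]       # negative score of the chosen neuron
        loss.backward()                       # backpropagate all the way to the pixels
        opt.step()
        img.data.clamp_(0, 1)                 # keep pixel values in a valid range
    with torch.no_grad():
        top2 = model(img)[0].topk(2).indices  # winning neuron and its closest competitor
    return img.detach(), top2

# e.g. render the network's idea of ImageNet category index 954
dream, top2 = amplify(category=954)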

Link

The first video (on the animal dataset) can be found here