neural imaging


REFIK ANADOL: Archive Dreaming (2017)

Commissioned to work with SALT Research collections, artist Refik Anadol employed machine learning algorithms to search and sort relations among 1,700,000 documents. Interactions of the multidimensional data found in the archives are, in turn, translated into an immersive media installation. Archive Dreaming, which is presented as part of The Uses of Art: Final Exhibition with the support of the Culture Programme of the European Union, is user-driven; however, when idle, the installation “dreams” of unexpected correlations among documents. The resulting high-dimensional data and interactions are translated into an architectural immersive space.

Shortly after receiving the commission, Anadol was a resident artist for Google’s Artists and Machine Intelligence Program where he closely collaborated with Mike Tyka and explored cutting-edge developments in the field of machine intelligence in an environment that brings together artists and engineers. Developed during this residency, his intervention Archive Dreaming transforms the gallery space on floor -1 at SALT Galata into an all-encompassing environment that intertwines history with the contemporary, and challenges immutable concepts of the archive, while destabilizing archive-related questions with machine learning algorithms.

In this project, a temporary immersive architectural space is created as a canvas, with light and data applied as materials. This radical effort to deconstruct the framework of an illusory space transgresses the normal boundaries of the viewing experience of a library and the conventional flat cinema projection screen, into a three-dimensional, kinetic and architectonic space of an archive visualized with machine learning algorithms. By training a neural network on images of 1,700,000 documents from SALT Research, the main idea is to create an immersive installation with architectural intelligence that reframes memory, history and culture in museum perception for the 21st century through the lens of machine intelligence.
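Anadol has not published the installation's pipeline, but the core idea of surfacing unexpected correlations among archive documents can be caricatured in a few lines. The toy sketch below stands in learned document embeddings with random vectors and retrieves each document's nearest neighbors by cosine similarity; every name and size is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for learned document embeddings: in the real installation,
# a neural network would map each of the 1,700,000 scanned documents to a
# feature vector. Here we use 1,000 random unit vectors.
num_docs, dim = 1000, 64
embeddings = rng.normal(size=(num_docs, dim))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def nearest_neighbors(idx, k=5):
    """Return the k documents most similar to document `idx` by cosine similarity."""
    sims = embeddings @ embeddings[idx]
    sims[idx] = -np.inf            # exclude the document itself
    return np.argsort(sims)[-k:][::-1]

neighbors = nearest_neighbors(42)
```

When the installation is idle and "dreams", one could imagine it walking such neighbor lists to animate correlations no archivist asked for.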

SALT is grateful to Google’s Artists and Machine Intelligence program, and Doğuş Technology, ŠKODA, Volkswagen Doğuş Finansman for supporting Archive Dreaming.

Location: SALT Galata, Istanbul, Turkey
Exhibition Dates: April 20 – June 11, 2017
6 Meters Wide Circular Architectural Installation
4 Channel Video, 8 Channel Audio
Custom Software, Media Server, Table for UI Interaction


FOLLOW MY AMP GOES TO 11 ON INSTAGRAM @nouralogical


ARTIFICIAL INTELLIGENCE ANALYZES GRAVITATIONAL LENSES 10 MILLION TIMES FASTER

**Synopsis: SLAC and Stanford researchers demonstrate that brain-mimicking ‘neural networks’ can revolutionize the way astrophysicists analyze their most complex data, including extreme distortions in spacetime that are crucial for our understanding of the universe.**

Researchers from the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University have for the first time shown that neural networks – a form of artificial intelligence – can accurately analyze the complex distortions in spacetime known as gravitational lenses 10 million times faster than traditional methods.

“Analyses that typically take weeks to months to complete, that require the input of experts and that are computationally demanding, can be done by neural nets within a fraction of a second, in a fully automated way and, in principle, on a cell phone’s computer chip,” said postdoctoral fellow Laurence Perreault Levasseur, a co-author of a study published today in Nature.

Lightning Fast Complex Analysis

The team at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC), a joint institute of SLAC and Stanford, used neural networks to analyze images of strong gravitational lensing, where the image of a faraway galaxy is multiplied and distorted into rings and arcs by the gravity of a massive object, such as a galaxy cluster, that’s closer to us. The distortions provide important clues about how mass is distributed in space and how that distribution changes over time – properties linked to invisible dark matter that makes up 85 percent of all matter in the universe and to dark energy that’s accelerating the expansion of the universe.

Until now this type of analysis has been a tedious process that involves comparing actual images of lenses with a large number of computer simulations of mathematical lensing models. This can take weeks to months for a single lens.

But with the neural networks, the researchers were able to do the same analysis in a few seconds, which they demonstrated using real images from NASA’s Hubble Space Telescope and simulated ones.

To train the neural networks in what to look for, the researchers showed them about half a million simulated images of gravitational lenses for about a day. Once trained, the networks were able to analyze new lenses almost instantaneously with a precision that was comparable to traditional analysis methods. In a separate paper, submitted to The Astrophysical Journal Letters, the team reports how these networks can also determine the uncertainties of their analyses.
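The simulate-then-train loop described above can be caricatured in a few lines. The sketch below is not the paper's convolutional networks: it renders toy "lens" images from a single known parameter, fits a plain linear least-squares regressor in place of a neural net, and then recovers the parameter of a new image with a single dot product; the rendering function and all sizes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_lens(radius):
    """Render a crude ring of the given radius on an 8x8 grid.
    Real lens simulations model how a foreground mass distorts a
    background galaxy; this toy just draws a bright ring."""
    y, x = np.mgrid[-4:4, -4:4] + 0.5
    r = np.sqrt(x**2 + y**2)
    return np.exp(-(r - radius) ** 2).ravel()

# Training set: the paper used about half a million simulated images;
# a few thousand suffice for this toy.
radii = rng.uniform(1.0, 3.5, size=5000)
images = np.stack([simulate_lens(r) for r in radii])

# Linear least-squares regressor as a stand-in for the trained networks.
w, *_ = np.linalg.lstsq(images, radii, rcond=None)

# Once "trained", analyzing a new lens is nearly instantaneous.
estimate = simulate_lens(2.0) @ w
```

The expensive part (generating training data and fitting) happens once, up front; afterwards each new image costs only a forward pass, which is why the trained networks are so much faster than per-lens model fitting.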

Prepared for Data Floods of the Future

“The neural networks we tested – three publicly available neural nets and one that we developed ourselves – were able to determine the properties of each lens, including how its mass was distributed and how much it magnified the image of the background galaxy,” said the study’s lead author Yashar Hezaveh, a NASA Hubble postdoctoral fellow at KIPAC.

This goes far beyond recent applications of neural networks in astrophysics, which were limited to solving classification problems, such as determining whether an image shows a gravitational lens or not.

The ability to sift through large amounts of data and perform complex analyses very quickly and in a fully automated fashion could transform astrophysics in a way that is much needed for future sky surveys that will look deeper into the universe – and produce more data – than ever before.

The Large Synoptic Survey Telescope (LSST), for example, whose 3.2-gigapixel camera is currently under construction at SLAC, will provide unparalleled views of the universe and is expected to increase the number of known strong gravitational lenses from a few hundred today to tens of thousands.

“We won’t have enough people to analyze all these data in a timely manner with the traditional methods,” Perreault Levasseur said. “Neural networks will help us identify interesting objects and analyze them quickly. This will give us more time to ask the right questions about the universe.”

A Revolutionary Approach

Neural networks are inspired by the architecture of the human brain, in which a dense network of neurons quickly processes and analyzes information.

In the artificial version, the “neurons” are single computational units that are associated with the pixels of the image being analyzed. The neurons are organized into layers, up to hundreds of layers deep. Each layer searches for features in the image. Once the first layer has found a certain feature, it transmits the information to the next layer, which then searches for another feature within that feature, and so on.
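The layer-by-layer feature search just described can be written out by hand. The sketch below uses two fixed, hand-chosen kernels where a trained network would learn its own: the first layer responds to a vertical edge, and the second combines those edge responses into a larger feature. It is purely illustrative and not any network from the study:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: each output value is one 'neuron' response."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

relu = lambda x: np.maximum(x, 0)

# Hand-set kernels; a trained network learns these by itself.
edge_kernel = np.array([[-1., 1.], [-1., 1.]])  # layer 1: dark-to-light vertical edges
blob_kernel = np.ones((2, 2)) / 4.0             # layer 2: averages layer-1 responses

image = np.zeros((6, 6))
image[:, 3:] = 1.0                               # a simple vertical boundary

layer1 = relu(conv2d(image, edge_kernel))        # fires along the boundary column
layer2 = relu(conv2d(layer1, blob_kernel))       # a feature built from layer-1 features
```

Stacking many such layers, with learned rather than hand-set kernels, is exactly the "feature within a feature" hierarchy described above.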

“The amazing thing is that neural networks learn by themselves what features to look for,” said KIPAC staff scientist Phil Marshall, a co-author of the paper. “This is comparable to the way small children learn to recognize objects. You don’t tell them exactly what a dog is; you just show them pictures of dogs.”

But in this case, Hezaveh said, “It’s as if they not only picked photos of dogs from a pile of photos, but also returned information about the dogs’ weight, height and age.”

Although the KIPAC scientists ran their tests on the Sherlock high-performance computing cluster at the Stanford Research Computing Center, they could have done their computations on a laptop or even on a cell phone, they said. In fact, one of the neural networks they tested was designed to work on iPhones.

“Neural nets have been applied to astrophysical problems in the past with mixed outcomes,” said KIPAC faculty member Roger Blandford, who was not a co-author on the paper. “But new algorithms combined with modern graphics processing units, or GPUs, can produce extremely fast and reliable results, as the gravitational lens problem tackled in this paper dramatically demonstrates. There is considerable optimism that this will become the approach of choice for many more data processing and analysis problems in astrophysics and other fields.”

Top image: KIPAC researchers used images of strongly lensed galaxies taken with the Hubble Space Telescope to test the performance of neural networks, which promise to speed up complex astrophysical analyses tremendously. (Yashar Hezaveh/Laurence Perreault Levasseur/Phil Marshall/Stanford/SLAC National Accelerator Laboratory; NASA/ESA)

Lower image: Scheme of an artificial neural network, with individual computational units organized into hundreds of layers. Each layer searches for certain features in the input image (at left). The last layer provides the result of the analysis. The researchers used particular kinds of neural networks, called convolutional neural networks, in which individual computational units (neurons, gray spheres) of each layer are also organized into 2-D slabs that bundle information about the original image into larger computational units. (Greg Stewart/SLAC National Accelerator Laboratory)


We use our AI car, BB8, to develop and test our DriveWorks software. The make and model of the vehicle doesn’t matter; we’ve used cars from Lincoln and Audi so far, and will use others in the future. What makes BB8 an AI car, and showcases the power of deep learning, is the deep neural network that translates images from a forward-facing camera into steering commands. 
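NVIDIA's production network is far larger and its details aren't given here, but the input-to-output contract (camera frame in, steering command out) can be sketched with a toy two-layer net. The architecture, sizes, and random weights below are illustrative, not NVIDIA's:

```python
import numpy as np

rng = np.random.default_rng(2)

def steering_net(image, params):
    """Tiny two-layer network: flattened camera frame -> one steering angle."""
    w1, b1, w2, b2 = params
    h = np.tanh(image.ravel() @ w1 + b1)   # hidden features
    return float(h @ w2 + b2)              # single scalar steering command

# Randomly initialized parameters; training would fit them to logged
# human driving data (frames paired with recorded steering angles).
frame = rng.random((24, 32))               # toy grayscale forward-camera frame
params = (rng.normal(scale=0.01, size=(24 * 32, 16)),
          np.zeros(16),
          rng.normal(scale=0.1, size=16),
          0.0)

angle = steering_net(frame, params)
```

The point of the end-to-end approach is that no hand-written lane-detection logic appears anywhere: the mapping from pixels to steering is learned entirely from examples.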

Read our blog

anonymous asked:

I WAS JUST KIDIDKDHJ bc i saw ur farm reply thing and yA BUT THATS COOL IDK WHAT THAT IS TBH WHAT DO U DO??? :-o

FDJSKLFDS its all good .. im a farmer in my dreams…. and its a rly big field tbh like u can explore a lot of different focuses (cell/tissue engineering and artificial organs, neural engineering, prosthetics, medical imaging systems)… personally my focus is medical imaging and im studying more along the electrical engineering and math side of things but it’s basically just.. a big mash up of biology chemistry ece computer science math ..? etc !!! and like idk for Sure what i wanna do when i graduate but im thinking something dealing with medical data analysis? :0


Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

This computer science paper by Alec Radford, Luke Metz, and Soumith Chintala explores a neural network method of generating new forms from huge image datasets, particularly human faces and interior rooms:

In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations. 
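The DCGAN generator maps a 100-dimensional noise vector through a stack of fractionally-strided convolutions up to a 64×64×3 image with a tanh output. The sketch below reproduces only that shape flow, with nearest-neighbor upsampling and random untrained weights standing in for the learned convolutions, and channel counts shrunk from the paper's for brevity:

```python
import numpy as np

rng = np.random.default_rng(3)

def upsample2x(x):
    """Nearest-neighbor upsampling as a stand-in for a learned
    fractionally-strided convolution (shapes match; nothing is learned)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def mix_channels(x, out_ch):
    """Random 1x1 channel mixing standing in for a conv layer, with ReLU
    as used inside the DCGAN generator."""
    w = rng.normal(scale=0.1, size=(x.shape[-1], out_ch))
    return np.maximum(x @ w, 0)

def generator(z):
    """Shape flow of the DCGAN generator: 100-d noise -> 64x64x3 image in (-1, 1)."""
    w_proj = rng.normal(scale=0.1, size=(z.size, 4 * 4 * 128))
    x = np.maximum(z @ w_proj, 0).reshape(4, 4, 128)   # project and reshape to 4x4
    for ch in (64, 32, 16):                            # 8x8 -> 16x16 -> 32x32
        x = mix_channels(upsample2x(x), ch)
    w_out = rng.normal(scale=0.1, size=(x.shape[-1], 3))
    return np.tanh(upsample2x(x) @ w_out)              # 64x64x3, tanh output

img = generator(rng.normal(size=100))
```

In the real DCGAN these weights are trained adversarially against a mirror-image discriminator, which is what turns random noise into plausible faces and rooms.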

More Here