algorithms


Predictive Policing by Jeffrey Brantingham

The Guardian reveals that one of the main researchers behind predictive policing, now commercialized through the company PredPol, is UCLA anthropologist Jeffrey Brantingham. The PredPol website and the Guardian article only touch the surface of how the algorithms work, safely claiming that ‘this is not Minority Report’ and that the predictions only say when and where crime may occur, not who the criminal will be. To dig deeper I’ve read some of Brantingham’s research papers, and took a particular interest in one that devises reaction-diffusion models of crime. This grabbed my attention because generative artists usually employ these computational methods to generate biological-looking forms. Brantingham argues that “reaction-diffusion models provide a mechanistic explanation for crime pattern formation given simple assumptions about the diffusion of crime risk and localized search by offenders”. The diagram labelled A to E is captioned with an outline of the algorithm’s process:

The conditions for crime hotspot formation. Local diffusion of elevated risk allows stochastic fluctuations in crime to nucleate into crime hotspots.

(A) Urban space may be thought of as being partitioned into areas uniquely associated with each individual crime (black dots), here shown as Voronoi polygons (gray lines). Individual crimes also produce elevated risk that diffuses out over an area (dashed circles) centered on the crime location.

(C) Only when risk diffuses over relatively short distances, binding local crimes together but not more distant ones, do crime hotspots emerge.

The paper goes on to demonstrate how these hotspots can be suppressed by police intervention, which I assume is part of what later informs a ‘policing prediction’ once the model is fed with live and historical crime data for an area.
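To get a feel for how this class of model behaves, here is a minimal lattice sketch in the spirit of the paper: crimes locally elevate a risk field (reaction), the risk spreads to neighbouring sites and decays (diffusion), and short-range spread lets random clusters lock in as hotspots. The parameter names and values are my own illustrative choices, not Brantingham’s.

```python
import numpy as np

# Minimal lattice sketch of a reaction-diffusion crime model.
# All parameters are illustrative, not taken from the paper.
SIZE = 64          # lattice is SIZE x SIZE sites
A0 = 0.1           # baseline attractiveness of every site
ETA = 0.2          # fraction of risk that diffuses to the four neighbours
OMEGA = 0.05       # per-step decay rate of dynamic risk
THETA = 0.5        # risk added to a site by each crime committed there
RATE = 0.05        # scales how attractiveness converts into crime probability

rng = np.random.default_rng(0)
B = np.zeros((SIZE, SIZE))  # dynamic ("repeat victimisation") risk field

def diffuse(field, eta):
    """Spread a fraction eta of each site's value evenly to its 4 neighbours."""
    neighbours = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
                  np.roll(field, 1, 1) + np.roll(field, -1, 1))
    return (1 - eta) * field + eta * neighbours / 4

for step in range(2000):
    attractiveness = A0 + B
    p_crime = 1 - np.exp(-RATE * attractiveness)   # stochastic crime events
    crimes = rng.random((SIZE, SIZE)) < p_crime
    # Reaction: crimes raise local risk. Diffusion: risk spreads, then decays.
    B = diffuse(B + THETA * crimes, ETA) * (1 - OMEGA)

# Sites far above the mean risk are the emergent hotspots of panel (C).
print("hotspot sites:", int((B > B.mean() + 3 * B.std()).sum()))
```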

Deep Neural Networks are Easily Fooled

An interesting paper from Anh Nguyen, Jason Yosinski and Jeff Clune about evolved images that are unrecognizable to humans, but that state-of-the-art algorithms trained on ImageNet believe with near certainty to be familiar objects. Abstract:

Deep neural networks (DNNs) have recently been achieving state-of-the-art performance on a variety of pattern-recognition tasks, most notably visual classification problems. Given that DNNs are now able to classify objects in images with near-human-level performance, questions naturally arise as to what differences remain between computer and human vision. A recent study revealed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g. mislabeling a lion a library). Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion). Specifically, we take convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and then find images with evolutionary algorithms or gradient ascent that DNNs label with high confidence as belonging to each dataset class. It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects. Our results shed light on interesting differences between human vision and current DNNs, and raise questions about the generality of DNN computer vision.
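The gradient-ascent route the abstract mentions is simple enough to sketch. Below is a rough, unofficial PyTorch version: start from white noise and climb the logit of one chosen class until the network is confident. The class index (291, which I believe is “lion” in ImageNet) and the hyperparameters are my assumptions, not the paper’s settings.

```python
import torch
from torchvision import models

# Sketch of the gradient-ascent variant: optimize the input image, not the
# weights, so the network grows confident about pure noise.
# (Input normalization is omitted for brevity.)
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
target = 291  # assumed ImageNet index for "lion"

image = torch.randn(1, 3, 224, 224, requires_grad=True)  # white-noise start
opt = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    opt.zero_grad()
    loss = -model(image)[0, target]   # maximise the target class logit
    loss.backward()
    opt.step()

conf = torch.softmax(model(image), dim=1)[0, target].item()
print(f"confidence that the noise image is class {target}: {conf:.4f}")
```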

[read more] [via stop the cyborgs]

How Cultures Move Across Continents

They may look like flight paths around North America and Europe. Or perhaps nighttime satellite photos, with cities lit up like starry constellations.

But look again.

These animations chart the movement of Western culture over the past 2,000 years, researchers report Friday in the journal Science.

To make these movies, art historian Maximilian Schich and his colleagues mapped the births and deaths of more than 150,000 notable artists and cultural leaders, such as famous painters, actors, architects, politicians, priests and even antiquarians (people who collect antiques).

A shimmering blue dot lights up each new birth, while red dots represent each death.

We can watch as artists flock from rural areas to urban centers like London, Paris, Rome and Berlin after the Renaissance. Then in the late 17th century, people start to catapult from Europe into the eastern U.S. and then eventually leapfrog over to the West Coast.

"We’re interested in the shape of the coral reef of culture," says Schich, of the University of Texas at Dallas. "We are taking a systems biology approach to art history."

After mapping the births and deaths, Schich and his team analyzed demographic data to build a model for how people and their cultural achievements ebb and flow across continents.

Right now the team has only maps for the U.S. and Europe. But Schich hopes to extend these visualizations beyond the Western world.

Continue reading.

Graphic: Where were the artists, politicians and religious leaders? A blue light denotes a birth while a red light signals a death. The lines connect the two. (Maximilian Schich and Mauro Martino)
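As a rough idea of what drawing such a map involves, here is a toy sketch: one line per person, from birth place (blue dot) to death place (red dot). The coordinates are random stand-ins, not Schich’s data; the real study used around 150,000 georeferenced biographies.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy version of the Schich-style birth/death map with synthetic coordinates.
rng = np.random.default_rng(1)
births = rng.uniform([-10, 35], [30, 60], size=(200, 2))  # lon, lat box
deaths = births + rng.normal(0, 5, size=(200, 2))         # most die nearby

fig, ax = plt.subplots(figsize=(6, 4))
for (bx, by), (dx, dy) in zip(births, deaths):
    ax.plot([bx, dx], [by, dy], color="gray", lw=0.3, alpha=0.5)
ax.scatter(births[:, 0], births[:, 1], s=5, c="blue", label="birth")
ax.scatter(deaths[:, 0], deaths[:, 1], s=5, c="red", label="death")
ax.set_xlabel("longitude"); ax.set_ylabel("latitude"); ax.legend()
plt.show()
```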

What Happens When You Like Everything?

Journalists can be a masochistic lot.

Take Mat Honan over at Wired who decided to like everything in his Facebook News Feed:

Or at least I did, for 48 hours. Literally everything Facebook sent my way, I liked — even if I hated it. I decided to embark on a campaign of conscious liking, to see how it would affect what Facebook showed me…

…Relateds quickly became a problem, because as soon as you like one, Facebook replaces it with another. So as soon as I liked the four relateds below a story, it immediately gave me four more. And then four more. And then four more. And then four more. I quickly realized I’d be stuck in a related loop for eternity if I kept this up. So I settled on a new rule: I would like the first four relateds Facebook shows me, but no more.

So how did Facebook’s algorithm respond?

My News Feed took on an entirely new character in a surprisingly short amount of time. After checking in and liking a bunch of stuff over the course of an hour, there were no human beings in my feed anymore. It became about brands and messaging, rather than humans with messages…

…While I expected that what I saw might change, what I never expected was the impact my behavior would have on my friends’ feeds. I kept thinking Facebook would rate-limit me, but instead it grew increasingly ravenous. My feed became a cavalcade of brands and politics and as I interacted with them, Facebook dutifully reported this to all my friends and followers.

After 48 hours he gives up “because it was just too awful.”
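Honan’s account reads like a classic rich-get-richer feedback loop. Here is a toy simulation of an engagement-weighted feed, purely illustrative and in no way Facebook’s actual ranking, showing how liking everything lets high-volume sources crowd out friends:

```python
import random

# Toy feed: rank posts by learned affinity for their source times the
# source's posting volume, then boost the affinity of everything shown
# (i.e. "like everything"). Brands post far more, so they take over.
random.seed(0)
volume = {f"friend_{i}": 1 for i in range(20)}        # friends post rarely
volume.update({f"brand_{i}": 10 for i in range(10)})  # brands post a lot
affinity = {s: 1.0 for s in volume}

def feed(k=10):
    score = lambda s: affinity[s] * volume[s] * random.random()
    return sorted(volume, key=score, reverse=True)[:k]

for day in range(10):
    for source in feed():
        affinity[source] *= 1.5   # liking boosts whatever was shown

brands = sum(s.startswith("brand") for s in feed())
print(f"{brands} of 10 feed items now come from brands")
```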

Over at The Atlantic, Caleb Garling plays with Facebook’s algorithm as well. Instead of liking, though, he tries to hack the system to see what he needs to do so that friends and followers see what he posts:

Part of the impetus was that Facebook had frustrated me. That morning I’d posted a story I’d written about the hunt for electric bacteria that might someday power remote sensors. After a few hours, the story had garnered just one like. I surmised that Facebook had decided that, for whatever reason, what I’d submitted to the blue ether wasn’t what people wanted, and kept it hidden.

A little grumpy at the idea, I wanted to see if I could trick Facebook into believing I’d had one of those big life updates that always hang out at the top of the feed. People tend to word those things roughly the same way and Facebook does smart things with pattern matching and sentiment analysis. Let’s see if I can fabricate some social love.

I posted: “Hey everyone, big news!! I’ve accepted a position trying to make Facebook believe this is an important post about my life! I’m so excited to begin this small experiment into how the Facebook algorithms processes language and really appreciate all of your support!”

And the likes poured in: “After 90 minutes, the post had 57 likes and 25 commenters.”

So can you game the Facebook algorithm? Not really, thinks Garling. Not while the code remains invisible.

At best, he writes, we might be able to intuit a “feeble correlation.”

Which might be something to like.

Elon Musk, the Tesla and SpaceX founder who is occasionally compared to comic-book hero Tony Stark, is worried about a new villain that could threaten humanity—specifically the potential creation of an artificial intelligence that is radically smarter than humans, with catastrophic results. Musk is talking about “Superintelligence: Paths, Dangers, Strategies” by Nick Bostrom…

Jennifer Ouellette: The English Major Who Taught Herself Calculus

A few nights ago I was talking to Emily McManus, editor of TED.com, about math. I mentioned my friend Jennifer Ouellette, an English major who taught herself calculus, which is no mean feat. From that experience she wrote the wonderful book The Calculus Diaries, and I thought Emily would like it.

She agreed and the next morning went to find it. Like me, she remembers concepts a lot better than names, so she searched for “english major who taught herself calculus.” Pretty straightforward. Except this happened:

[image: the viral tweet showing Google’s search suggestion replacing “herself” with “himself”]

Oof. As you can see from the number of retweets, that struck a nerve with a lot of people. Of course, this isn’t the fault of a single person or group. Google’s algorithm is based on cues from what other people are searching for and uses context to try to figure out what a user meant. But algorithms “are never as neutral as they appear.” So while no one thought “only men would teach themselves calculus,” it’s also true that that’s what the culture as a whole has decided, at least in aggregate. Whether we like it or not, we associate something about that phrase with men more than women. This has happened before, and will likely happen many times again. One of the wonderful things about relying on computers to help us is that if we’re not careful they’ll tell us who we really are. In this case, that we’re living in a quite deeply sexist culture.
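A toy sketch makes the point concrete: any corrector that leans on aggregate query frequency will echo whatever the aggregate believes. This illustrates the general principle only; it is not Google’s actual system, and the counts are invented.

```python
from collections import Counter

# A "did you mean" that mirrors its query log: if far more people searched
# "himself", the corrector steers "herself" toward it.
log = Counter({
    "english major who taught himself calculus": 120,
    "english major who taught herself calculus": 15,
})

def did_you_mean(query):
    # Suggest the most common logged query differing by exactly one word.
    for candidate, _ in log.most_common():
        diff = sum(a != b for a, b in zip(query.split(), candidate.split()))
        if candidate != query and diff == 1:
            return candidate
    return None

print(did_you_mean("english major who taught herself calculus"))
# -> english major who taught himself calculus
```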

The deep irony, though, is that while people are responding to this quite strongly, Ouellette’s name isn’t in the tweet that’s going viral. The same algorithm that held up this rather unfortunate mirror ensures that neither Jennifer Ouellette’s name nor the name of her book, The Calculus Diaries, is getting attached to that mirror. So, this post is a very conscious (and probably feeble) attempt to rectify that. Maybe people seeing that tweet and googling the phrase will come across this post, and maybe they’ll even want to buy the book.

When Garry Kasparov lost his second match against the IBM supercomputer Deep Blue in 1997, people predicted that computers would eventually destroy chess, both as a contest and as a spectator sport. Chess might be very complicated but it is still mathematically finite. Computers that are fed the right rules can, in principle, calculate ideal chess variations perfectly, whereas humans make mistakes. Today, anyone with a laptop can run commercial chess software that will reliably defeat all but a few hundred humans on the planet. Isn’t the spectacle of puny humans playing error-strewn chess games just a nostalgic throwback? (via Steven Poole – On algorithms)

How can the public learn the role of algorithms in their daily lives, evaluating the law and ethicality of systems like the Facebook NewsFeed, search engines, or airline booking systems?

How can research on algorithms proceed without access to the algorithm?

What is the algorithm doing for a particular person?

How should we usefully visualize it?

How do people make sense of the algorithm?

What do users really need to know about algorithms?

—  Some very relevant questions raised in a conversation hosted by MIT Center for Civic Media titled Uncovering Algorithms

If you’re a human reporter quaking in your boots this week over news of a Los Angeles Times algorithm that wrote the newspaper’s initial story about an earthquake, you might want to cover your ears for this fact:

Software from Automated Insights will generate about 1 billion stories this year — up from 350 million last year, CEO and founder Robbie Allen told Poynter via phone.
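For context, this kind of story generator is conceptually simple: structured feed data goes into a fixed template and prose comes out. A hedged sketch of the idea follows; the field names are invented for illustration and do not reflect the USGS feed or Automated Insights’ actual software.

```python
# Quakebot-style generation: fill a fixed template from structured event data.
TEMPLATE = ("A magnitude {mag} earthquake struck {dist} miles from {place} "
            "on {day} morning at {time}, according to the U.S. Geological "
            "Survey.")

def write_story(event):
    return TEMPLATE.format(**event)

print(write_story({
    "mag": 4.7, "dist": 6, "place": "Westwood, California",
    "day": "Monday", "time": "6:25 a.m.",
}))
```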