Google Translate Bot Discussion

We fed our phones one random sentence, using the impromptu text-to-speech translation feature within the Google Translate app. Then we left them to discuss…

They kept talking for 15 minutes or more if left alone. The messages ranged from completely senseless to utterly terrifying (e.g. “I am aware of who I am”).

[via interweb3000]
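The mechanics are simple enough to caricature: each phone transcribes what it hears, translates it, and speaks the result back at the other. A toy Python sketch of that loop, with the translation step replaced by a crude stand-in (the word-swap table just mimics how small errors compound; it is not the Google Translate API):

```python
import random

# Toy model of two "phones" passing one sentence back and forth through
# translation. noisy_translate is a crude stand-in for a real
# speech-recognition -> translation -> text-to-speech chain; the word-swap
# table below just mimics how small errors compound on every round trip.

DRIFT = {"aware": "conscious", "conscious": "afraid", "who": "what"}

def noisy_translate(text: str) -> str:
    words = text.split()
    i = random.randrange(len(words))
    words[i] = DRIFT.get(words[i], words[i])  # occasionally mutate a word
    return " ".join(words)

sentence = "I am aware of who I am"
for round_no in range(1, 11):
    sentence = noisy_translate(noisy_translate(sentence))  # there and back
    print(f"round {round_no}: {sentence}")
```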

So I couldn't figure out how to actually get rid of my Seeking Arrangements account that I set up once.

This hasn’t been an issue because I hadn’t received any messages in months and months. I have now had eight in the last eighteen hours. I asked one of these potential suitors if there had been a feature on the site in an article or on TV recently, or if the algorithm had just pushed me back into rotation. He said he was surprised that I knew what an algorithm was. Seems like I’ll be spending my snow day being mean to men. #misandry

-K


Predictive Policing by Jeffrey Brantingham

The Guardian reveals that one of the main researchers behind predictive policing, now offered through the company PredPol, is UCLA anthropologist Jeffrey Brantingham. The PredPol website and the Guardian article only scratch the surface of how the algorithms work, safely claiming that ‘this is not Minority Report’ and that the predictions concern only when and where crime may occur, not who the criminal will be. To dig deeper I’ve read some of Brantingham’s research papers. One paper in particular grabbed my attention: it devises reaction-diffusion models of crime. That stood out to me because generative artists usually employ these computational methods to generate biological-looking forms. What Brantingham has done is argue that “reaction-diffusion models provide a mechanistic explanation for crime pattern formation given simple assumptions about the diffusion of crime risk and localized search by offenders”. The diagram labelled A to E is captioned with an outline of the algorithm’s process:

The conditions for crime hotspot formation. Local diffusion of elevated risk allows stochastic fluctuations in crime to nucleate into crime hotspots.

(A) Urban space may be thought of as being partitioned into areas uniquely associated with each individual crime (black dots), here shown as Voronoi polygons (gray lines). Individual crimes also produce elevated risk that diffuses out over an area (dashed circles) centered on the crime location.

(C) Only when risk diffuses over relatively short distances, binding local crimes together but not more distant ones, do crime hotspots emerge.

The paper goes on to demonstrate how the hotspots can be suppressed by police intervention, which I assume is part of what later informs a ‘policing prediction’ once the model is fed with live and historical data of crime in an area. 
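For a feel of how such a model behaves, here is a minimal sketch of a reaction-diffusion risk field in Python. It is inspired by, and much simpler than, Brantingham's models, and the parameter values are purely illustrative:

```python
import numpy as np

# A minimal sketch of a reaction-diffusion risk field: each crime raises
# risk locally, risk diffuses to neighbours and decays, and new crimes
# occur preferentially where risk is already elevated. Not Brantingham's
# actual model; all parameters here are illustrative.

N = 64           # grid size
DECAY = 0.9      # fraction of risk surviving each step
DIFFUSE = 0.15   # fraction of risk shared with the four neighbours
BASELINE = 0.01  # background probability of a crime per cell per step

rng = np.random.default_rng(0)
risk = np.zeros((N, N))

for step in range(500):
    # Crimes are more likely where risk is elevated (local reinforcement).
    p = np.clip(BASELINE + risk, 0, 1)
    crimes = rng.random((N, N)) < p
    risk += crimes  # each crime adds a unit of local risk

    # Diffusion: mix with the four nearest neighbours (periodic edges).
    neighbours = (np.roll(risk, 1, 0) + np.roll(risk, -1, 0)
                  + np.roll(risk, 1, 1) + np.roll(risk, -1, 1))
    risk = DECAY * ((1 - DIFFUSE) * risk + DIFFUSE * neighbours / 4)

# With short-range diffusion, risk concentrates into a few persistent
# hotspots rather than spreading uniformly, as the caption above describes.
top = np.sort(risk.ravel())[-N * N // 100:]
print(f"top 1% of cells hold {top.sum() / risk.sum():.0%} of total risk")
```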

Deep Neural Networks are Easily Fooled

Interesting paper from Anh Nguyen, Jason Yosinski and Jeff Clune about evolved images that are unrecognizable to humans, but that state-of-the-art algorithms trained on ImageNet believe with certainty to be familiar objects. Abstract:

Deep neural networks (DNNs) have recently been achieving state-of-the-art performance on a variety of pattern-recognition tasks, most notably visual classification problems. Given that DNNs are now able to classify objects in images with near-human-level performance, questions naturally arise as to what differences remain between computer and human vision. A recent study revealed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g. mislabeling a lion a library). Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion). Specifically, we take convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and then find images with evolutionary algorithms or gradient ascent that DNNs label with high confidence as belonging to each dataset class. It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects. Our results shed light on interesting differences between human vision and current DNNs, and raise questions about the generality of DNN computer vision.

[read more] [via stop the cyborgs]
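The gradient-ascent half of the method is easy to sketch: start from noise and push the input pixels uphill on a single class score. A hedged PyTorch sketch; using a pretrained torchvision ResNet is my substitution, since the paper used other ImageNet-era networks (and evolutionary algorithms for its headline images):

```python
import torch
import torch.nn.functional as F
from torchvision import models

# A minimal sketch of the gradient-ascent method from the abstract: start
# from noise and optimize the *input pixels* to maximize one class score.
# The pretrained ResNet is my substitution, not the paper's exact network.

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
target = 291  # "lion" in the usual ImageNet-1k class indexing

img = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
opt = torch.optim.SGD([img], lr=0.5)

for _ in range(200):
    opt.zero_grad()
    loss = -model(img)[0, target]  # ascend the target class score
    loss.backward()
    opt.step()

with torch.no_grad():
    conf = F.softmax(model(img), dim=1)[0, target].item()
print(f"confidence that this noise is a lion: {conf:.4f}")  # often near 1.0
```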

Data archaeology helps builders avoid buried treasure

In 2010, when builders were excavating the site of the former World Trade Center in New York, they stumbled across something rather unusual: a large wooden boat, later dated to the 1700s.

Hitting archaeological remains is a familiar problem for builders, because the land they are excavating has often been in use for hundreds, if not thousands, of years.

Democrata, a UK data analytics start-up, wants to help companies guess what’s in the ground before they start digging. Using predictive algorithms, its new program maps where artefacts might still be found in England and Wales, helping companies avoid costly delays when they excavate. “It’s an expensive problem to have once you’ve started digging,” says Geoff Roberts, CEO of Democrata. Read more.
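The article doesn't reveal Democrata's actual model, but the general shape of the approach is standard risk scoring over map cells. A guess at what that could look like, with wholly hypothetical features and invented data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# A guess at the general approach, NOT Democrata's actual model: score map
# grid cells for the likelihood of buried remains from landscape features.
# Feature names and all data below are hypothetical illustrations.

rng = np.random.default_rng(1)
n = 1000
# Hypothetical features per cell: distance to historic river (km),
# distance to Roman road (km), elevation (m), density of known sites nearby.
X = rng.random((n, 4)) * [5.0, 10.0, 300.0, 1.0]
# Toy labels: past finds are likelier near water, roads and known sites.
logit = 1.5 - 0.4 * X[:, 0] - 0.2 * X[:, 1] + 3.0 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
cell = [[0.3, 1.2, 40.0, 0.8]]  # a cell near a river, a road, known finds
print(f"estimated find risk: {model.predict_proba(cell)[0, 1]:.2f}")
```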

How Cultures Move Across Continents

They may look like flight paths around North America and Europe. Or perhaps nighttime satellite photos, with cities lit up like starry constellations.

But look again.

These animations chart the movement of Western culture over the past 2,000 years, researchers report Friday in the journal Science.

To make these movies, art historian Maximilian Schich and his colleagues mapped the births and deaths of more than 150,000 notable artists and cultural leaders, such as famous painters, actors, architects, politicians, priests and even antiquarians (people who collect antiques).

A shimmering blue dot lights up each new birth, while red dots represent each death.

We can watch as artists flock from rural areas to urban centers like London, Paris, Rome and Berlin after the Renaissance. Then in the late 17th century, people start to catapult from Europe into the eastern U.S. and then eventually leapfrog over to the West Coast.

"We’re interested in the shape of the coral reef of culture," says Schich, of the University of Texas at Dallas. "We are taking a systems biology approach to art history."

After mapping the births and deaths, Schich and his team analyzed demographic data to build a model for how people and their cultural achievements ebb and flow across continents.

Right now the team has only maps for the U.S. and Europe. But Schich hopes to extend these visualizations beyond the Western world.

Continue reading.

Graphic: Where were the artists, politicians and religious leaders? A blue light denotes a birth while a red light signals a death. The lines connect the two. (Maximilian Schich and Mauro Martino)
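The visualization itself is straightforward to reproduce in miniature: blue dots for births, red for deaths, a line joining each pair. A sketch with matplotlib, using a few invented (longitude, latitude) pairs:

```python
import matplotlib.pyplot as plt

# A minimal sketch of the graphic described above: blue dots for births,
# red dots for deaths, a gray line connecting each pair. The coordinates
# are invented (lon, lat) pairs purely for illustration.

moves = [  # (birth_lon, birth_lat, death_lon, death_lat)
    (12.5, 41.9, 2.35, 48.85),    # e.g. Rome -> Paris
    (-0.13, 51.5, -74.0, 40.7),   # e.g. London -> New York
    (13.4, 52.5, -118.2, 34.05),  # e.g. Berlin -> Los Angeles
]

fig, ax = plt.subplots(figsize=(8, 4))
for blon, blat, dlon, dlat in moves:
    ax.plot([blon, dlon], [blat, dlat], color="gray", lw=0.5, zorder=1)
    ax.scatter(blon, blat, color="blue", s=12, zorder=2)  # birth
    ax.scatter(dlon, dlat, color="red", s=12, zorder=2)   # death

ax.set_xlabel("longitude")
ax.set_ylabel("latitude")
ax.set_title("births (blue) to deaths (red)")
plt.show()
```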

What Happens When You Like Everything?

Journalists can be a masochistic lot.

Take Mat Honan over at Wired, who decided to like everything in his Facebook News Feed:

Or at least I did, for 48 hours. Literally everything Facebook sent my way, I liked — even if I hated it. I decided to embark on a campaign of conscious liking, to see how it would affect what Facebook showed me…

…Relateds quickly became a problem, because as soon as you like one, Facebook replaces it with another. So as soon as I liked the four relateds below a story, it immediately gave me four more. And then four more. And then four more. And then four more. I quickly realized I’d be stuck in a related loop for eternity if I kept this up. So I settled on a new rule: I would like the first four relateds Facebook shows me, but no more.

So how did Facebook’s algorithm respond?

My News Feed took on an entirely new character in a surprisingly short amount of time. After checking in and liking a bunch of stuff over the course of an hour, there were no human beings in my feed anymore. It became about brands and messaging, rather than humans with messages…

…While I expected that what I saw might change, what I never expected was the impact my behavior would have on my friends’ feeds. I kept thinking Facebook would rate-limit me, but instead it grew increasingly ravenous. My feed became a cavalcade of brands and politics and as I interacted with them, Facebook dutifully reported this to all my friends and followers.

After 48 hours he gives up “because it was just too awful.”
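A toy model makes the dynamic visible: if the ranker weights each source by past likes, and brands generate far more "related" candidates than friends do, then liking everything hands the feed to the brands. All numbers below are invented for illustration; this is not how Facebook's ranker actually works:

```python
import random

# Toy model of the feedback loop Honan describes: the ranker weights each
# source type by past likes, and brand/page content floods the candidate
# pool with "related" items. Every number here is invented.

random.seed(0)
weights = {"friend": 1.0, "brand": 1.0}
candidates = {"friend": 20, "brand": 200}  # brands dominate the pool

def build_feed(k=10):
    pool = [s for s, n in candidates.items() for _ in range(n)]
    ranked = sorted(pool, key=lambda s: weights[s] * random.random(),
                    reverse=True)
    return ranked[:k]

for day in range(5):
    feed = build_feed()
    for item in feed:          # "like literally everything"
        weights[item] *= 1.1   # each like boosts that source type
    print(f"day {day}: {feed.count('brand')}/10 brand items")
```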

Over at The Atlantic, Caleb Garling plays with Facebook’s algorithm as well. Instead of liking, though, he tries to hack the system to see what he needs to do so that friends and followers see what he posts:

Part of the impetus was that Facebook had frustrated me. That morning I’d posted a story I’d written about the hunt for electric bacteria that might someday power remote sensors. After a few hours, the story had garnered just one like. I surmised that Facebook had decided that, for whatever reason, what I’d submitted to the blue ether wasn’t what people wanted, and kept it hidden.

A little grumpy at the idea, I wanted to see if I could trick Facebook into believing I’d had one of those big life updates that always hang out at the top of the feed. People tend to word those things roughly the same way and Facebook does smart things with pattern matching and sentiment analysis. Let’s see if I can fabricate some social love.

I posted: “Hey everyone, big news!! I’ve accepted a position trying to make Facebook believe this is an important post about my life! I’m so excited to begin this small experiment into how the Facebook algorithms processes language and really appreciate all of your support!”

And the likes poured in: “After 90 minutes, the post had 57 likes and 25 commenters.”
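Garling's post reads like a checklist of exactly the cues a naive life-event matcher would look for. A toy sketch of that sort of scoring, with cue phrases and boost values that are pure guesses (Facebook's actual features are not public):

```python
import re

# A toy version of the pattern matching Garling is gaming: boost posts
# that look like big life announcements. The cue phrases and boost factor
# are my guesses, purely illustrative.

LIFE_EVENT_CUES = [
    r"\bbig news\b", r"\bI(?:'ve| have) accepted a position\b",
    r"\bso excited\b", r"\bwe(?:'re| are) engaged\b",
]

def rank_score(post: str, base_score: float = 1.0) -> float:
    hits = sum(bool(re.search(p, post, re.IGNORECASE))
               for p in LIFE_EVENT_CUES)
    return base_score * (1 + 2.0 * hits)  # each matched cue boosts the post

print(rank_score("Hey everyone, big news!! I've accepted a position..."))
print(rank_score("Here's a story I wrote about electric bacteria."))
```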

So can you game the Facebook algorithm? Not really, thinks Garling. Not while the code remains invisible.

At best, he writes, we might be able to intuit a “feeble correlation.”

Which might be something to like.

In this essay I argue that an important recent development in the struggle to represent algorithms is that computer algorithms now have their own public relations. That is, they have both a public-facing identity and new promotional discourses that depict them as efficient, valuable, powerful, and objective. It is vital that we understand how the algorithms that dominate our experience operate upon us. Yet commercial companies (a recent phenomenon) now systematically manage our image of algorithms and the information we receive about them. Algorithms themselves, rather than just the companies that operate them, have become the subject of mass marketing claims. To make this clear, I analyze a variety of visual and multimedia depictions of algorithms. I begin by reviewing a variety of historical and contemporary attempts to represent algorithms for novices in educational settings, and then I compare these to recent commercial depictions. I will conclude with a critique of current trends and a call for a counter-visuality that can resist them.
— 

Seeing the Sort: The Aesthetic and Industrial Defense of “The Algorithm” // http://median.newmediacaucus.org/art-infrastructures-information/seeing-the-sort-the-aesthetic-and-industrial-defense-of-the-algorithm/

If an Algorithm Wrote This, How Would You Even Know?

These robo-writers don’t just regurgitate data, either; they create human-sounding stories in whatever voice — from staid to sassy — befits the intended audience. Or different audiences. They’re that smart. And when you read the output, you’d never guess the writer doesn’t have a heartbeat…

A shocking amount of what we’re reading is created not by humans, but by computer algorithms. Can you tell the difference? Take the quiz.

The above conceit is as gimmicky and meaningless as the Turing Test. This has nothing to do with ‘smart’. Can you tell the difference between a snippet of bad poetry written in an archaic-seeming form by a human vs. an algorithm? What about that brief excerpt of a stilted foreign language translation: were the non-native sounding constructions the result of the limitations of natural language processing or the idiosyncrasies of the translator and editors? How about that swath of numerical facts sprinkled with a handful of qualitative words—did a human copy/paste that from software output and bind it with generic clichés, or did a program copy/paste some generic clichés around the output it generated using human input and design?

Human bias already exists with algorithmic content that is readable and informative; it simply operates at a remove (curation, design). Either way, the solution is the onus of critical readership in the modern world. Blackboxing is a more specific issue of the difficulty of analyzing large data sets by hand, or of mathematical complexity, et cetera.

Even if you could argue that humans would be less critical in accepting algorithmic content (still, that’s on them, or on the weak computer-science education they were raised with), this would require knowing that it’s algorithmic, which contradicts the notion that not knowing it’s algorithmic is the problem. If anything, wrongly assuming the author is a shady human would make you less accepting.

Not to mention, by the time our written media landscape consists in large part of algorithmically generated writing that’s so skillful in long-form that the average human reader can’t differentiate it, we’ll have algorithmic browser reading extensions or competing article-bots that can parse such content and determine its authorship and/or offer rival analyses using the same NLP and predictive capabilities that generated it. :)
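That last idea, detection bots built from the same NLP, is at least plausible in miniature: authorship attribution is a standard text-classification task. A toy sketch with scikit-learn, trained on invented stand-in snippets (a real detector would need a large corpus and would still be far from reliable):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A minimal sketch of the "competing article-bot" idea: classify whether a
# passage was machine-generated. The training snippets are invented
# stand-ins, not real data.

texts = [
    "Revenue rose 4.2 percent to 1.3 billion dollars in the third quarter.",
    "Net income increased 2.1 percent on higher same-store sales.",
    "The quarterback shrugged off the rain and the crowd and threw a dart.",
    "She laughed, said the numbers could wait, and closed her laptop.",
]
labels = ["machine", "machine", "human", "human"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["Earnings per share grew 3.0 percent year over year."]))
```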

How can the public learn the role of algorithms in their daily lives, evaluating the law and ethicality of systems like the Facebook NewsFeed, search engines, or airline booking systems?

How can research on algorithms proceed without access to the algorithm?

What is the algorithm doing for a particular person?

How should we usefully visualize it?

How do people make sense of the algorithm?

What do users really need to know about algorithms?

—  Some very relevant questions raised in a conversation hosted by the MIT Center for Civic Media titled Uncovering Algorithms

Invisible & fake [girl/boy]friends on demand

Invisible Girlfriend, LLC, is developing fake online girl/boyfriends. The service wants to simulate a real relationship on demand. Just sign up, give him/her a name, choose his/her age and personality, create a fake story of how you met your lovely algo-ghost, and start texting. Their hook:

Invisible Boy/Girlfriend gives you real-world and social proof that you’re in a relationship - even if you’re not - so you can get back to living life on your own terms.

The FAQ is worth a read:

WHY IS INVISIBLE BOYFRIEND A THING?

Relationships, for the good they all do us, create pressure from friends and family. Ever told friends at a bar about George or Michael, the boy of your dreams that “seems too good to be true?” He likely is. And without proof, there’s no way to convince people otherwise. We want to help those in want of a tailored, accessible boyfriend to avoid awkward social situations and questions.

WHO WOULD NEED AN “ON DEMAND” BOYFRIEND?

Ever been trapped and forced to tell a lie, then another, until even you don’t know the truth? Maybe our real world girlfriends are in a same-sex relationship and they’re hiding the truth from disapproving relatives. Or maybe Larry, a Class 3 clinger, is bothering you at work because you don’t have a better half. Or you’re too invested in work to pursue romance.

Social media has mainstreamed the “relationship status” question. Today’s society, at least the coupled portion, doesn’t mind asking these pressing, super-personal questions of friends and coworkers. Are you LGBT? Deployed overseas? Focusing on a promotion? An Invisible Boyfriend can help you manage real-world distractions.

I’m pretty sure that there is a big demand for fake relationships in our world. It’s nothing new, tbh. But if human-like AIs like Samantha in Her, or the virtual representation of the dead Ash in Be Right Back (Black Mirror S02E01), are close (just think of domestic AIs like Siri, Cortana, Cubify, Jibo, etc.), then we should start to discuss the desirability of such futures and all the possible socio-tech developments.

Just a thought for the near future: what if your fake online algo-girl/boyfriend goes berserk? He/she only needs to boot up a different personality (on purpose, or due to faulty circuits) or share obvious inconsistencies on your favorite social network. Goodbye, perfect world.

[Invisible Boyfriend] [Invisible Girlfriend] [h/t Marco Righetto]

Jennifer Ouellette: The English major who taught herself calculus

A few nights ago I was talking to Emily McManus, editor of TED.com, about math. I mentioned my friend Jennifer Ouellette, an English major who taught herself calculus, which is no mean feat. From that she wrote the wonderful book The Calculus Diaries, and I thought Emily would like it.

She agreed and the next morning went to find it. Like me, she remembers concepts a lot better than names, so she searched for “english major who taught herself calculus.” Pretty straightforward. Except this happened: Google asked, “Did you mean: english major who taught himself calculus.”

Oof. As you can see from the number of retweets, that struck a nerve with a lot of people. Of course, this isn’t the fault of a single person or group. Google’s algorithm is based on cues from what other people are searching for and uses context to try to figure out what a user meant. But algorithms “are never as neutral as they appear.” So while no one thought “only men would teach themselves calculus,” it’s also true that that’s what the culture as a whole has decided, at least in aggregate. Whether we like it or not, we associate something about that phrase with men more than women. This has happened before, and will likely happen many times again. One of the wonderful things about relying on computers to help us is that if we’re not careful they’ll tell us who we really are. In this case, that we’re living in a quite deeply sexist culture.
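A toy illustration of the mechanism: if suggestions come from aggregate query logs, the statistically dominant variant wins, and the culture's skew becomes the algorithm's skew. The counts below are invented to mirror that imbalance:

```python
from collections import Counter

# A toy illustration of how aggregate query logs produce a biased
# "did you mean": the engine suggests the statistically dominant variant.
# The counts are invented to mirror the imbalance described above.

query_log = Counter({
    "english major who taught himself calculus": 5000,
    "english major who taught herself calculus": 300,
})

def did_you_mean(query: str) -> str | None:
    # Suggest a far more common query from the log. (A real system would
    # also require the candidate to be textually close to the query.)
    for candidate, count in query_log.most_common():
        if candidate != query and count > 10 * query_log[query]:
            return candidate
    return None

print(did_you_mean("english major who taught herself calculus"))
```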

The deep irony, though, is that while people are responding to this quite strongly, Ouellette’s name isn’t in the tweet that’s going viral. The same algorithm that held up this rather unfortunate mirror ensures that neither Jennifer Ouellette’s name nor the name of her book, The Calculus Diaries, is getting attached to that mirror. So, this post is a very conscious (and probably feeble) attempt to rectify that. Maybe people seeing that tweet and googling the phrase will come across this post, and maybe they’ll even want to buy the book.