“Look at me” vs. “Look at this”
From Clive Thompson’s “In Defense of Pinterest”:
Indeed, part of the value of Pinterest is that it brings you out of yourself and into the world of things. As the Huffington Post writer Bianca Bosker argued, Facebook and Twitter are inwardly focused (“Look at me!”) while Pinterest is outwardly focused (“Look at this!”). It’s the world as seen through not your eyes but your imagination. “In such a self-obsessed society, this is a place where people are focusing attention on something other than themselves,” says Courtney Brennan, an avid Pinterest user.
That reminds me of this Ander Monson bit:
We find ourselves not by turning inward toward what we imagine is inside us, but by the act of looking outward at the world. The self is nothing without what it looks at. On its own, it’s inert. Kick it. Poke it. It seems dead. But point it at something else… and it perks up.
For me, Pinterest is a place to collect but not necessarily reflect — one of the things we gain from tools like Pinterest is the ability to collect bits of inspiration and snatch other folks’ bits of inspiration, but what we lose is sharing the thinking behind the collection, the gears in the collector’s head that the images turn.
I still like blogging because it lets me show those gears. My favorite bloggers all share things with me while providing a context of what it is that they’re sharing, why they’re sharing it, and why I might care about it, too.
I don’t know, it feels like there’s “look at me” and there’s “look at this” but there’s also “Look at this with me…”
PS. I’m on Pinterest: @austinkleon
Weekend Reading: Why Your Fabulous Job Sucks (1999)
Why Your Fabulous Job Sucks
by Clive Thompson
Originally published in the March 1999 issue of Shift Magazine
Full article: http://krick.3feetunder.com/jobsucks.htm
A lot of recent talk about the future of writing and publishing has centered on the Internet itself and how the delivery methods and business models for publishing — whether journalism or books, advertising or subscriptions — have moved and continue to move away from what we now call “traditional” print. In 2013, the “new” media and the “new” labor economy aren’t really all that new anymore, but we’re still trying to figure out how to make it all work.
This weekend’s extracurricular reading is an article from 1999. It’s not about writers; it’s about some of the people who made the Internet, which is where many of us now put our writing. Some of it’s a bit dated; some isn’t. But it’s all interesting, particularly as a piece that looks at how the “net” has (and has not) changed the economy and culture of work, beginning within its own ranks.
The studied hipness of new media is a fascinating and rather devious cultural illusion. Those ultra-cool offices cover up a seldom-discussed truth: That the jobs themselves are often deeply exploitative, demanding intense work and devotion for relatively low pay and zero security. Ironically, the coolness of the new-media workplace is an essential part of the exploitation. By making work more like play, employers neatly erase the division between the two, which ensures that their young employees will almost never leave the office.
And it’s true: High-tech employees hang out at work for hours, long after the city has gone to bed. They’ll kill themselves over deadlines, putting in up to eighty hours a week and pulling all-nighters at the drop of a hat. And instead of going justifiably postal over these crazy workloads (or unionizing), they’ll smile and thank their lucky stars for being part of the digital revolution, the cultural flashpoint of the nineties. For employers, of course, it’s a sweet deal; you can’t buy flexibility like that. As more than one worker has told me, a website design company can almost always hold a meeting at two o’clock on a Saturday afternoon because, well, everyone’s there. Where else would they be?
Read the whole thing: http://krick.3feetunder.com/jobsucks.htm (Shift Magazine, the Canadian tech magazine where this article first ran, is no longer in business; this is the best link to the archived piece that we’ve found, which we got via http://tetw.tumblr.com/.)
How We Will Read: Clive Thompson
This post is part of “How We Will Read,” a Findings interview series exploring the future of books from the perspectives of publishers, writers, and intellectuals. Read our kickoff post with Steven Johnson here.
This week we sat down with Clive Thompson, contributing writer for WIRED and the New York Times Magazine, perennial blogger, and maybe the most energetic person to ever grace our offices. Enthusiastic and hilarious, Clive is actually bursting with ideas about what the future looks like — and what seem like insane ideas or improbable projections are often backed up by a surprising number of on-the-fly statistical citations. Clive has done his homework, it seems, for every subject on the planet. In our conversation, he seemed to effortlessly switch gears from publishing to literacy, to education, to demographics, and then on to networked societies and television shows.
Clive is currently working on his first book, about the future of thought in the age of machines. He is a prolific Tweeter and Instagrammer, and you can also find him at his blog, Collision Detection. He’s written about the future of reading before, here and here. Below, he explores some of his ideas for where he thinks the written word is heading. His conclusions? In the future, we might be “ass-deep in books,” and he’ll need a T-shirt that says, “Piss off, I’m reading War and Peace on my iPhone!”
How do you do most of your reading these days?
I do about half in print and half on various screens. I ended up reading all of War and Peace on my iPhone. I have Stanza, which is this app that lets you download books directly from Project Gutenberg. It turned out the iPhone was a really great way to read longform fiction. I found the idea of approaching a very big book less intimidating because you only approach it page by page.
How do you annotate, and why?
I annotate aggressively. If I’m reading a piece of really long fiction, I often find that there are these fabulous things I want to remember. I want to take notes on it, so I highlight it, and if I have a thought about it, I’ll type it out quickly. Then I dump all these clippings into a format that I can look at later. In the case of War and Peace, I actually had 16,000 words worth of notes and clippings at the end of it. So I printed it out as a print-on-demand book. In short, I have a physical copy of all of my favorite parts of War and Peace that I can flip through, with my notes, but I don’t actually own a physical copy of War and Peace.
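Thompson doesn’t say what tooling he uses for that clippings step, but the shape of it is easy to sketch. Here’s a minimal Python version, assuming the highlights have been exported as a tab-separated text file (a made-up format for illustration; real e-reader exports differ):

    from pathlib import Path

    def compile_clippings(export_path: str, out_path: str, title: str) -> None:
        """Gather highlights and notes into one printable text file."""
        lines = [title, "=" * len(title), ""]
        for record in Path(export_path).read_text(encoding="utf-8").splitlines():
            if not record.strip():
                continue
            # Each record: location <TAB> highlight <TAB> optional note
            parts = record.split("\t")
            location, highlight = parts[0], parts[1]
            note = parts[2] if len(parts) > 2 else ""
            lines.append(f"[{location}] {highlight}")
            if note:
                lines.append(f"    note: {note}")
            lines.append("")  # blank line between clippings
        Path(out_path).write_text("\n".join(lines), encoding="utf-8")

    compile_clippings("war_and_peace_clippings.tsv",
                      "war_and_peace_notes.txt",
                      "War and Peace: highlights and notes")

From there, the resulting text file can go straight to any print-on-demand service.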
Why are you taking notes? What are you doing with that stuff?
If you look at the memory athletics competitions, where the memory athletes are given something written and they have to repeat it, they’re really good at lists of random information, they’re really good at information about people — and they hate the poetry event. It’s almost impossible to listen to a poem once, to read it once, and then remember it. There’s something about literature that’s just too complex. What does work for remembering literature is repeating. That’s why I like having these little printed books, or these little files of my notes, because I can literally pull up anything I want to remember from Moby Dick, and in repeating it, remember it. Annotating becomes a way to re-encounter things I’ve read for pleasure.
We forget most of what we read, right? The only way to fight that is to write it down, and consult it. So I frequently will almost randomly pick up an old book and look at my notes, because it refreshes you on what you found interesting about that book. Recently I re-looked at a book and I was delighted to discover that even though I’d read the book 22 years ago, I’d highlighted a bunch of stuff and written notes to myself, and some of the things I remembered about the book were things that I’d highlighted and written about. It was proof that the act of highlighting and thinking about it and writing that little note does that little bit of extra cognitive work that means you’re more likely to remember something about the book. This is called the generation effect — when you generate something yourself, you’re more likely to remember it. This is one of the wonderful things for me about a world in which people are writing in books and talking about them more: This fantastic generation effect means we’re going to internalize and remember and understand more deeply the books that we’re reading.
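That “almost randomly pick up an old book” habit is sketchable too. Assuming a folder of per-book clippings files like the ones produced above (again, hypothetical names), a few lines of Python will resurface one saved note at a time:

    import random
    from pathlib import Path

    def resurface(notes_dir: str) -> str:
        """Pull one random non-blank line from a random clippings file."""
        book = random.choice(list(Path(notes_dir).glob("*.txt")))
        notes = [line for line in book.read_text(encoding="utf-8").splitlines()
                 if line.strip()]
        return f"{book.stem}: {random.choice(notes)}"

    print(resurface("clippings"))  # run once a morning, cron-style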
It sounds like you’re having a conversation with the text, and maybe also with your future self.
Yeah. It’s a conversation with the author, with yourself, and in a weird way, if you take it along as a lifelong project, you are having a conversation with your future self.
Is the end game of writing creating these conversations?
Yes, absolutely. Whether it’s an internal conversation in your head or a social one. I’ve always regarded the endpoint of my writing as getting people talking to me, to each other, and to themselves about this stuff.
I actually strongly believe that social sharing of this marginalia is going to unlock unbelievable amounts of conversation. But I’m embarrassed at the quality of a lot of my notes — they’ll literally be me going like “hahaha” or “lol.” I look like a 12-year-old. But I’m assured that when you import them into Findings, they’re all private. So I’m going to import them, because I love going through Findings and seeing what people have clipped.
Clive Thompson on why kids can’t search
We’re often told that young people tend to be the most tech-savvy among us. But just how savvy are they? A group of researchers led by College of Charleston business professor Bing Pan tried to find out.
Read the full article on Wired. Original tweet:
Clive Thompson on why kids can’t search—Wired http://t.co/DEPQ98eC #arkulus
Tweeted by @Arkulus on November 15, 2011 at 12:02PM
From Soup to the Printing Press
- It amuses me that the SOPA law is called sopa (“soup” in Spanish). That’s why Quino rejects it.
- Every time I hear some guy going on about copyright and the danger the internet supposedly poses to the book industry, I feel like poking him in the eye (both eyes).
- I think the same way Richard Stallman does.
- And that’s why I recommend what Luigi Amara wrote on his blog.
- And that’s why I bought Jeff Jarvis’s latest book for Kindle. Yes, I bought it. And it’s terrific.
- Jeff Jarvis defends the internet as a new way of building publics. And he’s a classic Yankee, the kind who believes in the individual and the market and all that. So don’t go telling me he’s a communist.
- One of the things Jarvis says in his book is that book culture as we know it today only took shape a century after Gutenberg invented the printing press. Publishers didn’t yet grasp the reach of the new technology, so they copied the features of manuscripts onto their new product. (You can see a reference to this phenomenon, known as skeuomorphs, in a recent Clive Thompson column for Wired, though it isn’t online yet.)
- What Jarvis means is that it’s still too early to glimpse the effects the internet will have on cultural production. That’s why, as I said, I agree with Amara: the point is to take advantage of the technological possibilities to build a new encyclopedism.
- This has led me to look into a book Jarvis cites a thousand times (and which is also referenced in this video, which contains many of the ideas from his book): Elizabeth Eisenstein’s The Printing Revolution in Early Modern Europe.
- I once saw a page from the first edition of Don Quixote. Punctuation marks are a remarkably recent invention, an offshoot of the printing press.
I’m not a super visual person; I do not normally take a lot of photos. But now I am, and do. Whenever you join a new social network, there’s this sudden, gentle pressure to, y’know, be more interesting. In the case of Twitter, that manifests itself as a pressure to post ever-more-cool undiscovered URLage. In the case of Instagram, it means posting ever-more-nifty snapshots. And this in turn means that I’ve begun looking at the world around me anew. I used to walk around my neighborhood blissfully — or stressfully — ignoring my surroundings, while staring at the sidewalk (or, ironically, my iPhone). Now I find myself spotting unusual bits of graffiti, or the patterns that fall trees make against the sky, or how super strange the robot is on Yo Gabba Gabba when my kids watch it in the morning. Or that blue door on the brownstone in the picture above: How did I not notice how pretty it was? It’s like my third eye has opened up!
From Bruce Sterling’s “State of the World 2011”:
As blogging has grown older and the bandwidth has increased, more and more blogs are non-textual, ever less writerly. In theory, my blog could become a “vlog” where I don’t type one word, but just stare into the laptop camera and deliver the parenthetical wisecracks as literal offhand remarks. It could be a podcast, or a Tumblr. With some work I could probably shoehorn my blog into my FlickR set.
The classical liberal arts are arts of the word, products of the book, the letter, the lecture. The Renaissance added the plastic arts of painting and sculpture, and modernity those of the laboratory. The new liberal arts are overwhelmingly arts of the DOCUMENT, and the photograph is the document par excellence.
Like the exact sciences, photographic arts are industrial, blurring the line between knowledge and technology. (The earliest photographers were chemists.) Like painting and sculpture, they are visual, aesthetic, based in both intuition and craft. Like writing, photography is both an action and an object: writing makes writing and photography makes photography. And like writing, photographic images have their own version of the trivium — a logic, grammar, and rhetoric.
We don’t only SEE pictures; we LEARN how they’re structured and how they become meaningful. […]
Photography is the science of the interrelation and specificity of all of these forms, as well as their reproduction, recontextualization, and redefinition. Photography is a comprehensive science; photography is a comparative literature.
From Alex Payne’s “A Thought on Communication”:
Imagine twenty years from now. Your children, if you have them, are grown. By the time your children were of schooling age, plentiful bandwidth converged with improved software and hardware to make multi-way video calling a pervasive reality. Every device one would want to use for communication has this capability: computers, mobiles, game consoles. Audio software is capable of producing an accurate searchable transcript of your every word, even in noisy environments. Synthesis has improved to give all machines natural voices to speak with.
Your children have never known a world dominated by communication via digitally presented text. Sure, people still read and write, but communicating with your peers by text is now considered archaic. Video calling another individual is de rigueur; group video chats are equally common. People dive in and out of group video conversations with ease. Three-dimensional presentation of these conversations is not uncommon, though maybe still a luxury in the developing world.
Is that last prediction going too far?
“why would you subject your friends to your daily minutiae? and conversely, how much of their trivia can you absorb? the growth of ambient intimacy can seem like modern narcissism taken to a new, supermetabolic extreme—the ultimate expression of a generation of celebrity-addled youths who believe their every utterance is fascinating and ought to be shared with the world.” -clive thompson-
The Undead Zone: Why realistic graphics make humans look creepy. By Clive Thompson
In 1978, the Japanese roboticist Masahiro Mori noticed something interesting: The more humanlike his robots became, the more people were attracted to them, but only up to a point. If an android became too realistic and lifelike, suddenly people were repelled and disgusted.
The problem, Mori realized, is in the nature of how we identify with robots. When an android, such as R2-D2 or C-3PO, barely looks human, we cut it a lot of slack. It seems cute. We don’t care that it’s only 50 percent humanlike. But when a robot becomes 99 percent lifelike—so close that it’s almost real—we focus on the missing 1 percent. We notice the slightly slack skin, the absence of a truly human glitter in the eyes. The once-cute robot now looks like an animated corpse. Our warm feelings, which had been rising the more vivid the robot became, abruptly plunge downward. Mori called this plunge “the Uncanny Valley,” the paradoxical point at which a simulation of life becomes so good it’s bad.
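Mori’s graph is qualitative, but the rise-plunge-recover shape he describes is easy to play with. Here’s a toy Python sketch (the constants are invented for illustration, not taken from Mori’s data): affinity climbs with human likeness, goes negative near full realism, then recovers.

    import math

    def affinity(likeness: float) -> float:
        """Toy uncanny-valley curve over likeness in [0, 1]."""
        trend = likeness  # warmth rises as the figure gets more humanlike
        # Gaussian dip centered just short of full realism: the valley
        valley = 0.9 * math.exp(-((likeness - 0.85) ** 2) / 0.004)
        return trend - valley

    for likeness in (0.0, 0.5, 0.8, 0.85, 0.9, 1.0):
        print(f"likeness {likeness:.2f} -> affinity {affinity(likeness):+.2f}")

In this toy version the dip bottoms out around 85 percent likeness, scoring below even a half-humanlike figure; the exact placement is arbitrary, but the shape of the curve is the point.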
As video games have developed increasingly realistic graphics, they have begun to suffer more and more from this same conundrum. Games have unexpectedly fallen into the Uncanny Valley.
Consider Alias, the new title based on the TV show. It’s a reasonably fun action-and-puzzle game, where you maneuver Sydney Bristow through a series of spy missions. But whenever the camera zooms in on her face, you’re staring at a Jennifer Garner death mask. I nearly shrieked out loud at one point. And whenever other characters speak to you—particularly during cut-scenes, those supposedly “cinematic” narrative moments—they’re even more ghastly. Mouths and eyes don’t move in synch. It’s as if all the characters have been shot up with some ungodly amount of Botox and are no longer able to make Earthlike expressions.
Every highly realistic game has the same problem. Resident Evil Outbreak’s humans are realistic, but their facial expressions are so deadeningly weird they’re almost scarier than the actual zombies you’re fighting. The designers of 007: Everything or Nothing managed to take the adorable Shannon Elizabeth and render her as a walleyed replicant.
The Uncanny Valley can make games less engrossing. That’s particularly true with narrative games, which rely on believable characters with whom you’re supposed to identify. The whole point is to suspend disbelief and immerse yourself. But that’s hard to do when the characters create goosebumps. You fight searing battles, solve brain-crushing puzzles, vanquish enemies, and what are you rewarded with? A chance to watch your avatar mince about the screen in some ghoulish parody of humanity.
The screwiest part of this phenomenon is that game designers pride themselves on the quality of their sepulchral human characters. It’s part of the malaise that currently affects game design, in which too many designers assume that crisper 3-D graphics will make a game better. That may be true when it comes to scenery, explosions, or fog. But with human faces and bodies, we’re harder to fool. Neuroscientists argue that our brains have evolved specific mechanisms for face recognition, because being able to recognize something “wrong” in someone else’s face has long been crucial to survival. If that’s true, then game designers may never be able to capture that last 1 percent of realism. The more they plug away at it—the more high-resolution their human characters become—the deeper they’ll trudge into the Uncanny Valley.
Instead, maybe they should try climbing out, by going in the opposite direction and embracing low-rez simplicity. Roboticists have begun doing this. Like Mori, they’ve learned that a spare, stripped-down robot can seem more lifelike than an explicitly humanoid one. I own a Roomba, one of those Frisbee-shaped vacuum robots, and it doesn’t look even vaguely human. Yet as it zips around my living room, it seems amazingly alive, and I can’t help but feel warmly toward it. This is because of another quirk of our psychology: If something behaves in only a slightly human way, we’ll fill in the blanks—we’ll read humanness into it. (That’s partly why our pets seem so intelligent and humanlike.)
Comic-strip artists have known this for years. As comic-book theorist Scott McCloud points out, we identify more deeply with simply drawn cartoon characters, like those in Peanuts, than with more realistic ones. Charlie Brown doesn’t trigger our obsession with the missing details the way a not-quite-photorealistic character does, so we project ourselves onto him more easily. That’s part of the genius behind modernist artists such as Picasso or Matisse. They realized that the best way to capture the essence of a person or object was with a single, broad-stroked detail.
Some of the best game designers understand this, too. Jet Grind Radio, the old Fear Effect series, and the more recent Viewtiful Joe all use the chunky style of cel-shaded animation to create characters who are cartoonish yet vividly alive. Lara Croft is another good example. Even as her games became more graphically precise, the designers left Croft as a very stylized figure, the better to have players identify with her. And the only game designer who has produced a 20-year string of popular characters is Shigeru Miyamoto, the architect of Nintendo’s Disneylike visual style.
Unfortunately, though, gaming’s Uncanny Valley could be here to stay, simply because players have become used to it. In the real world of plastic surgery, face-lifts used to look horrifically strange but now go unnoticed. Likewise, we’ve played with dead, fish-eyed characters for so long that they seem kinda normal. Creepiness, like beauty, is in the eye of the beholder.