In 1877 an Italian astronomer named Giovanni Schiaparelli peered at the red planet through his telescope. With the telescope technology of the time, Schiaparelli could see only a very blurry image of Mars, but by looking closely he could distinguish certain features on the surface. The most noticeable were a number of long channels he called canali. He drew pictures of what he observed and published his results. When Schiaparelli’s results were translated into English, the word “canali”, meaning “channels”, was misrendered as “canals”. Whereas “channels” denotes natural formations, “canals” denotes artificial ones. The news spread across the world like wildfire: an alien civilization existed on Mars!
All over the world, astronomers turned their telescopes toward Mars to take a peek at the incredible Martian civilization. One of the biggest supporters of the Martian canal theory was the American businessman, mathematician, and astronomer Percival Lowell. Lowell became a household name because of his theories, and he built a long career on the study of Martian civilization. After numerous observations of the planet, he wrote books, published papers, and gave speaking tours all over America to reveal his findings. According to Lowell, Mars was a dying world, so to survive, the Martians had built a series of planet-wide canals to bring fresh water from the polar ice caps. He even published detailed maps of the canals. When Lowell observed Mars two years later, he noticed that two more canals had appeared. He immediately published an article announcing that the Martians had built two new large canals in less than two years. Obviously the Martians had super-advanced technology and engineering skills.
Lowell’s theories and publications sparked a Martian craze. Other scientists theorized about what the Martians looked like and how they evolved. Martians entered pop culture when the science fiction writer H.G. Wells wrote The War of the Worlds, a novel in which the Martians invade and conquer Earth. Others tried to find ways to communicate with the Martians. Even the great inventor Nikola Tesla believed that he was receiving communications from Mars via radio. The Martians became the most popular aliens of the late 19th and early 20th centuries. To many it was undeniable: Martians were real, and we were not alone in the universe.
However, many were skeptical of Lowell’s theories, and in the early 20th century new technology began to reveal cracks in them. In 1909, improved telescopes showed that there were no canals on Mars. Rather, the canals were optical illusions created by natural features such as mountains and erosion patterns. Spectroscopic analysis later showed that the Martian atmosphere contained virtually no water vapor. The final nail in the Martian coffin came in 1965, when the probe Mariner 4 flew by Mars and photographed the planet. What it found was a barren, uninhabitable world. Later exploration would show that Mars may once have been habitable, but it certainly is no longer. Instead of little green aliens, science can at best speculate that Mars is inhabited by a few very simple strains of bacteria, if anything at all.
I remember the day I decided I was going to be a xenoanthropologist–it was in the first grade, when our teacher showed us the now-famous images from the HAROA telescopes. That was just after they’d commissioned the telescopes in Houston and Denver, and they were pointing the array at planets they’d already confirmed using smaller-baseline arrays. I remember my teacher trying to explain how it worked–that with math you could use small telescopes spread over a long distance as one big one, a principle called interferometry that we could barely pronounce–but my classmates and I were more struck by what those famous first images of Gliese 667 Cc showed. Back when my parents were kids, they’d used the dimming of its parent star to speculate there was a planet there, and the first generation of optical interferometers–ones with baselines measured in a few tens of kilometers–had been enough to actually resolve the planet and confirm it existed. But that image was the first time we had enough resolution to really see it as a planet–blurry, pixelated oceans, continents, icecaps. And there, on the nightside of the terminator… scattered points of dim light. It took me years in school to realize what that moment meant to me, but that’s the day I knew humans weren’t alone in the universe, and that I wanted to study those lights.
The old sci-fi books I devoured as a kid had all assumed that our contact with aliens would start with a meeting or a message: we would go to their world or they would come to ours, or we would pick up each other’s radio signals and start talking back and forth. Instead we had found them, and they probably didn’t know we existed. In fact, it took years to even know they existed. We already had spectroscopy data that showed elevated oxygen levels in their atmosphere–proof of ongoing chemical processes for replacing it, but for all scientists could initially say with those first famous photographs, we were just looking at vast colonies of photosynthetic, bioluminescent algae. It wasn’t until years had gone by, and I’d spent all of high school digging into physics, math, biology, and history, that they finally commissioned a telescope array capable of seeing smaller geographic features–tens of kilometers instead of hundreds. The articles I read on Popular Science’s site explained how scientists were charting the positions of the lights through time, cross-comparing to oceans, bays, mountains, and rivers we could infer only from the green lines they cut through deserts, and the spectral analysis of the lights themselves–finally showing proof that those points of light were the product of campfires and lamps, not just glowing moss.
In a way, that was the entire discipline of xenoanthropology: poring over the latest images from the best telescopes, seeing what could be implied, inferred, or just guessed. We watched the points of light grow, new ones sprout on other continents–were we watching another race’s Age of Exploration? People wrote entire papers on what the ways those dots of light and faint daylight blobs moved and grew implied for their transportation, their culture, and their politics. I was one of those people, in my undergrad years and my early grad work–we were the modern-day Schiaparelli, seeing canals on Mars. I look back on those papers… what we didn’t know! But it was enough to keep the interest of some of the public, the part that hadn’t gotten bored when it turned out the aliens weren’t beaming out new episodes of Kitchen Gladiators, or immediately retreated into endless circular discussions of the theological implications here on Earth. But that was the best we could see with a telescope array limited by the diameter of the Earth: ten kilometers of resolution. Anything smaller than a town, and we couldn’t even see it.
I was a postdoc at the HAROA central station in Boulder when we finally got the approval for the space-based optical telescopes, the Lightsecond Array. One at Earth-Moon Lagrange Four, the other at Lagrange Five, the two telescopes were massive things, fine webs of stretched-mylar mirrors miles across. Technically, it wasn’t a lightsecond, it was 2.22 lightseconds, almost bang on 666,000 kilometers of separation squinting down the line to Gliese. When we could steal time on what we around the lab called the Lucifer Array from the plain old astronomers staring at exoplanets that didn’t sport their own branch of the tree of life, we could finally see everything. It was a little uncomfortable, actually: the unblinking Eye of Sauron peering down with a resolution of 183 meters. The press releases had described us as being able to “see a soccer stadium,” but more to the point we could see…well, whatever they called it, the game they played on the grass fields in the center of their towns and cities. With Lucifer, we could see the ships we’d speculated must exist–sail-powered, just like we’d thought. I remember the first time the lights of a ship we were tracking didn’t reappear after a major storm–the first time I looked at Gliese and knew I’d just seen hundreds of deaths that these 1700s-era people could never solve. It almost felt voyeuristic, as we catalogued wars and weather and settlers on the frontier. But I was hooked, just like I’d been in Ms. Mueller’s classroom thirty years ago.
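The story's numbers for the Lucifer Array hold together surprisingly well under the standard diffraction limit. As a sketch, the Rayleigh criterion gives the smallest resolvable angle for a baseline B, and multiplying by the target's distance gives the smallest resolvable feature. The baseline (2.22 light-seconds, about 666,000 km) comes from the text; the observing wavelength (~450 nm, blue light) and the roughly 23.6 light-year distance of the real Gliese 667 system are my assumptions:

```python
# Rough diffraction-limit check of the "Lucifer Array" figures.
# Baseline is from the story; wavelength and distance are assumptions.
BASELINE_M = 666_000e3              # 2.22 light-second separation
WAVELENGTH_M = 450e-9               # assumed: blue visible light
LIGHT_YEAR_M = 9.4607e15
DISTANCE_M = 23.6 * LIGHT_YEAR_M    # assumed: real Gliese 667 distance

# Rayleigh criterion: smallest angle resolvable across baseline B.
theta_rad = 1.22 * WAVELENGTH_M / BASELINE_M

# Linear size that angle subtends at the target's distance.
resolution_m = theta_rad * DISTANCE_M
print(f"{resolution_m:.0f} m")      # prints "184 m" under these assumptions
```

Within the rounding of the assumed wavelength, that reproduces the "resolution of 183 meters" the narrator quotes, which is indeed about the size of a soccer stadium.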
I remember even more: the lights increasing in number, the rising levels of carbon dioxide and other combustion products in the atmosphere around their largest cities. We’d started watching about the time they began their first Industrial Revolution, the one powered by water, but from the way the cities started moving away from rivers, and from the smoke and light of their furnaces, we could tell when they entered the Age of Steam. With that came new craftsmanship on their part, and we made our own improvements–new telescopes spread farther from Earth, giving us the resolution to see buildings, streets, the details of their ships and trains, and more. Others wrote a dozen papers speculating on it, but I just remember shouting for my wife, who’d gone to get us coffee from the breakroom, when Denver got the latest round of images from one of their major ports. They’d started building some kind of statue on an island–of course, I always call her Lady Liberty. But she was hundreds of meters tall, and with that we finally knew what they looked like, all four arms and six legs of them, like spider-centaurs.
With all the developments we saw them making as I rose up the ladder of the observatory staff, my worst nightmare was always that they’d have their Black Plague, their nuclear moment: that one day I’d see entire cities flicker and burn and go out like that ship in the storm–an entire species dying out or killing itself while we couldn’t do anything but watch. That’s why we’ve been doing what we’ve been doing: they’re developing electricity these days–we’ve seen the change in their lights, the dams on their rivers for hydro power. Someday soon, they’ll finally invent the radio, and they’ll get the signals my team has been sending out–math and encoded pictures, greetings from Earth to tell them they’re not alone. The message could be lost in the interstellar static, and I don’t know how long it will take for them to hear it, or to build an antenna capable of replying, but… I’d like to see it happen before I’m out of the picture.
…What is it, Sam? I’m bus…what? Show me!
I’m looking at another picture now, just off the Array. We didn’t need the resolution for it, not in the end. And we didn’t need the radio telescopes we’ve had listening for the first faint flickers of any reply among the background hum of the universe. You were smarter than that, and I should have known. We told you we were watching, and when you heard, you just wanted to show you’d heard. The image Sam’s got pulled up on her tablet is that same globe I’ve seen for decades–the one I know almost better than a map of Earth or the lunar colonies. I know every point of light, every city and town. The coordination you’re showing us! She hits play again, and it happens: across an entire seacoast, over the course of hours, the lights in a dozen cities and towns flicker off, then on, then off. An hour passes, then off and on again twice more, then another hour of darkness, then three more flashes. You get to eleven before the lights finally come back on steadily, then fade out in the familiar ways before the coming dawn. Primes, aimed right back along our radio beam. It’s amazing. I’ve spent my whole career watching you, moments of insight and wonder at the light flashing across lightyears. But I don’t think I’ll ever forget this moment–the moment you knew we were looking and you took the chance to talk back.
NASA announced today the discovery of a new planet in orbit around our sun at a distance fifty times greater than Pluto’s, deep within the Kuiper Belt. The planet is made of ice and is about ten times the diameter of Earth. It also contains a large, perfectly squared-off ravine, as seen in the Hubble photograph above.
“We aren’t certain what the ravine could be,” said NASA spokeswoman Margaret Melville-Mulberry, “but its perfectly square nature suggests either intelligent life or a type of impact we are not yet familiar with. It does have what appears to be a large crater in the center of the ravine, supporting this hypothesis.” Others at NASA are slower to admit any possible sign of intelligent life, preferring to wait for confirmation on what appear to be canals. “They thought features on Mars were canals once; let’s not make the same mistake again,” said NASA engineer Sam Smithwick Stevens II.
The planet has a large gravitational pull and now registers as the largest planet in our system not composed primarily of gas. Also intriguing is a glow detected within the crater in the ravine, which seems to be gaining intensity as the rotation of the planet aims the crater at Earth.
NASA’s first probe to the planet will pass through its orbit in 2022, giving us a better look at the new discovery.
Like most writers, I am an inveterate procrastinator. In the course of writing this one article, I have checked my e-mail approximately 3,000 times, made and discarded multiple grocery lists, conducted a lengthy Twitter battle over whether the gold standard is actually the worst economic policy ever proposed, written Facebook messages to schoolmates I haven’t seen in at least a decade, invented a delicious new recipe for chocolate berry protein smoothies, and googled my own name several times to make sure that I have at least once written something that someone would actually want to read.
Lots of people procrastinate, of course, but for writers it is a peculiarly common occupational hazard. One book editor I talked to fondly reminisced about the first book she was assigned to work on, back in the late 1990s. It had gone under contract in 1972.
I once asked a talented and fairly famous colleague how he managed to regularly produce such highly regarded 8,000 word features. “Well,” he said, “first, I put it off for two or three weeks. Then I sit down to write. That’s when I get up and go clean the garage. After that, I go upstairs, and then I come back downstairs and complain to my wife for a couple of hours. Finally, but only after a couple more days have passed and I’m really freaking out about missing my deadline, I ultimately sit down and write.”
Over the years, I developed a theory about why writers are such procrastinators: We were too good in English class. This sounds crazy, but hear me out.
Most writers were the kids who easily, almost automatically, got A’s in English class. (There are exceptions, but they often also seem to be exceptions to the general writerly habit of putting off writing as long as possible.) At an early age, when grammar school teachers were struggling to inculcate the lesson that effort was the main key to success in school, these future scribblers gave the obvious lie to this assertion. Where others read haltingly, they were plowing two grades ahead in the reading workbooks. These are the kids who turned in a completed YA novel for their fifth-grade project. It isn’t that they never failed, but at a very early age, they didn’t have to fail much; their natural talents kept them at the head of the class.
This teaches a very bad, very false lesson: that success in work mostly depends on natural talent. Unfortunately, when you are a professional writer, you are competing with all the other kids who were at the top of their English classes. Your stuff may not—indeed, probably won’t—be the best anymore.
If you’ve spent most of your life cruising ahead on natural ability, doing what came easily and quickly, every word you write becomes a test of just how much ability you have, every article a referendum on how good a writer you are. As long as you have not written that article, that speech, that novel, it could still be good. Before you take to the keys, you are Proust and Oscar Wilde and George Orwell all rolled up into one delicious package. By the time you’re finished, you’re more like one of those 1940s pulp hacks who strung hundred-page paragraphs together with semicolons because it was too much effort to figure out where the sentence should end.
The Fear of Turning In Nothing
Most writers manage to get by because, as the deadline creeps closer, their fear of turning in nothing eventually surpasses their fear of turning in something terrible. But I’ve watched a surprising number of young journalists wreck, or nearly wreck, their careers by simply failing to hand in articles. These are all college graduates who can write in complete sentences, so it is not that they are lazy incompetents. Rather, they seem to be paralyzed by the prospect of writing something that isn’t very good.
“Exactly!” said Stanford psychologist Carol Dweck, when I floated this theory by her. One of the best-known experts in the psychology of motivation, Dweck has spent her career studying failure, and how people react to it. As you might expect, failure isn’t all that popular an activity. And yet, as she discovered through her research, not everyone reacts to it by breaking out in hives. While many of the people she studied hated tasks that they didn’t do well, some people thrived under the challenge. They positively relished things they weren’t very good at—for precisely the reason that they should have: when they were failing, they were learning.
Dweck puzzled over what it was that made these people so different from their peers. It hit her one day as she was sitting in her office (then at Columbia), chewing over the results of the latest experiment with one of her graduate students: the people who dislike challenges think that talent is a fixed thing that you’re either born with or not. The people who relish them think that it’s something you can nourish by doing stuff you’re not good at.
“There was this eureka moment,” says Dweck. She now identifies the former group as people with a “fixed mind-set,” while the latter group has a “growth mind-set.” Whether you are more fixed or more of a grower helps determine how you react to anything that tests your intellectual abilities. For growth people, challenges are an opportunity to deepen their talents, but for “fixed” people, they are just a dipstick that measures how high your ability level is. Finding out that you’re not as good as you thought is not an opportunity to improve; it’s a signal that you should maybe look into a less demanding career, like mopping floors.
This fear of being unmasked as the incompetent you “really” are is so common that it actually has a clinical name: impostor syndrome. A shocking number of successful people (particularly women) believe that they haven’t really earned their spots and are at risk of being unmasked as frauds at any moment. Many people deliberately seek out easy tests where they can shine, rather than tackling harder material that isn’t as comfortable.
If they’re forced into a challenge they don’t feel prepared for, they may even engage in what psychologists call “self-handicapping”: deliberately doing things that will hamper their performance in order to give themselves an excuse for not doing well. Self-handicapping can be fairly spectacular: in one study, men deliberately chose performance-inhibiting drugs when facing a task they didn’t expect to do well on. “Instead of studying,” writes the psychologist Edward Hirt, “a student goes to a movie the night before an exam. If he performs poorly, he can attribute his failure to a lack of studying rather than to a lack of ability or intelligence. On the other hand, if he does well on the exam, he may conclude that he has exceptional ability, because he was able to perform well without studying.”
Writers who don’t produce copy—or leave it so long that they couldn’t possibly produce something good—are giving themselves the perfect excuse for not succeeding.
“Work finally begins,” says Alain de Botton, “when the fear of doing nothing exceeds the fear of doing it badly.” For people with an extremely fixed mind-set, that tipping point quite often never happens. They fear nothing so much as finding out that they never had what it takes.
“The kids who race ahead in the readers without much supervision get praised for being smart,” says Dweck. “What are they learning? They’re learning that being smart is not about overcoming tough challenges. It’s about finding work easy. When they get to college or graduate school and it starts being hard, they don’t necessarily know how to deal with that.”
Embracing Hard Work
Our educational system is almost designed to foster a fixed mind-set. Think about how a typical English class works: You read a “great work” by a famous author, discussing what the messages are, and how the author uses language, structure, and imagery to convey them. You memorize particularly pithy quotes to be regurgitated on the exam, and perhaps later on second dates. Students are rarely encouraged to peek at early drafts of those works. All they see is the final product, lovingly polished by both writer and editor to a very high shine. When the teacher asks “What is the author saying here?” no one ever suggests that the answer might be “He didn’t quite know” or “That sentence was part of a key scene in an earlier draft, and he forgot to take it out in revision.”
Or consider a science survey class. It consists almost entirely of the theories that turned out to be right—not the folks who believed in the mythical “N-rays,” declared that human beings had forty-eight chromosomes, or saw imaginary canals on Mars. When we do read about falsified scientific theories of the past—Lamarckian evolution, phrenology, reproduction by “spontaneous generation”—the people who believed in them frequently come across as ludicrous yokels, even though many of them were distinguished scientists who made real contributions to their fields.
“You never see the mistakes, or the struggle,” says Dweck. No wonder students get the idea that being a good writer is defined by not writing bad stuff.
Unfortunately, in your own work, you are confronted with every clunky paragraph, every labored metaphor and unending story that refuses to come to a point. “The reason we struggle with insecurity,” says Pastor Steven Furtick, “is because we compare our behind-the-scenes with everyone else’s highlight reel.”
About six years ago, commentators started noticing a strange pattern of behavior among the young millennials who were pouring out of college. Eventually, the writer Ron Alsop would dub them the Trophy Kids. Despite the sound of it, this has nothing to do with “trophy wives.” Rather, it has to do with the way these kids were raised. This new generation was brought up to believe that there should be no winners and no losers, no scrubs or MVPs. Everyone, no matter how ineptly they perform, gets a trophy.
As these kids have moved into the workforce, managers complain that new graduates expect the workplace to replicate the cozy, well-structured environment of school. They demand concrete, well-described tasks and constant feedback, as if they were still trying to figure out what was going to be on the exam. “It’s very hard to give them negative feedback without crushing their egos,” one employer told Bruce Tulgan, the author of Not Everyone Gets a Trophy. “They walk in thinking they know more than they know.”
When I started asking around about this phenomenon, I was a bit skeptical. After all, we old geezers have been grousing about those young whippersnappers for centuries. But whenever I brought the subject up, I got a torrent of complaints, including from people who have been managing new hires for decades. They were able to compare them with previous classes, not just with some mental image of how great we all were at their age. And they insisted that something really has changed—something that’s not limited to the super-coddled children of the elite.
“I’ll hire someone who’s twenty-seven, and he’s fine,” says Todd, who manages a car rental operation in the Midwest. “But if I hire someone who’s twenty-three or twenty-four, they need everything spelled out for them, they want me to hover over their shoulder. It’s like somewhere in those three or four years, someone flipped a switch.” They are probably harder-working and more conscientious than my generation. But many seem intensely uncomfortable with the comparatively unstructured world of work. No wonder so many elite students go into finance and consulting—jobs that surround them with other elite grads, with well-structured reviews and advancement.
Today’s new graduates may be better credentialed than previous generations, and are often very hardworking, but only when given very explicit direction. And they seem to demand constant praise. Is it any wonder, with so many adults hovering so closely over every aspect of their lives? Frantic parents of a certain socioeconomic level now give their kids the kind of intensive early grooming that used to be reserved for princelings or little Dalai Lamas.
All this “help” can be actively harmful. These days, I’m told, private schools in New York are (quietly, tactfully) trying to combat a minor epidemic of expensive tutors who do the kids’ work for them, something that would have been nearly unthinkable when I went through the system 20 years ago. Our parents were in league with the teachers, not us. But these days, fewer seem willing to risk letting young Silas or Gertrude fail out of the Ivy League.
Thanks to decades of expansion, there are still enough spaces for basically every student who wants to go to college. But there’s a catch: Most of those new spaces were created at less selective schools. Two-thirds of Americans now attend a college that, for all intents and purposes, admits anyone who applies. Spots at the elite schools—the top 10 percent—have barely kept up with population growth. Meanwhile demand for those slots has grown much faster, because as the economy has gotten more competitive, parents are looking for a guarantee that their children will be successful. A degree from an elite school is the closest thing they can think of.
So we get Whiffle Parenting: constant supervision to ensure that a kid can’t knock themselves off the ladder that is thought to lead, almost automatically, through a selective college and into the good life. It’s an entirely rational reaction to an educational system in which the stakes are always rising, and any small misstep can knock you out of the race. But is this really good parenting? A golden credential is no guarantee of success, and in the process of trying to secure one for their kids, parents are depriving them of what they really need: the ability to learn from their mistakes, to be knocked down and to pick themselves up—the ability, in other words, to fail gracefully. That is probably the most important lesson our kids will learn at school, and instead many are being taught the opposite.
Earth globes? Those are so 1492. Here is our handcrafted Mars globe showing the famous “Mars Canal” map by astronomer Percival Lowell (1905). This lovely film gives a glimpse into my world as a globemaker.