How A Computer Could Learn Like Your Brain-Part 2
If I place a lemon in your open palm, you won’t be able to recognize that it’s a lemon until you run your fingers and palm over its surface. Nor would you be able to accurately recognize a song if I played you just one note. Your recognition of the lemon and the song depends entirely on feeling the lemon or hearing the song for several seconds. We often aren’t aware of it, but time is a crucial factor in understanding the world around us.
Touch is just one in an array of senses that wouldn’t work without time. Making sense of the world through your eyes is also heavily dependent on time. Your eyes are constantly performing quick subconscious movements called saccades where they scan back and forth over an object several times a second. When you look at someone’s face, for example, your eyes aren’t seeing their whole face. Certain cells within your neocortex are registering the whole face, but only specific aspects of the person’s face are entering your eyes at any one moment. Your eye might fixate on the nose, then the lips, then the left eye, and then back to the nose, etc.
In part 1, we learned about the spatial tricks your brain uses to identify objects in the world. Recognizing spatial patterns is one part of what it needs to do. The second and equally crucial part is memorizing sequences of spatial patterns; without this ability, general intelligence would not be possible.
To really appreciate what this all means let’s perform a little thought experiment. Imagine that you and your family have just moved to a new neighborhood and your 3 year old son Timmy is getting restless because he wants to go to the playground. “Where’s the nearest playground?”, you ask your neighbor Jim. “Go down two blocks on Lincoln, take a right at Monroe, walk for about 5 minutes and you’ll see it on your right,” he shoots back. So you set off with Timmy. Never having had a great sense of direction, you get lost a few times, but finally find your way to the playground. Being the canonical 3 year old, Timmy wants to go to the playground every day, so you have to remember how you got there. With each trip to the playground, you get better and better at finding your way until you’ve got it plastered in your memory. Just imagine if every time you took Timmy to the playground you had to learn the route anew. That is in fact what life is like for non-mammalian organisms like reptiles because they lack a neocortex, the big wrinkly part of the brain and the seat of higher intelligence.
The neocortex allows an organism to predict the future, and the ability to predict the future is what makes mammals smarter than other animals and humans smarter than other mammals. But the neocortex isn’t just integral to planning. It also drives our most basic sensory functions such as recognizing things. For example, you wouldn’t be able to recognize your father as he walks towards you if your brain couldn’t patch together the different images of him that land on your retinas as he lurches forward. At each millisecond you’d be confused as to whether you’re still seeing the same man you saw a millisecond ago. Object recognition is in fact a major problem in artificial intelligence, and the main reason why, after decades of trying, we still don’t have robots that can do basic things people find easy, like throwing a ball or understanding language.
General Intelligence
Now that I’ve partially convinced you of the importance of time to intelligence, let’s revisit our little scenario from part 1 with Jimmy the bully to get a better feel for how the brain uses time at the neurological level.
As I stated there, if you’re the 98 lb weakling, there are two crucial things your brain needs to do for you to avoid getting beat up over and over again.
1) Identify the bully (through any number of your senses)
2) Learn the temporal patterns that have led to you getting beat up
The first one is a spatial problem, which we covered in part 1. Let’s now turn to the second one: learning temporal patterns. What do I mean by temporal patterns?
Have you ever experienced that feeling where you’re listening to the end of a song and in your mind you can already hear the beginning of the next song on the album? Even if you haven’t listened to that album in years, your brain still predicts what the next song is going to be. It happens to me even when I hear a song in a nightclub or restaurant.
All your brain is doing here is remembering what song came after that song the last time it heard it in a sequence. And it memorizes sequences like this for just about every aspect of sensory reality. In fact, your brain is subconsciously memorizing millions of sequences as you read this. Most of your predictions are also subconscious. Have you ever lifted an empty suitcase that you thought was going to be full of stuff? For a moment you feel like the strongest person in the world! Your brain was subconsciously predicting that the suitcase was going to be heavy because most of our experiences picking up suitcases involve ones with stuff in them. If from that moment on though, you pick up only empty suitcases, your brain will learn that pattern and you’ll be surprised if a suitcase is heavy.
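This kind of "what came next last time" memory is simple enough to sketch in a few lines of Python. This is a toy first-order sequence memory, not a model of real cortex, and the album and song titles are made up for illustration:

```python
# Toy sequence memory: remember which item followed which,
# then predict the next item from the current one.
def learn_sequence(sequence):
    """Map each item to the item that followed it last time it was heard."""
    successor = {}
    for current, nxt in zip(sequence, sequence[1:]):
        successor[current] = nxt
    return successor

def predict_next(successor, current):
    # Returns None when nothing has ever followed this item.
    return successor.get(current)

# A hypothetical album you heard once, years ago.
album = ["Intro", "Thunder Road", "Backstreets", "Jungleland"]
memory = learn_sequence(album)
```

Hear "Thunder Road" again and `predict_next(memory, "Thunder Road")` hands you "Backstreets" before the first note plays; the last song on the album predicts nothing, because nothing ever followed it.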
What’s Really Happening in Your Head?
Let’s get back to Jimmy the bully. As you walk into the school and see Jimmy, your brain is stitching together sequences of scenes at each microsecond in time. It’s predicting that if he’s in front of you now, he’ll continue to be in front of you. If he all of a sudden disappears into thin air, your brain will go “Holy shit!” Your brain will also predict that if you walk into the same hallway tomorrow, you will probably see him again.
Remember that in your brain there is no little image of Jimmy. He is just an electrical pattern represented by a bunch of firing neurons.
The specific pattern of neuron columns and cells that represents him is called a sparse distributed representation (SDR; see part 1).
Let’s say that Jimmy beat you up this morning in the hallway. Tomorrow you are going to be walking through that same hallway and your brain should anticipate that it’s going to see him and that he’ll beat you up again. Let’s see exactly how it does that.
When Jimmy beat you up this morning, the entire interaction got stored in a bunch of neurons. We’ll say it happened at time = x. And tomorrow will be time = x + 1. So all the input from this morning’s interaction entered your brain and got converted to SDRs.
If you remember, in part one we talked about how the spatial patterns of light from Jimmy that are coming in through the eyes and landing on the visual cortex get converted into an electrical representation in the neurons that looks like this:
But this isn’t yet a sparse representation. The columns with the highest overlap scores (i.e., the strongest electrical spikes) inhibit the other columns around them. I mentioned a value called Desired Local Activity, which ensures that any column with an overlap score higher than the xth highest column stays active while the columns that fall under that are inhibited. For our example, let’s say we’ll inhibit a column if its overlap score isn’t higher than the 3rd-highest score, which is 2. So if a column’s score isn’t higher than 2, it gets shut down. That leaves us with just 3 eligible columns from the representation above, which would produce an SDR like the diagram below (only columns with red cells are active; the others are silent).
What this diagram displays is powerful because it allows for the emergence of general intelligence. The electrical impulses that represented Jimmy were compressed into the three columns. These few chosen columns are packed with semantic information about Jimmy, and because of that compression, they are very resistant to noise. That means that if we were to shut down one of the columns, we could still recognize that pattern as representing Jimmy because the other two columns are packed with more than enough information to preserve the gist of the pattern. That’s why if you walked up to Jimmy from behind or from the side where his face was partially obscured, you could still recognize him. This leads to a very powerful phenomenon, which is the ability to predict the future based on incomplete information. Think about how you’re able to decipher incomplete words:
i*compl@te w#rds
What your brain is doing here is predicting what the missing letters are based on what’s around them. This is what allows you to understand a conversation in a loud room where you can’t fully hear all the words your conversational partner is uttering. No machine can do this.
Context
In order to have the ability to predict the future, you have to know how different events followed each other in the past. Then when you see parts of the sequence again, you can make accurate predictions.
Your task is to predict if Jimmy is going to beat you up again. Yesterday you walked into the hall and he whupped your ass, so you should be able to predict that if you walk through the hall today, he’ll do the same. Let’s say that the image of Jimmy as he stands right in front of you in the hall is the SDR above (at time = x). And we’ll say that the moment when his fist makes contact with your face is time = x + 1, represented by the diagram below:
We need to connect these two moments in time in the brain. For that to happen, the neurons at both time periods have to talk to each other. They do so electrochemically.
First though, let’s get an important detail out of the way. The SDR diagrams at time = x and time = x + 1 show entire active columns. In real brains, neurons within a column do often act as a single unit because they react to the same type of input, but generally once a column notices part of an input coming in from the world, it chooses a single cell within the column to fire. The reason it does this is to add context to the input. For example, the words “ate” and “eight” both have the sound ‘ayt’ and sound identical to the ear. How does your brain know that they mean different things? The only way it can tell the difference is by interpreting them in the context of the words that came before them.
The version of the sound ‘ayt’ that stands for the number is preceded by one set of words, whereas the version that means to eat is preceded by another. In your brain, the sound ‘ayt’ might be represented by the same columns of neurons, but each different meaning of it is represented by a different cell within those columns.
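Here's a toy Python sketch of that column-and-cell scheme. The column numbers, the cells-per-column count, and the context words are all made up, and real cortex learns these context connections rather than hashing them; the point is only that the same columns plus a different context yield different active cells:

```python
CELLS_PER_COLUMN = 4

def represent(sound_columns, previous_word):
    """Pick one cell per active column based on the preceding context.

    Same columns + different context -> different cells, which is how
    this sketch keeps 'ate' and 'eight' apart.
    """
    # Deterministic stand-in for a learned context signal: hash the word.
    cell = sum(ord(ch) for ch in previous_word) % CELLS_PER_COLUMN
    return {(col, cell) for col in sound_columns}

ayt_columns = {2, 5, 9}                       # hypothetical columns for 'ayt'
ate_rep = represent(ayt_columns, "just")      # "I just ate"
eight_rep = represent(ayt_columns, "number")  # "the number eight"
```

Both representations use the identical set of columns, yet the two meanings end up as distinct cell patterns.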
If you were to peer inside your brain right now, you’d actually see individual cells activated within columns. So going back to Jimmy the bully, the SDR diagrams for him at time = x and time = x + 1 are:
Predicting a Punch: Connecting Past and Future
Your task now is to connect the SDR of Jimmy standing in the hallway with the SDR of him punching you in the face. That way the next time the first SDR gets activated, you can predict the second one and try to avoid the event. How you would go about physically avoiding it lies in the realm of sensory-motor perception and is a whole other can of worms which we won’t get into in this post. We’ll just deal with sensory perception here.
So we’ll connect the past with current reality. This happens in your brain by connecting neurons activated by the current input to neurons that were active just before. That way, when those previously active neurons fire again, they signal the current-input neurons and make them anticipate their own activation.
So if Jimmy punches you in the face right now, the neurons in the SDR for that connect to a subset of neurons from the SDR of him standing in the hallway in front of you seconds before. Next time you find yourself with him standing in front of you, the neurons for the SDR of him punching you in the face will become active.
How Learning Happens in Neurons
Learning, planning and anticipating simply consist of the subconscious strengthening and weakening of connections between neurons in the brain. Let’s say you’re walking down the hall, Jimmy confronts you and wallops you in the face. The neurons that represent both events connect to each other through synapses and strengthen their links by incrementing their permanence score by +.1 each time the two events follow each other in sequence. A permanence score represents the strength of a connection between two neurons. It runs between 0 and 1.
If tomorrow you meet Jimmy in the hallway and he doesn’t hit you, the synaptic connections between the two events get weakened. If you meet Jimmy 50 more times in the hallway and he never hits you, you’ll start predicting that Jimmy standing in front of you will lead to him not hitting you because the connections between him standing and him hitting you get weakened to the point where the neurons representing those two events disconnect from each other.
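Here's a toy Python sketch of that strengthening and weakening. The +0.1 increment and the 0-to-1 permanence range come from the text, and the 0.5 connect threshold follows the one used in part 1; the -0.1 decrement size is my assumption, since the article only specifies the increment:

```python
# A single permanence score (0..1) linking the "Jimmy in the hallway"
# pattern to the "punch in the face" pattern.
CONNECTED = 0.5   # below this, the link no longer predicts anything

def update_link(permanence, punch_followed):
    """+0.1 when the punch followed the encounter, -0.1 when it didn't
    (the decrement size is an assumption). Clamped to [0, 1] and rounded
    to keep the float values tidy."""
    delta = 0.1 if punch_followed else -0.1
    return round(min(1.0, max(0.0, permanence + delta)), 2)

def predicts_punch(permanence):
    return permanence >= CONNECTED

link = 0.5                       # learned after the first beating
link = update_link(link, True)   # punched again -> strengthens to 0.6
for _ in range(3):               # three peaceful encounters in a row
    link = update_link(link, False)
```

After three punch-free meetings the permanence has dropped to 0.3, below the connect threshold, so seeing Jimmy in the hallway no longer triggers the punch prediction.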
You can view your brain as a massive network of connections between neurons that represent things and relationships between things over time. Learning, understanding and consciousness emerge from trillions of these connections being made and unmade all the time. In fact, what we call the soul and the sum total of who you are as a free-willed intelligent agent are contained within these electrochemical connections.
How A Computer Could Learn Like Your Brain-Part 1
When Bill Gates was still CEO of Microsoft, he spoke at a middle school and one of the students asked him if it would be possible to build a company as successful as Microsoft again. He replied quickly and said “If you invent a breakthrough so computers can learn, that is worth 10 Microsofts.” Gates was clairvoyant enough to foresee the future, but didn’t know how to get there.
All computers, including the iPhone in your pocket, have been built on a paradigm of computation that has been around for about 70 years. From an architecture standpoint, little has changed since the first computer was built. These magical machines have made our lives so much smoother and more comfortable. From Amazon.com to smartphones, the computer revolution has made possible a world that people from eras past could only dream about.
But modern computers have one huge limitation; they can’t learn on their own. If you really want to know what singular quality separates us from the most powerful quantum computers, that’s it right there. After decades of attempting to get machines to be like people, we haven’t gotten very far. How come?
The way your brain works is extremely clever. And simple clever solutions often hide from us in plain view. But there is no reason we can’t unravel the deepest secrets of how the brain works. We know how our livers and hearts work and they’re made of cells just like our brains, so we should be able to develop a theory of how the brain works too.
There is, in fact, a lot of research in the world of neuroscience that is shedding light on exactly how this magical lump of flesh in our skulls learns about the world. If we ever want to have hope of building smart machines like cars that can learn about the world on their own, we have to copy what the brain does because it’s the only example of a learning machine that we have.
Onward we go into the secrets of our 3 lb universal learning machine.
But first, let’s ask ourselves a basic question: What is learning? What actually goes on inside the brain when your 3 year old learns a new language or when you learn how an engine works? It’s obviously not just rote memorization, because if you know how a car engine works, how is it that you’re also able to generalize about how an electric motor works?
Avoiding Bullies
Let’s try to understand learning with something we’re all familiar with: bullies. If we can understand the neuroscience behind what it takes to avoid a bully, we could build a machine that could theoretically do the same. Though here we’re going to focus on just the learning part and not the reacting part, which is motor output and a topic for another day.
If you’re the 98 lb weakling, there are two crucial things you need to have happen at the neurological level for you to avoid getting beat up over and over again.
1) Identify the bully (through any number of your senses)
2) Learn the time-based patterns that have led to you getting beat up
Learning What a Bully Looks Like
Imagine that you’re 11 years old and it’s your first day of middle school. As you saunter down the dimly lit hallway to social studies class, you see him: Jimmy, the toughest and most feared boy at school. He’s taller than you and outweighs you by a good 40 pounds. But it’s your first day of school, so you don’t know how fearsome Jimmy is and you obliviously walk past him. “Hey, dipshit,” he says as he walks over to you with a puffed-out chest and a smirk on his face. You can sort of sense that he’s up to no good, but he hasn’t done anything to you yet, so you give him the benefit of the doubt. “Give me your lunch money,” he yells. Your inner voice screams “Don’t back down,” so you say “Get your own lunch money, dude.” Pow! Next thing you know, his clenched fist meets your face. You hit the ground like a stack of dimes.
As you lie down on the ground with a bloody nose, you look up at Jimmy and a million pieces of data rush in through your senses, all of which will help you avoid Jimmy tomorrow.
So what actually happens in your brain when it’s registering Jimmy? Within a split second, you’ve learned that Jimmy is a bully. Even if Jimmy puts on glasses, a different colored shirt and a hat tomorrow, you’ll still know that it’s the Jimmy who sucker punched you. Yet, you also intuitively know that a guy who looks like Jimmy isn’t necessarily a bully. These are extremely challenging engineering problems. If you wanted to build a robot that could learn about Jimmy, you could try to use complex math and statistics, but it wouldn’t work very well because there are too many conditions in the world. We’d need to use a different technique: the one your own neocortex uses.
Your understanding of Jimmy happens through what he looks like, how he sounds and perhaps how he smells. To keep it relatively simple, let’s just focus on what he looks like. Imagine you’re seeing Jimmy walking up to you in that dim hallway for the first time. With his bald head, muscular physique and 1,000 mile stare, he cuts an imposing figure.
As he walks up to you, your cornea takes in the light, which makes its way through the vitreous and on to the retina, which transforms the image of Jimmy into electrical impulses which get transmitted to the optic nerve and then on to the “seeing” part of the brain, the visual cortex.
Jimmy is just an electrical pattern waiting to be registered in your brain at this point. This has to happen so that next time you see him, you can recognize him. One huge problem that’s plagued artificial intelligence since the beginning is that of how to create accurate internal representations of objects in the world. To build a machine that can recognize Jimmy the way you do, we’d have to mimic what happens in your visual cortex after it has received the electrical impulses from the eye. The visual cortex is just a large sheet of cells that spans both hemispheres at the back of the brain. The cells in the visual cortex are no different than the cells in other parts of the brain; they just happen to specialize in visual input.
Neurons conduct these electrical impulses in the brain. They are stacked on top of each other to form layers and the impulses get conducted up these layers to other regions of the brain.
Theoretically, neurons only have two states. They are either firing (on) or not (off). We could represent this in a machine brain with 1s (on) and 0s (off) in the form of a bit map vector. A column of neurons three neurons deep and ten neurons wide might look like this:
0000000000
0000000000
0000000000
So just imagine we’ve cut open your brain, sliced out a tiny portion of cortex of that size and tasked it with recognizing Jimmy. We’ll also imagine that it’s devoid of information (since it’s all 0s); it hasn’t learned anything yet about the world and is eager to do so. How would we make it learn?
Remember that at this point Jimmy is just a bunch of electricity. What we have to do is form an electrical representation of him which is unique to the visual pattern that is Jimmy and we also have to encode this information in such a way that even if Jimmy grows his hair out, puts on a sweater and wears glasses, we can still recognize that it’s him. Hell, this neural representation of Jimmy should be so good that if somebody made an oil painting rendering of Jimmy, we should be able to recognize that it’s him. It’s such a difficult engineering challenge that 60 years of AI research hasn’t solved it. So let’s look at the trick your brain uses.
Remembering Jimmy
So here’s the task: get that 10 neuron wide, 3 neuron deep layer of brain represented by the bit vector above to recognize the electrical impulses that form the representation of Jimmy. We can represent this electrical impulse using a binary vector too. Remember, this electrical impulse is coming in through the optic nerve and hitting the cells of the visual cortex. For argument’s sake, here is the vector that represents Jimmy as it lands on the visual cortex:
0 1 1 1 1 1 1 1 0 0 0 0 1 1 0 0 1 0 1 0 0 0 0 0 1 1 1 1 0 0 1
If you’re still confused about how a bunch of 1s and 0s can represent something like a person, just imagine that each of those represents some characteristic of Jimmy. So the first position represents his nose, the second his jacket, the third his ear, etc. In the real world it doesn’t quite happen like this, but this is good enough to get the point across.
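To make that idea concrete, here's a toy Python encoder along those lines. The feature names and their positions are made up for illustration, and as the text notes, a real cortex doesn't assign one clean feature per bit:

```python
# Toy encoder: each bit position stands for one named feature, as in the
# text (position 0 = nose, 1 = jacket, 2 = ear, ...). Purely illustrative.
FEATURES = ["nose", "jacket", "ear", "bald head", "smirk", "fist"]

def encode(observed):
    """Return a bit vector with a 1 wherever a feature was observed."""
    return [1 if f in observed else 0 for f in FEATURES]

jimmy_bits = encode({"nose", "bald head", "smirk"})
```

Here `jimmy_bits` comes out as [1, 0, 0, 1, 1, 0]: a 1 in each position whose feature was seen, a 0 everywhere else.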
So at this point the electrical impulse hits the 3-layer column of cells of the visual cortex that we have above. Each column acts as a single cell, and the bottom cell in each column has a dendrite that branches out from underneath it; the dendrite has a bunch of synapses on it that randomly connect to a certain subset of the electrical impulse of Jimmy. A dendrite is just a branched extension that comes out of a neuron to connect to other cells, and the synapses on it do the actual passing and receiving of electrochemical signals in the brain.
The dendrite sticking out from underneath each column randomly picks a subset of the electrical impulse, and each bit in that subset attaches itself to a synapse on that dendrite. Each synapse has something called a permanence score that runs between 0 and 1. The closer the score gets to 1, the stronger the connection between the synapse and the input bit. But if that score falls below a certain number, the synapse detaches itself from the input bit and no longer recognizes it. For our example, we’ll set the permanence score threshold for an attachment to 0.5. So if a score fell to 0.4, the synapse would detach itself from the input bit.
To make it easier for us visually, we’ll convert our 3 layer slice of brain to blue cells.
The Crucial Ingredient for General Intelligence
At this point the electrical impulse of Jimmy is connected to cell columns in the visual cortex and has to be converted to what’s called a sparse distributed representation (SDR). Without SDRs, general intelligence is likely not possible. I’ll explain how they work a bit later.
First, let’s take a look at the overlap scores in the diagram above. These are just the summations of the on (1) bits of Jimmy’s electrical representation that a column attaches to. To get the overlap scores, all we did was add up all the 1 bits that have permanence scores of .5 or above. The next thing that happens is that the columns compete with each other and inhibit their neighbors. Columns that activate first and fire the most strongly inhibit the others around them.
There is a value called Desired Local Activity, which says that any column with an overlap score higher than the xth highest column stays active, while the columns that fall under that are inhibited (in your brain that means they’re cold: no electrical impulse runs through them). For our example, let’s say we’ll inhibit a column if its overlap score isn’t higher than the 3rd-highest score, which is 2. So if a column’s score isn’t higher than 2, it gets shut down. That leaves just 3 eligible columns: the two on the left and the second one from the right. Since a column is either active or not, we can turn the columns into a binary vector. It would be the following:
1 1 0 0 0 0 0 0 1 0
So Jimmy is now represented by that vector. It’s an SDR. The reason the brain sparsifies the input this way isn’t for energy conservation, but so that it can ensure that if it sees two different SDRs which share bits, it can compare them and be certain that they are semantically related. Also, even if most of the bits (i.e., neurons) die, you can still match up the live ones and confirm a match. The more information about Jimmy you pack into each bit, the more meaning that bit has and the fewer of them you need.
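The overlap-and-inhibit step can be sketched in Python. The per-column overlap scores below are hypothetical stand-ins for the values in the diagram (which isn't reproduced here), and I'm reading "3rd-highest score" as the 3rd-highest distinct value; the keep-only-the-winners rule itself is the one from the text:

```python
def active_columns(overlaps, desired_local_activity=3):
    """Inhibition sketch: the k-th highest distinct overlap score is the
    cutoff, and only columns strictly above it stay active (the rest are
    inhibited). A toy reading of the Desired Local Activity rule."""
    cutoff = sorted(set(overlaps), reverse=True)[desired_local_activity - 1]
    return [1 if score > cutoff else 0 for score in overlaps]

# Hypothetical overlap scores, one per column; the 3rd-highest distinct
# value is 2, so only scores above 2 survive.
overlaps = [3, 4, 0, 1, 2, 1, 0, 0, 3, 2]
sdr = active_columns(overlaps)
```

With those scores, `sdr` comes out as the same vector as above: the two left-most columns and the second from the right are active, and everything else is silenced.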
This is a crucial point because one of the challenges any living organism faces is the need to recognize things in the world. Doing basic things like getting into a car on your own wouldn’t be possible without SDRs, because as you walk towards your car, a different image of your car enters your eyes every time your position around it shifts (the lighting, shadows, etc. all change). So how do you know it’s the same car?
The only way to solve that problem is to have a form of intelligence that can detect, with high mathematical precision, that an object seen at two different points in time with two different representations contains enough similar semantic features to be recognized as the same object. If you see Jimmy again tomorrow and he is in the same spot and looks the same, his SDR will be identical or extremely similar to yesterday’s.
But what if you run into Jimmy in a dark room at a Halloween party when he’s wearing a costume? You should still be able to recognize him unless his face is concealed or very heavily caked with makeup. Let’s say you see him at the party and you recognize him with a pirate outfit on. That means he’s transmitting a different visual pattern on to your eyes which is creating a slightly different SDR of him. Maybe something like this:
Notice now that the set of columns with overlap scores that meet the required threshold has changed slightly. It’s now the second from the left and the two on the right, so the SDR for Jimmy at the Halloween party is now:
0 1 0 0 0 0 0 0 1 1
The way you can still tell that he’s the same Jimmy is because pirate Jimmy shares bits with hallway Jimmy. They share two bits in common.
1 1 0 0 0 0 0 0 1 0
0 1 0 0 0 0 0 0 1 1
And remember, each of these bits is packed with semantic information so if two bits on two different SDRs match up, you can be sure that they represent related objects.
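Using the two vectors from the text, that comparison is a few lines of Python. The match threshold of 2 shared bits is illustrative, not from the article; real systems would tune it:

```python
# Compare two SDRs by counting shared active bits. These are the
# vectors from the text: hallway Jimmy and Halloween-party Jimmy.
hallway = [1, 1, 0, 0, 0, 0, 0, 0, 1, 0]
pirate  = [0, 1, 0, 0, 0, 0, 0, 0, 1, 1]

def shared_bits(a, b):
    """Number of positions where both SDRs have an active (1) bit."""
    return sum(x & y for x, y in zip(a, b))

def same_object(a, b, match_threshold=2):
    # Illustrative threshold: enough shared semantic bits -> same object.
    return shared_bits(a, b) >= match_threshold

overlap = shared_bits(hallway, pirate)
```

`overlap` comes out as 2, the two bits the text says pirate Jimmy and hallway Jimmy share, so `same_object` recognizes him through the costume.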
Learning
There’s another crucial thing that’s happened. Take a look at the permanence scores in the two diagrams. For the left-most column we’ve gone from (.2, .7, .6, .5) to (.1, .8, .5, .4). Every time a synapse that is connected to the electrical input of Jimmy sees a 1 bit, its score gets incremented by 0.1. Each time it sees a 0 bit, it gets decremented by 0.1. Same with the right-most column: it went from (.4, .4, .7) to (.5, .5, .8). If a score goes below .4, the synapse disengages and forgets about that pattern. This is how your brain learns spatial patterns: it reinforces patterns that it sees a lot and forgets patterns that it doesn’t. If Jimmy wore a pirate outfit every day, you would begin associating him with pirate outfits. If he stops wearing pirate outfits for years, your brain might not even remember that you once saw him at a Halloween party in one.
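That update rule can be sketched in Python. The starting scores are the left-most column's values from the diagrams; which input bit (0 or 1) each synapse saw is inferred from whether its score rose or fell:

```python
def update_permanences(perms, input_bits, floor=0.4):
    """+0.1 for a synapse whose input bit is 1, -0.1 for a 0 bit, as the
    text describes. Synapses that fall below `floor` disconnect and forget
    the pattern. Rounded to 2 places to keep the float values tidy."""
    updated = [round(p + (0.1 if bit else -0.1), 2)
               for p, bit in zip(perms, input_bits)]
    connected = [p >= floor for p in updated]
    return updated, connected

# Left-most column from the diagrams: (.2, .7, .6, .5). The second
# synapse saw a 1 bit (score rose); the others saw 0 bits (scores fell).
perms, connected = update_permanences([0.2, 0.7, 0.6, 0.5], [0, 1, 0, 0])
```

The result matches the article's numbers: the scores become (.1, .8, .5, .4), and the first synapse, now below the .4 floor, disengages while the other three stay connected.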
The Algorithm Behind Human Male Status
Male competition in humans tends to be direct and physical. One of the reasons we know that there are strong biological roots for this is because we see the same behaviors in our primate relatives. Male baboons, for example, are constantly jockeying for power; always keeping track of who's up and who's down because status in the male baboon hierarchy has significant implications for mating opportunities. The highest status male often gets to mate with all the females, leaving none for the other guys.
Human males historically have not been exempt from this dynamic either. Genetic studies reveal that only about 40% of men throughout history have reproduced versus around 80% of women.
The reproductive stakes are high, which is why all male primates are so obsessed with social status. But of course, status begets competition and competition begets confrontation (direct or indirect). However, confrontation many times requires risking life and limb, along with using up precious energy that could be used elsewhere, so from an individual male’s standpoint, it doesn’t make sense to compete with every other male. It’s better instead to watch two other guys fight it out and deduce from that where you stand in the social hierarchy. That tendency ensures that actual physical confrontations between males remain relatively rare, yet everybody still knows where they stand.
But doing that requires the ability to compute logic, which even monkeys can do, believe it or not. All you need to do is watch a few interactions between other males to know where you stand.
Male primates use the simple logic of transitive inference to know where they stand in the status hierarchy. It goes something like this:
If A is superior to B and B is superior to C, then A is superior to C.
So if Jim kicked your ass and Ricky kicked Jim’s ass, then Ricky’s better than you and you’d better stay out of his way. It can be applied to any situation whether we’re talking about a business boardroom or a game of basketball. And because we humans use language, the news about who beat who ripples through the neighborhood quickly.
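That rule is simple enough to write down directly. Here's a toy Python sketch of transitive inference over observed fights, using the names from the text (it assumes the observed wins contain no cycles):

```python
def is_superior(beats, a, b):
    """True if `a` outranks `b`, either directly or through a chain of
    observed wins (transitive inference). Assumes no cycles in `beats`."""
    if (a, b) in beats:
        return True
    # Follow chains: a beat someone who in turn outranks b.
    return any(x == a and is_superior(beats, y, b) for x, y in beats)

# Observed fights: Jim beat you, Ricky beat Jim.
beats = {("Jim", "you"), ("Ricky", "Jim")}
ricky_outranks_you = is_superior(beats, "Ricky", "you")
```

Without ever fighting Ricky yourself, `ricky_outranks_you` comes out True, so you know to stay out of his way.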
This is one reason why, in the small outlaw towns of the Wild West, few things could get you six feet into the ground faster than calling a guy a coward. Calling a guy out like that threatens him because it can lead other men to perceive him as weak and to challenge his status. The number one cause of male-on-male homicide around the world is trivial altercations: one guy disses another, they argue over a game of pickup basketball, etc. They may seem trivial on the surface, but other guys (and girls) are watching, and nothing less than resources and reproductive potential are at stake.
Big Data Is A Big Fat Waste
Big Data is one of the most over-hyped terms of the last decade. Any time a deep technological trend becomes a super meme the way Big Data has, it usually means that it’s been hijacked by people who don’t understand it. Journalists, bloggers, business consultants and other non-technical types wrested control of Big Data, pumped it full of jargon and turned it into a big fat bubble. Complex things become trendy when large numbers of people who don’t understand the technical details act like they do. Computer science and tech startups started becoming trendy over the last several years because of the meteoric rise of media startups and movies like The Social Network.
You’ll have a bunch of guys who majored in English Lit watch The Social Network and say “Gee, wouldn’t it be cool if we started the next Facebook? How hard could it be?” But it’s going to be a temporary trend as starry-eyed dreamers discover that computer science, programming, and tech startups are actually hard.

The fact that nobody seems to understand what Big Data is means that our explanations are too complex, which means they’re probably wrong. The unifying idea behind a complex phenomenon tends to be dead simple but takes a lot of rigor to discover, like Darwin’s theory of natural selection. So where can we find a simple unifying principle that can help us all understand what Big Data really is? How about our brain? It’s a 3 lb lump of flesh, water and electricity whose purpose is to find patterns in the world. It doesn’t use complicated math; it just finds relevant patterns and forgets everything else.

So here is the simple unifying principle that explains Big Data: hidden within data are patterns; find them and forget everything else. The next time you’re at a cocktail party and some lampshade Larry with a PhD tries to impress you with talk about exabytes and databases, just tell him, “It’s nice that we have all this data out there, but most of it is useless. What really matters is finding the right patterns in the data and doing something with them.”

Solving the Big Data problem boils down to three simple things:

1) Analyze data in real time

2) Find relevant patterns in the data

3) Act on those patterns

Tackling Big Data will involve changing our view of computation, shifting away from a computer science-based model to a more neuroscience-based one.
Why Almost Everyone's Doing Business Intelligence And Analytics Wrong
Modern business intelligence and analytics are broken. Most people in the business and data science world are blind to it because they have no context for thinking about what analytics should be. That’s mostly because our current model for understanding data is heavily influenced by computer science, a model that, as I explore below, poses a number of problems for making data truly useful.
A good starting point for getting perspective on why things are this way is Mary Shelley’s classic novel, Frankenstein.
Frankenstein’s monster is a grotesque creature patched together from the body parts of corpses and brought to life with a mélange of electricity, chemistry and alchemy. Today we acknowledge how ridiculous it is to assume that you could create a living, breathing person through chemistry and electricity alone. But we can’t blame Mary Shelley for getting caught up in this fantasy. The great thinkers of her time were people like the physicist Michael Faraday and Samuel Brown, inventor of an early internal combustion engine. Their ideas and creations had a powerful grip on the worldviews of the literati. Because of them, people back then had a much more electro-chemical view of the world than we do now, so it was natural for them to believe that we could build a person purely on those principles.
Today, our thinking is based heavily on a computational model of the world influenced by the work of people like computer science pioneer Alan Turing and mathematician John Von Neumann. Just look at the characters in our most popular science fiction fantasies: The Terminator, RoboCop, Skynet, etc. They’re all based on a computational view of the world. But attempting to build a Frankenstein’s monster on computational principles is just as futile as building one on purely electro-chemical principles. Witness the complete failure of AI to generate human-like intelligence over the past 60 years.
The lesson in all this is that intellectual paradigms are powerful. Until Copernicus came along, nobody bothered to question the assumption that the Earth was the center of the solar system.
Nicolaus Copernicus
And until Darwin came along, nobody bothered to think that maybe all species are related. Intellectual paradigms tend to influence the way we think about technology and often blind us to the right solutions to problems.
Analytics the Wrong Way
Modern business intelligence and analytics are good examples. The way analytics is done today is based on the Von Neumann model of digital computer architecture. It’s the model that all computers have been based on since they were first built.
It posits that a computation machine is a device where a Central Processing Unit (CPU) fetches data from some kind of memory storage, performs operations on it and writes it back to storage.
John Von Neumann
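As a minimal sketch (a toy illustration, not any real instruction set), the fetch-operate-write cycle described above looks something like this in Python:

```python
# Toy sketch of the Von Neumann cycle: a CPU repeatedly fetches data
# from memory, operates on it, and writes the result back to memory.
memory = {"a": 2, "b": 3, "result": None}  # the "storage"

def cpu_step(op, src1, src2, dest):
    x = memory[src1]              # fetch operands from storage
    y = memory[src2]
    if op == "add":               # operate in the CPU
        memory[dest] = x + y      # write the result back to storage
    return memory[dest]

cpu_step("add", "a", "b", "result")
print(memory["result"])  # 5
```

Every mainstream computer, from your laptop to a data-center server, is an elaboration of this loop.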
All analytics today works more or less like the following: Data is stored in a database somewhere and some computer code retrieves it and spits it out in a form that’s easier for people to interpret like charts and graphs.
But there are two big problems with this model:
1) Because data is stored in a database, by the time it’s used it is often too dated to be useful to people.
2) The computer programs that analyze the data can’t themselves make predictions about the future based on the data. We still need humans to do that.
So how do we solve these problems? Well, first we need to ask: where can we find an example of data analytics being done well and efficiently? Nature usually concocts ingenious and simple solutions to complex problems. The Wright Brothers looked to birds to inspire the design of their airplanes. Perhaps we could find an example in nature to inspire our analytics redesign.
How about brains? After all, your brain is fundamentally a streaming data engine. What we need is a new data analysis paradigm that models the brain as closely as possible, because the human brain is the most efficient data analysis engine around. That means the computational layer that analyzes data for humans has to work as much like the brain as possible. Anything less won’t cut it. Let’s see what lessons we can learn from the brain.
Towards Cognitive Analytics
I think cognitive analytics is an appropriate term for this new kind of analytics because it would operate on brain-inspired principles. This, by the way, is the only way to tackle Big Data. And it would be based on three principles:
1) Analyze data in real time
2) Create automated models of the world in real time
3) Predict the future
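The three principles above can be sketched in a few lines of Python (the event names are invented for illustration): each event is analyzed the moment it arrives, the model updates itself, and a prediction comes out the other end, with no database in between.

```python
# A tiny streaming model: no database, no batch job. Each incoming
# event updates a running model, which immediately yields a prediction.
from collections import Counter

model = Counter()  # self-updating model of event frequencies

def on_event(event):
    model[event] += 1                  # update the model in real time
    return model.most_common(1)[0][0]  # predict the likeliest event

stream = ["engine_idle", "backup_lights", "engine_idle", "engine_idle"]
for event in stream:                   # analyze data as it streams in
    prediction = on_event(event)

print(prediction)  # engine_idle
```

The point of the sketch is the shape of the loop, not the (deliberately trivial) model inside it.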
A good way to understand why it needs to be this way is with a simple thought experiment. Imagine you’re walking through a parking lot. As you walk along the row of parked cars, you hear a car turn on. You don’t see it yet, but you hear the idling engine. You anticipate the car. Then you spot the white backup lights and you immediately stop in your tracks to avoid getting hit. There’s a lot going on in that 10 second scene that can teach us what next generation business intelligence needs to look like.
Ditch the Database
The real world changes by the second. As you approached that car in the lot, you heard it first and then you saw its lights. Over the course of those few seconds, millions of pieces of input entered your brain through your eyes, your ears and probably your nose which told you that there was a car there. Could you imagine if someone had to store all that data in a database somewhere and then you had to hook your brain up to it in order to use the data? Splat! By the time your brain got it all, it would be too late.
To make realistic sense of the world through data, our computers and machines will need real-time, always-streaming data. It’s the only way to make sense of the constantly changing nature of the world. The problem with the way we think about Big Data and analytics today is that we often ignore the time aspect of data. The older data gets, the less useful it is, especially if we have to use it to inform behavior quickly and accurately.
Forget Data Scientists
Another principle that the analytics of the future must incorporate is self-learning. Our programs will need the ability to constantly create real-time models of the world on their own. Let me illustrate what I mean by referring back to our little thought experiment. Can you think back to a moment in your childhood when you first learned that cars can be dangerous when they back out of parking lots? Probably not. But you learned it somehow, right? It sure wasn’t programmed into your head by anyone. Even though you may have never been hit by a car yourself, your brain is smart enough to generalize about what might happen if a car backs out into you based on what you may have seen in a movie or heard from someone else. It created its own model of that situation based on second-hand information.
The reason you know so much about the world and can react so quickly to new situations is that your brain is constantly creating models of the world based on what it senses, and it then compares these mental models to what you are currently sensing. That’s the secret to how your brain figures out that a backing-up car is dangerous despite the fact that you’ve never been hit by one. Could you imagine if for every new situation in your life, some guy with a PhD had to create a model of it for you to store in memory? You wouldn’t be able to react in time to anything. And unfortunately, that’s the model we’re using today.
Analysis and action are impotent outside the context of time, which is why human data analysts and data scientists can’t be a part of the business intelligence and analytics of the future. We need systems that learn on their own.
Predict the Future
The most interesting aspect of cognitive analytics is going to be the automation of prediction. Rather than people making data-based predictions about what may happen, software will do it.
The essence of intelligence is the ability to predict the future in order to influence it. You aren’t aware of this most of the time, but a large part of your waking life involves making predictions, most of them subconscious. You’re constantly predicting what you’re going to hear, what you’ll feel, or what word you’ll see at the end of this……..sentence. Or that if you see backup lights and hear an idling engine in a parking lot, a car will be backing out soon. Your brain is a streaming data prediction engine.
For data analytics to really be useful, it will have to be built on the same memory-prediction framework that our brain uses.
How do we make this a reality? We do it by developing computer programs that create models of the world out of time-based data patterns. Then we use those to make predictions. This will demand a shift from a computer science to a neuroscience-based model of computation.
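As a sketch of what “models of the world out of time-based data patterns” could mean in code (the events here are hypothetical), a first-order sequence model learns what tends to follow what and predicts accordingly:

```python
# Learn time-based patterns by counting which observation follows which,
# then predict the next observation from the current one.
from collections import defaultdict

follows = defaultdict(lambda: defaultdict(int))

def learn(sequence):
    for current, nxt in zip(sequence, sequence[1:]):
        follows[current][nxt] += 1     # remember what followed what

def predict_next(current):
    candidates = follows[current]
    return max(candidates, key=candidates.get) if candidates else None

# Train on a repeating temporal pattern, like sounds in a parking lot.
learn(["engine_starts", "backup_lights", "car_moves"] * 5)
print(predict_next("backup_lights"))  # car_moves
```

Nobody hand-coded the rule “backup lights mean a moving car”; the model extracted it from the temporal pattern on its own, which is the spirit of the argument above.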
The companies that figure this out will be worth billions.
Why Computer Science Is The Biggest Obstacle To True Artificial Intelligence
Do you remember Rosie the robot maid from the Jetsons? That cartoon and other science fiction shows like it gave many people hope that some day we’d have robots doing our laundry, washing our floors and tending to a hundred other things in our lives. But that future never came.
Even if I didn’t need a Rosie, just having some machine zipping through my house that could do all the things Rosie did without bumping into furniture would blow my mind. But after decades of the sharpest minds in the world attacking the problem of recreating human-like intelligence, we still don’t have anything that comes close to doing what a house cat can.
Whatever happened to our collective dream of Rosie? Well, it turns out that creating Rosie is more complex than we anticipated. Being a maid may seem simple to you, but cleaning tables, folding laundry, and other things that are easy for us humans are actually extremely challenging engineering problems.
Folding laundry and cleaning the coffee table are challenging because they require that you know an awful lot about the world. I know that may seem weird because everyday chores are so easy for you to do, but your brain uses a lot of subconscious knowledge about the world that you aren’t even aware of to solve everyday problems. Don’t be fooled. Everything a maid does is astoundingly difficult. We still can’t even build computers that can distinguish between a cat and a dog, let alone go into any house in the world and clean it without bumping into furniture.
Rosie is surely a thing of fiction. Or is she?...
Come with me on a short journey into the depths of the human brain, to discover one of the crucial ingredients of human intelligence and a key building block for human-like artificial intelligence.
Is That My Table?
The big problem with building a real-life Rosie, and one that’s plagued artificial intelligence since its inception, is knowledge representation. That is, how do you learn about, recognize and identify objects in the world? It sounds trivial, but it’s a highly complex problem that has proven impossible to solve with logic and computation alone.
Let’s do a simple exercise to illustrate one very salient aspect of this problem. Get up from wherever it is you’re sitting and walk towards the table or some other piece of furniture in the room.
That was easy right? Well, for you.
What you don’t consciously realize is that at every micro-moment in time that you were strolling towards the table, your eye was seeing a different image of the table. You didn’t consciously notice it, but as you approached the table, its angles kept changing, the light hitting it changed, the positioning of the other objects surrounding it changed in relation to it and you. Every second that you were walking towards the table, a different image of it entered your eyes.
So how do you know it’s the same table?
“Get off your damn levitating high horse,” you say. “Of course it’s the same table you fool. How could it not be?!” But there you are again taking your brain for granted. It’s performing a lot of magic under the hood. Something in your brain had to patch together the different configurations of the table through time in order to deduce that all those different scenes were coming from the same source. In fact, your brain has to do that with everything you sense whether we’re talking about sight, smell, touch or hearing.
The issue isn’t that someone might have replaced the table with another one. It’s that the challenge of identifying an object as a single object through time absolutely has to be dealt with; otherwise, a perception of the world that is consistent with reality is not possible.
If you had to program a bunch of rules into Rosie to accomplish this, her program would be unimaginably long and complicated because it would have to account for every possible changing condition in the world around her. Impossible. The statistical machine learning route is better, but not by much: not only would you be using outdated data to train her, you’d also have to keep retraining Rosie to account for everything that might happen as she approached the table.
And that’s just to move towards a damn table. What about teaching Rosie to catch a baseball or run a company? Hell, maybe she has secret ambitions to take over George Jetson’s company, Spacely Space Sprockets.
It would be impossible to teach her how to do this using programming logic and mathematics alone.
Robots are so brittle because they’re programmed to do one thing well. Any deviation stumps them. Google’s self-driving cars can’t also learn chess, and IBM’s Watson can’t learn Italian. But with a bit of effort, you could learn to do all those things, because you have general intelligence.
So what is this thing we call general intelligence? And how would we give it to a computer? One clue to all of this is to look at our own brains. In fact, I would argue that the only way to create true AI is to model it on the human brain. The closer we follow biology and the further we move away from traditional computer science and Von Neumann thinking, the better off we’ll be.
Advantage Brain
As with a digital computer, we can represent knowledge stored in your brain as bits. You know, the famous 1s and 0s. But the bit assignments in your brain are very different from the ones in a computer. Your computer uses bits to arbitrarily assign a code that represents data, whether numbers, text, image pixels, or program code. For example, in your computer, the letter S is represented by the 8-bit binary code 01010011. If you asked me what that second 1 stands for, I’d look at you funny because it doesn’t stand for anything. It’s devoid of semantic meaning. The entire code is just an arbitrary assignment. It could just as well have been 01100110. But a bunch of people got together one day and agreed it should be 01010011.
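You can verify the assignment yourself in one line of Python; the point is that the bit pattern is pure convention:

```python
# 'S' has character code 83, which is 01010011 in 8-bit binary.
# None of the individual bits mean anything on their own.
code = format(ord("S"), "08b")
print(code)  # 01010011
```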
But your brain doesn’t have a little committee of Diet Coke-drinking, Cheetos breath-having geeks deciding what it needs to know. It learns everything on its own.
The bits in your brain are actually nothing more than huge collections of neurons arranged in columns. Each neuron can represent a bit and it stores semantic meaning. When you experience the world through your senses, the input that comes in through your eyes, ears, nose and touch gets stored in collections of neurons called sparse distributed representations.
If you were to take a slice of your brain and look at it under a microscope, you’d see a column that looks something like this:
The neurons (the black things) store information about the world. Active neurons are 1 bits and inactive neurons are 0 bits. The concept of a dog, for argument’s sake, could be 0101, where the 1 in the second position stands for fur and the 1 in the fourth position stands for four legs. Of course, the sparse distributed representations in your brain are huge (thousands of bits long). The way your brain figures out whether two objects are related to each other is by comparing their sparse distributed representations to see where their 1s match up. Wherever they match up is where they share semantic meaning.
Bird watching
Ok, we’re on vacation in the Poconos to do some bird watching and doggone it if it isn’t our lucky day because a beautiful egret decides to fish right in front of us. Like two lasers, our eyes lock in on the fowl. Let’s zoom in on 3 successive micro-moments in time and say that these three images of the bird hit our eyes as it fishes.
Notice that we’re dealing with the same problem Rosie had to deal with as she strolled towards the table. How do we know it’s the same bird in all three moments?
Let’s apply our newfound knowledge of our brains to see how it figures out that all three instances of the bird are actually the same bird.
Here are the different sparse distributed representations for each image of the bird.
00000000000000000000100011110000010000000000011100
00000000000000000000100011110000010000000000011000
00000000000000000000000011110000010000000000011000
Remember that the 1 bits hold semantic meaning. In this case, they might hold information about qualities of the bird and its environment such as the color of its feathers, the lighting, the water and a million other things. To understand that it’s looking at the same bird at each instant, all your brain has to do is compare the different sparse distributed representations to see where the 1s match up. The more their 1s match up, the more semantically related two objects are to each other. Let’s compare the three sparse distributed representations to see where their 1s match up.
00000000000000000000100011110000010000000000011100
00000000000000000000100011110000010000000000011000
00000000000000000000000011110000010000000000011000
The basic idea is that if the 1s match up in enough places, then we can be certain that this is the same bird in all three situations. This is the principle behind how you’re able to recognize your friend’s makeup-laden face in a dimly lit room at a Halloween party, or how you’re able to recognize that Snoopy is a dog. In fact, it’s how you recognize everything around you, whether through sight, touch or smell.
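As a sketch, here is that comparison done in code on the three representations above: count the positions where both strings have an active 1 bit.

```python
# The three sparse distributed representations of the bird from above.
a = "00000000000000000000100011110000010000000000011100"
b = "00000000000000000000100011110000010000000000011000"
c = "00000000000000000000000011110000010000000000011000"

def overlap(x, y):
    # Count positions where both representations have an active (1) bit.
    return sum(1 for bx, by in zip(x, y) if bx == "1" and by == "1")

print(overlap(a, b), overlap(a, c), overlap(b, c))  # 8 7 7
```

All three pairs share seven or eight active bits, so the three moments are judged to be the same bird.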
Without this way of representing things in the world, generalizing wouldn’t be possible.
Now mind you, these representations are firing off in your neurons billions of times every waking second of your life. Your neocortex alone has about 30 billion neurons, and they form roughly 300 trillion synaptic connections to keep you intelligently aware of the world and out of trouble.
Let’s get back to Rosie. For her to clean your house like a real maid, she would have to use sparse distributed representations to identify objects well enough to avoid bumping into furniture.
Forget programming. All the clues we need are in our brains.
Gary Vaynerchuk And The Perils Of Ignoring Data
I like social media guru Gary Vaynerchuk. He seems like he’d be a fun guy to have a beer with. His passion is infectious, but I think his overly feely, personality-based approach to social media is a bit problematic because of the example it sets for how companies ought to approach social media. His techniques might have worked well for his personal brand, but personality doesn’t scale. When it comes to advising the business world, Vaynerchuk and social media people like him are handicapped by the fact that they don’t have technical backgrounds. There’s an algorithmic way of thinking about the world that he probably doesn’t have access to, which is required to notice the data patterns that underlie human activity.

Patterns

The world is structured on patterns. Mathematics thrives on them and derives its beauty and elegance from them. Unraveling these patterns is the only way to truly understand the world at a deep level. One way to access them is through data analysis. But there are many more mundane places where you can see patterns. Take a look around the room you’re in right now. There’s probably a table in it. Look at it. Ignore the objects on it for a second. Just focus on the edges of the table. Now look at a corner of the room you’re in. That is an edge too. And the room is in a house or building which also has edges. If you were up in a plane, you might spot the block the building is on. That also has edges.

Edges are a very common feature of the world around us. And they tend to repeat themselves over and over. The edges-within-edges-within-edges pattern is a recursive one. Recursion happens when there are repeating patterns of objects within themselves, like when two mirrors are placed parallel to each other and create an infinite series of images of themselves. Patterns are ubiquitous in the world around us, but we often don’t notice them because we aren’t looking.
(Image source: http://bit.ly/1g5ImAy)

The language that comes out of your mouth when you speak is also recursive…phrases within phrases within phrases. Why is so much around us structured this way? Because it’s easier to build things by reusing. Whoever put the whole shebang together was clever in discovering this. All this stuff is normally hidden to you. The only reason you’re thinking about it now is because I pointed it out. Most of us would have a much better understanding of the world if we thought about patterns more. It takes a trained, attentive mind to recognize them.

Data patterns and social media

One great thing about patterns is that they often contain meaning that we can use to discover hidden truths about the world. One area that’s rich with useful patterns is the reams of data being produced around us by things like machine sensors, the Internet, etc. It’s all out there waiting for us to mine it. I’m going to go out on a limb and say that the only people qualified to give advice on social media are people who understand data, because that’s where the meaningful patterns that can inform strategy lie. The next generation of social enterprise software companies are going to have to be heavily focused on mining data to find useful insights. It’s the only way to do useful business intelligence and consequently conquer social media. But mining for insight is only the first part. You then have to use it to make predictions that people can use to guide strategy. We have to shift away from the paradigm of “all data is good” because most of it is trash.
A Few Layers Of Neocortex
A few extra layers of neocortex is what keeps you on the right side of the cage bars.
A chimpanzee brain
A human brain
The Link Between Social Media and Trust
How much is a good reputation worth? Apparently a man’s life, according to the epitaph on an old Colorado tombstone. It reads:
“He called Bill Smith a Liar”
Back then, if you lived in a small town, having a reputation for dishonesty could cause you to quickly fall out of favor with people and lose business. And since many people lived outside the reaches of the law, they often took the matter of defending their reputation into their own hands.
We tend to think we’re more civilized these days, but guarding one’s reputation is still alive and well. Trust itself is built right into the fabric of human nature because it’s intricately tied to reputation. And reputation has a monetary value that people will pay for in money or blood.
There’s a valuable lesson in those hair-trigger retaliations against insults, because the motivation to protect one’s reputation makes up a large part of the impetus behind the global adoption of social media in the business world.
(Image source: http://bit.ly/1jWyobM)
Trust and Social Media
The clever companies have figured out the intricate relationship between social media engagement, reputation, trust and profit. A company is just a group of people whose bottom line is intricately tied to their reputation. It just so happens that social media is becoming the hub for reputation management because that’s where information gets disseminated and opinions get shaped the fastest now.
The powerful desire that companies have to control their reputations is based on the same psychological principle that compelled Bill Smith to violently guard his. A personal brand is so valuable that one can’t risk having it defined by others.
If you run a mafia racket, you can act like Bill Smith, but the CEOs of legitimate companies obviously can’t go around shooting people who insult them. The way to deal with public opinion is to control the conversation before it starts: by disseminating information that encourages people to say things about you that make other people like and trust you. It’s basic persuasion, and companies don’t really have a choice about whether to get involved, because the default in people’s minds is to not trust other people or companies.
It’s like being dropped onto a moving treadmill: the default is broken bones unless you start moving your legs. When there’s no information to form a good opinion about you, people will automatically assume you’re untrustworthy. The belt is already spinning, so you have to start planting the seeds of information that get people spreading good things about you, or you’re inevitably going to fall and crack some bones.
Another good way to think about all this is to think about how information spreads in a small town.
Let’s get back to Bill Smith. My guess is he probably wasn’t a very nice guy. Perhaps a swindler. Who knows? But if he was a swindler, what would have been the best way for him to go about conning enough people to retire comfortably? Put yourself in his shoes. You’ve just swindled a guy out of several hundred dollars. The first thing he’s going to do is tattle on you. But you’ve already skipped over to the next town to find your next uninformed, gullible victim. The two crucial factors in your likelihood of getting caught are the size of the population and how fast information travels. The ideal town for you is one where there are a small number of people who are geographically spread out and don’t have access to technology to spread the word about you. The worst situation for you is one where there are many people who live close to each other who can exchange information very quickly. That’s when everybody gets the memo that reads, “Watch out for Bill Smith. He’s an untrustworthy fellow.”
The virtual world of social media today offers that thick web of connectivity that would make it a nightmare for a guy like Bill Smith. It’s where reputations are built and destroyed in the blink of an eye. In this kind of a world, every company is a Bill Smith. The trust level the public has with you waxes and wanes with the easy flow of often unsubstantiated information. To be on the right side of people’s opinion filters, you want to make sure that the public has a positive perception of you. And because the medium of information flow is the Internet, you have to control the conversation to the best of your ability by planting the seeds of positive opinion through engaging content. If people say good things about you, other people will say good things about you. People are like sheep when it comes to forming opinions. That’s what the big deal is with social media. It’s not about the inane chatter. It’s about that insatiable drive humans have to gossip about other people in order to assess their character and avoid getting swindled. You can either control the content of the gossip or let others do it for you.
Predictive Analytics Is The Future of Social Media For Business
I was chatting with the social media manager of a large bank recently and asked him what he thought the future of social media for business is going to be. He said predictive analytics.
My thoughts immediately turned to my elevator ride that morning. As soon as I hopped on, a mid-fifties guy with a cheap polyester suit on reached for the buttons and said “Which…” Before he could complete his question, I shot back, “Ground floor please”. As I watched his fingers jab at the buttons, my inner voice asked, “How did I know he was going to ask me what floor?” The answer is obvious to you, of course. Duh! Everyone knows when you’re on an elevator, the person closest to the buttons is supposed to ask everyone else what floor they’re going to.
But it’s actually a profound question that cuts to the heart of what intelligence is. By anticipating his question, I predicted the near-term future and reacted to influence it. That is the essence of intelligent behavior and an extremely challenging problem for machines. Without the ability to predict the future and act to change it, we can’t really solve hard problems.
That takes me back to the bank social media manager and his prognostication about predictive analytics. What he was implying was prescient: social media will never pay off for businesses until they can:
- Use it to predict what their consumers want
- React to those predictions
To make this a reality, we need technology that systematically extracts relevant patterns from the data around us the way humans do. The big social enterprise software companies like Sprout Social and Sprinklr aren’t doing this because they don’t know how.
Predictive analytics is the holy grail of social media. And here’s how we solve it:
Find patterns in data
Imagine you’re at Reagan National, preparing for your flight to Newark. You look at the departure board. It’s chock full of data, most of it irrelevant to you. Extracting the right data points and using them to anticipate what gate your plane is going to be at is what’s going to get your ass on the plane and in New York for that make-or-break meeting.
How useful would it be to you if your brain stored every single departure time on the board? Modern analytics works kind of like that. Collect everything. Then spit it out in graphs and charts regardless of whether it’s useful or not. “But it looks pretty,” you say. Bullshit! Most data is white noise, less than useless and a detriment, in fact, because it distracts from what’s useful. It’s the main reason why the Big Data revolution has largely been a failure.
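A minimal sketch of the alternative (the flight data here is invented for illustration): extract the one relevant pattern and forget the rest of the board.

```python
# Out of a board full of departures, keep only what matters to you.
departures = [
    {"flight": "UA101", "dest": "Chicago", "gate": "B4"},
    {"flight": "DL220", "dest": "Newark",  "gate": "C7"},
    {"flight": "AA330", "dest": "Boston",  "gate": "A2"},
]

# Filter for the relevant pattern; everything else is white noise.
relevant = [d for d in departures if d["dest"] == "Newark"]
print(relevant[0]["gate"])  # C7
```

Your brain does the equivalent of that filter automatically; today’s analytics tools mostly don’t.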
The world is built on patterns, and behavior becomes intelligent only when it takes advantage of patterns.
(Image source: http://bit.ly/1iC9zyd)
Predict the future based on those patterns
Let’s say you’re driving down the highway and you see a cloud of smoke in the distance. What’s the first thought that comes to mind?...that there’s a burning car off in the distance. And what’s the first thing you do?...slow down, of course. That’s a very simple example of usefully anticipating the future, and we make hundreds of these kinds of predictions every day because our lives literally depend on it.
To stay alive, companies also have to make all types of predictions about what’s going to happen. And with the deluge of data available today, it’s easier than ever to do so. However, companies have to be very discerning about what they look at. Just as your brain was able to block out all the irrelevant sensory data coming through your eyes in order to focus on just the billowing smoke, companies need to extract the relevant data points in order to find the right patterns that can help inform their behavior.
Act on the predictions
So there you have it. Find relevant patterns in data, make predictions and finally act on them to get results. What I’ve just described is a simple model of a human brain and it’s the future of social media for the business world. The social enterprise software companies are way behind the curve on this. But it’s the only path to true insight and meaningful action. You can bet your lunch money on it.
No, Evil Machines Won't Take Over the World
I was talking to a friend recently about Amazon’s plans for drone delivery and he said he thinks it’s creepy. “It feels like machines are going to start taking over”. This guy also believes that 9-11 was a government conspiracy so I can see why he might think drones are scary. But, thanks to Hollywood movies, a lot of normal people also think this way about new technologies, especially AI.
I explained to my friend not just why smart machines wouldn’t necessarily be evil, but how complicated being evil actually is. I then laid out how difficult it is to build a machine that does something as basic as recognize that Snoopy is a dog. If you’re going to create something like SkyNet that’s going to wage war on mankind, it would presumably first have to be able to identify objects like people. To build anything like that, we’d at least have to know what principles allow our brains to effortlessly identify objects in the world, and then try to apply them to machines.
I mentioned in a previous post that the biggest mistake the field of artificial intelligence made was attaching itself to computer science rather than biology. We might have had intelligent machines by now had AI paid more attention to the research coming out of the world of neuroscience rather than figuring out how to build Watson. It astounds me too that many of the sharpest minds never bothered to think that the solution to the Snoopy problem might be hiding right inside their own skulls.
So why is the Snoopy problem so God damned hard for a computer programmed by geniuses but so easy for my 4-year-old nephew Timmy? The simple answer is that brains work almost nothing like computers.
The long answer is more interesting. Look at the picture above. It’s obvious to you that those animals are dogs. The way you’d get a computer to recognize that is to train it on large sample sets of dog images that it could use to create a model in order to identify future instances of dogs. It’s similar to how we get computers to “understand” language.
“But wait a minute”, you say. “There’s a big problem. The number of possible images of dogs is infinite. You can’t possibly train a computer on every configuration of a dog.”
And you’d be right. There’s theoretically an infinite number of things in the world. Your eye, in fact, never sees the same thing twice. Ever. Every second your eyes are open, they see micro-patterns that they have never seen before. Your eye only knows what it’s seeing because your brain generalizes. Your eyes might see something that kind of looks like a dog and then your brain comes in and fills in the gaps. Your eyes say “I’m seeing something here that kinda’ has the features of a dog, but I’m not sure.” Your brain comes along and says “Hmmm… it’s weird looking, but yeah, that’s a dog alright.” Computers can’t generalize like that. Every image they see has to be compared to some full prior image in all its detail. And there’s theoretically an infinite number of images, so they’re shit out of luck if they see something they’ve never seen before. Computers are bad at making predictions, which is the essence of intelligence.
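To make the contrast concrete, here’s a toy sketch of exact matching versus generalizing. The “dog” feature set and the stored image are invented for illustration; this is nothing like how real vision systems (or brains) actually work.

```python
# Toy contrast between exact-match lookup and feature-based generalization.
# DOG_FEATURES and seen_images are made-up assumptions for illustration.

DOG_FEATURES = {"four legs", "tail", "snout", "fur"}

# Pretend this is every dog image the system has ever been trained on,
# stored as a sorted tuple of its features.
seen_images = {("brown", "four legs", "fur", "snout", "tail")}

def exact_match(features):
    """A lookup table fails on any configuration it hasn't literally seen."""
    return tuple(sorted(features)) in seen_images

def generalize(features, threshold=0.75):
    """Sharing enough dog-like features lets a novel input still count as a dog."""
    overlap = len(DOG_FEATURES & set(features)) / len(DOG_FEATURES)
    return overlap >= threshold

novel_dog = {"four legs", "tail", "snout", "white"}  # new, but dog-like
print(exact_match(novel_dog))  # False: never seen this exact configuration
print(generalize(novel_dog))   # True: enough dog-like features match
```

The lookup table shrugs at anything new; the feature-overlap check says “kinda’ has the features of a dog” the way the paragraph above describes.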
(Image source: http://bit.ly/1jMINCz)
Snoopy is obviously a dog to you. You could train a computer to recognize Snoopy. But if it saw a pic of Snoopy, could it then look at this fellow below and know it’s a dog? No it couldn’t because it’s new and looks very different from Snoopy. But you can spot that it’s a dog in a millisecond.
(Image source: http://bit.ly/1mvQgKP)
Nature is ingenious because it’s recognized that there are too many things in the world to remember, and the only way for an intelligent creature to make sense of everything is to generalize. But how the heck does your brain do it?
Well, one of the big differences between the way your brain works and the way a computer works is that your brain doesn’t compute a series of steps to solve a problem. Instead, it’s a memory system that finds patterns in the structure of the world and makes predictions about new experiences based on memories of those patterns. But it doesn’t use statistical analysis, which is one of the flaws of modern AI. Nor does it need to have all the information about something to predict what it is. For example, you can easily predict the word at the end of this ____.
You might be looking at a person with an amputated limb and your brain will say “Well, what I’m seeing in front of me has two legs and just one arm…and even though most people have two arms, this thing we’re seeing is definitely a person.” It’s hard for a machine to do that. Your brain breaks the world down into parts and subparts and subparts of subparts because that’s how the world is actually structured.
A person is made up of legs, arms, a face, etc. A face is made up of eyes, ears, a nose, etc. A nose is made up of…you get the point. And your brain uses that one single algorithm to make sense of all sensory input, whether it’s sound, smell, touch, whatever. How amazing is that? Everything you sense and do is based on a single algorithm!
A child sees a picture of Snoopy and knows he’s a dog without ever having seen a picture of Snoopy because his brain is breaking Snoopy down into all the individual components that make something dog-like.
Let's turn to faces. We all look at them every day. Let’s see how your noggin breaks a face down.
The lines you see on the image of the girl’s face on the right are traces of what are called saccades. They are rapid movements that your eyes make as they scan objects. When you look at someone’s face, you might feel like your eyes are still, but they’re actually making many movements each second, scanning back and forth to form a pattern. One pattern might be “eye, nose, eye, lip”. Another one half a second later might be “nose, eye, lip, eye”. Your eye sends these spatial pattern sequences to your brain, which uses them to create a model of what a face is. The next time you meet a person and look at their face, your eyes will scan for those sequences. If your brain finds them, it knows it’s seeing a face. If you meet someone who’s got a nose where an eye should be, it violates the pattern that your brain expects and it says “Whoa, hold on there. Something’s not right here!” And that’s when you begin to stare.
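To illustrate just the idea of sequence-based recognition (the pattern names and sequences below are invented, and real recognition is vastly more flexible than a lookup), here’s a minimal sketch:

```python
# Toy sequence matching: a scan "matches" if it's one of the stored
# face pattern sequences. Sequences are invented for illustration.

FACE_SEQUENCES = {
    ("eye", "nose", "eye", "lip"),
    ("nose", "eye", "lip", "eye"),
}

def looks_like_face(scan):
    """Return True if this saccade-like scan matches a stored face sequence."""
    return tuple(scan) in FACE_SEQUENCES

print(looks_like_face(["eye", "nose", "eye", "lip"]))   # True: expected pattern
print(looks_like_face(["nose", "nose", "lip", "eye"]))  # False: violates expectation
```

A sequence that violates the stored patterns is exactly the “Whoa, hold on there” moment described above.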
Your neocortex is the wrinkly part of your brain at the top where all this stuff is happening. It’s actually made up of different layers, kind of like an onion, and these layers pass input back and forth to each other. Imagine that I put you to sleep, open your head up and cut out a tiny portion of your neocortex. We’d see six layers like so:
(Image source: Node Motion)
So when you look at someone’s face, here’s kind of what’s going on in these layers:
- Layer 6 receives sensory input from your eyes and looks for general patterns of lines.
- Layer 5 receives input from layer 6 and looks for patterns of edges from those lines.
- Layer 4 receives input from layer 5 and looks for patterns of components of the face like eyes, lips and noses.
- Layer 3 recognizes the object as a face.
Each layer is passing off a clearer image of a face to the region above it the way a track runner hands off a baton. But while all this input is coming up from layer 6, layer 1 is pushing down a stored memory of what a face looks like, which meets with the pattern sequence at layer 3. When they meet, your brain gets that aha moment where it says “Yup, that’s a face!” That’s how you’re able to recognize a face in a millisecond. There’s no master program dictating from above. There’s just one algorithm that says “Find the patterns and try to match them up with memories.” And every layer does the exact same thing. The higher up you go in the layers, the more complex the patterns get.
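Here’s a minimal sketch of that “same step at every layer” idea. The memories at each level are invented for illustration, and this is a cartoon of the principle, not a model of real cortex:

```python
# Toy hierarchy: every "layer" runs the exact same step -- match the incoming
# pattern against stored memory and pass the more abstract name upward.
# All the stored memories below are invented for illustration.

def layer(inputs, memory):
    """One layer's step: look up the incoming pattern, return its abstraction."""
    return memory.get(tuple(inputs))

lines_to_edges   = {("line", "line"): "edge"}                 # layer 6 -> 5
edges_to_parts   = {("edge", "edge"): "eye"}                  # layer 5 -> 4
parts_to_objects = {("eye", "nose", "eye", "lip"): "face"}    # layer 4 -> 3

edge = layer(["line", "line"], lines_to_edges)
eye  = layer([edge, edge], edges_to_parts)
face = layer([eye, "nose", eye, "lip"], parts_to_objects)
print(face)  # face
```

Note that `layer` is one function applied at every level; only the memory it consults changes, and the patterns get more complex the higher you go.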
So to recap, your brain breaks the world down into pattern sequences, then it finds patterns of those, then patterns of those patterns…and so on. That’s the only way to intelligently make sense of the world because that is how it’s really structured.
Let’s get back to Snoopy. Your brain doesn’t need to store a complete image of Snoopy in a database somewhere to match it up with the pic of Snoopy, or compute a series of steps about Snoopy’s features. It just needs to find the right pattern sequences in a picture of Snoopy to know that he’s dog-like and therefore probably a dog. Computers can’t generalize like that. And until they can, they will never solve the problem of how to effortlessly recognize objects the way little Timmy does.
So all you folks out there who are afraid that machines are going to take over the world, don’t despair. Until we can get computers to think like little Timmy, they’re never going to be smart enough to be able to be evil.
The Truth About Money
You have to be careful about getting lulled into the false belief that money isn’t one of the most important things in life, because money is an unconscious proxy in people’s minds for so many things that do matter and that have real-world consequences on the quality of your life, like respect and attention. One of the unusual things about money, and a great source of its power, is that unlike a lot of other things in life, you don’t get to decide if it’s important to you. Whether you like it or not, it sucks you into its gravitational field.
If you don’t have much money, you may not be comfortable with the reality that money matters a lot, but you can’t escape it because most things around you are there because somewhere at some point, people exchanged money in some form to make it possible. You can’t wish away the reality that money is super important because it’s built right into human nature.
I wish more parents would talk to their kids about why money runs the world, but many of them don’t really understand it themselves. They just know that it’s this thing you need to survive that goes into your bank account every two weeks. Basically, most everyone spends the first 22 years of their life being shielded from the harshness of not having it. Then they finish college, get a real job and they see how much of a grip it has on life. They start to dislike its power and begin to resent people who have lots of money. That creates an environment like the current one in America where you’re never supposed to admit that you like money or show that you have it.
The first thing that comes to mind when most people think about money is cold hard cash. But money is just a way for you to have things that you can’t make yourself right now. If everybody decides that seashells are the best way to do that, then seashells become money. But the point is that it’s impossible to eradicate that thing deep inside people’s brains that makes them assign value to things and to their work and time. That’s what money is.
And if you hate the fact that money is so important, you have to understand what the alternative is: me clobbering you over the head because you have something that I want that I can’t make myself. In fact, life for most people used to be a lot like that up until the modern commerce system was invented.
To get rid of money, you have to get rid of people’s desire to want things that they don’t have. And to get rid of that, you have to get rid of people. But of course that’s not going to happen.
Something I hear people say often to comfort themselves from the fact that they don’t have money is that the more money you have, the more problems you have. I have never met a person with money who believes this.
Once you understand what the game is all about, you’re better off mastering it. Of course, money isn’t the most important thing in life, but it’s probably the second.
Why Failure Usually Comes Down to Personality
One of the ironclad rules of the human condition is that anything difficult takes more than one person to accomplish. In reality, what makes difficult things so difficult is that there are so many variables involved that require the right kinds of people to understand them and to work together.
If you spend enough time working towards something, chances are that you'll get what you want in one form or another...granted that within that period of time you manage to find the right ways to work with the right people. Your success at that will depend to a large degree on your personality.
Your personality is the number one factor in determining your success in life...hands down. Personality encompasses a lot, though: drive, persistence, openness, etc.
That means that if you don't have the type of personality that allows you to do things like change your mind when evidence presents itself or subdue your ego to the common good, it's highly unlikely that you'll get far at anything at all.
That's a pretty sobering statement. The good news is that the execution of your personality is something you have complete control over. In other words, you can change the way you do things to increase your chances of succeeding.
Let's take something like starting up a successful business. These days, it's gotten more complicated to start one because of all the new moving parts. If you lead any type of an enterprise, you've got to think about all of the people who are going to execute on all the new business realities out there: the new stuff like social media and blogs, and the other stuff like website maintenance, creative direction, business management, etc.
(Image source: http://bit.ly/1jwPIDX)
With so many variables involved and with the multitude of people required to execute on an idea, it's no surprise that most things that get started don't last because of conflicts of interest.
What I've discovered is that for the vast majority of people, their ego is often more important to them than the goal of the group or of the project at any given point in time. It's even worse with highly skilled and creative people. Sometimes, even if they know that an idea is right, they reject it because they didn't come up with it. Or they just won't fully buy into it, which slows down progress and kills group morale.
One thing I've learned is that the best way to deal with the human ego is through what I call the Law of Positive Reinforcement. It sounds like fluff, but it's not. And it's not an entirely new idea either.
It basically states that, given that people are driven by ego and that you'll never change that, the best way to make progress is by stroking people's egos. And you do that by compromise through logic and persuasion.
If you have an idea for something, it's more likely to get accepted by people if you detach your own ego from it. People are status driven. The minute they perceive that your idea is tainted with your own ego, they dig in their heels. If you argue in a nuanced way with facts, they can't attack you.
It works the other way too. If someone proposes an idea and you have a better alternative, you pick their idea apart with facts, not opinion. Progress is made when people agree on the facts.
Politicians do that all the time. Barack Obama's a master at it. He has the ability to attack any angle of an issue without injecting himself personally. That's why everybody thinks he's such a nice guy. I don't doubt he is, but he's also a sharp-elbowed politician. He just conceals it well.
This is all common sense stuff, but hard to act on because our egos often get in the way of measured action.
So, if you want to get things done, realize that the world is full of flawed human beings. You need them. And you need to stroke their egos.
What We Need To Make Machine Intelligence a Reality
If you asked me what I think is going to be the most important technology of the future, I would have to say machine general intelligence. In fact, I would say it’s going to be the most important technology of all time. It’s hard to appreciate how great of an impact it’ll have on our lives until you think about how our own intelligence lets us shape our environment. The opportunities it would open up and the commercial applications it could be applied to are vast.
Computers are good at solving problems that are hard for people, like multiplying huge numbers, but they’re notoriously bad at solving problems that are easy for us, like looking at a picture of Snoopy and knowing it’s a dog. Any child can look at a picture of Snoopy and another one of a real dog and know within milliseconds that they are both dogs. We don’t have any machines that even come close to doing that. Even the computers that can beat people at chess don’t have what we’d call intuition about what they’re doing and they can’t learn new things about the world on their own. They don’t have general intelligence. Why is that?
It turns out that AI has been built on an inaccurate model of intelligence defined by Alan Turing. He argued that if a machine is able to exhibit behavior equivalent to a human, then we should assume that it’s intelligent. Because of this we’ve built computer programs with the assumption that their most important attribute is their output, regardless of whether they actually understand what they’re doing.
The biggest mistake that AI made was latching itself to computer science rather than biology. True AI is impossible with Von Neumann machines.
Behavior is a manifestation of intelligence, but not the central characteristic of it. It doesn’t take a stretch of the imagination to see why. I could be sitting quietly in a dark room by myself thinking up a clever plan for how to invade Sweden. I may be too lazy to actually act on it, but that series of patterns and predictions about the future that I created in my mind is a useful piece of intelligence that someone else who was more motivated than me could use to do something with.
The essence of intelligence is the ability to predict the future, regardless of behavior. And your brain does it by constantly noticing patterns in the world and creating memory models of those patterns in real time. A rat uses it to remember where the cheese is in the maze. But a lizard can’t do that because reptiles don’t have neocortexes, which is where this kind of higher intelligence is manufactured.
So the solution to machine intelligence isn’t to throw more memory and computational speed at the problem, but to model how real brains make predictions about the world. The brain isn’t a computer nor does it work like one. And there are some very smart people like neuroscientist/entrepreneur Jeff Hawkins and Stanford computer scientist Andrew Ng who have begun to take the neuroscience approach to AI. They think that all of the brain’s intelligence can be reduced to a single cortical algorithm and believe that if we can model this in a program, we can create machines with real general intelligence. This technology is going to permanently change the face of computing within the next 5-10 years and will dwarf computers in the impact it’ll have on our lives.
Companies and Their Personalities
The modern expectation that companies should have personalities seems like an anomaly. If you take a long-term view, though, say 100 years, it doesn’t seem as odd. In the days before sprawling metropolises and giant corporations, a business’s reputation was much more accessible to the public because of physical proximity. Towns and villages are smaller than cities, so in those places it’s much easier to know what the people inside a business are up to and to hear about it from other people.
We lost that for the better part of eight decades or so. It’s not because people stopped caring about reputation. We’ll always care about that because it’s built right into our nature. It really had more to do with scale. Towns became cities. All of a sudden, people consumed products from companies they had little physical or psychological proximity to. And psychological distance makes manipulation easier: it lets a company intentionally create a narrative around itself that may not be entirely consistent with reality. But physical distance between a company and a consumer alone doesn’t necessarily lead to a psychological disconnect, as long as there is some form of communication that lets consumers know what the people in the company are up to. During much of the 20th Century, mass advertising filled the psychological chasm between company and consumer. But the relationship was very lopsided. That made it nearly impossible to assess the character of the people inside of a company. Companies like Enron and MCI showed the dark side of that imbalance.
Human Nature and Mass Advertising
One of the reasons mass advertising loses effectiveness in a world where strangers can easily talk to one another is that it’s been largely founded on what evolutionary psychologist Geoffrey Miller calls the Fundamental Consumerist Delusion. It’s the notion that people purchase branded products because they want to identify with aspirational traits like wealth, status or taste. The reason you buy an S Type Jaguar or Patek Philippe watch, according to this theory, is because you want to flaunt your wealth. And there is some truth to that. But it’s not the full story.
The problem with this view of marketing is that it’s too vague. It treats abstract qualities like wealth or status as one-dimensional things. The other problem is that it’s based on a somewhat flawed view of human nature. It doesn’t take into consideration the multi-faceted approach we take to assessing other people’s qualities. For example, any normal human adult knows that wealth isn’t all morally equal. Wealth takes on different connotations depending on who owns it and how they acquired it. We react differently to someone who’s inherited their wealth as opposed to someone who’s earned it through hard work. And even meritocratic wealth isn’t all the same. A rich brain surgeon isn’t the same as a rich drug dealer and we rightfully react differently to each. We naturally make different judgments about the intelligence, morality and personality of people based on the circumstances of their wealth.
The gradually emerging zeitgeist around marketing is that companies should pour out their hearts and souls. They should interact with consumers and create content that genuinely reflects their true character. It’s an idea that’s begun to calcify ever since the social web began to take off several years ago. Still, there’s an element of faddishness to the whole notion that makes many older people shudder. Part of that stems from the transient nature of the Web 2.0 economy. Companies sprout up and rocket to fame, then flame out like dying stars in the relative blink of an eye. Lots of brick and mortar companies fail too, though generally not as spectacularly because the physical world poses challenges to scaling that preclude most “old-style” companies from achieving record growth in record time.
Gradualness and Personality
But there are compelling reasons to believe that personal branding based marketing isn’t a fad. One of its pillars is gradualness itself; it can be powerful. Paul Graham asked in one of his essays how it is that people with hideous combovers can ever come to think that they look normal. I myself have always wondered why it is that these people don’t just shave their heads or buy wigs. Graham makes the observation that odd things become less odd incrementally, until they become normal. A healthy head of hair transforms unbeknownst to its owner into a monstrous combover over decades the same way a fat person gets fat incrementally. Even evil ideas like slavery can turn into orthodoxy by simply surviving. But wonderful moral victories have also been achieved by that process working in reverse.
Gradualness can be extremely powerful. But for any idea to survive indefinitely, it has to have roots in human nature. You think Scientology is going to be big in 100 years? Probably not. It isn’t organic enough. But Christianity, Judaism, and Islam will be around for a very long time to come because they all realistically address fundamental conundrums about human nature and the human condition.
It’s likely that personal branding in business is here to stay and will become ever-more important. One of the main reasons is simply because it can, the same way a river finds its way downhill (instead of uphill) because it can. In other words, deconstructing others’ personalities is the default. It’s just what we do naturally. And the Internet is excellent at allowing strangers to provide windows into their souls. Before the Web came along, it was harder to do that. Technology now allows us to address the most important immediate question there is to ask about another person or group: what’s your personality like? (in the sense of character, intelligence, moral virtue, patterns of intentionality, etc.)
As Geoffrey Miller notes in Spent, there are a few basic questions we all try to get answers to whenever we meet a new person: How intelligent is this person? Is this person nice or nasty? Is he extroverted or shy? Is this person flaky or reliable? Is she stable or crazy? Not surprisingly, we hold similar questions in our heads when deciding whether to buy something from a company. Do they have reliable service? Do they have smart customer service people who can answer my questions? And we quickly form opinions about groups of people based on the interactions we’ve had with members of those groups. “Never trust a Nigerian. They’re all scam artists.” Or, “You can’t trust a woman to run a company. They can’t make up their minds.”
The personality questions above are universal across all cultures and times because there are and will always be consequences and powerful survival advantages to knowing who you’re dealing with. They’re based on the central six dimensions of variation that predict human behavior and distinguish individual human minds. We call them the Central Six and they were developed through a century of research in human psychology. They’re largely genetically heritable, stable across the life span, universal across cultures and even other species, predictive of behavior across various settings (school, work, romance, family life) and are measurable. If you know how somebody scores on the Central Six, you can infer a great deal about their abilities, character and morality. Consequently, you’ll know how, or whether, you should interact with them.
The Central Six are general intelligence, openness, conscientiousness, agreeableness, stability and extraversion (remember them with the acronym GOCASE). When it comes to general intelligence and conscientiousness, people prefer other people who score higher than average. Preference levels for the other traits tend to be more dependent on where an individual scores himself.
General Intelligence
Many people will tell you that there’s no such thing as a measurable and meaningful trait called general intelligence. It isn’t true. And the interesting thing about it is that most ordinary people recognize differences in intelligence between people, but really bright people who work in sociology departments at places like Columbia and Harvard deny that meaningful intelligence differences exist between individuals. These differences exist even in other species like dogs, where certain breeds are brighter than others. Poodles, Border Collies and Australian Sheepdogs are noticeably smarter than Pugs, Bulldogs and Shih Tzus. General intelligence is better known as IQ. Higher general intelligence strongly predicts success in just about every area of life: romance, work, school, social status, parenting, etc. Higher average levels of it are found at places like MIT and Google. Lower average levels of it are found in prisons and halfway houses. It predicts how likely you are to not go to jail, do drugs, get killed, etc. (though some other personality traits correlate too). Its predictive power is very high and its distribution across individuals very unequal. It’s also one of the most sexually and socially attractive traits across cultures, races and ethnic groups.
Openness
This refers to openness to experience. People who score high on openness are curious, broad-minded and interested in culture and ideas. These people prefer spontaneity over predictability and are socially tolerant and usually politically liberal.
People who score low on openness tend to be politically conservative. Higher average levels of openness are found amongst people who watch the Daily Show. Lower average levels of it are found amongst people who attend tea party rallies and watch Glenn Beck.
Conscientiousness
This is self-control, reliability, dependability, willpower, and the ability to delay gratification. It’s even more predictive of success (especially financial success) in life than IQ, because being smart but non-persistent will get you fewer just rewards than being a little less smart, but persistent. People with high conscientiousness levels seek perfection and reliable social networks, resist temptation, are ambitious, and look down on people with bad or slovenly habits. Like general intelligence, it’s a sexually and socially attractive trait. If you’re smart and conscientious, you automatically occupy a perch in the coveted upper crust of human phenotypes. It follows that you’ll likely be in good physical shape and be able to discuss a bit of everything intelligently. Higher average levels of conscientiousness are found at entrepreneur meet-ups. Lower average levels of conscientiousness are found amongst obese people and drug addicts.
Agreeableness
Agreeableness is kindness, warmth, empathy, benevolence and trust. Agreeable people tend to adapt to others’ needs and think a lot about being “good”. Psychopaths and suicide bombers score low on agreeableness. Agreeable people tend to make good long-term sexual partners, friends and coworkers. Low average levels of agreeableness are found amongst MS13 gang members and serial killers. Higher average levels of it are found amongst people who volunteer for Habitat for Humanity.
Stability
Stability is level-headedness, emotional stability and stress-resistance. Stable people are usually calm, unfazed and quick to recover from setbacks. They don’t easily get depressed or anxious, mull over things, anger quickly or cry easily. High stability is strongly correlated with good mental health. Higher average levels of stability are found amongst men all over the world. Lower average levels of stability are found amongst women all over the world (research actually proves this controversial point, in case common sense hasn’t already convinced you and/or political correctness has convinced you otherwise).
Extraversion
After general intelligence, this trait is one of the easiest to assess in another person. Extroverts are gregarious, talkative, active and generally need to be around others to thrive. Low extraversion and low stability lead to what’s commonly known as shyness. People with low extraversion tend to prefer working alone, are usually more physically passive and usually have fewer friends and sexual partners. Low average levels of extraversion are found amongst computer programmers and novelists. Higher average levels of extraversion can be found amongst groups of loud, annoying people…and, of course, serious minded people like Fortune 500 CEOs.
You can do a pretty accurate self-assessment of where you rank on five of the Central Six (excluding general intelligence) by taking a one-minute test called the BFI-10. People’s scores on this 10-question test correlate very highly (about .82) with their scores on much longer personality tests. Try it out. After each statement below, write a number from 1-5 where:
1= disagree strongly
2= disagree a little
3= neither agree nor disagree
4= agree a little
5= agree strongly
I see myself as someone who
1. has an active imagination
2. has few artistic interests
3. does a thorough job
4. tends to be lazy
5. is generally trusting
6. tends to find fault with others
7. is relaxed, handles stress well
8. gets nervous easily
9. is outgoing, sociable
10. is reserved
Here’s how to score yourself. Statements 1 and 2 concern openness, 3 and 4 concern conscientiousness, 5 and 6 concern agreeableness, 7 and 8 concern stability and 9 and 10 concern extraversion. Scores range from -4 (low on the trait) to +4 (high on the trait), with 0 being average. For each successive pair of statements, subtract the number you wrote for the even-numbered statement from the number you wrote for the odd-numbered statement. That’s your score. For example, if you wrote a 4 for statement 1 and a 1 for statement 2, you would subtract the number from the even-numbered statement (1) from the number for the odd-numbered statement (4), which would give you 3. That means you’d score high for openness, given that the average is 0.
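The scoring rule above is mechanical enough to automate. Here’s a minimal sketch in Python; the sample answer list is just an invented example:

```python
# Minimal scorer for the BFI-10 described above. Answers are 1-5, in the order
# the ten statements appear. Each trait score is (odd-numbered statement) minus
# (even-numbered statement), giving a range of -4 to +4, with 0 as average.

TRAITS = ["openness", "conscientiousness", "agreeableness",
          "stability", "extraversion"]

def score_bfi10(answers):
    """answers: list of 10 ints (1-5); returns a trait -> score dict."""
    assert len(answers) == 10 and all(1 <= a <= 5 for a in answers)
    return {trait: answers[2 * i] - answers[2 * i + 1]
            for i, trait in enumerate(TRAITS)}

# Example: a 4 on statement 1 and a 1 on statement 2 gives +3 on openness,
# matching the worked example in the text.
print(score_bfi10([4, 1, 3, 3, 5, 2, 2, 4, 4, 2]))
```

With those invented answers you’d score high on openness and agreeableness, average on conscientiousness, a bit low on stability, and moderately high on extraversion.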
I speculate that you could administer such a test to every key player at a company, average out the scores on each trait, and that would give you a pretty good indication of how the company interacts with its customers and the public. Of course, you can’t just go around giving strangers personality tests to decide whether to interact with them. So what do you do? It’s easy. All you have to do is talk to them. When you talk to people, they usually talk back. And there’s a wealth of information you can gather about a person’s personality by talking to them for just 10 minutes. That’s why face-to-face meetings will never go out of style, even in the digital age.
Because personality and intelligence are largely genetically heritable, they are nearly impossible to fake for long. Eventually you’ll get called out. Even a one-page essay, blog post or Facebook page provides enough information to assess someone’s personality relatively accurately. The quality of the ideas that come across in the writing reflects their general intelligence level. Carefully crafted and crisp (as opposed to sloppy) writing signals conscientiousness and even agreeableness. If you write clearly, you care about what the reader is going to think and feel and probably spent time editing and re-editing. That’s why most people get annoyed by poorly written emails. It isn’t that the writing is annoying to read. It’s that the writer subordinates the reader by not putting in the time and effort to write coherently. Exclamation marks everywhere could advertise an emotional and possibly moody personality (am I the only one who notices how much more liberal women are than men with punctuation?).
Human beings everywhere instinctively look for signs of personality in other people, groups and even pets. Not knowing who or what you’re dealing with has always carried the risk of humiliation, getting swindled, raped or killed. Think of those Nigerian scam artist emails you get in your inbox every once in a while. Does the poor quality of the writing give you confidence in the intentions of the person behind it? Not if you’re smart. Content can communicate a great deal about intentions and personality. And personality strongly predicts how someone is going to deal with you now and in the future.
I’m a big fan of the software company 37signals because of the things founders Jason Fried and David Hansson write in their blog. I agree with them on a lot and I admire their courage to go against the grain. If I need group collaboration software, I’ll go straight to Basecamp because I personally like these guys. That’s the power of personality-based marketing.
Content
Most companies still don’t understand what the big deal is with content and personal branding. The interesting thing is that not cultivating a personal brand is actually the historical anomaly. Remember when I mentioned earlier that the default is to be curious about personality? Well, human nature hasn’t changed in the last 100 years. We’ve always wanted access to the souls of the people we conduct economic transactions with because the stakes can be high.
The world of personal branding follows a simple algorithm: if a communication channel exists, then talk. Creating content (in the form of unique and interesting text, audio, video, graphics, etc.) won’t necessarily bring you outsized returns of any kind. But not doing so may cost you dearly in the long run. Like standing still on a spinning treadmill, the negative consequences of not running (bruises, fractured bones) are greater than the positive rewards you reap from moving your legs.
If you’re a huge company that’s managed to muscle your competition out of the market, then your name alone may be enough to keep you safe. But most of the business world isn’t made up of big corporations that operate on the principle of internecine warfare. Instead, much of it resembles an open air market where there’s room for lots of people selling the same thing. But the guy with the best reputation gets more customers because people like him.
You're the Average of Your 5 Best Friends
Not only are you the average of your five best friends, but you are also the average of your thoughts throughout the day. If you’re over the age of 25 and none of your friends discuss original ideas or actively build original things, change your friends, or you’ll be in danger of becoming a mediocre person whose life didn’t matter.

