Department of Happiness Studies
I chose to go to university at the age of 18 because I thought heaps of useful knowledge was stored there. I thought to myself: Old people know a lot of stuff. I want to learn what they know. Because I will probably face challenges similar to the ones they have faced and I would rather learn from their mistakes than have to make them myself.
So I was surprised, after spending a number of years there and graduating, that I didn’t really learn a lot of practical life advice. I learned a lot of interesting scholarly things like the propositional calculus, fuzzy logic, decision trees, quantum mechanics, slack vectors, regressions; learned about other cultures, writing, constitutions — and read and pretended to understand Ulysses. (update: check this out) And I still admire and appreciate the people who taught me those super-interesting things.
But to me, the most basic question, “How can I think about life in order to be happy?”, was not answered. Actually it was barely even broached.
My sense is that people think: “Well, happiness, that’s not really a scientific subject is it?” Here’s my response: It’s only not scientific because we don’t apply science to it.
We live in a time of unprecedented respect for science.
Let’s not underestimate the power of 1,000 scientists, given resources + time, to answer questions about human happiness and its causes. We have statistics, we have double-blind experiments; we have causal graph models, topological machine learning, functional data analysis, robust algorithms; we have item response theory, sampling theory, supercomputers in our pockets, and worldwide communication networks. We have tens of billions of dollars every year already funding science research. We have machines that can look inside of people’s brains, for Chrissakes. I think we can do this.
In economics, happiness research is treated like a subfield of behavioural economics, which is itself a subfield. But the utilitarian philosophy that justifies cost-benefit analysis, the Lagrangian model of microeconomics, and ultimately the entire financial system is undergirded by this very weak understanding of “utility”, the pursuit of which is supposed to be the whole point of capitalism.
No wonder people outside the econ/finance intelligentsia keep saying “We need a [financial system | economic theory] for humans.” Other than the vague idea that health, wealth, and freedom are worth attaining (except maybe not always), our scientists really don’t have much specific advice to offer about the pursuit of happiness.
Out of all the broad-topic, cross-category departments in universities:
- business (= how to do stuff),
- history (= what happened),
- archaeology (= stuff we dig up),
- physics (= things that occur)
— why isn’t there room for one called How to make decisions and think about life?
Behavioural economists and psychologists who do study this kind of thing have indeed come up with practical advice:
- Happiness increases (ceteris paribus) as the log of personal income.
- Except maybe country-wide economic growth doesn’t increase happiness and only being richer than your within-country peers makes you happy. Hmm. This sounds like an argument whose resolution we should be funding.
- Buy things with cash instead of plastic.
- Experiential goods have a more lasting effect on happiness than property.
- Happiness-now and happiness-reflecting-on-your-life are distinct (not equal).
- We can maybe separate happiness into 6 causes—with health, wealth, and inherited genetic setpoint being the top 3 causes.
- Hedonic adaptation reduces the satisfaction derived from material consumption. But hedonic adaptation does not reduce satisfaction derived from spending time with people you care about.
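The first finding above, taken literally, has a concrete implication: if happiness rises with the log of income, then every doubling of income buys the same happiness increment. Here is a minimal sketch of that arithmetic; the slope b is a made-up illustrative number, not an estimate from any study.

```python
import math

# Hypothetical slope: how many "happiness units" one log-point of income buys.
# The value 1.0 is purely illustrative, not an empirical estimate.
B = 1.0

def happiness_gain(income_old, income_new, b=B):
    """Change in happiness when income moves from income_old to income_new,
    under the happiness ~ b * log(income) assumption."""
    return b * (math.log(income_new) - math.log(income_old))

# Every doubling is worth the same: 20k -> 40k buys exactly as much
# happiness as 80k -> 160k under this model.
print(happiness_gain(20_000, 40_000))
print(happiness_gain(80_000, 160_000))
```

Both calls print b·ln(2) ≈ 0.693. On this model, doubling everyone’s income is one fixed-size happiness step, which hints at why country-wide growth may move the needle less than raw dollar figures suggest.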
How about we quantify the benefits of
- feeling like you have a high social status
- making other people laugh
- diminishment of ego
- thinking about people who are worse off than you
- (or conversely, the dis-benefits of envy / jealousy of people who are better off than you)
- playing music
- sex (my only evidence that people care about this is that it seems to appear on covers of Cosmopolitan)
- listening to comedians
- number and kind of friendships
- chanting Hare Krisna
- programming (obviously many of these would require casewise time series to quantify; not just one number)
- marrying the wrong person
- charitable donations (lump-sum or many chunks?)
- knowledge of category theory
- time spent philosophising or … blogging
- eating according to a moral regimen (vegetarian, kosher, halal)
- working hard now for enjoyment later vs. living in the present (some kind of Ramsey-respecting tradeoff, of course)
- drawing & painting
- careers outside an office
- actually obtaining your ideal career (e.g., quant) versus learning humility and accepting what you can actually get paid to do (e.g., wash dishes)
- getting sun on your bare skin
- having children
- staying out late vs not being tired at work the next day
- eating pizza
- smoking cigarettes
- eating bland food every X days to fight the hedonic treadmill
- or—I’m sure there are a jillion hypotheses about strategies to be happy from self-help books?
and how about we spend money funding people who are going to come up with or test ways of thinking and acting in life that are going to make people happier? I mean we fund research on quantum communication. Isn’t happiness research possibly more important?
Forget the research money that goes to the engineers extending the battery life on my phone. People complain about sitting on the runway, and I’ve become so accustomed to 2 billion clock cycles per second on my computer that I get angry and throw it out the window whenever my YouTube videos won’t load.
Forget a trillion dollars wasted on development efforts that end up going to fund despotic regimes instead. Rather than guessing whether it’s mosquito nets, dams, or pure cash that poor people need most, maybe we should be investigating how to be happy with what you have—just in case, you know, the direction the rich world has gone is not the best direction to go.
Think about how detailed our knowledge of a scientific topic like materials is. There are, like, many 1000-page manuals with detailed measurements like the optical properties of tungsten-rubidium alloy at 13,000 kPa and 2700°C. Imagine if we had that kind of detail about, like, life choices. Picture this: Career Engineering Handbook. Tables of days spent in a depression doing psets by INTJ realist mechanical engineers, contrasted with the payoffs and path dependence of later-life happiness. I’m sure any such conclusions would be disputed — that’s how science moves forward, isn’t it? — but if Happiness Studies were acting like a science, those disagreements would be based on lots of measurement, data, facts, observations—rather than “A girl my brother knows said she regrets being a lawyer, so I guess I shouldn’t do that and should start an organic egg farm instead.” Which is pretty much how it works now.
According to my logic, this should be a top research priority. Not that medical technologies or knowledge of asteroids that might hit us aren’t good, but seriously—3 centuries since the Enlightenment and we still haven’t figured out some good advice to tell 18-year-olds?
You can take a personal finance class in school, but you can’t get the very most basic kind of advice about life. That seems messed-up to me.
Obama's Religion Phones Commit Brain Crimes
Hard Times, Fewer Crimes by James Q. Wilson (Wall Street Journal): By most estimates, crime should be rising in the US. I mean, depressed economy, high unemployment? But strangely, crime rates are going down. And not just because police departments are massaging the figures, The Wire-style: it seems to actually be going down, at least according to polls of black communities. This is partly because there are a hell of a lot of people in U.S. prisons, and partly because police have learnt better policing techniques. But a surprisingly large part of the reason crime has gone down is the decline in the use of leaded petrol in cars. It seems the lead in old-style leaded petrol impairs the brain’s ability to rein in impulsive behaviour; people with more lead in their blood are on average more aggressive. [also see Jonah Lehrer on this article, which I could have posted instead of Wilson’s article, as Lehrer goes into more detail about how lead affects the brain]
The Brain Is Made Of Its Own Architects by Carl Zimmer (The Loom): Your brain is ‘wired up’; each of your neurons (the cells that make up the brain) has axons, which look for connections to other neurons, and dendrites, which receive the axons. By strengthening and weakening these connections, your brain can help you learn things, see patterns, etc. And it looks as if the way that it figures out which axon goes with which dendrite is surprisingly simple.
Commit Yourself by Daniel Akst (Reason): We shouldn’t kid ourselves that we have self-control over all our desires, because we probably don’t. Humans just aren’t built that way. The best way to exercise self-control could be to precommit yourself, to make sure there is a fiendish penalty for doing the action, or to put yourself in a place where you can’t be tempted. And so there are actually websites (set up by economists, of course) that do this - stickK.com, for example - where you pledge a certain amount of money to a despised cause if you are unsuccessful at fighting off your desires. This, apparently, has an 80% success rate. [via]
The Birth Of Religion by Charles C. Mann (National Geographic): Göbekli Tepe is a bit like Stonehenge - a bunch of stone circles that probably had some religious significance. What makes it different from Stonehenge is that it is dated to 9000 B.C., and it may well be the first human-made building we know of that is bigger than a hut. Does this mean that religion might have spurred us to become civilised? Possibly. But you do get the impression that it is in the interests of the people excavating Göbekli Tepe to say that we needed to be religious before we could settle down in cities, etc., because it makes their site much more interesting. [via]
World Health Organisation Verdict On Mobile Phones And Cancer by Ed Yong (Cancer Research UK): This is an excellent article about how science works, really, in the shape of what we know about mobile phones and cancer (cellphones for the Americans). There may be a link between heavy mobile phone usage and certain kinds of brain cancer, but the risk is fairly small. And seeing as the number of mobile phone users has exploded over the last ten years, if phones really did cause cancer we should expect a corresponding rise in the number of brain cancers; but this doesn’t seem to be the case.
Obama’s Young Mother Abroad by Janny Scott (New York Times): Now that I have a little more time, I’ve been going through some old articles I’d instapapered and never read, and this was one of them: an interesting portrait of Obama’s mother, Stanley Ann Dunham, who was an interesting, complicated person, and of how her life informed Obama’s.
What's in a name?
‘Mitigating Myopic Loss Aversion: Groups & Communication’…read the title of the CeDEx (Centre for Decision Research and Experimental Economics) seminar. It did not exactly stir anticipation, but I went anyway…and I’m glad I did. Bear with me while I define some terms…(sorry).
Myopic Loss Aversion (MLA) theory was put forward in 1995 by Benartzi and Thaler as an explanation for the ‘equity premium puzzle’, and is the product of combining two behavioural theories:
1) Loss aversion is the simple principle that individuals more strongly prefer avoiding losses than acquiring gains.
2) ‘Myopic’ refers to a shortening of perspective. That is to say, investors only look at the most recent, short-term results and, as a consequence, ‘risk’ less over a short time horizon (or when their returns are evaluated more frequently).
Shed loads of studies* on MLA have empirically verified that individuals invest less in risky assets over shorter time horizons than over longer evaluation periods. However – and this is the interesting bit, I promise – in 2007 Matthias Sutter ran some experiments on team vs. individual decision making under risk. The experiments showed that, while teams still exhibited MLA, they made significantly higher investments than individuals did. He concluded, therefore, that team decision making ‘attenuates the effects of MLA’. In other words, groups take more risk than individuals, increase their expected value, and get closer to a ‘risk neutral’ investment decision.
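The mechanics of MLA are easy to demonstrate with a toy calculation. Below is a sketch (not any study’s exact design, though the gamble is modelled on Gneezy and Potters’ 1997 task) of a loss-averse evaluator: each round she stakes 1 unit, winning 2.5 units with probability 1/3 and losing the stake otherwise, and weighs losses λ = 2.25 times as heavily as gains (a commonly cited Tversky–Kahneman estimate; both parameters are assumptions here).

```python
from math import comb

def prospective_utility(n, lam=2.25, p=1/3, win=2.5, lose=-1.0):
    """Loss-averse utility of the aggregate outcome of n i.i.d. rounds.
    Losses are weighted lam times as heavily as gains."""
    u = 0.0
    for k in range(n + 1):  # k = number of winning rounds
        prob = comb(n, k) * p**k * (1 - p)**(n - k)
        payoff = k * win + (n - k) * lose
        u += prob * (payoff if payoff >= 0 else lam * payoff)
    return u

# Evaluated one round at a time, this positive-EV gamble looks bad;
# bundled over 10 rounds, the chance of an aggregate loss is small
# enough that it looks acceptable.
print(prospective_utility(1))    # < 0: reject when evaluating frequently
print(prospective_utility(10))   # > 0: accept when evaluating rarely
```

Same gamble, same preferences; only the evaluation period changes the verdict, which is exactly the ‘myopic’ part of MLA.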
It’s not entirely known why this result occurs. One theory is a sort of wisdom-of-crowds outcome, where a ‘truth wins’ scenario plays out. Whatever the reason, it would appear that, in terms of making the correct investment decision, more heads are better than one.
As a follow up to this study, a group of behavioural economists are now looking at the impact of ‘communication’ rather than ‘collusion’ on this type of investment. The new study replicates the Sutter experiment, and then looks at the impact of communication amongst individuals who do not have a common pay-off, as they do in the group scenario. Will more information induce more risk neutral choices from individuals?
An interesting question, from a consumer/brand interaction point of view, would be to explore group vs. individual consumption choices. For example: do decisions made as a family unit on big-ticket items (TVs, cars, life insurance, moving house, etc.) tend to be more ‘risky’ than choices made by individuals on the same things?
*Gneezy and Potters (1997), Thaler et al. (1997), Barron and Erev (2003), Langer and Weber (2003) or Bellemare et al. (2005)
Since people liked my last opinion piece on #big data, here’s another one.
Imagine there was a technology that allowed me to record the position of every atom in a small room, thereby generating some ridiculous amount of data (Avogadro’s number is 𝒪(10²³), so a prefix around that order of magnitude — e.g. yottabytes). And also imagine that there was a way for other scientists to decode and view all of that. (Maybe the latency and bandwidth would still be restricted even though capacity, resolution, fidelity, and coverage of the measurement are not — that won’t be relevant to my thought experiment, but it would make things seem “like today”, where MapReduce is required.)
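To put a number on “ridiculous”: a rough back-of-envelope (every input below is an assumption made up for illustration: room size, air only, 100 bytes per molecule) lands well into yottabyte territory for a single snapshot.

```python
# Back-of-envelope only; all inputs are rough assumptions.
AVOGADRO = 6.022e23        # molecules per mole
room_volume_m3 = 30.0      # a smallish room
air_density_kg_m3 = 1.2    # sea-level air
molar_mass_air_kg = 0.029  # kg per mole of air
bytes_per_molecule = 100   # hypothetical: position + velocity at high precision

moles = room_volume_m3 * air_density_kg_m3 / molar_mass_air_kg
molecules = moles * AVOGADRO
snapshot_bytes = molecules * bytes_per_molecule

print(f"~{molecules:.1e} molecules")
print(f"~{snapshot_bytes:.1e} bytes per snapshot "
      f"(~{snapshot_bytes / 1e24:,.0f} yottabytes)")
```

And that is one frozen instant of the air alone; add the solid objects, time resolution, and quantum state, and the SI prefixes run out quickly.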
Let’s say I am running some behavioural economics experiment, because I like those. What fraction of the data am I going to make use of in building my model? I submit that the psychometric model might be exactly the same size as it is today. If I’m interested in decision theory then I’m going to be looking to verify/falsify some high-level hypothesis like “Expected utility” or “Hebbian learning”. The evidence for/against that idea is going to be so far above the atomic level, so far above the neuron level, I will basically still be looking at what I look at now:
- Did the decisions they ended up making (measured by maybe 𝒪(100), maybe even 𝒪(1) numbers in a table) correspond to the theory?
- For example if I draw out their assessment of the probability and some utility ranking then did I get them to violate that?
If I’ve recorded every atom in the room, then with some work I can get up to a coarser resolution and make myself an MRI. (Imagine working with tick-level stock data when you really are only interested in monthly price movements—but in 3-D.) (I guess I wrote myself into even more of a corner here: if we have atomic-level data then it’s quantum, meaning you really have to do some work to get it to the fMRI scale!) But say I’ve gotten to fMRI-level data—then what am I going to do with it? I don’t know how brains work. I could look up some theories of what lighting up in different areas of the brain means (and what about 16-way dynamical correlations of messages passing between brain areas? I don’t think anatomy books have gotten there yet). So I would have all this fMRI data and basically not know what to do with it. I could start my next research project looking at numerically / mathematically obvious properties of this dataset, but that doesn’t seem like it would yield up a Master Answer of the Experiment, because there’s no interplay between theories of the brain and trying different experiments to test them out — I’m just looking at “one single cross section”, which is my one behavioural econ experiment. Might squeeze some juice out of it, but who knows.
Then let’s talk about people critiquing my research paper. I would post all the atomic-level data online of course, because that’s what Jesus would do. But would the people arguing against my paper be able to use that granular data effectively?
I don’t really think so. I think they would look at the very high level of 𝒪(100) or 𝒪(1) data that I mentioned before, where I would be looking.
- They might argue about my interpretation of the numbers or statistical methods.
- They might say that what I count as evidence doesn’t really count as evidence because my reasoning was bad.
- They couldn’t argue that the experiment isn’t replicable because I imagined a perfect-fidelity machine here.
- They could go one or two levels deeper and find that my experimental setup was imperfect—the administrator of the questions didn’t speak the questions in exactly the same tone of voice each time; her face was at a slightly different angle; she wore a different coloured shirt on the other day. But in my imaginary world with perfect instruments, those kinds of errors would be so easy to see everywhere that nobody would take such a criticism seriously. (And of course because I am the author of this fantasy, there actually aren’t significant implementation errors in the experiment.)
Now think about either the scientists 100 years after that or if we had such perfect-fidelity recordings of some famous historical experiment. Let’s say it’s Michelson & Morley. Then it would be interesting to just watch the video from all angles (full resolution still not necessary) and learn a bit about the characters we’ve talked so much about.
But even here I don’t think what you would do is run an exploratory algorithm on the atomic level and see what it finds — even if you had a bajillion processing power so it didn’t take so long. There’s just way too much to throw away. If you had a perfect-fidelity-10²⁵-zoom-full-capacity replica of something worth observing, that resolution and fidelity would be useful to make sure you have the one key thing worth observing, not because you want to look at everything and “do an algo” to find what’s going on. Imagine you have a videotape of a murder scene, the benefit is that you’ve recorded every angle and every second, and then you zoom in on the murder weapon or the grisly act being committed or the face of the person or the tiny piece of hair they left and that one little sliver of the data space is what counts.
What would you do with infinite data? I submit that, for analysis, you’d throw most of the 10²⁵ bytes away.
Cognitive Dissonance - a brief definition
I’m perched at my desk and I’m snugly inside the late-afternoon fug that is usually occupied by the nasty little tasks that need to be done to bring near-finished projects to completion. Having run out of immediate motivation I’ve picked up an interesting article to read. It comes from a pile of documents next to me that colleagues have recommended and I’ve been meaning to take home for exciting weekend reading.
The Cognitive Dissonance of It All, by J. Kyle Bass is my latest read. He made his name in 2006 by predicting and shorting the implosion of the US mortgage market.
Cognitive dissonance, to give a Wikipedia definition, is the uncomfortable feeling created by holding two conflicting thoughts - e.g. smoking causes lung cancer but everyone wants to live a long and healthy life, or that excessive leverage created the asset bubble of 2006/7 meanwhile investors do not want to de-lever.
Then, owing to this uncomfortable feeling, humans tend to create justifications for the conflicting beliefs - smokers will claim that the damage of smoking is less than scientifically accepted, or will otherwise justify their smoking. The uncomfortable feeling (dissonance) is then reduced by rationalising or excusing the contradiction away.
Source (as ever): http://en.wikipedia.org/wiki/Cognitive_dissonance
Where precisely the article will go should be interesting. I wonder if it, like Wikipedia, will mention the story behind ‘sour grapes’:
A classical example of this idea (and the origin of the expression “sour grapes”) is expressed in the fable The Fox and the Grapes by Aesop (ca. 620–564 BCE). In the story, a fox sees some high-hanging grapes and wishes to eat them. When the fox is unable to think of a way to reach them, he surmises that the grapes are probably not worth eating, as they must not be ripe or that they are sour. This example follows a pattern: one desires something, finds it unattainable, and reduces one’s dissonance by criticizing it. Jon Elster calls this pattern “adaptive preference formation.”
Risk Neutrality in Economic Theory
“Even though subjects should be risk-neutral toward small lab gambles (Rabin, 2000), it would be useful to have a procedure for creating payoffs that subjects are risk-neutral towards (i.e., so they are indifferent to the dispersion of possible payoffs around a fixed mean).”
— Colin Camerer (Behavioral Game Theory, p.40-1)
Why? If there’s anything my financial mathematics prof has tried to stress to us repeatedly, it’s that rational, reasonable people are risk-averse. I don’t know whether he’s right or wrong, but I’d say risk-neutrality is certainly something economists should be testing thoroughly. Of course, this book is ten years old. Things might’ve changed by now, but for a behavioural economics book to state that as fact is a little… uncool, I think.
… Especially since he goes on to talk about binary lottery procedures that are supposed to induce risk-neutrality. If you have to control for participants’ preferences like that and you still get only weak evidence that it makes people any less risk-averse/more risk-neutral, maybe the assumption is the problem.
I dunno. Just saying.
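For what it’s worth, the logic behind the binary lottery procedure Camerer mentions is easy to verify in a toy model: pay subjects in probability points toward one fixed prize rather than in money. Then expected utility is linear in points whatever the curvature of u, so a rational subject should treat points risk-neutrally. A sketch, where the prize size and the two utility functions are arbitrary illustrations:

```python
import math

def expected_utility(points, u, max_points=100, prize=50.0):
    """EU when earning `points` out of `max_points` gives a
    points/max_points chance of winning `prize` (else nothing)."""
    p_win = points / max_points
    return p_win * u(prize) + (1 - p_win) * u(0.0)

concave = math.sqrt              # a risk-averse utility over money
convex = lambda x: x ** 2        # a risk-seeking utility over money

# EU is linear in points for both utilities: the EU of the midpoint
# (50 points) equals the average EU of 20 and 80 points, i.e. there is
# no curvature (no risk attitude) over points.
for u in (concave, convex):
    mid = expected_utility(50, u)
    avg = (expected_utility(20, u) + expected_utility(80, u)) / 2
    print(abs(mid - avg) < 1e-9)
```

Whether real subjects actually behave this way is exactly the empirical doubt raised above; the induction argument assumes people reduce compound lotteries correctly, which they often don’t.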
Nudge From the Underground
I saw a great example of ‘nudge’ the other day when I was on the Tube. A rather rotund woman got on. She was wearing a ‘Baby on board’ badge. Thus - I deduced - she was pregnant!
LU (London Underground) came up with the badges after research showed that, of the public, “92% thought you should offer the seat to a pregnant woman without having to be asked”, but “only 16% of pregnant women had been offered a seat”.
The concept of ‘nudge’, propounded by behavioural economists Thaler and Sunstein, means any small feature in the environment that attracts our attention and alters our behaviour. Two components of the nudge are: state the obvious, and don’t force people. These badges are a good example of the nudge in practice. They’ve actually been around since 2005, so I’m surprised not to have seen them before.
What I like about nudges is how they can bring about such a disproportionately large effect. A rather humorous but compelling example: etching the image of a fly onto a urinal reduced spillage by 80%, because it gave men something to aim for!
In business and marketing terms, it makes me think that for a modest outlay, you could end up greatly improving conditions for your customer and generating a lot of goodwill and buzz around your brand.
Irrational? Who me?
“Know Thyself” -Socrates
A 2nd post in as many weeks. W00t! Does that make it a habit yet? From past experience, I probably shouldn’t get too carried away just yet… :)
So I debated titling this post ‘Behavioural Economics’. But I figured it might be better to hold off on that label, for those of you who might already be jaded by it. I hope you’ll come away with a bit of a new perspective at the end of this post.
In the last few months, I’ve been reflecting a lot on the following Bible verses (Romans 7:15-24):
15 I don’t really understand myself, for I want to do what is right, but I don’t do it. Instead, I do what I hate. 16 But if I know that what I am doing is wrong, this shows that I agree that the law is good. 17 So I am not the one doing wrong; it is sin living in me that does it.
18 And I know that nothing good lives in me, that is, in my sinful nature. I want to do what is right, but I can’t. 19 I want to do what is good, but I don’t. I don’t want to do what is wrong, but I do it anyway. 20 But if I do what I don’t want to do, I am not really the one doing wrong; it is sin living in me that does it.
21 I have discovered this principle of life—that when I want to do what is right, I inevitably do what is wrong. 22 I love God’s law with all my heart. 23 But there is another power within me that is at war with my mind. This power makes me a slave to the sin that is still within me. 24 Oh, what a miserable person I am! Who will free me from this life that is dominated by sin and death?
“Behavioral game theory is about what players actually do. It expands analytical theory by adding emotion, mistakes, limited foresight, doubts about how smart others are, and learning to analytical game theory…. Behavioral game theory is one branch of behavioral economics, an approach to economics which uses psychological regularity to suggest ways to weaken rationality assumptions and extend theory.”
— Colin Camerer (Behavioral Game Theory, 2003)