
Toggle's Bloggle

@togglesbloggle / togglesbloggle.tumblr.com

Mors stupebit et natura, Cum resurget creatura

Hello! Here are some of my beliefs:

  1. Passionate curiosity and a commitment to truth make us more empathetic, more kind, and more fully human- not less.
  2. The words we share with one another are the most dangerous and deadly tools at our disposal; for this reason, open speech and free expression should be everyone's birthright, rather than hidden away in the vaults of a powerful few.
  3. The essence of community is in the mediation of conflict, not the suppression of dissent.
  4. Names are a system of control, albeit a good and important one; pseudonyms and masks are the necessary counterweight, and without them we cannot make the messy transition between who we are and who we are becoming.
  5. The sacred need not be sacrosanct.
  6. The questions we're most afraid of tend to provide the answers we most need.

leaves etymology trivia on your doorstep, rings the bell, and runs off

"grotesque" is etymologically equivalent to "grotto-esque"

Oh man, you can't just leave it at this. The story here is fascinating!

"Grotesque" in the modern usage is "horrifying, ugly, twisted." But if you go back a few stages, it was actually a distinct art style, one popular in the renaissance. So you can often go into Florentine villas and what have you, and see stuff like this:

It's pretty easy to seem vaguely fancy by correctly identifying the grotesque style- it's the one with a pale or white background, a ton of unconnected, fiddly little figures all doing random stuff, and the occasional really weird looking thing. Usually done in plaster, very occasionally just painted.

The jump from here to our modern sense of 'grotesque' makes a certain amount of sense- the characteristic bizarre chimeras and other twisted figures scattered among the rest are very distinctive, and over time, people started referring to scary monsters or physically disabled people as evoking the grotesque style. It was 'grotesque, as in, would seem at home in a fresco that also has a swan with a human baby head and a dude with insect wings coming out his ears.'

Why was this style popular in the renaissance, you ask? Well!

The renaissance, per the name, was very fond of evoking Rome, especially in an artistic sense. Renaissance, as in re-birth, as in "Rome's back, baby." Dark ages over, civilization back on track. We're powerful like the ancients were, so probably you shouldn't try to conquer my city, I bet that would turn out really bad for you.

The fad for grotesque art was a part of that revival- the original grotesque pieces were Roman, a style the renaissance copied just as they copied sculptures and architectural styles:

Okay, yes, revival of ancient Rome. But why 'grotto'? Well! Grotesques, in the original Roman sense, were especially common in bathhouses and so on- before time and ignorance gave them mystique, it turns out that grotesques were mostly for bathroom decoration.

Baths are on the bottom floor, and often particularly stable in their construction.

As it turns out, soil can accumulate pretty dramatically during the millennium or more between the Roman empire's heyday and the renaissance. Give it a thousand years, and that bottom floor is quite firmly buried. So it turns out to have been a not-uncommon experience in the late medieval era to walk into a literal cave, and find that despite the weathered stone entrance, the grotto inside was in fact man-made, both carved and decorated with these strange little people and animals, enacting scenes from a long-lost culture.

"Grotesque": the monsters we find in the grottos, characters and stories from an empire so fallen that it's literally become part of the cave system.

Pretend, for example, that you were born in Chicago and have never had the remotest desire to visit Hong Kong, which is only a name on a map for you; pretend that some convulsion, sometimes called accident, throws you into connection with a man or a woman who lives in Hong Kong; and that you fall in love. Hong Kong will immediately cease to be a name and become the center of your life. And you may never know how many people live in Hong Kong. But you will know that one man or one woman lives there without whom you cannot live. And this is how our lives are changed, and this is how we are redeemed.

What a journey this life is! Dependent, entirely, on things unseen. If your lover lives in Hong Kong and cannot get to Chicago, it will be necessary for you to go to Hong Kong. Perhaps you will spend your life there, and never see Chicago again. And you will, I assure you, as long as space and time divide you from anyone you love, discover a great deal about shipping routes, airlines, earthquakes, famine, disease, and war. And you will always know what time it is in Hong Kong, for you love someone who lives there. And love will simply have no choice but to go into battle with space and time and, furthermore, to win.

—James Baldwin, The Price of the Ticket

It's interesting to me how much people struggle to intuit differences of scale. Like, I've had years of geology training, thinking about very large subjects, and I'm still only barely managing it around the edges.

The classic one is, of course, the mantle- everybody has this image of the mantle as a sort of molten magma lake that the Earth's crust is floating on. Which is a pedagogically useful thing! Because the intuitions about how liquids work- forming internal currents, hot sections rising, cool sections sinking, all that- are all dynamics native to the Earth's mantle. We mostly talk about the mantle in the context of those currents, and how they drive things like continental drift, and so we tend to have this metaphor in mind of the mantle as a big magma lake.

The catch, of course, is that the mantle is a solid, not magma. It's just that at very large scales, the distinction between solids and liquids is... squirrely.

When cornered on this, a geologist will tell you that the mantle is 'ductile'. But that's a lie of omission. Because it's not that the mantle is a metal like gold or iron, what we usually think of when we talk about ductility. You couldn't hammer mantle-matter into horseshoes or nails on an anvil. It's just a rock, really. Peridotite. Chemically it's got a lot of metal atoms in it, which helps, but if you whack a chunk of it with a hammer you can expect about the same thing to happen as if you whacked a chunk of concrete. Really, it's just that any and every rock is made of tons and tons of microcrystal structures all bound together, and the boundaries between these microcrystals can shift under enormous pressure on very slow timescales; when the scope of your question gets big enough, those bonds become weak in a relative sense, and it becomes more useful to think of a rock as more like a pile of gravel where the pebbles can shift and flow around one another.

The blunt fact is, on very large scales of space and of time, almost everything other than perfect crystals starts to act kind of like a liquid- and a lot of those do as well. When I made a study of very old Martian craters, I got used to 'eyeballing' the age based on how much the crater had subsided, almost exactly like the ways that ripples in the surface of water gradually subside over time when you throw a rock into a lake. Just, you know. Slower.

But at the same time, these things are more fragile than you'd believe, and can shatter like glass. The surface of the Earth is like this, too. Absent the kind of overpressures that make the mantle flow like it does, Earth's crust is still tremendously weak relative to many of the planet-scale forces to which it is subject- I was surprised, once, when a professor offhandedly described the crust as having a tensile strength of 'basically zero;' they really thought of the surface as a delicate filigreed bubble of glass that formed like a thin shell, almost too thin to mention, on the outside of a water droplet. On human scales, liquid is the thing that flows, and solid is the thing that breaks. But once stuff gets big or slow or both, the distinction between a solid and a liquid is more that a liquid is the thing that doesn't shatter when it flows. And it all gets really, really vague, which I suppose you'd expect when you get this far outside the contexts in which our languages were crafted.
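The 'solid that flows on long timescales' intuition above has a standard quantitative handle: the Maxwell relaxation time, the ratio of a material's viscosity to its shear modulus. On timescales much shorter than this, the material behaves elastically; on much longer ones, it flows. A minimal sketch, using commonly cited order-of-magnitude values for the upper mantle (the specific numbers are my illustrative assumptions, not from the post):

```python
# Maxwell relaxation time: the crossover timescale beyond which a
# viscoelastic material responds more like a fluid than a solid.
#   tau = eta / mu   (viscosity / shear modulus)

SECONDS_PER_YEAR = 3.156e7

def maxwell_time_years(viscosity_pa_s: float, shear_modulus_pa: float) -> float:
    """Relaxation timescale in years for a viscoelastic material."""
    return viscosity_pa_s / shear_modulus_pa / SECONDS_PER_YEAR

# Order-of-magnitude values often quoted for the upper mantle:
MANTLE_VISCOSITY = 1e21      # Pa·s
MANTLE_SHEAR_MODULUS = 7e10  # Pa

tau = maxwell_time_years(MANTLE_VISCOSITY, MANTLE_SHEAR_MODULUS)
print(f"mantle Maxwell time ≈ {tau:.0f} years")
# Seismic waves (seconds) see a rigid solid; continental drift
# (millions of years) sees a slowly convecting fluid.
```

With these inputs the crossover comes out to a few centuries- which is why the same mantle can ring like a bell for a seismometer and churn like a lava lamp for plate tectonics.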

The sixteen stories that make up the Loom of Hours occupy a unique place in world literature. Though their respective foci may appear familiar to the reader, the alienating reality of the societies depicted therein inevitably comes to the fore, providing a window into Illapartian perspectives on the myriad cultures which the civilization of the Hours encountered.
The Loom of Hours did not start out as a literary tradition, but as an oral one. What may seem on the surface to be children’s tales of kings and prophets were originally fables told by a griot, a bard-like elder who would provide advice to their ruler through ballad or song. Instead of providing a dualist moral ‘solution’ to a problem, each fable would instead touch on predetermined aspects of a certain Hour, leaving the story open to richer and less straightforward interpretations.

Over time, as writing came to dominate larger tracts of society, griots were replaced by encyclopedic collections of stories, with hundreds of finely nuanced variations corresponding to a single Hour. While several entries of the more popular Hours remain extant today, many of the Hours are represented by only a single story in the sole surviving Loom cycle, assembled in the Late Kingdom era by a courtier named Runao, as a gift for the new king. This “Splendid Cycle”, famously translated by Richard Burton in 1888, is what most readers are familiar with today when talking about the Loom of Hours.

It is much debated according to what scheme Runao chose the different stories of the Splendid Cycle, or even if he understood their subtleties at all - perhaps he was looking for entries that would play up certain virtues of his, or perhaps he sincerely did not view them as anything more than children’s tales. Whatever the case, each story highlights its specific Hourly aspects with aplomb, and is capped with a short commentary by Runao identifying what he believes are the most crucial lessons to be learned therefrom.

On reflection, I'm impressed and grateful that Link ended up being such a pop culture phenomenon, and I think it's one of the more unlikely media occurrences of this particular timeline. The archetype of the knight-errant could easily have fallen out of cultural primacy; it's so ancient and culturally loaded that most attempts to use that kind of hero are going to end up deconstructionist, ironic, or at least self-conscious.

But Link is non-western and avoids the 'shining armor' look, and the games are often very good, which are collectively the secret sauce you need to get away with it despite the difficulties. (And probably it helps that the games cultivate younger audiences, so that most of us meet Link before we have a chance to get jaded about stories.) With the Zelda games, we have at least one character at the apex of our pantheon that nails it right on the head without any kind of awkward layer of critical pseudo-irony. Just a guy with a magic sword, a noble steed, and an oath of courage, on a quest to rescue the princess.


I agree it's not self-consciousness, but I think it's notable that in addition to being a knight errant Link is a silly little guy. Perhaps it achieves the diffusion of the cultural loading without the insincerity of ironic detachment by *sincerely* being a silly little guy.

Oh agreed, the silly little guy factor is doing a ton of work here.

-Link doesn’t receive deference, excising the whole feudal apparatus from the knight archetype and making for more interesting quest lines in villages and towns,

-Makes goofy noises when he takes damage instead of manly grunting, so playing doesn’t feel painful, making persistence and risk-taking more fun and not just grimly heroic,

-Sexless enough that the question of his relationship with Zelda can be largely ignored, which lets them keep doing the princess-rescuing quests without awkward questions like you get with Bowser and Peach; Link can be genre-appropriately chaste but in an understated way,

-Justifying a lot of dithering, lack of monomaniacal focus on the quest, and otherwise giving the player permission to goof around, play games, and explore,

-Inviting non-evil characters to show their kindest (though not most helpful) face, as one does for children, endearing a broad cross-section of the world’s people to the player and integrating him into their community instead of making them mere peasants for Link to save,

and so on. I assume it was originally there just to make sure that a younger audience could inhabit and empathize with Link, but when you give it a think it ends up doing a huge amount to buttress his role as a knight in a modern egalitarian culture that’s not as well suited for it, and to make the thing work in an interactive medium.


How Lutessa Found A King

I.

Fair Lutessa was not always the cradle of kings, and before its austere temples and marble effigies came to dominate the horizon, Lutessa was a village just like yours or mine. But the Wheel turns, and Lutessa’s idyll shattered as she grew. Her people came to bicker and fight more and more often amongst themselves, over even the smallest of things! Things such as: whether this man had sold his neighbor the right amount of flour, or whether the miller’s children had pushed the blacksmith’s children into the marshes, or whether the inn-keeper’s wife was allowed to dine with the tavern-keep. Since Lutessa had yet no magistrate or ruler to call her own, the townspeople, fresh from a brawl, clamored for adjudication at the doors of the Martyr-God’s prophet.


Kinwa and the Engineer

The Matubon people of Slow-Lake have a strange custom, which is that they consider teaching taboo, and refuse to mark symbols, deal in numbers, and do all but the most passive apprenticeship; for the only true knowledge is that which was wrested from the lions of one’s mind - all else is lies and the whispers of wicked men.

“Poverty is that state and condition in society where the individual has no surplus labour in store, or, in other words, no property or means of subsistence but what is derived from the constant exercise of industry in the various occupations of life. Poverty is therefore a most necessary and indispensable ingredient in society, without which nations and communities could not exist in a state of civilization. It is the lot of man. It is the source of wealth, since without poverty, there could be no labour; there could be no riches, no refinement, no comfort, and no benefit to those who may be possessed of wealth.” – Patrick Colquhoun

I started reading some of Orwell’s nonfiction essays recently.  “The Spike” isn’t my favorite so far- that honor probably goes to “A Hanging”, although I’m still reading- but it got me doing a Wikipedia dive about British workhouses and that in turn gave me the quote.

It struck me mostly because it’s one of the most direct and blunt ways I’ve seen this argument made in the first person.  That is, one often sees this point of view imputed to people that hold capital in the modern era, but it’s always shocking how explicit people could be about it during the early industrial revolution, around the era that gave Polanyi his “Great Transformation.”  Near as I can tell, this isn’t a weak-man argument; the belief in poverty as load-bearing was common enough to express itself in legal policy, and possibly even correct to boot.

The other thing I learned from the Wiki dive is that workhouses themselves (or at least, the system of legal obligations that would mature into them) date from a similar attempt to control and channel human skill at the expense of the skilled:

“The Poor Law Act of 1388 was an attempt to address the labour shortage caused by the Black Death, a devastating pandemic that killed about one-third of England’s population. The new law fixed wages and restricted the movement of labourers, as it was anticipated that if they were allowed to leave their parishes for higher-paid work elsewhere then wages would inevitably rise… The resulting laws against vagrancy were the origins of state-funded relief for the poor.”

That is, in response to growing wages, a law was created to keep skilled workers in their place both figuratively and literally.  Relief for poverty was a knock-on; not strictly necessary, but if you won’t let people leave to find work, it’s probably smart to give them food at least.  The balance of power eventually swung back towards the nobility, but the workhouses themselves just persisted from century to century, reinventing themselves with new justifications well into the 20th century.

A friend of mine grew up in a town with an old workhouse that had been recommissioned as an old folks’ home.  When she was a child, she’d run as she passed it- the shadow of the building was bad luck.

No thesis I think, but I want to write it down.  Catch some of these feelings in amber before I move on to Orwell’s other essays.

I do wonder if the connotations of the word poverty have changed, here? It seems to me that Colquhoun is not describing what we think of as poverty, but rather the state of not being a rentier. There is no contradiction between being upper-middle-class in terms of material possessions and lifestyle, and having no investment income and thus being ‘forced’ to work as Colquhoun outlines. But a software engineer who spends all his income is hardly poor in the modern sense. (Not very smart, obviously, but that’s a separate issue.) Yet in Colquhoun’s sense he is indeed living in poverty, while being far wealthier than anyone alive in Colquhoun’s time!

And in this sense it does seem to me that the 'poverty is needed’ argument is stronger. You still cannot make absolutely everyone a rentier. (With present technology, that is.) There just aren’t enough resources for a livable UBI for everyone, even in the US. (Yet. Growth mindset, by all means.) You might be able to arrange things so everyone can retire on their investments after a certain age, but you’re still going to have someone doing the work that generates the real income those investments are a claim on.

Ran across this comment again after I think like two years? So this is not a reply so much as an opportunity to think out loud, and keep putting pressure on these ideas in my head and see how they shift. In particular, I think the big thing that changed since I wrote the OP and read this reply last time is that I started reading a lot more Henry George, so take that for what it's worth.

In any case, I think this comment is very on-point in that Colquhoun, above, is absolutely equating 'not being a rentier' with 'being impoverished'. Likely, I think, because he was only writing at the very beginning of the industrial revolution, and an educated/specialist middle class basically didn't exist in 1800. But I don't think you can say the connotations of 'poverty' have changed all that much; rather, what's changed is the idea that a laborer could achieve anything other than subsistence, with any real surplus just being extracted by rentiers and capitalists. (That is, by holders of capital, not in the ideological sense.) Remember that this is the same era in which Malthus lived and theorized!

'Labor' here, as an economic construct, means basically a pile of undifferentiated human flesh that can be flexibly used in the same way that we'd use programmable robots today- plonked in a factory line, given basic instructions, and told to repeat those instructions indefinitely. Capital, in this equation, was the store of value from which this pile of flesh is provided shelter and nutrition- and in fact, per Malthus, this flesh-pile will in fact grow to the capacity set by capital rather than build savings as an individual might. The construct was very 'ecological' in that way. Charitably, the Flynn effect hadn't happened yet, so I think it was probably at least marginally easier to think this way without being a cartoon villain.

The difference is, I suppose, one of the great unanticipated triumphs of industrialism- the discovery that humans at all economic strata are in fact persons, both educable and agentic, and that Malthus can in fact go right to hell.

There are a lot of structural forces now in play that genuinely act to preserve the 'non-subsistence labor' class, some enshrined in law and some encoded in the needs of the modern economy itself. At the same time, I think unskilled labor is still effectively in the same boat as it was in Colquhoun's day, and the legal and economic advantages enjoyed by skilled labor aren't strong enough to prevent Walmart shelvers and Amazon warehouse packagers from reverting over time to the most base level of subsistence possible within their host nation's welfare system and tech level.

This is, of course, where the Georgist nugget kicks in. For all that we may not have the level of ambient wealth needed to support our entire population comfortably on a living-wage UBI, it's undeniable that as a civilization we're orders of magnitude more wealthy than previous generations- and by the same token, it's equally undeniable that a shitty apartment in Portland or New York in 2023 demands a greater store of wealth from its tenants than Colquhoun himself ever laid claim to in his whole damn life. If our average rent today was "the amount of wealth commanded by a day laborer in 1800", it would be effectively free!

The value of (especially urban) land- not improvements or construction, mind, just the price of an empty lot- grew hand in hand with the wealth and technology we created throughout the industrial revolution, as did the value of other unmodified natural resources like water, precious metals, and even sunlight. And they seem very likely to continue doing so as we fiddle our way through the full symphony of human technological progress; the greater our arts, the more opportunities we'll see in the world around us. That created value genuinely is collective, as few other things are, and even if a UBI can't get all the way to a living wage (yet!), then distributing those gains widely would still go a really long way towards allowing unskilled workers to escape subsistence, or (as they prefer) to work far fewer hours in order to achieve it, even if they aren't educated specialists benefiting directly from employment in O-ring production networks. Monopolization of natural resources really does seem to be a huge contributor to subsistence poverty in technologically modern states.

In other words, my response to "you can't make everybody a rentier," is "sure you can! You can literally make 'everybody' the beneficiary on rents extracted from monopolies on basic natural resources, dividing them equally and impartially among the whole population. And frankly that seems like a great plan, even if it doesn't end the need for labor as such, because it does so much to alleviate the misery of poverty."
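The arithmetic behind 'make everybody a beneficiary on resource rents' is simple enough to sketch. Every figure below is a hypothetical placeholder chosen purely for illustration- not an estimate of actual land rents or any real proposal:

```python
# Toy sketch of a citizen's dividend funded by rents on unimproved
# natural resources (land value, minerals, spectrum, and so on).
# All inputs are hypothetical placeholders, not real estimates.

def annual_dividend_per_person(total_resource_rents: float,
                               population: int) -> float:
    """Split collected rents equally and impartially across everyone."""
    return total_resource_rents / population

# Hypothetical inputs: $2 trillion/year in collected rents,
# shared among a population of 330 million.
dividend = annual_dividend_per_person(2e12, 330_000_000)
print(f"≈ ${dividend:,.0f} per person per year")
# Far short of a living wage on its own, but a real floor under
# subsistence- which is the post's more modest claim.
```

The point of the sketch is just that the mechanism is a flat division, not means-testing: everyone holds an equal beneficial share in the monopolized commons, whatever the pot turns out to be.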


The Trials of Arteama

When Arteama turned fourteen, she snuck out of the temple unsupervised (something that, she would later learn, was a necessary part of her ascension ritual). The streets of Vimvi were still waking, none of the usual bustle yet to be found. Arteama, wending her way through the near-empty alleys and boulevards, felt as if the morning sun was embracing her in a cloak of divine safety, within which nothing could harm her. Eventually, she would learn that the penalties meant nobody would dream of harming a novice of the Wandering Serai in the first place. But she knew none of that at the time, and was amazed at the impunity with which she wandered the city.

So I got the 'rona (sad!), but am back mostly on the upswing (yay!) and am in that incredible Flowers for Algernon moment where I mostly don't have a fever any more but can still appreciate really bad tv and can watch it guilt-free while I heal up. Right now it's "The Imperfects" but I am Open To Suggestions.

The correct choice here ended up being reading the (first half of the) Antimemetics Division SCP fiction as reviewed by @brazenautomaton, and let me tell you, literal fever dreams about this subject are incredible. I'm not even mad, I spent all night having the kind of nightmares a person could wait years for.


Free Will is a Value Statement

When I was a kid, we had a dog.  It didn’t go well.

This particular dog- one of several in my childhood, and the only time it went awry- loved us very much, and we loved him too.  But when it came to strangers, he was very aggressive, and very dangerous, and not fully under our control.  We’d have to lock him up when there were visitors to the house, and even then it was less ‘barking’ and more ‘baying of hounds’, and unlike some animals he didn’t suddenly turn nice when he was in the same room with them.  And he was large, much too large for this to be safe.  Things came to a head when my mom was taking him for a walk and he started threatening a small kid playing in their own yard, and she came back terrified that if he ever got out, somebody would be badly hurt.

I remember quite clearly the conversation where my parents told me we couldn’t keep him.  They’d made the unfortunate choice to feed me cookies at the same time, to make the bad news go down easier; the net result is that there’s a specific brand of cookies that, to this day, I still can’t eat.  They just turn to ashes in my mouth.

(The good news is that, against all odds, it seems the ‘farm upstate’ that they sent him to was actually real.  They literally saved the receipts, so that when I got old enough to realize what that kind of story usually meant, they could give me proof that they hadn’t lied.  He did live what I believe to be a happy life in what was, more or less, a wild animal sanctuary.  Not all dangerous animals are so lucky, but sometimes, they are.)

The reason to dredge this up is to notice how unthinkable it was for any of us to call him ‘evil.’  Even when he was straining at the leash as hard as he could, snarling and growling at a three-year-old, he wasn’t evil.  ‘Dangerous’, yes.  ‘Violent’, certainly.  But not that, not ever.

And that’s how it works, right?  We recoil at using the E-word for pets, young children, anyone that’s enough weaker than we are.  Evil-as-an-adjective is for peers and superiors, things which present a genuine threat to us.  You can watch this change for the natural world in real time- us moderns watch nature documentaries about predators avidly, and not as horror films, but our received culture still has ancient fairy tales about the ‘big bad wolf’ that date from before our conquest of Earth’s ecosystems.  What a difference a little power makes!  What was once a real and imminent fear, and a central figure in the atlas of evil, has withered away to a narrative archetype with no material referent, while the wolves themselves become objects of admiration and wonder, or a focus of conservation efforts, in direct proportion to our own sense of security against them.

And maybe you’re not the sort of person who thinks about evil much at all, which is honestly a pretty good strategy most of the time.  It can often obstruct thinking more often than it clarifies.  But even if you don’t, I’ll bet you still think about ‘justice’ a fair bit- and that follows the same rules, for about the same reasons.  The punitive and remunerative kinds of justice, anyway.  Was it some kind of punishment, to have that part of my family broken away when I was a child?  Was my dog’s loss and confusion something he deserved?  Of course not.  It was just- disharmony, I suppose.  We couldn’t find a way to put the world right, and so we suffered instead.

And yet when we reach a certain level of direct personal injury or threat of injury, especially by human causes- political enemies, alien people, angry mobs- then, almost without fail, we find ourselves reaching for this idea of justice.  (And if you wrong us, shall we not revenge?) Show me, anywhere in the world, where a person has in all sincerity called for justice- and I’ll show you someone who feels weak.

Now, I can point at sentences like ‘my dog was not evil,’ and it should be pretty clear that I’m making a value statement, rather than expressing mundane factual belief in the same mold as ‘grass is green.’  That is, I’m not disputing any mechanism of action, or trying to explain why events occurred as they did.  I’m not giving you information you could use to prevent this from happening to you too, much as I hope you can.  I’m telling you how I feel, about what I want, about who I am.  I’m telling you about my grief.

Loosely speaking, you can imagine beliefs falling along a spectrum.  Don’t take this typology too seriously, it’s just a useful distinction to make for present purposes.  The first extreme of our spectrum is just the observational set of beliefs- the ‘sky is blue, grass is green’ category.  These are especially good for making plans that work, since they model a system that we usually want to work with in some capacity.  If you don’t want to fall off a cliff, it helps to have a good map.  The second type is imperatives or value statements, beliefs about how to direct our efforts.  ‘Murder is bad’ is a belief like any other, but instead of telling us how to accomplish a goal, it tells us what goals we ought to have and what ends we should work towards.  (Moral realists will think of this second category as being a subset of the first; that’s perfectly reasonable but orthogonal to my point.)  Both types of belief are absolutely necessary for acting in the world: the means and the end, if you like.

Here’s where I reveal my thesis:  When, honestly, was the last time you used the concept of free will to make a plan?

“People have free will” sure feels like a factual belief, from the inside.  It’s a description of who we are, right?  Like saying we usually have two legs, like saying the Earth goes around the Sun?  Only… it isn’t doing any of the things I do with factual beliefs.  It doesn’t make predictions, it doesn’t expand my capacity to act on the world.  If anything, ‘free will’ as a concept has a weird twisty negative definition (often something like ‘nonrandom indeterminacy’) that resists analysis of the reductive kind we usually use for this sort of thing.  

And if we look at how it’s positioned in the grand constellations of human thought, it’s awkwardly conjoined with a lot of the other things I’ve been talking about here.  Good, evil, justice.  I use my belief in free will a lot when I’m talking about culpability or praiseworthiness, when I’m deciding what to act towards, when to cheer and when to boo.  

I use it when I’m feeling weak.

Or, less personally, think about where ‘free will’ crops up in our court system.  And it does, in more than a few guises.  For example, altered states that compromise our volition are taken into account, and might even qualify as fully mitigating circumstances that tell the court not to punish the transgression.  (“I was not negligent on that construction site, your honor, I’m a diabetic and I was having a blood sugar crash.”)  In other cases, such as in murder charges, malice aforethought or careful planning of the crime might make the sentence harsher, whereas a crime ‘of passion’ might net fewer years in prison.  (First-degree versus second-degree murder.)  What all of these have in common, notably, is an assessment of culpability, relevant to the question of how strongly the community wants to punish or condemn the situation.  But when it comes to the presentation of evidence, the chain of material observations that we use to establish confidence in the story of ‘what happened’, we invoke ‘motive’ instead- that is, we ask what benefits, inducements, insults, or other circumstances might have led the defendant to commit the act.  “Your honor, the accused is endowed with free will and is capable of choice,” is, notably, not considered sufficient to establish motive- but “your honor, the defendant was listed in the victim’s will as a primary beneficiary, and they were seen to have a large argument two days before the murder,” very much is.  Interesting discrepancy, no?  When we ask whether we should condemn others or show mercy, we care deeply about the defendant’s capacity to exercise free choice.  But when we ask material questions about what happened, trying to get a clear picture of the world as it is, we instead ask where the defendant is positioned in a causal web of material and social circumstances.

It’s hard, really hard, to reliably tell when our beliefs are about facts, describing things other than ourselves, and when they’re doing something else, paying rent in other ways.  But I notice, when I was a little kid crying in the car, I never once asked whether any of this was my dog’s fault.  It’s not that I didn’t know whether he had free will or not; it’s that it didn’t occur to me to ask.  I asked if it was my fault, certainly.  I’m sure my parents did too.  But we never asked if it was his, whether he’d decided to be this way.  That’s just not what ‘free will’ as a concept was for.

So, am I saying there’s “no such thing as free will” in the sense that I’m saying humans are fully deterministic and mechanistic?  Nah, not really.  To reiterate: I’m not saying that I have any confidence whatsoever that humans are deterministic, mechanical agents.  I think there’s plenty of room for consciousness to complicate the story of causality in ways I can’t anticipate; there’s every chance that human brains aren’t just billiard balls bouncing around in a universe running on linear algebra or whatever.  But I don’t think that ‘free will’ as currently discussed is in any sense an alternative to that model, either.  What I’m trying to say is that ‘free will’ isn’t really a claim about what the world is like at all.

The opposite of a belief in free will isn’t ‘I assert humans are chemical robots governed by deterministic electrochemical reactions’.  Instead, the opposite is ‘I am not angry at you for hurting me.’  Free will is a value statement.

Remember that ‘rate my dog’ account, where the central joke was that all the dogs got scores of like 12/10 or whatever?  And the punchline to it all, when somebody tried to call them out on the uselessness of a rating system that always stayed maxed out: “They’re good dogs, Brent.”  If I were at a high enough perch- strong enough, wise enough, safe enough- then that same optimism, I think, is the only part of my need for justice that would survive.  True power doesn’t rank humans from best to worst, or spend time blaming us for outcomes that cause suffering to ourselves or to others.  It doesn’t need to.