
Every intellectual has a very special responsibility. He has the privilege and the opportunity of studying. In return, he owes it to his fellow men (or ‘to society’) to represent the results of his study as simply, clearly and modestly as he can. The worst thing that intellectuals can do - the cardinal sin - is to try to set themselves up as great prophets vis-à-vis their fellow men and to impress them with puzzling philosophies. Anyone who cannot speak simply and clearly should say nothing and continue to work until he can do so.
—  Karl Popper 1994: Against Big Words

Karl Popper, as quoted by David Deutsch in The Beginning of Infinity:

I think that there is only one way to science – or to philosophy, for that matter: to meet a problem, to see its beauty and fall in love with it; to get married to it and to live with it happily, till death do ye part – unless you should meet another and even more fascinating problem or unless, indeed, you should obtain a solution.

There you have it. If you like that problem so much, why don’t you marry it?

Free Will & the Fallibility of Science

One of the most significant intellectual errors educated persons make is underestimating the fallibility of science. The very best scientific theories, containing our soundest, most reliable knowledge, are certain to be superseded, recategorized from “right” to “wrong”; they are, as physicist David Deutsch says, misconceptions:

I have often thought that the nature of science would be better understood if we called theories “misconceptions” from the outset, instead of only after we have discovered their successors. Thus we could say that Einstein’s Misconception of Gravity was an improvement on Newton’s Misconception, which was an improvement on Kepler’s. The neo-Darwinian Misconception of Evolution is an improvement on Darwin’s Misconception, and his on Lamarck’s… Science claims neither infallibility nor finality.

This fact comes as a surprise to many; we tend to think of science —at the point of conclusion, when it becomes knowledge— as being more or less infallible and certainly final. Science, indeed, is the sole area of human investigation whose reports we take seriously to the point of crypto-objectivism. Even people who very much deny the possibility of objective knowledge step onto airplanes and ingest medicines. And most importantly: where science contradicts what we believe or know through cultural or even personal means, we accept science and discard those truths, often wisely.

An obvious example: the philosophical problem of free will. When Newton’s misconceptions were still considered the exemplar of truth par excellence, the very model of knowledge, many philosophers felt obliged to accept a kind of determinism with radical implications. Given the initial state of the universe, it appeared, we should be able to follow all particle trajectories through the present and account for all phenomena through purely physical means. In other words: the chain of causation from the Big Bang on left no room for your volition:

Determinism in the West is often associated with Newtonian physics, which depicts the physical matter of the universe as operating according to a set of fixed, knowable laws. The “billiard ball” hypothesis, a product of Newtonian physics, argues that once the initial conditions of the universe have been established, the rest of the history of the universe follows inevitably. If it were actually possible to have complete knowledge of physical matter and all of the laws governing that matter at any one time, then it would be theoretically possible to compute the time and place of every event that will ever occur (Laplace’s demon). In this sense, the basic particles of the universe operate in the same fashion as the rolling balls on a billiard table, moving and striking each other in predictable ways to produce predictable results.
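The billiard-ball picture is easy to make concrete. Here is a toy sketch of my own, not drawn from any of the quoted sources: give a “particle” its complete initial state and a fixed law of motion, and every later state follows by mere computation, with nothing left over for anyone to decide.

```python
# Toy illustration of the deterministic "billiard ball" picture: complete
# knowledge of the initial state plus a fixed law yields the entire future.

def step(state, dt=0.01):
    """Advance one ball bouncing in a 1-D box [0, 1] by a single time step."""
    x, v = state
    x += v * dt
    if x < 0.0 or x > 1.0:          # elastic bounce off the walls
        v = -v
        x = min(max(x, 0.0), 1.0)
    return (x, v)

def evolve(initial_state, steps):
    state = initial_state
    for _ in range(steps):
        state = step(state)
    return state

# The same initial conditions always yield the same future.
print(evolve((0.2, 0.7), 10_000))
print(evolve((0.2, 0.7), 10_000))   # identical output, every single run
```

Scale that loop up to every particle in the universe and you have Laplace’s demon.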

Thus: the movement of the atoms of your body, and the emergent phenomena that such movement entails, can all be physically accounted for as part of a chain of merely physical, causal steps. You do not “decide” things; your “feelings” aren’t governing anything; there is no meaning to your sense of agency or rationality. From this essentially unavoidable philosophical position, we are logically-compelled to derive many political, moral, and cultural conclusions. For example: if free will is a phenomenological illusion, we must deprecate phenomenology in our philosophies; it is the closely-clutched delusion of a faulty animal; people, as predictable and materially reducible as commodities, can be reckoned by governments and institutions as though they are numbers. Freedom is a myth; you are the result of a process you didn’t control, and your choices aren’t choices at all but the results of laws we can discover, understand, and base our morality upon.

I should note now that (1) many people, even people far from epistemology, accept this idea, conveyed via the diffusion of science and philosophy through politics, art, and culture, that most of who you are is determined apart from your will; and (2) the development of quantum physics has not in itself upended the theory that free will is an illusion, as the sort of indeterminacy we see among particles does not provide sufficient room, as it were, for free will.

Of course, few of us can behave for even a moment as though free will is a myth; there should be no reason for personal engagement with ourselves, no justification for “trying” or “striving”; one would be, at best, a robot-like automaton incapable of self-control but capable of self-observation. One would account for one’s behaviors not with reasons but with causes; one would be profoundly divested from outcomes which one cannot affect anyway. And one would come to hold that, in its basic conception of time and will, the human consciousness was totally deluded.

As it happens, determinism is a false conception of reality. Physicists like David Deutsch and Ilya Prigogine have, in my opinion, defended free will amply on scientific grounds; and the philosopher Karl Popper described how free will is compatible in principle with a physicalist conception of the universe; he is quoted by both scientists, and Prigogine begins his book The End of Certainty, which proposes that determinism is no longer compatible with science, by alluding to Popper:

Earlier this century in The Open Universe: An Argument for Indeterminism, Karl Popper wrote, “Common sense inclines, on the one hand, to assert that every event is caused by some preceding events, so that every event can be explained or predicted… On the other hand, … common sense attributes to mature and sane human persons… the ability to choose freely between alternative possibilities of acting.” This “dilemma of determinism,” as William James called it, is closely related to the meaning of time. Is the future given, or is it under perpetual construction?

Prigogine goes on to demonstrate that there is, in fact, an “arrow of time,” that time is not symmetrical, and that the future is very much open, very much compatible with the idea of free will. Thus: in our lifetimes we have seen science —or parts of the scientific community, with the rest to follow in tow— reclassify free will from “illusion” to “likely reality”; the question of your own role in your future, of humanity’s role in the future of civilization, has been answered differently just within the past few decades.

No more profound question can be imagined for human endeavor, yet we have an inescapable conclusion: our phenomenologically obvious sense that we choose, decide, change, perpetually construct the future was for centuries contradicted falsely by “true” science. Prigogine’s work and that of his peers —which he calls a “probabilizing revolution” because of its emphasis on understanding unstable systems and the potentialities they entail— introduces concepts that restore the commonsensical conceptions of possibility, futurity, and free will to defensibility.

If one has read the tortured thinking of twentieth-century intellectuals attempting to unify determinism and the plain facts of human experience, one knows how submissive we now are to the claims of science. As Prigogine notes, we were prepared to believe that we, “as imperfect human observers, [were] responsible for the difference between past and future through the approximations we introduce into our description of nature.” Indeed, one has the sense that the more counterintuitive the scientific claim, the more eager we are to deny our own experience in order to demonstrate our rationality.

This is only degrees removed from ordinary orthodoxies. The point is merely that the very best scientific theories remain misconceptions, and that where science contradicts human truths of whatever form, it is rational to at least contemplate the possibility that science has not advanced enough yet to account for them; we must be pragmatic in managing our knowledge, aware of the possibility that some truths we intuit we cannot yet explain, while other intuitions we can now abandon. My personal opinion, as you can imagine, is that we take too little note of the “truths,” so to speak, found in the liberal arts, in culture.

To understand science and its limitations, and even more the limitations of second-order sciences (like the social sciences), it is vital to consider how something can be both true and not. Newton’s laws were incredible achievements of rationality, verified by every technology and analysis for hundreds of years, before being exposed, unpredictably, as deeply flawed ideas: ideas that hold only within a limited domain, yield incorrect predictions outside it, and supply erroneous metaphorical structures for understanding the universe.

I never tire of quoting Karl Popper’s dictum:

Whenever a theory appears to you as the only possible one, take this as a sign that you have neither understood the theory nor the problem which it was intended to solve.

It is hard but necessary to have this relationship with science, whose theories seem like the only possible answers and whose obsolescence we cannot envision. A rational person in the nineteenth century would have laughed at the suggestion that Newton was in error; he could not have known about the sub-atomic world or the forces and entities at play in the world of general relativity; and he especially could not have imagined how a theory that seemed utterly, universally true and whose predictive and explanatory powers were immense could still be an incomplete understanding, revealed by later progress to be completely mistaken about nearly all of its claims.

Can you imagine such a thing? It will happen to nearly everything you know. Consider what “ignorance” and “knowledge” really are for a human, what you can truly be certain of, how you should judge others given this overwhelming epistemological instability!

At the local viewing of the transit of Venus, I asked an astronomer named Lisa how people noticed a planet going in front of the Sun in the first place. (Surely they weren’t just staring at the sun all day?)

She told me:

  1. Edmund Halley predicted the transit of Venus. He died before he could be proven right, which seems sad, but we didn’t discuss that any further. Theory preceded observation. EDIT: Apparently Jeremiah Horrocks first wrote of the transit of Venus.
  2. The first observed transit of Venus pinned down the last free parameter, allowing scientists to work out the absolute distance from Earth to the Sun. (Previously they’d only known the relative distances between planets.)
  3. I asked her the question I had formulated while watching Lawrence Krauss’ talk: how can you know, as in know-know, whether a star is bright or close?

    Her answer: astronomers make a lot of assumptions. (Ahhh, satisfaction.) In particular they assume that most stars are normal (Gaussian, not just usual). Well, that makes a lot of sense then.
  4. Nowadays another telescope is being built (thank you, government) that will triple the range within which relevant things can be seen, so we will be able to see to the centre of the Milky Way galaxy (and an equal distance in the opposite direction) – and do so very precisely.

    So precisely that we will be able to measure parallax – the difference in how stars appear in winter versus summer, when we’re on opposite sides of the Sun – and obtain precise knowledge of where many, many stars are. (Tripling the length means roughly 3³ = 27 times the volume, so more like 20-30 times more stars’ positions will be known; see the sketch after this list.)
  5. Now this is the kicker in your Popperian dirtsack. Ancient Greeks had the right theory (heliocentric solar system) but discarded it on the basis of experimental evidence!

    Never preach to me about progress-in-science when all you’ve heard is a one-liner about Popper and the communal acceptance of general relativity. Especially don’t follow it up by saying that “science” marches toward the Truth whilst “religion” thwarts its progress.

    According to Astronomer Lisa, it’s not true that the Greeks simply thought they and their Gods were at the centre of the Universe because they were egotistical. They reasoned to the geocentric conclusion based on quantitative evidence. How? They measured parallax. (Difference in stellar appearance from spring to fall, when we’re on opposite sides of the Sun.) EDIT: More by @rmathematicus, suggested by @sc_k. How did heliocentrism eventually triumph in the Renaissance?

    Given the insensitivity of their measurement tools at the time, the stars didn’t change positions at all when the Earth moved to the other side of the Sun. Based on that, they rejected the heliocentric hypothesis. (The sketch after this list shows just how small the expected shift is.)

    If the Earth actually did move around the Sun, then the stars would logically have to appear different from one time to another. But they remain ever fixed in the same place in the Heavens, therefore the Earth must be still (geocentric).
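Here is the back-of-the-envelope arithmetic promised above. The numbers are my own round figures (the distance to the nearest star, naked-eye angular resolution), not Lisa’s, but they show both why the Greeks’ null result was reasonable and where the “roughly 3³” factor comes from:

```python
AU_PER_LIGHT_YEAR = 63_241          # astronomical units in one light-year
ARCSEC_PER_RADIAN = 206_265

def parallax_arcsec(distance_ly):
    """Angle subtended by 1 AU (the Earth-Sun distance) at the star.
    The apparent shift between observations six months apart, across
    the full 2 AU baseline, is twice this value."""
    distance_au = distance_ly * AU_PER_LIGHT_YEAR
    return (1.0 / distance_au) * ARCSEC_PER_RADIAN

# Nearest star (Proxima Centauri, about 4.2 light-years away):
print(parallax_arcsec(4.2))         # ~0.78 arcseconds

# Naked-eye angular resolution is roughly 60 arcseconds (one arcminute),
# so the shift the Greeks were looking for was ~100x too small to see.

# Survey volume scales as the cube of its reach:
print(3 ** 3)                       # tripling the range -> ~27x the volume
```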

I always told this story to myself as the gradual removal of anthropocentrism from the natural order. First we learn we’re not the centre of the Universe, then we’re not the only Galaxy, we’re not the only species that falls in love, we’re evolved by chance like everyone else, and so on. But that story is wrong. It doesn’t fit this bit of the history of ideas and I bet it doesn’t fit other bits of history either. I need a new story.

What if technology makes scientific discoveries that we can’t understand?

When scientists think about truth, they often think about it in the context of their own work: the ability of scientific ideas to explain our world. These explanations can take many forms. On the simple end, we have basic empirical laws (such as how metals change their conductivity with temperature), in which we fit the world to some sort of experimentally derived curve. On the more complicated and more explanatory end of the scale, we have grand theories for our surroundings. From evolution by natural selection to quantum mechanics and Newton’s law of gravitation, these types of theories can unify a variety of phenomena that we see in the world, describe the mechanisms of the universe beyond what we can see with our own eyes, and yield incredible predictions about how the world should work.
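To make the “simple end” concrete: an empirical law of that kind is often nothing more than a curve fitted to measurements. A minimal sketch, with numbers I have invented purely for illustration (not real data):

```python
import numpy as np

# Invented measurements of a metal's resistivity at several temperatures.
temperature_c = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
resistivity = np.array([1.68, 1.85, 2.02, 2.19, 2.37])   # arbitrary units

# Fit the simplest empirical "law": a straight line through the data.
slope, intercept = np.polyfit(temperature_c, resistivity, 1)
print(f"resistivity ~= {intercept:.3f} + {slope:.4f} * T")

# The fitted curve summarizes and interpolates the measurements, but it
# explains nothing about *why* resistivity rises with temperature.
print(np.polyval([slope, intercept], 60.0))   # predicted value at 60 C
```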

The details of how exactly these theories describe our world—and what constitutes a proper theory—are more properly left to philosophers of science. But adhering to philosophical realism, as many scientists do, implies that we think these theories actually describe our universe and can help us improve our surroundings or create impressive new technologies.

That being said, scientists always understand that our view of the world is in draft form. What we think the world looks like is constantly subject to refinement and sometimes even a complete overhaul. This leads us to what is known by the delightful, if somewhat unwieldy, phrase of pessimistic meta-induction. It’s true that we think we understand the world really well right now, but so did every previous generation, and they got it wrong. This is why scientists love Karl Popper, who says we can never prove a theory correct, only attempt to overturn it via falsification. So we must never be too optimistic that we are completely correct this time. In other words, we think our theories are true but still subject to potential overhaul. Which sounds a bit odd.

"All evils are caused by insufficient knowledge."

So David Deutsch argues in The Beginning of Infinity, his breathtakingly profound and impossibly affecting new book. He continues:

Optimism is, in the first instance, a way of explaining failure, not of prophesying success. It says that there is no fundamental barrier, no law of nature or supernatural decree, preventing progress… If something is permitted by the laws of physics, then the only thing that can prevent it from being technologically possible is not knowing how.

A disciple of Karl Popper and a quantum physicist, Deutsch is everywhere concerned not with positive absolutes but with the process of conjecture, refutation, and the gradual improvement of our explanatory understanding of the world, as well as the corresponding ability to control it. Amidst his many lucid, remarkably direct assertions about what we can know, what we can do, and the moral repercussions which follow therefrom, he tentatively offers only one moral imperative: “…the moral imperative not to destroy the means of correcting mistakes is the only moral imperative… all other moral truths follow from it…”

If optimism is “a way of explaining failure,” it is because of another of his pronouncements, which he advises humanity to chisel onto stone tablets: problems are inevitable; and problems are soluble. That is: there is no possible stasis of sustainability for humanity, or any other species, within any ecosystem or civilization. Only a continuous process of problem-solving will suffice to ensure our survival, and not only our survival but our gradual triumph over evil.

Evil! It is not a word he uses often, nor is it a word often used today, although I suspect this is less because any of us denies the existence of evil -death abounds, injustice abounds, the suffering of the innocent abounds- than because we deny the existence of the good. In any event, discussing evils caused by insufficient knowledge, Deutsch writes:

If we do not, for the moment, know how to eliminate a particular evil, or we know in theory but do not yet have enough time or resources (i.e., wealth), then, even so, it is universally true that either the laws of physics forbid eliminating it [or not]… The same must hold, equally trivially, for the evil of death -that is to say, the deaths of human beings from disease or old age. This problem… has an almost unmatched reputation for insolubility… But there is no rational basis for this reputation. It is absurdly parochial to read some deep significance into this particular failure, among so many, of the biosphere to support human life -or of medical science…

That humanity has not yet conquered death is due to one fact alone: that we have only been engaged in the critical, open-ended creation of knowledge for a few centuries, since the Enlightenment. Before it, fits and starts of such knowledge-creation are well-known, but none were sustained; all fell, all halted, some due to authoritarian political developments, some due to reactionary religious awakenings, and others due to happenstance accidents of history. Above all, Deutsch maintains, those societies in which proto-Enlightenments occurred tended to have a sense of optimism about the solubility of problems and the value of progress, an optimism more fragile than it appears, an optimism easily damaged.

He describes two heartbreaking interruptions in detail -Sparta’s defeat of Athens, and Savonarola’s campaign against the Medici’s Florentine Renaissance- before concluding his chapter on optimism with a paragraph I will never forget, particularly when considering the real value of different cultural and political systems:

The inhabitants of Florence in 1494 or Athens in 404 BCE could be forgiven for concluding that optimism just isn’t factually true. For they knew nothing of such things as the reach of explanations or the power of science or even the laws of nature as we understand them, let alone the moral and technological progress that was to follow when the Enlightenment got under way. At the moment of defeat, it must have seemed at least plausible to formerly optimistic Athenians that the Spartans might be right, and to the formerly optimistic Florentines that Savonarola might be. Like every other destruction of optimism, whether in a whole civilization or in a single individual, these must have been unspeakable catastrophes for those who had dared to expect progress. But we should feel more than sympathy for those people. We should take it personally. For if any of those earlier experiments in optimism had succeeded, our species would be exploring the stars by now, and you and I would be immortal.

I will never forget this. The conflict between those who critically examine, creatively conjecture, and seek understanding and technological mastery, and the atavistic and retrograde elements who believe in some holy antiquity or some savage’s noble edenic idyll, is a real one, a suprapolitical one, and it has real victims. All of us who will die count among this number.

The true Enlightenment thinker, the true rationalist, never wants to talk anyone into anything. No, he does not even want to convince; all the time he is aware that he may be wrong. Above all, he values the intellectual independence of others too highly to want to convince them in important matters. He would much rather invite contradiction, preferably in the form of rational and disciplined criticism. He seeks not to convince but to arouse — to challenge others to form free opinions.
—  Karl Popper