about the most recent Almost Nowhere chapter (chapter 40) and unreliable narrators
I have been insane about artificial intelligence existential risk recently and what follows is an expression of that. There's not much of this which I actually believe is true; take it as a creative writing exercise maybe.
Stuff about the Kelly criterion
I’ve been off Tumblr for a little while, but there’s apparently been some discussion about the Kelly criterion, a concept in probability, in relation to some things Sam Bankman-Fried said about it and how that relates to risk aversion. I’m going to do what I can to explain some aspects of the math as I understand them.
The Kelly criterion is a way of choosing how much to invest in a favorable bet, i.e. one where the expected value is positive. The Kelly criterion gives the “best” amount for a bunch of different senses of “best” in a bunch of different scenarios, but I’m going to restrict to one of the simplest ones.
Suppose you have some bet where you can bet whatever amount of money you want, you have probability p of winning, and you gain b times the amount you bet if you win. (Of course, if you lose, you lose the amount you bet.) Also suppose you get the opportunity to make this bet some large number n of times in a row, you have the same probabilities and payoff rules for each of them, and they’re independent events from each other. The assumption that all of the bets in the sequence have the same probabilities and payoff rules is made here to simplify the discussion; the basic concepts can still hold when there are a mix of different bets, but it’s a lot messier to state things and reason about them.
Also suppose that your strategies are limited to choosing a single quantity f between 0 and 1 and always betting f times your total wealth at every step. This is a pretty big restriction, and it too can be relaxed at the cost of making things much messier. But even with this restriction we’ll be able to compare the strategy prescribed by the Kelly criterion to the “all-in” strategy of always betting all of your money.
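To make this concrete, here’s a small Python sketch of the setup. The particular numbers (p = 0.6, b = 1, a thousand bets) are arbitrary and chosen only to illustrate; nothing below depends on them.

```python
import random

def simulate(p, b, f, n, wealth=1.0, rng=None):
    """Play the bet n times, staking a fraction f of current wealth each time.
    Win with probability p and gain b times the stake; otherwise lose the stake.
    Returns final wealth."""
    rng = rng or random.Random()
    for _ in range(n):
        stake = f * wealth
        if rng.random() < p:
            wealth += b * stake
        else:
            wealth -= stake
    return wealth

if __name__ == "__main__":
    rng = random.Random(0)
    for f in (0.0, 0.1, 0.2, 0.5, 1.0):
        finals = sorted(simulate(0.6, 1.0, f, 1000, rng=rng) for _ in range(200))
        print(f"f = {f:.1f}: median final wealth {finals[len(finals) // 2]:.3g}")
```

With these numbers the Kelly fraction works out to f = 0.2 (more on that below), and the typical outcome is best there: smaller f leaves money on the table, larger f gets eaten by volatility, and f = 1 goes broke the first time it loses.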
So what is the best choice of f? The Kelly criterion gives an answer, but the sense in which it’s the “best” is one that isn’t obviously achieved by any choice of f at all. I’ll state it here, but keep in mind that until we’ve done some more calculation, we shouldn’t assume that there is any choice of f which is best in this sense.
The Kelly criterion gives a choice of f such that, for any other choice of f, the Kelly criterion produces a better result than the other choice with high probability. Here “high probability” means that the probability that the Kelly choice outperforms the other one goes to 1 as n goes to infinity.
So why is this possible?
Let Xi be the random variable representing the ratio of the money you have after the ith bet to the amount you had before it. So your final wealth is equal to your starting wealth times the product of the Xi for i from 1 to n. Also these Xi are independent identically distributed variables. (We can describe their distribution in terms of p, b, and f but the exact details aren’t too important to the concepts I want to communicate.) Sums of random variables have some nicer things that can be said about them than products, so we take the logarithm. The logarithm of your final wealth is the log of your starting wealth plus a sum of n independent variables log(Xi).
Now, the expected value of that sum is n times the expected value of one of the individual summands, and the (weak) law of large numbers tells us that with high probability the actual value of the sum will be close to that. (To be rigorous about this: for any constant C > 0, the probability that the sum will be further than Cn away from its expected value goes to 0 as n goes to infinity.) Now, for any betting strategy f, define r(f) to be the expected value of log(Xi) when you bet a fraction f of your wealth. If we have any two strategies f and f’, the log of your final wealth following strategy f minus the log of your final wealth following strategy f’ will be about r(f)n − r(f’)n, and so will be positive with high probability if r(f) > r(f’). (If you understood the rigorous definition in the previous parenthetical, you should be able to make this argument rigorous as well.) Thus with high probability the log of your final wealth will be greater using strategy f than strategy f’. Since log is an increasing function, this is equivalent to saying that with high probability, f will result in a greater final wealth than f’.
So if you pick the f that maximizes r(f), then for any other choice of f, you’ll outperform that choice with high probability. This is what the Kelly criterion says to do. Maximizing r(f) can be described equivalently by saying that at each bet, you bet the amount that maximizes the expected value of the logarithm of the amount you’ll have after the bet.
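For the simple bet described above, r(f) has a closed form, r(f) = p·log(1 + fb) + (1 − p)·log(1 − f), and setting its derivative to zero gives the standard Kelly fraction f* = p − (1 − p)/b. Here’s a quick numerical check of that, again using the illustrative p = 0.6 and b = 1, where it comes out to 0.2:

```python
import math

def r(f, p, b):
    """Expected log-growth per bet: p*log(1 + f*b) + (1 - p)*log(1 - f)."""
    if f >= 1.0:
        return float("-inf")  # betting everything risks log(0)
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

def kelly_fraction(p, b):
    """Solve r'(f) = 0: p*b/(1 + f*b) = (1 - p)/(1 - f), giving f = p - (1 - p)/b.
    (For an unfavorable bet this comes out negative, which should be read as
    betting nothing.)"""
    return p - (1 - p) / b

p, b = 0.6, 1.0
f_star = kelly_fraction(p, b)                                    # 0.2
f_grid = max((i / 1000 for i in range(1000)), key=lambda f: r(f, p, b))
print(f_star, f_grid, r(f_star, p, b))                           # 0.2, 0.2, ~0.0201
```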
A pitfall to avoid here: Although the log of the final wealth can be said to be “about” a certain value with high probability, we can’t really say that the final wealth is guaranteed to be “about” anything in particular. Differences that we can consider to be negligibly small when we’re looking at the logarithm can balloon to very large differences when we’re looking at the actual value, and it is very possible for one experimental trial using a given strategy to yield something many times larger than another trial using the same strategy where you’re a little less lucky.
The Kelly criterion is not the strategy that maximizes the expected amount of money you have at the end. The best strategy for that goal is the one where you put all of your money in on every bet. This isn’t inconsistent with the previously stated results; in almost all cases the Kelly criterion outperforms the all-in strategy (because the all-in strategy loses at some point and ends up with no money). But in the very unlikely event that you win every single one of your bets, you end up with an extremely large amount of money, so large that even when you multiply it by that very small probability you get something that’s larger than the expected value of any other strategy.
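A quick check of that with the same illustrative numbers as before (p = 0.6, b = 1): each bet multiplies your wealth by an independent factor with mean p·(1 + fb) + (1 − p)·(1 − f) = 1 + f·(p·(1 + b) − 1), so the expected final wealth is that quantity to the nth power, and for a favorable bet it’s increasing in f, hence maximized by going all-in, even though the all-in strategy keeps any money at all only with probability p^n.

```python
p, b, n = 0.6, 1.0, 1000
for f in (0.2, 1.0):  # Kelly fraction vs. all-in
    expected_final = (1 + f * (p * (1 + b) - 1)) ** n
    print(f, expected_final)                          # ~1e17 vs ~1.5e79
print("chance all-in never goes broke:", p ** n)      # ~1.4e-222
```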
What if, instead of trying to maximize the expected dollar payoff, you have some utility function of wealth, and you’re trying to maximize the expected value of that? Well, it depends what your utility function is. If your utility function is the logarithm of your wealth, the Kelly criterion maximizes your expected utility; in fact, in this case we don’t even need to assume n is large or invoke the law of large numbers. But going back to the case of large n, there are a lot of other utility functions for which the Kelly criterion is also optimal. Think about it like this: the Kelly strategy outperforms any other strategy in almost all cases; the only situation where you might still prefer the other strategy is if, in the tiny fraction of cases where it gets a better outcome, that outcome is so much better that it makes up for losing out the vast majority of the time. So if your utility function grows slower than the logarithm, you care even less about that tiny chance of vast riches than you would if you had a logarithmic utility function, so the Kelly criterion continues to be optimal. More generally, I think it can be shown that when comparing the Kelly criterion to some other strategy, the probability of that other strategy doing better decays exponentially in n. Since the amount the other strategy can obtain in that tail situation grows at most exponentially in n, this implies that as long as your utility function grows slower than every power of wealth (slower than x^ε for every ε > 0, where x is your wealth), you won’t care about the tail, so the Kelly criterion is still optimal. If your utility function grows faster than that, i.e. if there is some ε > 0 such that it grows at least as fast as x^ε, then I think for sufficiently favorable bets, all-in comes out ahead again.
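For the all-in comparison specifically, that exponential decay is easy to see directly: comparing the two strategies on the same sequence of wins and losses, all-in beats the Kelly bettor only if it wins every single bet (one loss wipes it out, while the Kelly bettor always has something left), which happens with probability p^n. Here’s how that tail weighs against different utility functions, with the same illustrative numbers as above; this is a sanity check of the argument for this one comparison, not a general proof.

```python
import math

p, b = 0.6, 1.0
for n in (10, 100, 1000):
    p_tail = p ** n              # probability that all-in wins every bet
    tail_wealth = (1 + b) ** n   # all-in's wealth in that case
    # Tail contribution to expected wealth: (p*(1 + b))**n, which grows with n...
    linear_term = p_tail * tail_wealth
    # ...versus its contribution to expected log-wealth, which shrinks to 0.
    log_term = p_tail * n * math.log(1 + b)
    print(n, p_tail, linear_term, log_term)
```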
Okay, but how does all of this apply in the real world? Honestly I’m not sure. If your utility function is your individual well-being, it seems very likely to me that that grows logarithmically or slower; if what you care about is maximizing the amount of good you do for the world by charitable donations, I think there is some merit to SBF’s argument that you should treat that utility as a linear function of money, at least up to a certain point. But even he acknowledged that it drops off significantly once you get into the trillions, and since the reasons for potentially preferring riskier strategies over the Kelly criterion hinged on exponentially small probabilities of exponentially large payoffs, I think that trillion-dollar regime might actually be pretty relevant to the computation.
Really any utility function should be eventually constant, but in that case the Kelly criterion ceases to be optimal in the way discussed before. For large enough n, it will get you all the money you could want, but so will any other strategy other than all-in and “never bet anything”. Obviously this is not a good model of how the world works. To repair this we probably want to introduce time-discounting, but to make sense of that we need to have some money getting spent before the end of the experiment rather than all of it available for reinvesting, and by this point things have gotten far enough away from the original scenario that it’s hard to tell how relevant the conclusions from it even are. It seems like it’s a useful heuristic in a pretty wide range of scenarios? But I have no idea whether SBF was right that he was not in one of them.
To be clear, none of this is to excuse his actions; whether or not he should have been applying the Kelly criterion, I think “committed billions of dollars of fraud” does a better job of capturing what he did wrong than “was insufficiently risk-averse”.
So: Kelsey has confirmed in a fairly public Discord server that the screenshot I posted earlier was genuine. I was hoping that someone else would end up sharing that confirmation to Tumblr, but to my knowledge this has not happened. As such, if you're not in that server, me saying that Kelsey confirmed it should probably not convince you of anything, but you're not the primary intended audience for this post.
Kelsey says that shortly after posting the messages in the screenshot I shared earlier, she realized that it was overstated and posted a correction, and that by omitting this correction I and/or my source was deliberately misrepresenting what she said. I believe this is a significant distortion of the truth. I believe the "correction" she's referring to is the following:
This is not a correction, it is a reframing. It does not contradict anything she said in the screenshot I posted earlier, nor anything I said based on that in my post. Although it is clearly intended to downplay the involvement of the EAs in question, it doesn't really say very much of substance at all. "I don't know what the sanctions would have been like without their involvement; maybe they would have even been worse" is something that could be truthfully said about anyone with any level of involvement. "I am very sure EAs did not cause there to be sanctions - that is definitely a very high level administrative decision" puts an upper bound on their involvement, but a very high one, and I was pretty explicit when I was talking about this that I wasn't claiming that anyone high up in the administration and involved in making the final decision was motivated by AI concerns. And honestly, regardless of what she said later, I don't think there is anything you can accidentally overstate as "EAs wrote the semiconductor export controls" that wouldn't itself be pretty damning.
Anyway, this is the last thing I'm going to say on this topic. I don't think I have much relevant non-public information at this point, and while it's tempting to try to be involved in evaluating future developments, arguing for my interpretation of the facts, keeping the known information documented in a single place, etc., there are lots of other people who are at least as well-positioned to do those things as I am.
Effective altruists wrote the semiconductor export restrictions
The Biden administration recently imposed restrictions on exports to China of certain high-end microchips and technologies used to produce them. This was ostensibly in order to prevent their use in Chinese military applications, and probably that was the primary reason the administration chose to impose the controls, but the people who actually wrote the restrictions apparently had other motivations.
They were members of the effective altruist community and were motivated by the idea that if there is a single supply chain for the high-end chips (i.e. a US monopoly) it will become more politically feasible to impose restrictions on how those chips can be used. Specifically, that it might be possible to make the chips in such a way that they cannot be used to train powerful machine learning models. For context, the concern about AI among effective altruists is that a sufficiently advanced system might wipe out humanity.
I don’t know a lot about foreign policy but my understanding is that this is a very aggressive move and that it will do significant damage to the Chinese economy (and very possibly hurt the US economy as well, depending on how various things shake out). It’s also, I think, an unprecedented step in decoupling the economies of the two countries, and therefore a significant step toward a second cold war.
Prior to learning about this, my feeling about AI-focused effective altruists was that they were probably wasting a lot of money but not doing anything much worse than that. Now it seems like they are influencing global events in ways that are not widely known and are likely very bad.
If I had specific names and definitive proof I would probably be trying to talk to a journalist, but I don’t. What I do have is entirely convincing to me but due to privacy concerns I’m not going to share where this information is coming from. As such this is essentially a rumor, but even as a rumor I think it’s probably of interest to a bunch of people on Tumblr who are adjacent to that community.
UPDATE: I got permission to share the following screenshot. This is Kelsey Piper aka theunitofcaring, a prominent member of the effective altruist community, talking in a Discord channel.
I did not take the screenshot but I trust it to be genuine. It is the main source for the claims I made above, so if there are any details where what I said seems unsupported by the screenshot, you should probably go with the screenshot.
(This is not a bot; I believe that tag shows up when people post to Discord via IRC or other messaging services rather than posting directly to Discord.)
There's a transcript in the readmore.
A point of clarification:
When I said “This was ostensibly in order to prevent their use in Chinese military applications, and probably that was the primary reason the administration chose to impose the controls” the reason for the hedging “probably” was that they might (as far as I know) be motivated by considerations like wanting to benefit US chip manufacturers, or wanting to damage the Chinese economy more generally, or other things that I don’t even know about. I don’t think that there is any real possibility that anyone high up in the administration and involved in making the final decision to impose the restrictions cares much about AI risk.
> [W]hile it may be tempting to dress a child up as Coco from the Disney movie because it would be "so cute" for an Instagram post, if the child has no Mexican and/or Indigenous ancestry, that decision should be reconsidered. [...]
> It's also important to consider if you can separate the real person from the marginalized culture. Instead of dressing your child up as Frida Kahlo, assuming they do not have Mexican heritage, encourage them to dress as a non-descript painter, but then engage in a history lesson about Frida and her life before they go trick-or-treating. Considering the elements of magical realism in Frida's work, another idea is opting for a magical creature like a butterfly or fairy as an alternative costume.
> Other safe bets? Athletes, musicians, and public figures not directly tied to one culture or heritage.
Huh I wonder who gets to count as “not directly tied to one culture or heritage”.
On the one hand, the word “narcissism” has a long history predating the concept of narcissistic personality disorder, and people have always used the word to refer to things which differ in both degree and kind from what psychiatrists label as NPD. On the other hand, there are a lot of contexts in which people will talk about “narcissists” but not NPD as such, but nevertheless clearly intend to invoke the stigma against the psychiatric disorder specifically; most discussion I’ve seen of “narcissistic abuse” fits this pattern.
here’s a video game idea which would probably be pretty hard to make fun but which I like as a concept:
Gameplay consists of short (~1 hour or less) runs, and a bunch of the rules governing how the game works are randomly generated and vary from one run to the next. As an example of the kind of variation you could have, there’s the common mechanic in roguelikes where e.g. a “blue potion” has different effects from one run to the next, and you have to figure out what it does this run, sometimes by doing something dangerous like drinking it, sometimes by other means. Here, though, more things are uncertain than in a traditional roguelike, and getting that information is riskier; it’s balanced such that you can’t expect to get, within a single run, all the information you would need to win. And you can’t carry information over to future runs because next time you’ll get a completely different set of rules.
But it’s played online, and although any ruleset is seen by each player only once, after one player loses a run another player is given the same ruleset. After losing a run a player can write a short message (maybe limited to 500 characters?) describing what they learned, and players can see the messages from everyone who has previously attempted the game with the same ruleset that they’ve been given. Once someone finally wins for that ruleset, everyone in the chain gets an email or a message to their account, which gives the full list of the messages written by players in the chain together with a video or transcript of the actual winning run.
The Orange
by Wendy Cope
At lunchtime I bought a huge orange
The size of it made us all laugh.
I peeled it and shared it with Robert and Dave—
They got quarters and I had a half.

And that orange, it made me so happy,
As ordinary things often do
Just lately. The shopping. A walk in the park
This is peace and contentment. It’s new.

The rest of the day was quite easy.
I did all my jobs on my list
And enjoyed them and had some time over.
I love you. I’m glad I exist.
Here’s a fun* philosophy thing I just found out about:
It is often claimed that an object’s particularized properties are ontologically dependent on that object, which means, for example, that an apple’s redness cannot exist without the apple existing. Which sounds reasonable enough on the face of it. But here’s a counterargument: An apple’s redness is plausibly the same thing as the redness of its skin, and the skin could go on existing and being red even as the apple ceases to exist.
*your mileage may vary
I don’t want to reblog this thread because it is long and I only want to respond to one small part of it, and because the comment I’m responding to is from one of the most racist people on Tumblr that I know of and I don’t particularly want them on my blog, but regarding this bit, the first in a list of things that are wrong with bioethicists:
Trying to outlaw small amounts of help (Hellman well-intentionedly tries to demand large amounts of help, but if you demand ‘large help or nothing’ you are functionally banning small amounts of help, also what kind of hack writer put in an unethicist named Hellman)
As I pointed out when the article was first going around Tumblr, this is an obvious and egregious misreading of what Hellman is saying in the linked article. The fact that it is still topping lists of the evils of bioethicists five years later does a lot to reinforce my belief that many rationalists feel the need to cast bioethicists as villains in some ways that are pretty far removed from things that actual bioethicists are actually saying.
Physics teacher.
My English teachers were so fucking cis. One of them taught The Taming of the Shrew to make some kind of point about gender.
if there’s one thing that unites all queer people, it’s our close childhood bonds with authority figures
I have a physics question about the arrow of time. Actually I have a lot of questions about the arrow of time, but most of them are hard to make precise, and the vagueness of the questions results in vague and unsatisfying answers. Here is one that seems precise enough to hope for a clear answer:
How does the global arrow of time (the universe started in a low-entropy state and is becoming more entropic basically because higher-entropy states make up much more of the space of possible states) result in the local effect where any temporarily closed system will see an increase in entropy as long as it is closed to the outside world?
You can say “well, statistical mechanics shows that if you specify a not-maximally-entropic initial (macro)state for your closed system and run the fundamental laws of physics on it forward from that point, it will with extremely high probability become more entropic” but the reason this is unsatisfying is that if we specify a final macrostate and run the fundamental laws of physics backward from that point, with extremely high probability it ends up more entropic at the start. As I understand it, the reason the latter conclusion doesn’t hold for actual physical systems is basically that the final microstate is an extremely unusual representative of its macrostate (namely, one that becomes less entropic if you run the laws of physics on it backwards) while the initial microstate is, as far as the experiment is concerned, a typical representative of its macrostate. But... why? How does the global increase in entropy in the universe result in it happening this way, rather than the reverse, 100% of the time?
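Not an answer, but there’s a toy model that makes the premise of that argument easy to play with: the Kac ring, a standard reversible toy system from statistical mechanics. N balls sit on a ring, each black or white; a fixed random subset of the edges is “marked”; at each step every ball moves one site and flips color whenever it crosses a marked edge. The dynamics is deterministic and exactly reversible, yet starting from the all-white (low-entropy) macrostate the coloring drifts toward the 50/50 (maximum-entropy) macrostate whether you run it forward or backward, which is exactly the both-directions behavior described above. A Python sketch (parameters arbitrary; this illustrates the puzzle rather than resolving it):

```python
import random

def step(colors, marked, direction=+1):
    """One Kac-ring step: every ball moves one site around the ring in
    `direction` (+1 or -1) and flips color when it crosses a marked edge.
    The backward step is the exact inverse of the forward step."""
    n = len(colors)
    new = [False] * n
    for i in range(n):
        if direction == +1:
            new[(i + 1) % n] = colors[i] ^ marked[i]            # crosses edge i
        else:
            new[(i - 1) % n] = colors[i] ^ marked[(i - 1) % n]  # crosses edge i-1
    return new

rng = random.Random(0)
n_balls = 10_000
marked = [rng.random() < 0.1 for _ in range(n_balls)]  # mark ~10% of the edges
all_white = [False] * n_balls                          # the low-entropy macrostate

for direction in (+1, -1):
    colors = list(all_white)
    fractions = []
    for t in range(41):
        if t % 10 == 0:
            fractions.append(round(sum(colors) / n_balls, 3))
        colors = step(colors, marked, direction)
    # The fraction of black balls heads toward 1/2 in *both* time directions.
    print("forward " if direction == +1 else "backward", fractions)
```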
Gideon the Ninth: does not know what is going on because she is too busy thinking about girls and swords
Harrow the Ninth: does not know what is going on because she is haunted by the horrors
Nona the Ninth: does not know what is going on because she is six months old
There was a post going around recently where someone said in vaguely social-justice-y language that the recent stabbing of Salman Rushdie was justified (or maybe not that it was justified but that he was asking for it and we shouldn’t feel sorry for him, or something. The precise thesis was kind of confused and varied between posts.) There were many dozens of people arguing against the post, and as far as I could tell, no one except the original poster defending it. Among the many people arguing with it, a bunch were bemoaning the sorry state of modern progressivism (or some subset of it), that it led people to say things like this.
Here is your reminder that one weirdo on the internet does not a movement make, and if the only reason you’re seeing an opinion is that people hate-shared it, it does not provide much reason to think that any significant number of people share the opinion. To be clear, there are lots of people in the world who think that Rushdie deserved to be stabbed, but most of them are extremely conservative Muslims. Among people you might mistake for progressive? I don’t buy that this is really a thing.
(I’m sure it’s possible to dig up a few such people on Twitter, because Twitter is home to every deranged ideological configuration you can imagine. But looking through the top and most recent tweets about Rushdie, the only people I saw saying anything positive about the attack were openly conservative Muslims, so absent further evidence I remain unconvinced that progressives who thought the attack was justified exist in large enough numbers to signify much.)
I stopped reading Slate Star Codex / Astral Codex Ten a long time ago, and at this point when I see stuff from that on Tumblr, half the time it’s like “here’s an entertaining psychiatry anecdote from a blog I follow :)” and half the time it’s like “welp, now Scott is talking about the danger posed by immigration changing a nation’s ethnic character”
fantasy novel where everyone speaks completely normal modern English, but there are extensive footnotes and appendices explaining the etymologies of the words being used and the history of the language, and it’s all totally different from real English

