
Marcus Seldon

@marcusseldon / marcusseldon.tumblr.com

A lot of talk of politics, my personal issues, occasional social justice stuff, and occasional weird reblogs.

As someone who has organized a gangbang, it is SO HARD to Wrangle People towards the sexy parts and away from the crafted table of snacks which just so happens to be in front of your book shelf and OMG you have THIS gaming System?? That was Kickstarter exclusive! Like, no. Stop. Please return the game book to the shelf and remove your clothes. Please?


well thank god it's not just me


The best sex party I ever went to nearly stopped because someone taped a sheet to the back of the sliding glass windows and was using dry erase markers to make diagrams. A bunch of math and physics PhDs were helping a chemistry PhD with a thorny problem, and they cheered when they solved it. A board game night broke out and it was really hard to pry people away from the games, science, and snacks for sex, so someone put up a pole in the living room and four women started pole dancing while shouting instructions to the scientists and board game nerds.

Epic party, I think I shagged 8 women that night and I won a card game.


You go to a bathhouse the first time for the gay sex, but the SECOND time you go for the hot tubs and sauna and cafe and plush seating and wi-fi. And the gay sex.

Where are all these nerdy, intelligent people who go to sex parties all the time? How does one find them (especially as a straight man)? All of my nerdy friends are super monogamous and boring in this department.


Kink events, poly meetups, anime conventions, improv, D&D groups, and swingers.

It’s been forever since I tried to break into the kink community, but I think all the public-facing events are still listed on FetLife. Go to one and get to know people and eventually you’ll start getting invited to the private parties.

Maybe I’m going to the wrong D&D groups lol

I’ve always avoided explicit kink events because I’m not really into BDSM at all, more just broader sex positivity and non-monogamy. But maybe I should go anyway. 

I watched large parts (but nowhere near all) of the recent episode of Lex Fridman's podcast where he interviews Eliezer Yudkowsky, who is trying to drive home the point that we're all pretty much doomed by the prospect of AGI. What I saw was interesting in a lot of ways, but Yudkowsky's main point that he kept returning to in the parts I saw was "What I'm trying to convey is the notion of what it means to be in conflict with something that is smarter than you." And I couldn't help but think, "Yeah, the experience in itself of trying to argue with Yudkowsky for several hours should do a pretty good job of conveying that notion to Lex Fridman."

buddy, Yudkowsky isn’t smart. He’s a high school dropout whose most notable achievements are: 1. Moderating a Forum, 2. Writing a Very Long Work of Fanfiction, and 3. Convincing A Lot of People That He is Very Smart. That’s not intelligence. That’s charisma. Not being able to tell the difference seems like a miserable way to go through life.

Assuming that someone who was unwilling/unable to finish school can't be intelligent or become highly knowledgeable in some area seems like an ableist way to go through life.

Dude. Seriously. I know I shouldn't keep taking your bait, but you made it really easy this time (also, I wanted to expand a little on my OP anyway). I had already read a ton of Yudkowsky's work before ever seeing footage of him or getting a sense of charisma or whatever it is, and I contend that it's impossible to read many of his essays without seeing a massive amount of raw intelligence behind them. I can definitely see someone coming away from them disagreeing with him or even concluding that he's batshit insane, but not that he's merely a pseudointellectual with a little writing flair or something.

Lex Fridman, on the other hand, is someone I've only become fully aware of very recently, and I've watched very little of him, but he gives off a strong smell of being another podcaster in the mold of Dave Rubin or Joe Rogan -- that is, an intellectual lightweight who's talented at podcasting and asking relevant questions without having any firmly-grounded knowledge or deep insights on any topic. He is absolutely bested by Yudkowsky, and, to my impression, not just because AI threat has been Yudkowsky's area of expertise for two decades.

IDK Yudkowsky is clearly smart but I don’t think he’s the genius he or many of his fans like to frame him as. I think there are a ton of people in the broad rationalist/rationalist-adjacent sphere more intelligent than him. Maybe I’m biased because the initial posts I read from him were on philosophy (which I majored in), but his writing on philosophical topics was overly simplistic and not rigorous at all. He strikes me as very much a dilettante.


I’m working my way through all the Studio Ghibli back catalogue and it’s amazing. Until about a year or two ago I had only seen Spirited Away and Princess Mononoke. I can’t say enough positive things about all of these movies. Is there more anime like this? I know Studio Ghibli is probably the pinnacle, but surely there is other anime out there that is attempting to be Ghibli-esque.

I don’t feel like I have any projects right now. Nothing to work toward. I see possibilities but none of them motivate me. The thing is I’m doing ok in the routine day-to-day stuff, I go to work (and recently got a new job) and maintain an active social life, but I don’t feel I’m working toward anything. It’s all so passive.

I don’t know how to get out of this rut...

In many ways, my life is going better now than it ever has in some objective sense. I have more and closer IRL friendships than I ever have, I have social plans on most days now, I recently accepted an offer for a new job with a 50% raise that will involve more interesting work, I have my own place and am financially independent, I’ve been exercising regularly for the first time in my life and have lost 20 pounds, and I have significantly reduced my social anxiety. 

And yet, I often feel a lack of meaning in my life. Everything feels good in the moment, but when I come home and I’m alone I often feel this deep emptiness. Occasionally I break down crying when I’m alone. Everything feels transient, and I feel like I lack purpose. I go to work to make money to pay rent so I can engage in hobbies with friends and then do the same thing over and over until I die. 

I suppose the one area where my life is (arguably) worse is my romantic life. I broke up with my long-term partner nine months ago. It was mutual and really was necessary, but it was deeply sad. I invested so much in that relationship, it really was where I found most of my meaning in life, and yet I feel like what I got in return was a series of experiences that left me feeling neglected, misunderstood, or even traumatized. My partner was *not* abusive, rather she had real mental health problems that caused her to not always be the best partner to me. For example, withdrawing when I was upset or trying to assert a need, or having depression so bad I had to essentially be her emotional caretaker for years. 

But I feel like a breakup should lead to sadness or anger, not a sense of meaninglessness. And not for so long, and not for so many things outside of dating and relationships. 

I’m not suicidal or anything, but I don’t know where to go from here. I’d rather be alive and have a meaningless existence than not alive, but that’s a low bar.  And again, the bizarre thing is I’m doing everything right, and I am having lots of positive experiences. But they don’t add up to a sense of purpose or meaning. 

I’ve looked into getting involved in volunteering, religion, and effective altruism, but the prospect of doing so doesn’t make me feel any different. I just feel indifference to them, a deeply felt knowing that they’ll disappoint.

I’ve also looked into new hobbies, but it’s the same feeling every time.

I feel like I invested so much in my former relationship and I don’t have that kind of meaningful emotional investment left in me anymore.

I have been feeling strong mixed feelings about generative AI recently. No, that’s not right, I’ve been feeling negatively about it in two different ways which are at least partially in tension.

One of those feelings is a strong sense of skepticism. Maybe I’m just a dumb newb AI user, but I have not been able to replicate many of the supposedly impressive and useful results these AIs produce in demos. I suspect there’s a lot of cherrypicking going on, as well as a shallow conflation between superficial impressiveness and actual utility. 

I also feel that there’s a real lack of intelligence underlying these models which only becomes apparent the more you play with them. At least for me, the first few interactions feel almost magical, but the more I interact with them the more I start to sense that they’re bullshitting stochastic parrots. Whatever they are, it is simultaneously unnerving, lacking in some fundamental properties of intelligence, and not obviously super useful. I’ve found it really hard to get excited about these products once I interact with them, and indeed I find them almost boring now.

And what’s worse is I’m seeing this huge rush in the tech industry to embrace generative AI in a way that is reminiscent of the crypto boom to me. The rhetoric, the hype, the capital, the coverage, it’s making me feel so skeptical.

The other negative feeling is one of dread. I worry about people outsourcing thought to these models. I worry about the internet being flooded with AI generated content. I worry that the role for human beings in the creation of art and media will become more and more narrow over time. 

Is the future really that the average person will be reduced to a passive consumer of AI-generated art in their free time while working unfulfilling blue collar or service jobs for a living while being ruled over by a small group of AI companies, tech capitalists, and AI engineers? 

I sometimes see the seeds of AI destroying the human spirit. All the worst things about modern society, but more so. Much more so. 

While these can both be true, there is a tension here. If generative AI is mostly hype, then those doomerist ideas seem less likely, and vice versa.

Either way, both lead me to feel a sense of emerging meaninglessness around me. 

Coming back to Tumblr, the discourse here is so radically different than on Twitter or among people I know IRL. It’s truly wild. I think in hindsight my spending so much time here gave me a very distorted view of what society in general, or even just young educated people, thought and believed.

I've done a ton more pondering than is evident even on this blog over the question of why everyone seems to be expressing so much more unhappiness and unhealthiness in the last 10-15 years, when the world has objectively never been better. I've thought for years of expounding on this in lengthy effortposts. Frankly, a lot of my thesis was going to boil down to something along the lines of "we're all more spoiled and fragile than in decades past, and while developing higher standards is the very definition of progress and in itself a Good Thing, we, especially younger people, are allowing it to have the side effect that we tend to frame problems more negatively and are less fit to cope with them."

But over recent months, I've noticed myself shifting abruptly more in the direction of conceding what the rhetoric and ranting of others frequently seems to imply: that the world objectively was better before, that certain aspects of modern society truly are making life harder in substantial ways than in decades past, that those of us who grew up in the late 20th or early 21st century are not entirely fortunate to be living in the 2020s. (And I mean, even discounting the effects of the pandemic.) I don't have the energy to justify this now, but it's contributing to a general feeling of frustration for myself as an individual as well as for humanity as a whole.

I’ve been feeling similar. I used to scoff at people who said human life (at least in the developed world) peaked in 1999, but it now seems to me like there’s a lot of truth to that. It seems to be caused by many things, though it’s hard to escape the thought that digital media and social media have a lot to do with this.

Something I can’t get over from reading a lot of the AI enthusiasts is there’s a really strong... religious tone to the way they talk about AI. I’m not even just talking about the AI x-risk stuff, but in general there’s this ominous, “we’re making gods” kind of tone that makes me want to roll my eyes. It makes me a bit suspicious of them, like they’re using this AI stuff to fulfill some primal religious need.

gpt-4 prediction: it won't be very useful

Word on the street says that OpenAI will be releasing "GPT-4" sometime in early 2023.

There's a lot of hype about it already, though we know very little about it for certain.

----------

People who like to discuss large language models tend to be futurist/forecaster types, and everyone is casting their bets now about what GPT-4 will be like. See e.g. here.

It would accord me higher status in this crowd if I were to make a bunch of highly specific, numerical predictions about GPT-4's capabilities.

I'm not going to do that, because I don't think anyone (including me) really can do this in a way that's more than trivially informed. At best I consider this activity a form of gambling, and at worst it will actively mislead people once the truth is known, blessing the people who "guessed lucky" with an undue aura of deep insight. (And if enough people guess, someone will "guess lucky.")

Why?

There has been a lot of research since GPT-3 on the emergence of capabilities with scale in LLMs, most notably BIG-Bench. Besides the trends that were already obvious with GPT-3 -- on any given task, increased scale is usually helpful and almost never harmful (cf. the Inverse Scaling Prize and my Section 5 here) -- there are not many reliable trends that one could leverage for forecasting.

Within the bounds of "scale almost never hurts," anything goes:

  • Some tasks improve smoothly, some are flatlined at zero then "turn on" discontinuously, some are flatlined at some nonzero performance level across all tested scales, etc. (BIG-Bench Fig. 7)
  • Whether a model "has" or "doesn't have" a capability is very sensitive to which specific task we use to probe that capability. (BIG-Bench Sections 3.4.3, 3.4.4)
  • Whether a model "can" or "can't do" a single well-defined task is highly sensitive to irrelevant details of phrasing, even for large models. (BIG-Bench Section 3.5)

It gets worse.

Most of the research on GPT capabilities (including BIG-Bench) uses the zero/one/few-shot classification paradigm, which is a very narrow lens that arguably misses the real potential of LLMs.
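
(To make the jargon concrete, here's roughly what that paradigm looks like in practice. The task and labels below are made up by me, not taken from BIG-Bench; it's just meant to show the shape of a few-shot classification probe.)

```python
# Illustrative few-shot classification prompt (hypothetical task and labels).
FEW_SHOT_PROMPT = """Classify the sentiment of each review as Positive or Negative.

Review: The food was cold and the service was slow.
Sentiment: Negative

Review: Absolutely loved the atmosphere and the dessert.
Sentiment: Positive

Review: {review}
Sentiment:"""

prompt = FEW_SHOT_PROMPT.format(review="Great value, I'd go back.")
print(prompt)
# The model is then judged on whether its next few tokens match the expected
# label -- a narrow, pass/fail probe of the underlying capability.
```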

And, even if you fix some operational definition of whether a GPT "has" a given capability, the order in which the capabilities emerge is unpredictable, with little apparent relation to the subjective difficulty of the task. It took more scale for GPT-3 to learn relatively simple arithmetic than it did for it to become a highly skilled translator across numerous language pairs!

GPT-3 can do numerous impressive things already . . . but it can't understand Morse Code. The linked post was written before the release of text-davinci-003 or ChatGPT, but neither of those can do Morse Code either -- I checked.

On that LessWrong post asking "What's the Least Impressive Thing GPT-4 Won't be Able to Do?", I was initially tempted to answer "Morse Code." This seemed like as safe a guess as any, since no previous GPT was able to do it, and it's certainly very unimpressive.

But then I stopped myself. What reason do I actually have to register this so-called prediction, and what is at stake in it, anyway?

I expect Morse Code to be cracked by GPTs at some scale. What basis do I have to expect this scale is greater than GPT-4's scale (whatever that is)? Like everything, it'll happen when it happens.

If I register this Morse Code prediction, and it turns out I am right, what does that imply about me, or about GPT-4? (Nothing.) If I register the prediction, and it turns out I am wrong, what does this imply . . . (Nothing.)

The whole exercise is frivolous, at best.

----------

So, here is my real GPT-4 prediction: it won't be very useful, and won't see much practical use.

Specifically, the volume and nature of its use will be similar to what we see with existing OpenAI products. There are companies using GPT-3 right now, but there aren't that many of them, and they mostly seem to be:

GPT-4 will get used to do serious work, just like GPT-3. But I am predicting that it will be used for serious work of roughly the same kind, in roughly the same amounts.

I don't want to operationalize this idea too much, and I'm fine if there's no fully unambiguous way to decide after the fact whether I was right or not. You know basically what I mean (I hope), and it should be easy to tell whether we are basically in a world where

  1. Businesses are purchasing the GPT-4 enterprise product and getting fundamentally new things in exchange, like "the API writes good, publishable novels," or "the API performs all the tasks we expect of a typical junior SDE" (I am sure you can invent additional examples of this kind), and multiple industries are being transformed as a result
  2. Businesses are purchasing the GPT-4 enterprise product to do the same kinds of things they are doing today with existing OpenAI enterprise products

However, I'll add a few terms that seem necessary for the prediction to be non-vacuous:

  • I expect this to be true for at least 1 year after the release of the commercial product. (I have no particular attachment to this timeframe, I just need a timeframe.)
  • My prediction will be false in spirit if the only limit on transformative applications of GPT-4 is monetary cost. GPT-3 is very pricey now, and that's a big limiting factor on its use. But even if its cost were far, far less, there would be other limiting factors -- primarily, that no one really knows how to apply its capabilities in the real world. (See below.)

(The monetary cost thing is why I can't operationalize this beyond "you know what I mean." It involves not just what actually happens, but what would presumably happen at a lower price point. I expect the latter to be a topic of dispute in itself.)

----------

Why do I think this?

First: while OpenAI is awe-inspiring as a pure research lab, they're much less skilled at applied research and product design. (I don't think this is controversial?)

When OpenAI releases a product, it is usually just one of their research artifacts with an API slapped on top of it.

Their papers and blog posts brim with a scientist's post-discovery enthusiasm -- the (understandable) sense that their new thing is so wonderfully amazing, so deeply veined with untapped potential, indeed so temptingly close to "human-level" in so many ways, that -- well -- it surely has to be useful for something! For numerous things!

For what, exactly? And how do I use it? That's your job to figure out, as the user.

But OpenAI's research artifacts are not easy to use. And they're not only hard for novices.

This is the second reason -- intertwined with the first, but more fundamental.

No one knows how to use the things OpenAI is making. They are new kinds of machines, and people are still making basic philosophical category mistakes about them, years after they first appeared. It has taken the mainstream research community multiple years to acquire the most basic intuitions about skilled LLM operation (e.g. "chain of thought") which were already known, long before, to the brilliant internet eccentrics who are GPT's most serious-minded user base.
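
(For anyone who hasn't run into it, "chain of thought" is just a prompting trick: get the model to spell out intermediate steps before the final answer. A minimal illustration, with prompts I made up rather than ones from any paper:)

```python
# Two ways of posing the same question to an LLM. The prompts are invented
# here purely to illustrate the "chain of thought" idea mentioned above.
direct_prompt = (
    "Q: A library has 3 shelves with 14 books each and lends out 9 books. "
    "How many books remain?\n"
    "A:"
)

chain_of_thought_prompt = (
    "Q: A library has 3 shelves with 14 books each and lends out 9 books. "
    "How many books remain?\n"
    "A: Let's think step by step. 3 shelves with 14 books each is 3 * 14 = 42 "
    "books. Lending out 9 leaves 42 - 9 = 33 books. The answer is 33.\n\n"
    "Q: A train has 6 cars with 32 seats each, and 47 seats are taken. "
    "How many seats are free?\n"
    "A: Let's think step by step."
)

# Empirically, large models answer the second style far more reliably, because
# they generate the intermediate arithmetic before committing to an answer.
print(chain_of_thought_prompt)
```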

Even if these things have immense economic potential, we don't know how to exploit it yet. It will take hard work to get there, and you can't expect used car companies and SEO SaaS purveyors to do that hard work themselves, just to figure out how to use your product. If they can't use it, they won't buy it.

It is as though OpenAI had discovered nuclear fission, and then went to sell it as a product, as follows: there is an API. The API has thousands of mysterious knobs (analogous to the opacity and complexity of prompt programming etc). Any given setting of the knobs specifies a complete design for a fission reactor. When you press a button, OpenAI constructs the specified reactor for you (at great expense, billed to you), and turns it on (you incur the operating expenses). You may, at your own risk, connect the reactor to anything else you own, in any manner of your choosing.

(The reactors come with built-in safety measures, but they're imperfect and one-size-fits-all and opaque. Sometimes your experimentation starts to get promising, and then a little pop-up appears saying "Whoops! Looks like your reactor has entered an unsafe state!", at which point it immediately shuts off.)

It is possible, of course, to reap immense economic value from nuclear fission. But if nuclear fission were "released" in this way, how would anyone ever figure out how to capitalize on it?

We, as a society, don't know how to use large language models. We don't know what they're good for. We have lots of (mostly inadequate) ways of "measuring" their "capabilities," and we have lots of (poorly understood, unreliable) ways of getting them to do things. But we don't know where they fit in to things.

Are they for writing text? For conversation? For doing classification (in the ML sense)? And if we want one of these behaviors, how do we communicate that to the LLM? What do we do with the output? Do they work well in conjunction with some other kind of system? Which kind, and to what end?

In answer to these questions, we have numerous mutually exclusive ideas, which all come with deep implementation challenges.

To anyone who's taken a good look at LLMs, they seem "obviously" good for something, indeed good for numerous things. But they are provably, reliably, repeatably good for very few things -- not so much (or not only) because of their limitations, but because we don't know how to use them yet.

This, not scale, is the current limiting factor on putting LLMs to use. If we understood how to leverage GPT-3 optimally, it would be more useful (right now) than GPT-4 will be (in reality, next year).

----------

Finally, the current trend in LLM techniques is not very promising.

Everyone -- at least, OpenAI and Google -- is investing in RLHF. The latest GPTs, including ChatGPT, are (roughly) the last iteration of GPT with some RLHF on top. And whatever RLHF might be good for, it is not a solution for our fundamental ignorance of how to use LLMs.

Earlier, I said that OpenAI was punting the problem of "figure out how to use this thing" to the users. RLHF effectively punts it, instead, to the language model itself. (Sort of.)

RLHF, in its currently popular form, looks like:

  • Some humans vaguely imagine (but do not precisely nail down the parameters of) a hypothetical GPT-based application, a kind of super-intelligent Siri.
  • The humans take numerous outputs from GPT, and grade them on how much they feel like what would happen in the "super-intelligent Siri" fantasy app.
  • The GPT model is updated to make the outputs with high scores more likely, and the ones with low scores less likely.

The result is a GPT model which often talks a lot like the hypothetical super-intelligent Siri.
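
(A deliberately toy sketch of that three-step loop, in case it helps. This is not OpenAI's actual method -- real RLHF trains a learned reward model and applies policy-gradient updates to a neural network -- but the shape is the same: sample, grade against the vague "Siri fantasy", up-weight what scored well.)

```python
# Toy sketch of the RLHF loop described above. The "reward model" here is just
# a stand-in for the human graders, and the "policy" is a table of output
# probabilities rather than a neural network.
import math
import random

def reward_model(output: str) -> float:
    """Stand-in for the graders: 'how much does this output feel like
    the imagined super-intelligent Siri?' Higher is better."""
    return 1.0 if output.startswith("Sure, here is") else -1.0

# A toy "policy": a probability for each of a handful of canned completions.
policy = {
    "Sure, here is a summary of your document:": 0.25,
    "lol idk": 0.25,
    "As a language model, I can't help with that.": 0.25,
    "Sure, here is a haiku about your cat:": 0.25,
}

def rlhf_step(policy, lr=0.5):
    # 1. Sample a batch of outputs from the current policy.
    outputs = random.choices(list(policy), weights=list(policy.values()), k=8)
    # 2. Grade each output against the vague "Siri fantasy".
    graded = [(o, reward_model(o)) for o in outputs]
    # 3. Up-weight high-scoring outputs, down-weight low-scoring ones.
    for o, r in graded:
        policy[o] *= math.exp(lr * r)
    total = sum(policy.values())
    return {o: p / total for o, p in policy.items()}

for _ in range(10):
    policy = rlhf_step(policy)
print(policy)  # probability mass drifts toward the "Siri-like" completions
```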

This looks like an easier-to-use UI on top of GPT, but it isn't. There is still no well-defined user interface.

Or rather, the nature of the user interface is being continually invented by the language model, anew in every interaction, as it asks itself "how would (the vaguely imagined) super-intelligent Siri respond in this case?"

If a user wonders "what kinds of things is it not allowed to do?", there is no fixed answer. All there is is the LM, asking itself anew in each interaction what the restrictions on a hypothetical fantasy character might be.

It is role-playing a world where the user's question has an answer. But in the real world, the user's question does not have an answer.

If you ask ChatGPT how to use it, it will roleplay a character called "Assistant" from a counterfactual world where "how do I use Assistant?" has a single, well-defined answer. Because it is role-playing -- improvising -- it will not always give you the same answer. And none of the answers are true, about the real world. They're about the fantasy world, where the fantasy app called "Assistant" really exists.

This facade does make GPT's capabilities more accessible, at first blush, for novice users. It's great as a driver of adoption, if that's what you want.

But if Joe from Midsized Normal Mundane Corporation wants to use GPT for some Normal Mundane purpose, and can't on his first try, this role-play trickery only further confuses the issue.

At least in the "design your own fission reactor" interface, it was clear how formidable the challenge was! RLHF does not remove the challenge. It only obscures it, makes it initially invisible, makes it (even) harder to reason about.

And this, judging from ChatGPT (and Sparrow), is apparently what the makers of LLMs think LLM user interfaces should look like. This is probably what GPT-4's interface will be.

And Joe from Midsized Normal Mundane Corporation is going to try it, and realize it "doesn't work" in any familiar sense of the phrase, and -- like a reasonable Midsized Normal Mundane Corporation employee -- use something else instead.

ETA: I forgot to note that OpenAI expects dramatic revenue growth in 2023 and especially in 2024. Ignoring a few edge case possibilities, either their revenue projection will come true or the prediction in this post will, but not both. We'll find out!

We are all posting about AI art, so I thought I’d fire off a few shots in this debate. It’s more of a ramble than anything. I hope to present everyone’s view fairly, and just sort of remark on some historical precedents I find interesting. I think I want to start by saying there are a couple of different threads of the anti-AI art faction that I actually find somewhat sympathetic. A lot of them look kind of similar, but I think needed to be talked through in different ways. But then I want to situate things a little bit by first situating AI as art, while talking about some of the asymmetries between AI art and traditional art production. My thoughts here are a bit loose and informal, and far more cursory than the subject probably requires. I have tried to avoid the sort of analytic defensive writing approach and tried to make voices heard even if I clearly have a horse in the race.

I am not trying to be glib when I say this, but you tend to notice something about the blogs that are most militantly against AI art. Most of them are people who identify as artists, and have an interest in mostly popular art, or I guess what you could call the folk art of the internet age, an affection for cartoonish types of stylization, and some of them make a more or less middle class life for themselves by doing commissions for their artworks. If your primary exposure to art is in the context of this more-or-less distinctively online subculture, your paradigm cases for artworks are going to be different from someone who has a background in art history or whose primary exposure to art is from museums and high culture environments. As such, there is different data that they are trying to explain when they do informal sorts of theorizing about art, its nature, and its purpose, as well as how they relate to it.

There are a couple of reasons I bring this up. It will serve to contextualize some of my discussion, but it also makes one critical point more salient: the people who are most invested in opposing AI art are people who have socioeconomic reasons to do so. People commission these kinds of artists to fulfill some desire that they have, because they have the sorts of skills necessary to fulfill those desires. Automation, much like in any other industry, cuts out the need for certain kinds of specialized labor. So people with these specialized skills are seeing their financial stability disappear and become vulnerable to structural unemployment. If this describes you, it is rather sensible to be worried about this. But, if this case is anything like its historical precedents, then there is nothing you can do about it. Structural unemployment is more or less a fact of life. Begging the public not to use AI art is something that simply will not work any more than the Luddites smashing machines did.

There is actually a closer relation to early anti-industrialists than first meets the eye. Consider the Arts and Crafts movement in 19th Century England for a particularly lucid analysis of this. Making tables, chairs, clothing, and other household goods used to be something that was done by hand, which required a lot of highly skilled labor. If William Morris is to be believed, then these practices also looked a lot more like art than the sorts of mass produced articles made by factory labor. They were the work of tailors rather than an assembly line of specialized and largely interchangeable workers. This is why you tend to see some strands of medievalism in aestheticist sorts of movements — Remizov adopting old Slavonic scripts and dabbling as an amateur medievalist, and so on. It is also part of the reason that the decorative arts played such an important role for people like Wilde and Huysmans (let no one say I don’t criticize my idols). Of course these aren’t the only reasons, the main bit is about constructing a world of your own good taste, of realizing yourself in the visual medium. Nevertheless we’ve accepted mass produced clothing, furniture and so on. It is something which is both out of our control and necessary to clothe and shelter everybody.

Perhaps the more relevant example would actually be cameras. A lot of what professional painters did before the invention of the camera was work on portraits of very wealthy people. So a lot of artists were employed to just kind of make bric-a-brac for aristocrats and the rising merchant class, and in doing so they’d have to faithfully portray them along pretty category-standard lines. This job does not really exist anymore, at least not in the same capacity that it used to. So people with this highly trained skillset were suddenly out of work. But we still don’t ban cameras to protect the employability of these people. There is a rather human cost to this, but on the other hand, the artist is taken away from a kind of trifling activity and directs their energies toward other thoughts, other skills, other ideas, other projects, etc.

The change is inevitable, but there are small consolations. If you spend much time playing around with AIs, then you start to notice that they are actually kind of flimsy. You cannot get much that is really specific, even if you really clearly articulate what you are looking for. It gets a little confused by that sort of thing. Similarly, they aren’t really good at writing much beyond rephrasing things you say to them, and really struggle with inferential moves funnily enough. And you might think writing is after something that isn’t paraphrase-able, when the current AI tech works from paraphrase. Certainly more advanced and sophisticated machines and programs will come along though. So like, the displacement that’s already happening does not entirely affect the online-artist-as-craftsperson career path because people are looking for you to make specific things with specific compositions. The real displacement is probably going to be much further down the line.

People do, however, give other reasons for thinking that AI art is something to be resisted rather than celebrated or merely tolerated. It would be remiss of me to suggest that gesturing toward a general defeatism about the economic worries was enough.

First, people like to think that the skill of the artist is something that we are looking for in artworks; that skill is somehow a requirement for it being art or being good art at any rate. This, I think, is the result of some deceptive value clarity that comes from too steady a diet of artistic examples. There is in this respect, a sort of symmetry between the pre-20th Century account of art as the sort of triumphant march of the senses, that art history is the history of the clarification of a certain way of seeing, and the sorts of feelings that one gets from folk-artistic practices in which you spend time honing your craft and as a result get more positive feedback on your work. On both accounts, skill has something to do with motor function and being able to see how three dimensional objects can admit themselves two dimensionally. But I think we should be cautious of the appearance of value clarity here, because once we stop inhabiting these contingent historical frameworks and sociological vantages and look to more art history and high art practice, we come to see that something has gone wrong.

Duchamp’s readymades are the commonplace example of this. Duchamp did not really have to do much to get the readymade produced, hence the name. Industrial manufacturers had already seen to that. But he situates the object in a way that gets people to either take this sort of distinctively aesthetic stance toward it or to have the social-institutional framework of validation at the front and center. Skill — or at least the kind of skills involved with our more sundry examples, the ones that have to do with spatial transcription and motor function — has precious little to do with it. Malevich’s suprematist constructions are also rather simple to make. The Black Square is quite simple. It is, after all, just that! But once we take a sort of formal attentive stance, or we start learning enough art history we come to appreciate it a lot more. The way it plays on earlier symbolist works, sorts of Schopenhauerian and Kantian accounts of aesthetic judgment, the way it interfaces with Andrei Bely’s work, the way in which it vindicates Malevich’s own statements about painting the reality of intuition and creates a bold way out of futurism, acting as a “zero of form” within a bombastic futurist opera, its simultaneous endorsement of automatism and rejection of decorative maximalism, and on and on, we come to think it’s quite good. I think the Black Square is magnetic and fascinating. In fact, I think it is one of the greatest paintings of the 20th Century. It revolutionized the Russian avant-garde and was massively influential on the development of modern art. But again, it doesn’t take much skill to make, even though Malevich was a highly proficient painter and illustrator. And this, I think, leads to a lot of dismissive, knee-jerk reactions that prevent people from accessing a really valuable experience and sense of appreciation for something.

The second objection I think gets conflated with the first. People like working on art and developing the skills to make art for its own sake. Becoming a proficient painter or writer is a source of self-esteem and satisfaction when you reflect on your life. There is a sort of eudaimonic happiness we get from cultivating these sorts of practical virtues. I think this is right, but I genuinely do not think that AI art presents a problem for this. Way back in the day, if you wanted to hear music, then you had to play an instrument or have a friend who played, or have the budget to hire someone to play. Sound recording more or less put a stop to that. The demand for live musicians is somewhat lower, but still very present (though perhaps for reasons that are incongruent with the kind of thing that visual art is). But we still like playing music, even if we’re not particularly good at it. I twiddle away at mediocre renditions of Rachmaninoff preludes even though I can flip on a Vladimir Horowitz recording that will show my amateurish twinkling for what it is. But I still love playing the piano and get some sense of self-worth from working at it. The idea that a computer could make some stunning renditions of Bach’s chorales seems to be no more threatening than the recording. I would keep playing at it. We come to recognize the striving activity for what it is. If you’ll permit me a somewhat vulgar anecdote: Zizek once gave this interview where he was asked about the ideal date. What he says is that he wants the machines to do all the sex for him and his date, and that frees them up to talk about movies, and enjoy themselves, and actually have sex in a way that isn’t wrapped up in what he thinks of as satisfying the super-ego. We become much less wrapped up in the actual product that we are aiming for, and become more attentive to the process of becoming something and of working toward something. So AI art actually becomes clarifying because we are not always so interested in the product as we are in the process, in the becoming and so on.

Another reason people say that it is not art is that people value the sorts of interpersonal connections that are mediated by artworks. People love buying biographies of artists that they like and learning all about them — I am certainly among them. I love learning bits of trivia about Prokofiev and Wilde and thinking “he’s just like me for real.” And we do not get the sense that someone is communicating with us when we see a piece of AI art that we reflectively know is AI art. I think there are some problems with this. First, the sorts of AIs that are involved depend heavily on getting the input specified correctly and in a way that’s going to make something good. There’s also the act of curating, and throwing out all the junk that the AI produces even from a good prompt. So people can still communicate and feel connected with each other through this. There is also a way in which our very 20th Century ways of analyzing art can be brought to bear; we do not need intentions to find meaning or significance in something, even artworks. We also find things beautiful and significant all the time, even though they exist without some kind of final purpose and just as a matter of fact. This is, after all, the beauty of nature. Cracked rockfaces, stretches of mold infesting concrete, vibrant birds, and all the other little charms of nature do not have intentions or meanings. We still think of them as aesthetically valuable though. Consider the following, rather relaxed approach to the demarcation problem: artworks are just artifacts that have aesthetic value or are responsive to the kinds of reasons that art critics are interested in. Like any definition of art, this is contestable; but the contestability is built in. Art unfortunately does not have some pre-ordained class of objects that it is trying to capture, but rather is an idea that we negotiate over and debate about because the category is evaluative and figures into our practical reasoning in particular ways. I can credit Bertram and Sundell for parts of this.

I somehow doubt that any of this will be helpful for resolving anything, but they might make things a little more clear. If nothing else, I hope it provokes some more thoughtful discourse about AI art.

I was going to say some of this in a significantly more unhinged way.

1917 marks the first exhibition of the Society of Independent Artists. It is an avant garde effort at attacking the exclusionary policies and insularity of the art world. There are no juries or prizes; the art is displayed in alphabetical order by last name of the artist and all submissions are accepted as long as you can pay a six dollar entry fee.

Marcel Duchamp, one of the founders of the Society of Independent Artists, wonders how open this process really is.

So, under a pen name, he submits a urinal to the exhibition.

This is instantly controversial, and the board of the Society of Independent Artists refuses to display it with the rest of the exhibition, leading Duchamp to resign.

Questions like, “What is art?” and, “Which art has value?” characterize the next century. Can a black square be art? What about random splotches of paint on canvas? What about comic books? Cartoons? Soup cans?

Now, a century after Duchamp’s fountain, we have the beneficiaries of his gesture demanding that we relitigate the Society of Independent Artists exhibition and find against him.

Like, quite literally: the newspapers were pushing a story about a guy who submitted an AI generated image to an art festival and the objections were those leveled against Duchamp’s fountain.

It’s bizarre to me because, one, as we enter the era of poptimism I thought we had settled all this, and two, the shockwaves from Fountain are a large part of the reason that we now take shonen jump comics, superhero movies and furries seriously.

There’s nothing inconsistent I can see about the idea that “Art is a fundamental expression of humanity, and its value comes from the way it can communicate to others and show off the skill of the artist. For example, it might communicate how much the artist likes wolf-girls with huge tits. And no computer can ever create that kind of value.”

But that position cuts across the most important positions of the last two centuries in such a strange and novel way that it is bizarre to me that the people promulgating it seem to act like it’s the most obvious thing in the world.

Will generative AI models “change everything”, as many in Silicon Valley are now claiming? Or are they just the next superficially impressive but overhyped product of the tech industrial complex?

I find it hard not to be cynical about this. There are many parallels between generative AI and crypto, for example. Both technologies are solutions to longstanding problems in computer science. Both are very flashy and impressive at a superficial level. Both have lots of loud evangelists in the tech world who tell superficially plausible stories about how the technology will radically change society. Both have yet to have made any major impacts on the real economy or people’s day-to-day lives, especially not relative to the hype from their evangelists. Both have hardcore fans who will condescendingly mock you on social media if you question the tech or its usefulness. Both do have some legitimate use cases, but again these are much smaller and niche than the hype suggests. Both have, among their proponents, rich tech businesspeople who are very skilled at hyping up products so they can make money.

I’m not saying the analogy is perfect. I suspect generative AI will ultimately prove more useful than crypto, but that’s not a super high bar. 

Ok, but how will we know if my cynicism is justified? I don’t think how “cool” or “impressive” generative AI seems is evidence one way or the other. I’m sure GPT-4 will do some things that GPT-3 can’t that will seem like “magic”, some of which may be unexpected. 

Instead, I think we need to wait some time, probably 2-5 years, and then see whether generative AI has made major impacts on the economy, work, day-to-day life, or society in general. Of course, “significant” is fuzzy and subjective, but here are some of the things I’d look for:

  • Do software, or software features, that rely on generative AIs become “must have” tools for whole categories of jobs, like writing or digital art? This should be comparable to how any professional who writes as part of their job needs to use a word processor, for example. The impact should be dramatic enough that an average member of the profession simply cannot compete for work without the aid of generative AI tools.
  • Are there significant job losses in whole classes of jobs because generative AIs partially or fully replace employees in those jobs, such as administrative assistants or tech support or translators?
  • Is productivity measurably increased in some classes of jobs or industries due to the introduction of generative AI tools?
  • Is there widespread use of generative AIs in our day-to-day routines, at least for the more tech savvy among us, that isn’t merely for the purpose of novelty or entertainment?
  • Are any existing computer tools and technologies dramatically improved by the introduction of generative AI, such that you “can’t go back” to the pre-generative AI versions? Some examples of tools I’m thinking of: Google search, Wikipedia search, Google Maps, word processors, databases, spreadsheet software, ad targeting, etc. Minor improvements or peripheral features don’t count.

So much discourse in this area seems to be based solely on vibes, I hope we can move away from that.

Ramblings about dating

My girlfriend of six years and I broke up back in June. I may write more about that another time, if I feel comfortable writing about it publicly, but since we’ve been broken up I’ve tried to start casually dating (ideally a FWB situation, though something even more casual would be fine), and so far it hasn’t been a pleasant experience.

I think I’m more of an attractive prospect than I have ever been before. My IRL social life is more active, I have a stable full-time job and my own place, I’m in the best shape of my life (I’m not ripped or anything but I have been working out consistently for the past year and a half), I’m more confident and less socially anxious, and I have pretty decent pictures for online dating sites. Oh, and I paid for Tinder Platinum, too. I still have plenty of flaws but I feel much better about myself and my life than I did a few years ago. But despite that, things have not been going well.

First, there’s the problem many men face of not getting many matches. Now, I was getting far more than I had before, but that was still only 1-3 matches a week despite me swiping right on profiles I was on the fence about. Of those matches, about half never bothered to respond to my first message, and of those that did respond, probably 2/3 stopped responding after one or two back-and-forths. So really I was only getting about one match a month that went anywhere. And (perhaps unsurprisingly?) these few women were the matches I was least interested in based on physical attractiveness and common interests (I don’t think I have unreasonably high standards in either of those areas either). A couple of them ghosted me before we could nail down an exact day and time for a date, and I had zero chemistry with the two I actually managed to meet in person (didn’t help that both looked significantly different in person than they did in their photos). 

The second thing that made dating unpleasant was that I had a hard time having a casual mindset about finding dates. I got obsessive about dating, and spent hours a day reading advice, tinkering with my profile, swiping, and messaging. I got very depressed about every setback and disappointment. This was both very emotionally draining and quickly had diminishing returns. Ideally, if one is looking for something more casual, they shouldn’t put such a large emphasis on dating, it’s not really a healthy dynamic (probably isn’t healthy when looking for something serious either). I started to feel jaded about dating and felt nothing when I messaged with a match.

Finally, it was hard because I’m still processing everything about my breakup with my ex. We were very compatible in so many ways, but there were a few specific things that made the relationship very hard on me in particular. I don’t regret the breakup, but I still feel sad for what could have been, since we really did have so much going for us, and I think deep down I’m still hoping we can get back together if she grows as a person in certain ways. We still talk and hang out occasionally as friends (after a period of no contact), which I really don’t want to stop doing but might have to reduce the frequency of pretty soon if I want to move on. 

I’m really burnt out on dating at this point, so I’m taking a break until after the new year. I had naively thought/hoped that finding something on the more casual side would be easier, after all there should be fewer deal breakers than in serious dating, but that hasn’t been the case.

What about meeting people in person? Well, my social life is going much better than in the past, but my current circle is very male-dominated, and the women who are in the circle are either in long term relationships, or single for a reason.

with regards to my earlier comments about language processing and generation being a Pareto problem, and the limitations inherent in that nature triggering the first big wave of AI pessimism: a weird thing I see in the rationalist AI people is that they invert this, and argue, well, we have natural-language processing with obvious deficiencies, and we know we can improve it with scale, and that the “hard” improvements require general intelligence and contextual life experience, so it’s inevitable that we’ll develop general AI and synthesized life experience just as a side-effect of throwing more scale at language models. this seems, like, extremely insane to me? it’s correct as a theoretical asymptotic result, but that has to be the least efficient way imaginable to get there.

It’s sort of like, OK, Sagan has that “to make an apple pie you must first create the universe” thing, like let’s say we have some AI that simulates apple pie, and then you check it against known pies and people’s judgment of how realistic the pie is, presumably we can design a complete, perfect-fidelity simulation of the entire universe by throwing sufficient compute at the pie simulator! It’s taking an argument about how hard something is to argue that it’s easy, and it’s interesting as a thought experiment, but it seems likely to me that these things will, in fact, turn out to be hard, even if we end up being surprised about how well you can do before the diminishing returns become severe.

(This is not even getting into the degree to which asymptotically-perfect text completion, given a corpus of texts from different periods with questionable date labelling, may well entail an ability to accurately predict the future.)

I think it’s a mistake to evaluate how impactful current AI tech will be on society based on how “impressive” it is to interact with for 15 minutes. It might even be a mistake to evaluate it based on how well it does on benchmarks. 

Ultimately what matters is actual, observed, major use cases that majorly impact society or the economy. The fact that we’ve had GPT-3 for years, and GPT-2 for even longer, and we haven’t seen them make much impact on the real world, makes me skeptical of the impact that LLMs alone will have. 

I’m not saying they’re useless or there won’t be successful products based on LLMs, but this idea that we’re close to AGI, or that GPT-3 or GPT-4 will automate away a ton of office jobs seems false to me based on what we’ve seen so far.

One other AI take: I think there are two competing models for how AI progress will go.

One model is the exponential growth model, where the rate of progress in AI development will continually increase over time. Progress in AI will lead to greater investment and better tools to create AI (some of which will be AI tools), which will lead to better AI, which leads to more investment and better tools, etc. 

Here’s one of the more famous illustrations of this: 

If you believe this model, you think that GPT-3 is just the beginning, and soon we’ll have AIs that can automate away many office jobs if not straight up AGI.

The other model is what I think of as the diminishing returns model. In this model, recent AI progress due to LLMs and transformers is an S-curve, and we’re somewhere in the middle of that curve.

This curve is from a blog post about entrepreneurship but I think it applies to AI as well under this model.

The idea is that progress is really hard initially, you don’t know what you’re doing, and then suddenly you have a breakthrough and you’re making lots of progress quickly. However, this phase of rapid progress, though impressive, is short-lived. All the low-hanging fruit is picked quickly, and the remaining problems become more and more difficult to solve. Eventually even minor progress is expensive and time consuming, if it’s even realistically possible.
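
(A quick numerical sketch of the two shapes, with arbitrary made-up numbers: early on, an S-curve is nearly indistinguishable from an exponential, which is part of why it’s hard to tell from the inside which regime you’re in.)

```python
# Arbitrary illustrative numbers: an exponential vs. a logistic (S-curve).
# Early on the two look alike; the S-curve then flattens toward its ceiling.
import math

def exponential(t, rate=0.5):
    return math.exp(rate * t)

def s_curve(t, ceiling=150.0, rate=0.5, midpoint=10.0):
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

for t in range(0, 21, 4):
    print(f"t={t:2d}   exponential={exponential(t):10.1f}   s_curve={s_curve(t):6.1f}")
```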

A lot of technology works this way. Think about smartphones. The jump from, say, the Blackberry to the iPhone 1 was huge. And then for many years it seemed like each new iPhone release made major improvements that immediately made the previous models obsolete. However, this pace of improvement has slowed dramatically. How different is an iPhone 14 from an iPhone 11, or even an iPhone 7? Better, but not that much better.

To me, this seems to be the mental model of AI researchers who are more cautious in their forecasts about AI progress. We’ve made lots of progress on LLMs, but we’re approaching the point where further improvements based only on LLMs and scaling are going to be more and more difficult to attain, and we’ll need new breakthroughs to make further progress.

I think that, from an outside view perspective, the latter model is more likely to be correct, but we’ll see. 

I think it’s a mistake to evaluate how impactful current AI tech will be on society based on how “impressive” it is to interact with for 15 minutes. It might even be a mistake to evaluate it based on how well it does on benchmarks. 

Ultimately what matters is actual, observed, major use cases that majorly impact society or the economy. The fact that we’ve had GPT-3 for years, and GPT-2 for even longer, and we haven’t seen them make much impact on the real world, makes me skeptical of the impact that LLMs alone will have. 

I’m not saying they’re useless or there won’t be successful products based on LLMs, but this idea that we’re close to AGI, or that GPT-3 or GPT-4 will automate away a ton of office jobs seems false to me based on what we’ve seen so far.

the LLMs have clearly demonstrated the ability of transformers to approximate complex functions and probability distributions, even if they have multilevel structure the way language does.

Whether you can code an AGI with this new approximation technique remains to be seen.

There’s some argument that the LLMs are already doing something AI-like. Basically, perfect prediction of like, what a human from the training dataset would write, requires the knowledge and reasoning of a human. So a “perfect” LLM would need humanlike knowledge and intelligence.

I guess from there, you could say as LLMs approach perfection with scaling, they are approaching intelligence. The question of course is whether they’re actually doing that, or if they’re just approaching the optimal performance achievable without knowledge or reasoning.

Reading about Chinchilla made a big impression on me… seems to me making the LLMs better requires scaling input data, not scaling weights. That doesn’t look like approaching intelligence as the “thinking” part gets more sophisticated, it looks like getting better as you have a bigger database to blindly-ish copy from.
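
(For anyone who hasn’t read the Chinchilla paper: its headline rule of thumb is roughly 20 training tokens per parameter for compute-optimal training. A back-of-the-envelope sketch using the commonly cited approximate figures, so treat the numbers as illustrative:)

```python
# Back-of-the-envelope using the Chinchilla rule of thumb (~20 tokens per
# parameter for compute-optimal training). Parameter/token counts below are
# the commonly cited approximate figures, not exact.
def chinchilla_optimal_tokens(params: float) -> float:
    return 20.0 * params

models = {
    "GPT-3":      (175e9, 300e9),   # ~175B params, ~300B training tokens
    "Chinchilla": (70e9, 1.4e12),   # ~70B params, ~1.4T training tokens
}

for name, (params, tokens) in models.items():
    optimal = chinchilla_optimal_tokens(params)
    print(f"{name}: trained on ~{tokens / optimal:.0%} of its "
          f"compute-optimal token budget (~{optimal:.1e} tokens)")
```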

I mean, if you’re looking for use cases… there’s a lot of people on HN now saying they’ve been using ChatGPT to answer programming questions and write simple pieces of code.

This definitely is a portent of… something. I don’t entirely know what yet, but whatever it is, it is big.

Is it though? Like how much faster will a programmer do their work with this versus googling and copying and pasting code already online? Probably somewhat faster, but I doubt it’s a big enough change to radically increase productivity or mean we’ll need fewer programmers in the near future.

To be revolutionary, it has to significantly and reliably outperform a search engine.


Back On Here for the first time in over a year. Hey everyone. Glad to see there’s still a lot of activity (if anything my dash feels slightly more active than it did when I logged on last?). 

The UI is different and looks nice.

I don’t know if I’ll be posting or even lurking again. I’ve been (largely unsuccessfully) trying to reduce my screen time and especially my social media time. I basically replaced Tumblr with a mix of Twitter (mostly lurking), Reddit, and Discord. If anything those felt just as addicting as Tumblr, if not more so.

On the other hand, I do think I have a few posts in me. A lot has happened in my life and I have Thoughts on many things. We’ll see.