
Noted Toad Anthropologist

@froodycartographer / froodycartographer.tumblr.com

Came for frog show, staying because there are a lot of maps I want to yell about.

when your pet comes to you from another room, the preceding moments meant that they were alone somewhere and thought about YOU, an animal brain literally thought about you and came to you to see what you were doing. that’s love, unconditional.


people talk a lot about the excessive prevalence of combat in ttrpgs (I'd say "it's fine to have games like that but we probably do have too many") but I think one of the basic issues is that nobody has ever come up with a mechanically compelling conflict mechanism + play loop for something other than combat. mid-crunch combat games manage to wed a predictable "game-like" mechanical framework to a diegetic framework that not only contextualizes the mechanics but also structures departures from them (via improvisation and rulings-not-rules). it's easy to have one half of this but I have not seen anybody do both for anything other than physical conflict.

This is actually not that uncommon in medium-crunch games, but I've never seen it done in a way that really approached the mechanical depth of, e.g., D&D with no spellcasters or magic items. I'll be honest that CofD/nwod2e's implementation is among my least favourite -- it kind of reminds me of D&D 4e's skill challenges in that it adds a lot of process but doesn't get that much out of it. I've seen what I would characterize as "good" social conflict systems in RPGs, though -- it's just that, in general, I don't think that even those systems could bear the focus of the game to the same extent that combat bears the focus of dungeon crawlers.

One thing I'd considered elaborating on in my original post is that social and political stuff is in a tricky spot. It's the most obvious choice of a central conflict mechanism other than battle, but intricately mechanical social systems -- those with the same technical and strategic complexity as D&D combat -- are deeply off-putting to many people. And there's a fundamental dilemma: if your central conflict mechanism is about persuasion, you want players to be mechanically persuadable by NPCs in the same way that they can be killed or poisoned or whatever in combat, but a huge swath of players will outright refuse to play a game where that's possible.

Which hooks into the other thing, which is, okay, as I alluded to above, an engaging system needs to be linked up with the narrative enough that it's possible for the narrative to sometimes supersede the system in commonsensical ways (e.g. "you can't really use that huge waraxe here, it's too narrow"). There's lots of room for rulings that give a meaningful sense of context to the bare mechanics, and that's what separates a mid-to-high-crunch TTRPG from a board or video game. But social rulings tend to be super contentious, because so many of the fundamental constraints are about belief or volition. Those are notoriously fuzzy, and to the extent they can supersede mechanics, it's hard to trust players not to let their desire for a specific outcome interfere with their intuition for what's realistic. And no social system has the resolution to describe what's realistic in the mechanics; it's too complex! So I think this is why, in practice, you basically never see this even though it's such an intuitive idea.

That's the main issue, but I think there's a secondary consideration in that squad tactics aren't really a thing in most social situations, but you need crunch-heavy RPG conflicts to use squad tactics because there's a squad and everybody wants to participate meaningfully. I think you could conceivably make this work if you zoomed out to long-term large-scale politics, but that radically changes the game because now you're roleplaying like a history instead of like a novel.

This stuff is very cool (OK I only looked at the first two, but they were cool), but as regards the topic -- I feel like by putting the emphasis on procedure and by emphasizing OSR in general, you mean to hint that you think the way I've framed the problem is wrong: that I shouldn't be focusing on things that work in the same style as modern D&D's play loop, but should be looking at the wider concept space. And I want to try to explain a bit better why I'm doing that, because it's not just an unexamined preference.

The thought that got me on this tangent was that this play pattern I've been talking about is one of the things that leads to the overwhelming dominance of combat-oriented games -- obviously, there are many broader cultural factors as well! -- and I think that happens because the pattern itself is overwhelmingly popular. On the whole, players seem to prefer games where the central play activity is mechanically dense and gamist, even in contexts where they ignore all that stuff in actual play. The things that style favours are therefore favoured themselves, incidentally, and promoting other play styles -- making them popular, not just available -- depends in part on being able to express them in this idiom. (I had a similar post about why sprawling unfocused games so reliably dominate tight and focused ones, though it's not relevant to this topic.)

It's common to chalk this up to a contagion effect from D&D, but I'm not so sure -- as the first article here aptly notes, D&D evolved into this mode because it was the most popular, contrary to design intent. And I'm guessing that if you weight by play time, this style of play dominates everything else even if you exclude D&D and derivatives like Pathfinder. (Maybe not? I'm no longer very plugged in, and this relies on market research that isn't available. But if it's changed, it's gotta be only in the past 10 or 15 years.) OSR has built something that is valuable and cool but also relatively unpopular, and I don't think that or any of the other infinity of novel indie games will change the general tendency as long as it's just them doing it.

I am actually really tempted to get into the proceduralism stuff here in more depth because it's interesting in its own right! But the thread is already getting a little long to go on a digression, probably. Another day!

Freedom of reach IS freedom of speech

The online debate over free speech suuuuucks, and, amazingly, it’s getting worse. This week, it’s the false dichotomy between “freedom of speech” and “freedom of reach,” that is, the debate over whether a platform should override your explicit choices about what you want to see:

It’s wild that we’re still having this fight. It is literally the first internet fight! The modern internet was born out of an epic struggle between “Bellheads” (who believed centralized powers should decide how you used networks) and “Netheads” (who believed that services should be provided and consumed “at the edge”):

The Bellheads grew out of the legacy telco system, which was committed to two principles: universal service and monetization. The large telcos were obliged to provide service to everyone (for some value of “everyone”), and in exchange, they enjoyed a monopoly over the people they connected to the phone system.

That meant that they could decide which services and features you had, and could ask the government to intervene to block competitors who added services and features they didn’t like. They wielded this power without restraint or mercy, targeting, for example, the Hush-A-Phone, a cup you stuck to your phone receiver to muffle your speech and prevent eavesdropping:

They didn’t block new features for shits and giggles, though — the method to this madness was rent-extraction. The iron-clad rule of the Bell System was that anything that improved on the basic service had to have a price-tag attached. Every phone “feature” was a recurring source of monthly revenue for the phone company — even the phone itself, which you couldn’t buy, and had to rent, month after month, year after year, until you’d paid for it hundreds of times over.

This is an early and important example of “predatory inclusion”: the monopoly carriers delivered universal service to all of us, but that was a prelude to an ugly, parasitic, rent-seeking way of doing business:

It wasn’t just the phone that came with an unlimited price-tag: everything you did with the phone was also a la carte, like the bananas-high long-distance charges, or even per-minute charges for local calls. Features like call waiting were monetized through recurring monthly charges, too.

Remember when Caller ID came in and you had to pay $2.50/month to find out who was calling you before you answered the phone? That’s a pure Bellhead play. If we applied this principle to the internet, then you’d have to pay $2.50/month to see the “from” line on an email before you opened it.

Bellheads believed in “smart” networks. Netheads believed in what David Isenberg called “The Stupid Network,” a “dumb pipe” whose only job was to let some people send signals to other people, who asked to get them:

This is called the End-to-End (E2E) principle: a network is E2E if it lets anyone receive any message from anyone else, without a third party intervening. It’s a straightforward idea, though the spam wars brought in an important modification: the message should be consensual (DoS attacks, spam, etc don’t count).

The degradation of the internet into “five giant websites, each filled with screenshots of text from the other four” (h/t Tom Eastman) meant the end of end-to-end. If you’re a Youtuber, Tiktoker, tweeter, or Facebooker, the fact that someone explicitly subscribed to your feed does not mean that they will, in fact, see your feed.

The platforms treat your unambiguous request to receive messages from others as mere suggestions, a “signal” to be mixed into other signals in the content moderation algorithm that orders your feed, mixing in items from strangers whose material you never asked to see.

There’s nothing wrong in principle with the idea of a system that recommends items from strangers. Indeed, that’s a great way to find people to follow! But “stuff we think you’ll like” is not the same category as “stuff you’ve asked to see.”

Why do companies balk at showing you what you’ve asked to be shown? Sometimes it’s because they’re trying to be helpful. Maybe their research, or the inferences from their user surveillance, suggests that you actually prefer it that way.

But there’s another side to this: a feed composed of things from people is fungible. Theoretically, you could uproot that feed from one platform and settle it in another one — if everyone you follow on Twitter set up an account on Mastodon, you could use a tool like Movetodon to refollow them there and get the same feed:
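To make that fungibility concrete, here is a minimal sketch of the refollow step, in the spirit of Movetodon rather than its actual code. It assumes you have already exported your follows as a list of "user@instance" handles; MY_INSTANCE and TOKEN are placeholders, and the two endpoints are Mastodon's documented account-lookup and follow calls.

```python
# Illustrative sketch only, not Movetodon's real implementation.
import requests

MY_INSTANCE = "https://mastodon.example"  # hypothetical home server
TOKEN = "..."  # access token with follow permissions (placeholder)

def refollow(handles):
    auth = {"Authorization": f"Bearer {TOKEN}"}
    for handle in handles:
        # Ask our home server to resolve the handle to an account id.
        # (Accounts the server has never seen may need a resolving
        # search first; this sketch skips that wrinkle.)
        resp = requests.get(
            f"{MY_INSTANCE}/api/v1/accounts/lookup",
            params={"acct": handle},
            headers=auth,
        )
        if resp.status_code != 200:
            continue  # couldn't resolve this handle; move on
        account_id = resp.json()["id"]
        # Follow the account; federation delivers their posts to us.
        requests.post(
            f"{MY_INSTANCE}/api/v1/accounts/{account_id}/follow",
            headers=auth,
        )

refollow(["alice@example.social", "bob@example.online"])
```

The specific calls matter less than the principle: a follow list is portable data that any server can act on.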

A feed that is controlled by a company using secret algorithms is much harder for a rival to replicate. That’s why Spotify is so hellbent on getting you to listen to playlists, rather than albums. Your favorite albums are the same no matter where you are, but playlists are integrated into services.

But there’s another side to this playlistification of feeds: playlists and other recommendation algorithms are chokepoints: they are a way to durably interpose a company between a creator and their audience. Where you have chokepoints, you get chokepoint capitalism:

That’s when a company captures an audience inside a walled garden and then extracts value from creators as a condition of reaching them, even when the audience requests the creator’s work. With Spotify, that manifests as payola, where creators have to pay for inclusion on playlists. Spotify uses playlists to manipulate audiences into listening to sound-alikes, silently replacing the ambient artists that listeners tune in to hear with work-for-hire musicians who aren’t entitled to royalties.

Facebook’s payola works much the same: when you publish a post on Facebook, you have to pay to boost it if you want it to reach the people who follow you — that is, the people who signed up to see what you post. Facebook may claim that it does this to keep its users’ feeds “uncluttered” but that’s a very thin pretense. Though you follow friends and family on Facebook, your feed is weighted to accounts willing to cough up the payola to reach you.

The “uncluttering” excuse wears even thinner when you realize that there’s no way to tell a platform: “This isn’t clutter, show it to me every time.” Think of how the cartel of giant email providers uses the excuse of spam to block mailing lists and newsletters that their users have explicitly signed up for. Those users can fish those messages out of their spam folders, they can add the senders to their address books, they can write an email rule that says, “If sender is X, then mark message as ‘not spam’” and the messages still go to spam:

One sign of just how irredeemably stupid the online free expression debate is: we’re arguing over stupid shit like whether unsolicited fundraising emails from politicians should be marked as spam, rather than whether solicited, double-opt-in newsletters and mailing lists should be:

When it comes to email, the stuff we don’t argue about is so much more important than the stuff we do. Think of how email list providers blithely advertise that they can tell you the “open rate” of the messages that you send — which means that they embed surveillance beacons (tracking pixels) in every message they send:

Sending emails that spy on users is gross, but the fucking disgusting part is that our email clients don’t block spying by default. Blocking tracking pixels is easy as hell, and almost no one wants to be spied on when they read their email! The onboarding process for webmail accounts should have a dialog box that reads, “Would you like me to tell creepy randos which emails you read?” with the default being “Fuck no!” and the alternative being “Hurt me, Daddy!”
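To show how low that bar is, here's a sketch of the technique, assuming the mail client hands us the message's HTML body as a string: drop every image that loads from a remote server, which is where beacons live. This is an illustration, not any real client's implementation.

```python
# Illustrative sketch: strip remote images (potential tracking
# beacons) from an email's HTML before rendering. Real clients
# typically block or proxy remote loads; the principle is the same.
from html.parser import HTMLParser

class PixelStripper(HTMLParser):
    """Re-emits the HTML, dropping <img> tags with remote sources."""

    def __init__(self):
        super().__init__(convert_charrefs=False)
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src") or ""
            if src.startswith(("http://", "https://", "//")):
                return  # remote image: could be a beacon, drop it
        self.out.append(self.get_starttag_text())

    def handle_startendtag(self, tag, attrs):
        self.handle_starttag(tag, attrs)  # treat <img .../> the same

    def handle_endtag(self, tag):
        self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(data)

    def handle_entityref(self, name):
        self.out.append(f"&{name};")

    def handle_charref(self, name):
        self.out.append(f"&#{name};")

def strip_beacons(html_body: str) -> str:
    parser = PixelStripper()
    parser.feed(html_body)
    return "".join(parser.out)

print(strip_beacons(
    '<p>Hi!</p>'
    '<img src="https://tracker.example/open?uid=123" width="1" height="1">'
))  # prints: <p>Hi!</p>
```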

If email providers wanted to “declutter” your inbox, they could offer you a dashboard of senders whose messages you delete unread most of the time and offer to send those messages straight to spam in future. Instead, they nonconsensually intervene to block messages and offer no way to override those blocks.
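For contrast, here's a hypothetical sketch of that consent-first dashboard's logic: a per-sender tally of mail deleted unread, surfaced as suggestions the user can accept or reject. The event format and thresholds are invented for illustration.

```python
# Hypothetical sketch: rank senders by how often their mail is
# deleted unread, then ASK the user; never silently block.
from collections import Counter

def suggest_spam_filters(events, min_messages=10, threshold=0.9):
    """events: iterable of (sender, was_read, was_deleted) tuples."""
    total = Counter()
    deleted_unread = Counter()
    for sender, was_read, was_deleted in events:
        total[sender] += 1
        if was_deleted and not was_read:
            deleted_unread[sender] += 1
    candidates = [
        (sender, deleted_unread[sender] / total[sender])
        for sender in total
        if total[sender] >= min_messages
        and deleted_unread[sender] / total[sender] >= threshold
    ]
    # Highest delete-unread rate first; each entry becomes a
    # suggestion in the dashboard, not an automatic block.
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)
```

The design choice that matters is the return value: suggestions for the user to approve, not a filter applied behind their back.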

When it comes to recommendations, companies have an unresolvable conflict of interest: maybe they’re interfering with your communications to make your life better, or maybe they’re doing it to make money for their shareholders. Sorting one from the other is nigh impossible, because it turns on the company’s intent, and it’s impossible to read product managers’ minds.

This is intrinsic to platform capitalism. When platforms are getting started, their imperative is to increase their user-base. To do that, they shift surpluses to their users — think of how Amazon started off by subsidizing products and deliveries.

That lured in businesses, and shifted some of that surplus to sellers — giving fat compensation to Kindle authors and incredible reach to hard goods sellers in Marketplace. More sellers brought in more customers, who brought in more sellers.

Once sellers couldn’t afford to leave Amazon because of customers, and customers couldn’t afford to leave Amazon because of sellers, the company shifted the surplus to itself. It imposed impossible fees on sellers — Amazon’s $31b/year “advertising” business is just payola — and when sellers raised prices to cover those fees, Amazon used “Most Favored Nation” contracts to force sellers to raise prices everywhere else.

The enshittification of Amazon — where you search for a specific product and get six screens of ads for different, worse ones — is the natural end-state of chokepoint capitalism:

That same enshittification is on every platform, and “freedom of speech is not freedom of reach” is just a way of saying, “Now that you’re stuck here, we’re going to enshittify your experience.”

Because while it’s hard to tell if recommendations are fair or not, it’s very easy to tell whether blocking end-to-end is unfair. When a person asks for another person to send them messages, and a third party intervenes to block those messages, that is censorship. Even if you call it “freedom of reach,” it’s still censorship.

For creators, interfering with E2E is also wage-theft. If you’re making stuff for Youtube or Tiktok or another platform and that platform’s algorithm decides you’ve broken a rule and therefore your subscribers won’t see your video, that means you don’t get paid.

It’s as if your boss handed you a paycheck with only half your pay in it, and when you asked what happened to the other half, your boss said, “You broke some rules so I docked your pay, but I won’t tell you which rules because if I did, you might figure out how to break them without my noticing.”

Content moderation is the only part of information security where security-through-obscurity is considered good practice:

That’s why content moderation algorithms are a labor issue, and why projects like Tracking Exposed, which reverse-engineer those algorithms to give creative workers and their audiences control over what they see, are fighting for labor rights:

We’re at the tail end of a ghastly, 15-year experiment in neo-Bellheadism, with the big platforms treating end-to-end as a relic of a simpler time, rather than as “an elegant weapon from a more civilized age.”

The post-Twitter platforms like Mastodon and Tumblr are E2E platforms, designed around the idea that if someone asks to hear what you have to say, they should hear it. Rather than developing algorithms to override your decisions, these platforms have extensive tooling to let you fine-tune what you see.

This tooling was once the subject of intense development and innovation, but all that research fell by the wayside with the rise of the big platforms, which are actively hostile to third-party mods that give users more control over their feeds:

Alas, lawmakers are way behind the curve on this, demanding new “online safety” rules that require firms to break E2E and block third-party de-enshittification tools:

The online free speech debate is stupid because it has all the wrong focuses:

  • Focusing on improving algorithms, not whether you can even get a feed of things you asked to see;
  • Focusing on whether unsolicited messages are delivered, not whether solicited messages reach their readers;
  • Focusing on algorithmic transparency, not whether you can opt out of the behavioral tracking that produces training data for algorithms;
  • Focusing on whether platforms are policing their users well enough, not whether we can leave a platform without losing our important social, professional and personal ties;
  • Focusing on whether the limits on our speech violate the First Amendment, rather than whether they are unfair:

The wholly artificial distinction between “freedom of speech” and “freedom of reach” is just more self-serving nonsense and the only reason we’re talking about it is that a billionaire dilettante would like to create chokepoints so he can extract payola from his users and meet his debt obligations to the Saudi royal family.

Billionaire dilettantes have their own stupid definitions of all kinds of important words like “freedom” and “discrimination” and “free speech.” Remember: these definitions have nothing to do with how the world’s 7,999,997,332 non-billionaires experience these concepts.

[Image ID: A handwritten letter from a WWI soldier that has been redacted by military censors; the malevolent red eye of HAL9000 from 2001: A Space Odyssey has burned through the yellowing paper.]