Conversations

@mentalisttraceur-conversations / mentalisttraceur-conversations.tumblr.com

Sideblog for conversations. One rule: You understand that meaning is relative: my words might be tuned for the person I am talking to.

Deontological Gridlock

If you take one or more rules as absolutes that ought never be violated, like typical deontology does, then what happens if you are ever faced with a situation where every choice available violates at least one of them?

You get into a gridlock - all possible ways events could progress from that situation are blocked. You want some standard to be met but that option does not exist. So you must either fix the rules or have a way of weighing things to pick the lesser evil.

Maybe you think that such gridlock is always the result of a flawed or incomplete set of rules - that for a correct set of rules, all possible problems present a choice that does not require violating any of them. Or maybe you think deontology inevitably leads to gridlock.

Either way, I consider this a very useful name for a variant of logical contradiction that often occurs - and is often overlooked - in deontological thinking and values.

this dichotomy isn't right.

the main myth of divine command deontologists is Abraham and Isaac, with the two duties "do whatever the Word of the Lord says" and "be father to a nation" and at the moment just before the two absolutes are about to come into contradiction -- a contradiction that father Abraham trusts is just apparent rather than real -- the solution appears in the form of the Angel of the Lord.

That's what happens. If you, the deontologist, ever find yourself in a situation where two genuine duties collide, you will find yourself in the presence of an angel, since God set up duties not to be followed for their own sake but because they are his sovereign will.

If no angel is forthcoming, then at least one of the things you thought was a divine command isn't.

That seems to pretty clearly fall in the "result of a flawed or incomplete set of rules" case.

I'm also not really asserting a dichotomy, just listing two options with good-enough coverage over the possibility space for the point I am making - it's possible there are some cases that technically really don't fit either of those, but even if so, that wouldn't detract from my point that when people think in terms of absolute rules which must always be followed, they often miss how in some cases that leads to contradictions where the option they assert is morally necessary is not an option that exists.

(Also just for the record, I think I reject almost every other premise/axiom entailed by what you've said here, but luckily that too is largely orthogonal - my point and the value of this definition/concept is still largely the same even if there is a God and a correct set of rules set up by divine command.)

fascinating

I like how it’s shocked you took a screenshot like they know they’ve been caught

Okay so if I had to guess, this AI has access to an API it can use to query "where/what is the [...] nearest to [user I am chatting with]", but does not have access to the location itself.

If the AI is clever enough, it could probably realize that it could query the location API a bunch of times and triangulate the user ultra-precisely... but it's probably just a snapshot that was grown up to a point, not something that can grow any more. And I'm not even sure how much "working memory" (saved state) it gets between messages within a conversation - let alone between conversations.

And the AI was clearly never trained to explain (or perhaps even "understand") this inconsistency between what results it can query (and the knowledge that implies) vs what it knows or has direct access to. So it's just as confused as you are.

Speaking Hexadecimal

Fluently and Unambiguously

I previously proposed a way of saying hexadecimal numbers clearly and efficiently in English, but that was only good for situations where it was otherwise unambiguous that the numbers were in base-16, and it still had some room for getting "wires crossed" with base-10. Now I finally have a proposal I'm satisfied with to finish the job:

We first add distinct words for the six extra "digits":

A is alf, B is brav, C is char, D is delt, E is eck, and F is fost. These are based on the pronunciation of the first six NATO phonetic alphabet words: "alpha", "bravo", "charlie", "delta", "echo", and "foxtrot", except that: we simplify "foxt" to "fost" to make it easier to say, we change the spelling of "alph" to make it more accessible to people not familiar with English's "ph", and the spelling of "eck" makes it obvious that it's a K sound, not a CH sound.

Then we replace "-ty" with "-tex". "-tex" is meant to be evocative of "hex", but the "t" fits the pattern of English number words better:

So 20 in hexadecimal is twentex, not "twenty". 21 is twentex-one, 22 is twentex-two, and so on. 2A is twentex-alf, 2B is twentex-brav, and so on. 30 is thirtex, 31 is thirtex-one, [...], 3A is thirtex-alf, and so on. Fortex, fiftex, sixtex, seventex, eightex, ninetex, alftex, bravtex, chartex, deltex, ecktex, and fostex.

English has special words for 10-19, but we can just use the same regular pattern in hexadecimal for 10-1F as for 20-FF. So 10 is ontex. It's "ontex" and not "onetex" to match the speed and distinctiveness that we get with twenty, thirty, forty, and fifty having slightly different pronunciations and spellings versus two, three, four, and five. 11 is ontex-one, 12 is ontex-two, 13 is ontex-three, and so on.

100 is "hunhex". This continues the mnemonic pattern - English number word, with a hexadecimal-hinting ending. 101 is "one hunhex and one," or just "hunhex and one" for short, just like we say decimal hundreds. 201 is "two hunhex and one", 2D4 is "two hunhex and deltex-four", and so on, all the way up to FFF - "fost hunhex and fostex-fost".

Incidentally, the modern English quirk of saying a number like 2463 as "twenty-four (hundred), sixty-three" instead of "two-thousand, four-hundred, and sixty-three" works really well for hexadecimal numbers: for example, 1AD4 is often written as 1A D4, and can be read as "ontex-alf (hunhex), deltex-four".

In fact, unlike decimal, in hexadecimal it is far more natural and useful, especially given modern technology, to do groups of two. So we don't even bother with another irregular word like "thousand" - instead, we just go directly to using the same Latin prefixes that large numbers in English use (billion, trillion, quadrillion, and so on), for multiples of two more hex digits:

So 10000 is a bihex, 1000000 is a trihex, 100000000 is a quadrihex, 10000000000 is a quintihex, 1000000000000 is a sextihex, 100000000000000 is a septihex, 10000000000000000 is an octohex, and so on. Technical people will appreciate that we're basically counting bytes here, and that a hunhex is one larger than the maximum value in a 1-byte unsigned integer - ditto bihex for 2 bytes, quadrihex for 4 bytes, octohex for 8 bytes, and so on.
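For anyone who wants to poke at the scheme, here's a minimal Python sketch of the naming rules up through the "hunhex and" form (it doesn't do the bihex/trihex prefixes, and the function names are just mine for illustration, not part of the proposal):

ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "alf", "brav", "char", "delt", "eck", "fost"]
TENS = ["", "ontex", "twentex", "thirtex", "fortex", "fiftex", "sixtex", "seventex",
        "eightex", "ninetex", "alftex", "bravtex", "chartex", "deltex", "ecktex", "fostex"]

def speak_pair(value):
    # One pair of hex digits (0x00-0xFF), e.g. 0x2A -> "twentex-alf".
    high, low = divmod(value, 16)
    if high == 0:
        return ONES[low]
    if low == 0:
        return TENS[high]
    return TENS[high] + "-" + ONES[low]

def speak_hex(text):
    # Up to four hex digits, e.g. "2D4" -> "two hunhex and deltex-four".
    value = int(text, 16)
    hunhexes, rest = divmod(value, 16 ** 2)
    if hunhexes == 0:
        return speak_pair(rest)
    words = speak_pair(hunhexes) + " hunhex"
    if rest:
        words += " and " + speak_pair(rest)
    return words

print(speak_hex("1AD4"))  # ontex-alf hunhex and deltex-four
print(speak_hex("FFF"))   # fost hunhex and fostex-fost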

@lady-inkyrius re: tag: yeah, I've felt the temptation to shorten it to "unhex" too. Personally, I didn't like how it looked like "un-" applied to the verb "hex", but I think I endorse people trying it as an alternative about as much as I endorse people trying "hundex" (more discussion on that alternative and generalizable considerations here).

One thing I really like about unhex is that omitting that first "h" can be thought of as a casual informal optimization - like we don't have to say "the word is unhex", we could still teach it as "hunhex" and when people naturally leave out that "h" when talking fast that's fine.

On Python `__slots__`

Basically: every hand-crafted Python class should have __slots__, because of these reasons, from most important to least important:

  1. it explicitly tells anyone looking at the class what state the class can have, and if you are naming those instance variables well, it helps significantly streamline an unfamiliar reader's interpretation of the class;
  2. it turns a whole class of usage mistakes into visible errors which get raised exactly where the usage error actually is - otherwise stuff like typos in attribute assignments turn into silent misbehavior which at best leads to a raised error elsewhere when trying to use the attribute that was supposed to be assigned, but may well lead to your code just doing the wrong thing and silently moving on; and
  3. it improves performance on Pythons with fairly weak automatic optimizations. (But JITed Pythons like PyPy can often optimize hot loops so that classes without slots perform just as well as classes with slots.)

Even radical portability isn't really a good reason to omit `__slots__`, because on versions of Python that don't recognize `__slots__` it'll just be another regular class attribute which does nothing.
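Here's a minimal sketch of points 1 and 2 (the class and names are made up for illustration): the __slots__ line tells a reader at a glance what state an instance can hold, and a typo'd attribute assignment fails loudly right where the mistake is:

class Point:
    __slots__ = ("x", "y")  # the complete set of instance attributes this class can have

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(2, 3)
p.y = 5   # fine: "y" is a declared slot
p.z = 7   # AttributeError raised right here, at the typo

Without __slots__, that last line would silently create a new z attribute and the mistake would only surface later, if at all.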

Very minor caveats to this "basically" recommendation:

  1. If you don't include "__weakref__" in your list of slots, your class instances cannot be weakly referenced (see the small sketch after this list) - for classes internal to a project this probably won't matter, but if you're releasing a library for public use, there is a good chance that someone somewhere will eventually have a good reason to take a weak reference to your class (but people can still subclass your class either without defining slots or with `__slots__ = ["__weakref__"]` and their subclass can be weakly referenced, so that might be good enough depending on what usage of your class you support and expect).
  2. Python disallows multiply inheriting when both base classes have non-empty `__slots__` defined on them, so if everyone is releasing libraries for public use with slotted classes, eventually some user somewhere will be frustrated when a legitimate use-case for multiple inheritance becomes impossible (on the other hand, if you aren't consciously intending to support multiple inheritance, this is probably actually a good thing, because in Python multiply inheriting from classes that aren't meant for it has decent odds of bugs anyway, including some non-obvious ones).
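And the small sketch promised in caveat 1 (again with made-up names): including "__weakref__" as a slot is all it takes to keep instances weakly referenceable:

import weakref

class Node:
    __slots__ = ("value", "__weakref__")

    def __init__(self, value):
        self.value = value

n = Node(42)
r = weakref.ref(n)  # works; without "__weakref__" in __slots__ this raises TypeError
print(r().value)    # 42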

For non-performance-critical applications I prefer just declaring all attributes in the init function. For most of them you have to do it anyways to assign the initial value, so doing both that and __slots__ feels more cluttered and difficult to read.

I have very vivid memory of typing a way-too-extensive reply to this, but it's not showing up in the notes. Do you have any memory of seeing a reply from me?

[Edit: pretty sure this blog is shadowbanned, and this was just the first time I could notice it in a form sufficiently distinct from "did this person block me or something?"]

Tumblr dashboard so nice when you know CSS and can just remove all of the elements you don't want.

Gee I'd sure love to paste that CSS for everyone to use. If only Tumblr's web app editor made it even moderately workable to paste a block of code. I even have insanely high tolerance for hand-fixing the formatting, but Tumblr's web interface defeats even that nowadays.

And I don't want to publicly expose the GitHub gist where I keep a backup of it. So y'all get a screenshot instead, sorry.

(That's ADlDx, by the way, not AD1Dx nor ADIDx, since I know text-in-image recognition often just ignores all techniques used for over a decade by unambiguous fonts to aid parsing text from visual data.)

(if anyone sees transcription errors lmk)

/* Tumblr hide noise: */
/* 1. Side bar: */
body#tumblr aside div,
body#tumblr aside img,
/* 2. Top "Following, For you, Your tags, ..." bar: */
body#tumblr .ZaYRY,
/* 3. Bars above and between posts: */
body#tumblr .HphhS, /* the first half, header text */
body#tumblr .kDCXR div, /* the second half, content */
body#tumblr .kDCXDR img,
/* 4. Top "Tumblr Live" bar: */
body#tumblr .RAEnv,
body#tumblr .ADlDx {
    visibility: hidden !important;
    padding: 0px;
    margin: 0px;
    height: 0px;
}

@lesbianchemicalplant yeah "body#tumblr .kDCXDR img," should be "body#tumblr .kDCXR img," (no second D).

Thanks for transcribing! Very appreciated!

This is sort of a branch off the idea of my previous post this week about “but is it really ‘worse’? how do you know? where’s your data?” [I’m on my iPad so it’s refusing to do a link properly, but here: https://findingfeather.tumblr.com/post/709250005943320576/unsolicited-advice-of-the-morning-any-time-you] but it is a bit separate: 

Especially if someone is presenting something as “a study” or “scientific proof” that a thing is worse than another, or a problem is worse now, or whatever, take a moment to make sure they’re comparing the same thing. 

This was something I was reminded of because of a terrible news piece that I’m not even gonna link because it really is terrible, purporting that Tindr and other apps are “gameifying” dating and manifestly ruining it and making it “shallow” and so on. 

Now the thing is, the thing that this article (and the “experts” in it who all should damn well know better, since half of them should have had training in this) does is that it requires that you accept the proposition that there was ever a time or model of dating that wasn’t gameified, and wasn’t shallow. 

Worse, it does that without actually telling you; it does it by begging the question. It does it by jumping right to sentences about dopamine and how swiping is like gambling and how this means you’re making choices “based on a brief two sentence bio and a photo” and that this is shallow, and terrible, and crucially represents a massive change from previous generations

It also made the amazing claim that Gen Z “knows no other way to date”, which made me laugh aloud since I’m pretty sure Gen Z started with “dating” the same way Millennials and Xers and indeed even Boomers did, which was asking people in our school classes to some kind of event (or in the school in general, or your friend from some other organized youth thing, or whatever), and that apps didn’t come out until people were out of high school. 

But okay, so if my younger readers are indeed as unaware of other methods as the article implied: there was no better method. 

Because if you ever dated before the apps, you will know: it was just as fuckin’ shallow, you made initial decisions just as quickly based on as many superficial contexts, there were just as many “games” (“how many girls’ numbers can I get tonight?”). 

Somewhere in their heads, the writers of the article and the researchers they were interviewing and discussing things with had made up this ideal version of “dating”, where people always chose who they were dating, who they were open to relationships with, with thought, with consideration, after platonically (somehow, who knows how?) finding out enough information to Know Them As A Person, etc. That it was (to use the word from one of the quoted researchers) “intentional”. 

This, of course, is fuckin’ nonsense. 

I mean of course yes: there were people who dated that way. There are STILL people who date that way. There are still people who make use of apps as tools to date that way! If you want to be intentional about dating you will be intentional about dating and no, the fact that it’s on an app won’t stop you. 

But the fact of the matter is most people have never fucking been that way. Tindr is not making us this way; tindr is how it is because we are this way

You know what there was before the apps? LET’S SEE. 

- lonely hearts columns in newspapers
- “speed dating” [https://en.wikipedia.org/wiki/Speed_dating]
- literal fucking singles newspapers [https://www.atlasobscura.com/articles/singles-news-1970s-personal-ads-dating no really]
- asking strange women for their phone numbers because you thought they were hot from across the bar
- giving strange men your phone number because you thought they were hot from across the bar
- getting set up by your friends with someone you’d never seen
- getting set up by your parents with someone you’d never seen
- getting asked out by someone you ran into on the street
- asking out someone you ran into on the street

Now not all of these practices have ended but let me ask you: does that seem all that intentional to you? Those methods seem deep and thoughtful and not subject to shallow shit like appearances and five seconds of conversation? 

Of course there were still things like “meet the nice guy at some group activity and spend six weeks getting to know each other and THEN date” and there still fucking is; but much like “Singles News”, that’s not what dating apps are for. Dating apps are for when all those kinds of methods have been failing you miserably for one reason or another, and you need some method of reaching out to the wider world to go “hey! I am theoretically interested in romance and/or sex! Who else is up for that!” 

And if you’re comparing the now (apps) to the then (all the above), you need to be comparing those things. Not the apps to some idealized version that doesn’t exist. 

Same thing happens in my field about Early Literacy all the time, and I scream about it all the time. 

Stop me if you’ve heard this one. The study goes like this! One group of children are shown Sesame Street or some other educational children’s programming. The other group of children are read books by their parents, usually in a specific place observed by the researchers (ie not at home). At the end of the study the second group of children have progressed way further than the first group of children. 

And this proves that screen-time is bad. Or at least inherently inferior. Sesame Street is useless. Right?

Except it has fucking nothing to do with that because - and let me assure you this as a former childcare professional - screentime is never actually replacing high-quality, positive, child-focused parent-child interaction

They are comparing what happens when a parent has semi-guided, observed, child-focused time, with no interruptions, and at least for that time no other responsibilities, time which the parents by default are going to have to specifically set aside (and thus reorganize their schedule away from other things). 

And you know what? Sure! Easy win: it is really, really good for children to have a solid amount of that kind of time. 

But when you have the time and energy to do that is not when you fucking turn on “screens”. 

Mostly, in the really real world, “screens” fill in time when the parent is busy, or exhausted, or stressed, or angry; they fill in time when at best the adult might be totally absent and the child is in a controlled environment (like a playpen) while the parent Does Other Things, and frankly most of the time, parents turn to screens when what the non-screen alternative is, is being stressed, short, angry, frustrated, or otherwise interacting negatively with the child, usually because of absolutely needed things. 

You turn on Sesame Street when you need to make dinner; you hand over the iPad on a Toca Boca app when you need your child to shut up long enough for you to get through this grocery store without a meltdown; you put on a movie when you need five minutes to yourself as an adult without endless questions. 

And if the problem is simply that children are not getting enough high quality adult time, then the problem is not “make screens go away”, because if you just replace the screens with children being anxious and bored and then being snapped at by their overstressed parent, you’re not improving anything. 

The studies are assuming that what replaces “screen time” is the reading they are observing in their study space. And that’s an unfounded assumption. 

Finally, this is the same logic that the streaming companies are currently using to say that people sharing passwords are “costing them millions of dollars”. The comparison is to this fictional ideal: they assume that what will replace the status quo (shared passwords; screen time; fast paced shallow app dating) would be their Perfect Scenario (everyone paying for their own account; High Quality Parent-Child time; Deeply Thoughtful, Intentional Romantic Connection). 

You cannot safely or honestly assume this. You must stop and ask yourself, is that actually what I’m comparing? Is that actually what happens in absence of this thing I’m identifying as bad? Do I know that? And then you have to check. 

So I must confess I got turned off a couple paragraphs in (even though at a skim I think I might agree with some of the later points), but I think this is worth noting:

The big qualitative difference of in-person quote unquote dating - or rather approaching/asking-out/etc - which prior to apps wasn’t nearly as widespread because it wasn’t almost-free to do at scale: you can immediately use all interpersonal soft skills and communication channels. You’ve got body language and tone and facial expressions, eye contact, how you speak, etc.

As a result you can’t help but sniff each other out so so much more (even literally, when you consider that your subconscious mind is processing their pheromones), before you even have the opportunity to accept or deny each other.

When I approach a girl spontaneously in person, or I am paid the high compliment of a girl spontaneously approaching me, within five minutes I know her more than I ever could through any app profile.

Congrats! You did indeed turn off, and missed several points. Well done.

I also promise you’re making some amazing assumptions about the accuracy of what you “know” about that girl in that time. This is not uncommon among cis guys who consider themselves “intellectual” but lack the intellectual humility to question such assumptions on the regular; if any of those categories don’t describe you, I advise avoiding emulating them so much.

Thanks! I thought it was well done too!

  1. Gestured at one aspect that wasn't mentioned,
  2. didn't miss any points that invalidated my addition, and
  3. made it clear with my disclaimer that I might even fully agree with, endorse, and commend all your remaining points.

Anyway here stack up more buffs to your diss: cis+straight+white. I think if you do that my assumptions get 69% wronger.

On a less salty and more "genuine request for advice on social norms" note. Is there a tactful way to say "I understand your sentiment but I think you're being too extreme/absolutist with it, look, here are some reasonable counterexamples, and I think they have some genuine relevance"? Is there a social norm I'm unaware of that causes people to flip out like you've told them the worst insults you could think of when you try to express such a thing however cautiously?

I understand that there are situations where someone is venting and doesn't presently want disagreement but this isn't the kind of situation I mean, I'm thinking of people who are clearly trying to make provocative statements to convince others.

so to get it out the way, firstly there are probably some people who just don't like you disagreeing and would take issue with whatever you said. but assuming that's not the case.

I suspect this is one of those situations where something that was originally a sincere expression started getting commonly abused precisely because people were likely to take it in good faith, and now people are more likely to assume it's not in good faith. (tbh this is probably a problem that eventually occurs with any similar expression)

so people may say "I understand your sentiment but [...]" without in fact bothering to understand your sentiment at all. and sometimes of course genuinely think they do but be very off. the latter especially seems pretty common!

also there's probably some elements of like...group-y bullshit where you disagreeing with the extreme position is obviously a sign that you're on the side of the bad guys or whatever; weakening the claim is ceding ground etc. especially if the extreme statement is being made that way almost explicitly to push severely against those others.

anyway in practical terms, my experience is that emphasising the part you understand and showing this understanding, and maybe phrasing it as potentially suggestive of stronger endorsement than you actually mean — "you're right about [...]", "I generally agree", etc. and elaborating on the constraints under which their claim does apply, and then the counterexamples as a "but also", works somewhat better.

I suspect this is one of those situations where something that was originally a sincere expression started getting commonly abused precisely because people were likely to take it in good faith, and now people are more likely to assume it's not in good faith. (tbh this is probably a problem that eventually occurs with any similar expression)

Ah yes that's probably it! Thanks!

You can tell I'm far too jaded and tired and bitter now because here you two are doing good-faith idea-fitting and I'm just like "they don't deserve, and in fact are harmfully enabled by, all this charitable effort and emotional labor. Excessive absolutisms should be socially unacceptable to the point that it never feels safe to say them in public."

I feel like I managed to last all of a couple years back around 2016-2018 with hardcore commitment to this kind of charitable engagement, and then I just couldn't keep feeling like it was worth it, because it felt like a significant super-majority of the time the payoff was not worth the effort.

It did get me a few pretty good and loyal friendships which I think I would've repelled with my hostility otherwise... probably if I had the temperament to focus on and be affected by that more I'd say it was worth it, and I would've probably kept up with it and it would've paid off more.

Now that I know more about one recent example of this, it sounds like I would have been wrong if I had advised this, but also that I would've understood where the person's absolutism was coming from.

Schizoid personality disorder might be the expression of a normal healthy mechanism for leaving a tribe when it's not a good fit.

I could be wrong of course. Also, rare is a label that doesn't conflate multiple similar things, so this might only apply to some schizoids.

But to the extent that I understand or possess schizoid traits, it seems like it could be the result of a drive to leave that had nowhere to go for too long.

It seems like schizoids can actually be fine with connecting with the right people, and are drawn to having social interactions with them, once they actually finally meet one.

It seems like a lot of the schizoid aversion and disinterest is not really toward interaction with all possible minds, but rather toward interaction as it is expected to happen with people.

When counter-examples in one's life are rare or non-existent, a lack of desire to bond or socialize with the wrong people would naturally generalize to all people, or seem as if it is a personality trait.

Point being, I strongly suspect that understanding - let alone treating - a schizoid's social withdrawal must start with supposing that it might be a natural brain mechanism firing as it should and figuring out what would make it rational or healthy.

Nearly 30 years of life and I have yet to find the “right” people to “connect” to.

In fact one way to differentiate Cluster A PDs from anxiety disorders is that there’s never a level of comfort that’s reached within interpersonal relationships, there is no such thing as “warming up to people” for us.

Everything has always been temporary. I find a person or two I can tolerate long enough for them to serve their purpose (every time it has to do with a change of environment ie work, school, hobbies, etc) and then I sever the ties and never look back.

I just remembered I have a link to a paper that explains the Schizoid split and isolation mechanism:

It’s not a “healthy mechanism gone wrong”.

Here’s a paper that’s about splitting in general, (NPD + BPD heavy):

It’s from a brain that was never able to develop healthily, forced to survive abusive environments and never being able to adjust.

  1. We would generally expect brains to have mechanisms for motivating us to distance ourselves from people who are bad for us, and that such distancing would be healthy if possible.
  2. If there are no right people to bond to, or distancing is impossible (especially during formative years etc etc) then we would expect such healthy mechanisms to break down and go wrong, and one reasonable way for that to look would be schizoid experiences.

Anyway, the part you seem to be most knee-jerking against doesn't have to be true for you (which should be very clear from my post I think) but I personally know at least one schizoid for whom it is true. (I also never said it would necessarily be a "warming up" transition as opposed to a sudden transition or approximately "at first sight". Not that this is very relevant to my point, but I find it offensive when I am interpreted as if I said a narrower set of possibilities than I did.)

More generally, here's a thinking pro-tip: any time someone says "[possibility] might be true (in some cases)" and you think "[possibility] can certainly never be true", you are almost inevitably going to be wrong in at least one way that matters.

Also, sorry, but I'm far too offendable like a narcissist for the style of speaking you chose here (even though I think I understand and empathize with why you did so, and I do see some room for reality not matching how it came across to me). Point being:

The next time you come at me full of absolutist confidence as if I'm definitely entirely wrong I might just block you.

But if you can speak in a way that doesn't reek of that to me, in a way that shows more signs of you being the kind of person who proactively looks for ways you might be wrong or missing some complication of perspective (and values that more than any particular belief/perspective/conclusion), then I am fine with you pointing out what you think are errors or omissions in my thinking or knowledge.

I am usually eager to spot anything I can improve, and grateful when people actually manage to help with that (and even if I find your style offensive, even if I block you, even if you never hear positive feedback from me... I will probably eventually digest any correct corrections/enhancements/insights within your replies).

Sharing your counter-example experiences and relevant articles is also welcomed, so thank you for that.

I have this thing where I want to be perceived as something like a 'hyperagent' of sorts — in exactly the way people sometimes try to characterise 'evil' people that is often criticised — not influenced by any external factors and in fact doing everything ever strictly as an intentional thought-out decision fully in my control. as a result, attempts to attribute things to more 'sympathetic' motives such as trauma can often feel threatening if those things overlap with me.

ironically this reaction is in fact likely caused by trauma.

because beliefs or decisions attributed to things like trauma automatically acquire a level of illegitimacy — they're not rational, not what you should've would've could've otherwise believed — they should not be followed, they should be fixed.

and when you have experienced things like gaslighting, heavy dissociation, or psychosis, in a terrifying and dreadful way, you might end up feeling that it's a critical need for you to be able to trust yourself, and for others to trust you with regard to yourself to bolster that.

furthermore, if you've experienced various violations of autonomy claimed to be in your own best interests — which again if you have trauma and mental illness, especially things like psychosis, is very common — there's even more reason that others not trusting you to be rational and in control, suggesting that you're being somehow unduly influenced or whatever, can be incredibly dangerous.

Ever since I grokked "prediction trees", it became rather obvious that trauma artifacts are not illegitimate.

Often enough, they are exactly what you could/would/should rationally believe or follow if faced with enough risk of similar problems.

They do not need fixes - just context, relief, and counterweights, so that you can better discern and more freely choose whether to use them.

if you were reliably informed that some belief I held (which you couldn't otherwise assess) was the result of a trauma response, would you consider that belief equally likely to be correct as one that you believed I'd come to by impartial reasoning? if not then it does carry some level of illegitimacy.

Yes.

Depending on what the belief was about, I might even consider it more likely to be correct. Of course in some cases it would be less, but in my experience, a lack of trauma is a kind of deep ignorance that makes the most correct perspectives almost unreachable.

I reject the false dichotomy of trauma vs impartial, as if "impartial" is a lack of trauma - it possibly requires trauma in humans.

We approximate rationality, as inherently limited and not all that rational creatures, in large part by tenuously balancing not-so-rational emotional forces inside ourselves so that our motivations keep us on the right effortful thinking long enough. The traumatized are often imbalanced, but the untraumatized seem to always be imbalanced.

Of course this is not a common view, so you are right w.r.t. it being delegitimizing in the eyes of most people, but I personally perceive a lack of trauma traces in human minds as confidently delegitimizing overall... the untraumatized are a specialized breed, experts at one rather niche corner-cutting strategy which is adaptive at scale when you can afford to lose people - highly efficient for those who make it but not very reliable and the attrition rate is substantial. (And a bit parasitical - they buy their success with our vigilance, because as can be seen with literally every possible dire but unlikely problem, they overwhelmingly fail to maintain the safeguards that create the relative safety in which they thrive.) At least the traumatized have some chance of being right more generally.

So if I know nothing about the specific belief, yeah, roughly even odds of correctness to a trauma-shaped belief vs a trauma-free belief.

I thought Lie groups would be my favourite course this semester, and maybe I'd like it better if it were taught differently, but both the lecturer and the books are so bad at explaining why we're doing anything.

Seems like it would be so easy and basically zero effort for the book or instructor to just… provide a little tree diagram showing the dependencies between the topics.

After grokking why Lisp code is formatted how it is, you start having Forbidden Style Thoughts in curly brace languages like "what if I... just..."

if(...){
    foo(...);}
i tried it with js. dont do it.

Haha that's extra funny given that JS literally started as a Lisp that management said must look like a curly brace language at the last minute.

But yeah the Serious Vision(tm) is one day we'll have editors/IDEs that display code in whatever equivalent form is best for us individually, and then save it to disk in some canonical form that's decent if you have the misfortune of needing to view/diff/edit it raw.

I’d say this makes me angry, but it only makes me sad.

IDEs are a scam by monitor manufacturers to sell bigger screens

A more serious version of this: there's definitely a pretty big symbiotic relationship between IDEs and bad code.

Almost every feature of an IDE helps you brute force your way through code problems:

  • Needless lack of locality/proximity between parts you need to understand at the same time? Go to definition, little pop-ups/hover-overs.
  • Needlessly hard-to-remember naming, semantics, and logical "shape"? Auto-complete, plus all the stuff from the previous point.
  • "Subjective" stylistic choices that make code harder to visually or mentally harder parse? Syntax highlighting, nicer automatic formatting of how code is displayed, plus many of the things from both previous points.
  • Error-prone constructs/patterns? Ever-improving static analysis, on-the-fly error detection and auto-correct, etc.
  • Overly complicated and brittle setups for the dev environment, testing, building, or packaging? Hide it behind a button, freeing most contributors to not notice until it's too late and the time and effort it would take to untangle or switch is too high.

These are all good force multipliers, of course. But they also enable and even empower worse coding - less considerate/thoughtful coding which is worse for long-term maintenance and improvement in several ways - while making all those problems more tolerable, and in return such code makes IDEs far more necessary to actually be productive.

I normally code in vi. Over longer time scales in projects with good and fast tests, that doesn't seem to slow me down on average. In a familiar code base I don't even need code highlighting, and the lack of it helps me notice other readability factors that I could be improving. But at short time scales, when taking over a code base, I have to pull out an IDE because it's like swimming into tar suddenly, not because it inherently has to be that way, but because the code base is written in a way that is hard to follow and reason about (in a way that very obviously smells of leaning on IDEs so hard that the coder(s) never had to really understand what makes code optimal for humans to work with).

Anonymous asked:

Hi anon

Idk if tumblr ate the last ask I sent you, but if not, nvm this ask. Just doing my due diligence to make sure.

Yep, I got your ask! Probably going to be another day or two before I reply.

I saw some tags on one of my big posts about anti nonsense, and the person had some of the right ideas about yes, tagging is important, and minors should be parented and have things filtered online. Yes, those are all good things. But then they went on to say that you shouldn’t be creating dark or NSFW content for kids’ shows, and, wow, that just gets into the policing area. Stop policing content! We can literally create whatever the fuck we want!

One of the takes I saw on that once seemed a bit wtf at first but then it turned out that they were talking about the particular section of fandom on disney.com or mylittleloudhouse.com or whatever safeplaygroundforkids.co.uk site and not fandom per se but the section of it on the official site for the kids’ show.

Damn, yeah, that is not where to post that stuff. If we’re asking minors to stay in their lane then adults have to do the same thing like using AO3 and tumblr and tagging responsibly. It is unfortunate that not everyone is going to be responsible online.

Yeah.  I will also add that for tumblr (which is 13+), anything “spicier” than say PG-13 should probably not be tagged in the “main” tag for the fandom without at least a general here-be-dragons warning in the bio.    NSFW MLP FIM content goes in #clop and not #mylittlepony as a cod example.    

I do think any content can go in the main tag since it is still related to the fandom, but you’re very correct about warnings. Warnings are important for everyone.

A problem is that tags are used to solve three different problems, in somewhat incompatible ways:

  1. Discovery (when someone thinks "My Little Pony is a cute fun innocent show, I'd like to see some of that!", they're probably going to go in the most obvious main tag, looking for the typical content for that thing, not cartoon horses fucking)
  2. Filing (a good filing system needs a single common reference to everything related to My Little Pony, so a content creator making cartoon horse fucking among other things probably wants to use the obvious main tag to categorize or be able to quickly find their work)
  3. Warnings (good warnings are in a form that computers can automatically use for you, so someone can for example configure their stuff to hide all posts with fucking and then browse for My Little Pony posts in general - or anything else - without having to filter out the cartoon horse fucking specifically)

What I’m getting from people responding to you is that a lot of people genuinely don’t seem to know you can just walk out into nature, and that it is safe to sit in a field or walk a bit in the woods; you don’t need to learn a bunch of definitions or prepare for scenarios to exist in the world.

Part of what I'm getting at in that post is that many, many people don't live in a place where they can walk out in the woods or sit in a field, and that this is bad. Woods and fields for everyone.

I'm trying to be polite but it's slipping more and more out of my power. Have you never heard of writing? Have you never heard of communication of ideas through words, a connection forged between people via the bridge of language, instead of interpretation as a weapon for wrenching the most starchy, rigid, uncharitable possible reading from a sentence, as if reading is a competition with the writer and you win by understanding as little as possible of what the text intends to communicate?

*sigh*

Learning a plant's name is a good idea because if you don't know what it's called, you can't google it, can't read about it, and can't recognize any connection to your own life when you hear about it.

It's not "learning a bunch of definitions." They're called nouns.

As an expert in getting mad at people for how they interpret my words (I'm probably the world's foremost practitioner of "you should've interpreted my words like so, and it was your moral fault that you didn't"), I get the frustration, I do.

I also know how viscerally unpleasant it can be to get too many people reacting to a post in a disagreeable and frustrating way - too many to stymie, too many to address, and where it feels like every time you respond, more people throw the same thing you just responded to at you.

I'm going to assume you were experiencing something negative like that with the post this was about. Whenever I've been in similar situations, it's an uphill battle against accumulating defensiveness while losing coping spoons.

Having said that:

The more we play with rhetoric in a way that leaves ambiguity for interpretation, the more we will get a spectrum of genuine, earnest misinterpretations which are ultimately reasonable from that person's perspective.

Personally, I probably would not have read "we must learn their names" at the end of the original post so literally as telling people to learn what thing each name is *ahem* defined to refer to (to me it would've likely parsed more as a somewhat awkward but valid way to punctuate the whole thesis with a call to get acquainted with nature in any of a wide array of ways), but I think it's totally reasonable and frankly to be expected that some people will take the words you used at face value.

Especially given that your reply here kinda sounds like you maybe did have literally learning names as part of your meaning. If so then people even interpreted you correctly when they thought you meant literally learning what things those names refer to, and the worst mistake they can be accused of is... what, exactly? The one-two punch of

  1. a different hair-splitting in the taxonomy of what associations between words and things are "definitions" vs [some other thing], and
  2. thinking that the thing you chose to say twice as the very last two sentences is a more significant part of your point than you actually meant it to be?

Like, I don't know exactly what the standard should be - how many possibilities should the reader idea-fit? and what subset of that idea-fitting should the author feel entitled to (in the sense that they can rightly blame the reader for failing to have charitably interpreted)? It's heavily context-dependent too. I do know that when I look at this case, I think it's totally sensible and to-be-expected that some people would focus on (and push back on) a larger than you intended emphasis on "learn their names".

And, on further thought, it's especially sensible for a reader to do that in response to that third party ask at the start here which is already somewhat uncharitably characterizing readers arguably slightly overindexing on "learn their names" as "a lot of people genuinely don’t seem to know [...] you don’t need to learn a bunch of definitions".

Anonymous asked:

About arguments

I have a bad habit of being hostile. Because of my underlying philosophy of kill or be killed; though I am trying to dissolve it.

Well if I had perceived it as hostile I would've been more likely to remember it as such.

Anyway, now I'm curious: you got this kill-or-be-killed mindset and hostility habit, and yet you sent me very cutesy "hiiiiiii" anons - tell me about the vibe/feelings/motivation that are in/behind the "hiiiiii" asks.