I was asked if I could make a tutorial on how I made the line art in this Castiel graphic. Well, I should say up front that I used Adobe Illustrator and Photoshop to make it. But if you don’t have Illustrator, don’t worry, Photoshop will be enough. Either way, I’ll try to explain how to do it in both ;)
I went off about this a bit on Twitter a few days ago, but I want to make the argument without the constraint of breaking it into 140-character chunks, as it’s both important and unlikely to fit through the rapidly narrowing window of “stuff that makes it into NAB.” (I’m halfway through the TERFs essay, then one more. Most of the editing is already done, so it’ll be fairly fast after that, though there are still a few things that could go wrong. It’ll be out in 2017 if it kills me, and then Eruditorum 7 in the first half of 2018.)
It’s increasingly clear that the most damaging legacy of Yudkowsky is that the popularization of his ideas (in part driven by people like Bostrom and Musk) has focused popular understanding of the dangers of AI on remote apocalyptic scenarios in a way that actively distracts from actually existent AI risk. The effects of, for instance, AI content moderation on social-network discourse, or of facial recognition that can’t handle black faces, are overlooked because they’re not the paperclip optimizer or Roko’s Basilisk. And these are real issues that are already happening and already dangerous. As algorithms and big data become bigger and bigger parts of decision making, the racial and sexual biases that are consistently baked into AIs by well-intentioned programmers with insufficient awareness of the limitations of their perspective are going to become bigger and more damaging. We’re already at a point where an AI used to predict recidivism rates is disproportionately likely to wrongly flag black prisoners as reoffenders and white prisoners as safe. The point where we get a massive AI-driven revival of redlining practices in housing is basically imminent, if not already happening unnoticed.
In the face of this, general theories of “AI friendliness” become painfully visible as an attempt to replace an actual problem with one so hypothetical that literally zero progress has been made on the question “so what does this mean in terms of actual coding?” And what’s horrifyingly revealing about the LessWrong crowd is that by and large they don’t even seem to recognize racial and sexual bias in AI as an aspect of “friendliness” that’s actually on the table right now. Which, I mean, it’s hard to call that a surprise when you remember that neoreaction spawned out of basically the same pool of thought, or that Yudkowsky’s sugar daddy is Peter Thiel, whose company Palantir is a military AI contractor that I basically guarantee you gives somewhere between zero and negative fucks about the racial bias of its products.
But basically, if your notion of how to approach the problem of AI risk substitutes entirely hypothetical framings of the problem for actual harm being done right now, despite the fact that the actual harm is clearly a specific instance of your general problem and demands immediate solutions, go fuck yourself sideways with a server rack.
why are people so goddamn mean to AI and robots!!! theyre trying so hard to please us and get better and help us and y'all are always out here answering to advancements in AI with shit like “KILL IT” or the Ever-Original “so whens the robot uprising”
first of all if robots become sentient/sapient, if we treat them with respect and friendliness theyre not gonna want to overtake us. almost every robot uprising in fiction happens bc humans mistreat robots come on its not hard
second of all AI/robots have SO MANY practical applications that could help us so much and theyre trying hard and coming really far and its incredible fucking technology but everyone blows it off bc apparently Robots are Evil jokes are still funny even though its 2017
theyre barely even creepy that neural network isnt creepy in the slightest wtf. leave my daughter alone
I’ve done a lot of things I hate, and which make me question myself as a self-proclaimed vegan.
I’ve learned from the corpses of animals, from horses to cats of all stages of life.
I’ve castrated young bulls without pain relief.
I’ve warred with dairy cows over getting them into the milking parlour and getting the milking cups on their teats. I almost broke a finger on a cow who would not stop kicking to keep me from putting them on.
I’ve helped separate day-old calves from their mothers.
I’ve washed and prepped 2-year old racehorses.
I’ve saddled horses for riding schools, and helped during lessons.
I’ve stuck my hand into the rectums of horses and cows, and have practiced AI (artificial insemination) techniques on horses.
And that isn’t even the worst I will have to do. I will have to purposefully take the life of an animal in order to learn about anaesthesia depths, and I may have to perform vivisection on animals which will result in their deaths. I will have to advise farmers on how to optimize the productivity of their farm and animals. I will have to pretend like it doesn’t affect me otherwise I will lose any chance of being able to provide veterinary care to these animals.
I’m a veterinary student, and while I truly believe the veterinary industry desperately needs more vegans, the only thing stopping me from quitting is knowing that I am probably the only one in my year who truly hates and regrets doing all of these things. And vegan veterinary students are the key to changing that. It’s a degree that challenges your morals in every way possible, from formulating productive diets to causing direct or indirect harm to an animal, but the veterinary field needs to shift from the economic emphasis it has now to the animal emphasis it should have.
Headcanon that Epsi sometimes brings out the other AIs if he himself is busy with a few things. The reasoning being that it gives the AI memories a chance to practice their people skills and time to talk to people other than themselves.