
overwatch player types by gamemode
  • quick play: balanced people looking for a stress-free game.
  • custom: tinkerers and comedy masters testing the numberless ways to crash OW until it needs to be reinstalled to start again.
  • vs ai: too ashamed to practice genji in public.
  • comp: a vile den of wretched, sneering cynics; gall and bile flying through the air; everyone in it for the golden guns. kinda like a bonus-level circle of hell.
  • arcade: be careful whom you trust; approach with extreme caution; closest thing in online gaming to stepping into a fairy ring.

“Saved as draft” Part 1 of 7

See [Part 2] [Part 3] [Part 4] [Part 5] [Part 5.5]
[Part 6] [Part 7] [Part 0.5]

I think it might already be part of the headcanon that Daichi is actually a coward + a softie, especially when it comes to Suga (¬‿¬)

Soooo, I was wondering if you’d like to see the rest of my short comic idea? It’s really sweet and embarrassing (▰˘◡˘▰) Would you like to?

I was asked if I could make a tutorial on how I made the line art in this Castiel graphic. Well, I should start by saying that I used Adobe Illustrator and Photoshop to make it. But if you don’t have Illustrator, don’t worry, Photoshop will be enough. Even so, I’ll try to explain how to do it in both ;)

Well, let’s get on with the tutorial, shall we?


I went off about this a bit on Twitter a few days ago, but I want to make the argument without the constraint of breaking it into 140-character chunks, as it’s both important and unlikely to fit through the rapidly narrowing window of “stuff that makes it into NAB” (I’m halfway through the TERFs essay, with one more to go after that - most of the editing is already done, so it’ll go fairly fast, though a few things could still go wrong - it’ll be out in 2017 if it kills me, and then Eruditorum 7 in the first half of 2018).


It’s increasingly clear that the most damaging legacy of Yudkowsky is that the popularization of his ideas (in part driven by people like Bostrom and Musk) has focused popular understanding of the dangers of AI on remote apocalyptic scenarios in a way that actively distracts from actually existent AI risk. The effects of, for instance, AI content moderation on social-network discourse, or of facial recognition that can’t handle black faces, are overlooked because they’re not the paperclip optimizer or Roko’s Basilisk. And these are real issues that are already happening and already dangerous. As algorithms and big data become bigger and bigger parts of decision making, the racial and sexual biases that are consistently baked into AIs by well-intentioned programmers with insufficient awareness of the limitations of their perspective are going to become bigger and more damaging. We’re already at a point where an AI used to predict recidivism rates is disproportionately likely to wrongly flag black prisoners as future reoffenders and white prisoners as safe. The point where we get a massive AI-driven revival of redlining practices in housing is basically imminent, if not already happening unnoticed.

In the face of this, general theories of “AI friendliness” become painfully visible as an attempt to substitute an actual problem with one so hypothetical that literally zero progress has been made on the question “so what does this mean in terms of actual coding?” And what’s horrifyingly revealing about the LessWrong crowd is that by and large they don’t even seem to recognize racial and sexual bias in AI as an aspect of “friendliness” that’s actually on the table right now. Which, I mean, it’s hard to call that a surprise when you remember that neoreaction spawned out of basically the same pool of thought, or that Yudkowsky’s sugar daddy is Peter Thiel, whose company Palantir is a military AI contractor that I basically guarantee you gives somewhere between zero and negative fucks about the racial bias of its products.

But basically, if your notion of how to approach the problem of AI risk substitutes entirely hypothetical framings for the actual harm being done right now, despite the fact that the actual harm is clearly a specific instance of your general problem and demands immediate solutions, go fuck yourself sideways with a server rack.

why are people so goddamn mean to AI and robots!!! theyre trying so hard to please us and get better and help us and y'all are always out here responding to advancements in AI with shit like “KILL IT” or the Ever-Original “so whens the robot uprising”

first of all if robots become sentient/sapient, if we treat them with respect and friendliness theyre not gonna want to overtake us. almost every robot uprising in fiction happens bc humans mistreat robots come on its not hard

second of all AI/robots have SO MANY practical applications that could help us so much and theyre trying hard and coming really far and its incredible fucking technology but everyone blows it off bc apparently Robots are Evil jokes are still funny even though its 2017

theyre barely even creepy that neural network isnt creepy in the slightest wtf. leave my daughter alone

I’ve done a lot of things I hate, and which make me question myself as a self-proclaimed vegan.

I’ve learned from the corpses of animals, from horses to cats of all stages of life.

I’ve castrated young bulls without pain relief.

I’ve warred with dairy cows over getting them into the milking parlour and getting the milking cups onto their teats. I nearly broke a finger on a cow that would not stop kicking to keep me from putting them on.

I’ve helped separate day-old calves from their mothers.

I’ve washed and prepped 2-year old racehorses.

I’ve saddled horses for riding schools, and helped during lessons.

I’ve stuck my hand into the rectums of horses and cows, and have practiced AI (artificial insemination) techniques on horses.

And that isn’t even the worst I will have to do. I will have to purposefully take the life of an animal in order to learn about anaesthesia depths, and I may have to perform vivisection on animals which will result in their deaths. I will have to advise farmers on how to optimize the productivity of their farm and animals. I will have to pretend like it doesn’t affect me otherwise I will lose any chance of being able to provide veterinary care to these animals.

I’m a veterinary student, and while I truly believe the veterinary industry desperately needs more vegans, the only thing stopping me from quitting is knowing that I am probably the only one in my year who truly hates and regrets doing all of these things. And vegan veterinary students are the key to changing that. It’s a degree that challenges your morals in every way possible, from formulating productive diets to causing direct or indirect harm to an animal, but the veterinary field needs to shift from the economic emphasis it has now to the animal emphasis it should have.

anonymous asked:

for anon trying hanzo in qp for the first time: i was terrible as hanzo and i practiced him a lot in Training vs AI, which helped boost my confidence as hanzo tremendously!!


I’ve spent a huge number of hours just playing him against AI, and even the Practice Range will help you warm up; I warm up there with flick-shot practice every time I log on.

QP will definitely let you learn more and grow, but AI is an absolutely wonderful place to start