So I know a lot of people headcanon Rami Malek as Kaz because of the physical resemblance, but I think his portrayal of Elliot Alderson in the TV show Mr. Robot earns him the role even more.
Unable to stand touch?
Leading a group of criminals to pull off the biggest heist of their time?
Struggling with social relationships and suffering anxiety attacks?
Haunted by the past?
(also, I highly recommend everyone watch Mr. Robot, not only for Rami’s outstanding performance, but also for the sheer ingenuity of Sam Esmail [the show’s creator and director], the show’s complex social commentary, and the unique life of each character shown)
a kaz/inej fanmix (I would come for you. And if I couldn’t walk, I’d crawl to you, and no
matter how broken we were, we’d fight our way out together - knives
drawn, pistols blazing. Because that’s what we do.) (listen) (download)
i. game of survival - ruelle | ii. shadow preachers (male pitched) - zella day | iii. human - aquilo | iv. sights - london grammar | v. our tired souls - muhr | vi. demons - jasmine thompson | vii. sweat - ry x | viii. here with me - susie suh x robot koch
need more blogs 2 follow,, plz reblog if you post:
• percy jackson/kane chronicles/magnus chase/etc
• six of crows
• the raven cycle
• all for the game
• mr robot
• basically any young adult books
• wonder woman/any dc chars
There's Nothing Artificial About Artificial Intelligence
Last year, I changed my college major from computer science to philosophy.
My parents were far from thrilled. I told them I needed a change, that I
wasn’t getting anywhere with CS. I couldn’t tell them the truth. I
couldn’t tell anyone.
The previous year, I had been selected for an incredible clandestine
internship with the U.S. government. I hadn’t applied for it, requested a
recommendation, or taken any of the usual steps you would when trying to
get an internship. The day after I presented my artificial intelligence
research at a large conference in the Midwest, I received an email that
explained the internship and told me not to tell anyone, in the usual
confusing government jargon. I don’t remember exactly what it said,
because it deleted itself from my inbox automatically about five minutes after I
read it. The head of my university had been informed, and called me
into her office to congratulate me and urge me to keep the news a
secret. I couldn’t even tell my parents.
The job was with the U.S. Department of Defense. I can’t tell you
where or when, only that it was in an unbelievably nice building. The
other interns and I had accommodations in local residences. There were
four of us. Two have since committed suicide, and as hard as I’ve tried,
I can’t track down the other one.
The other interns’ names were Parker, Craig, and Ila. They all had
impressive CS backgrounds–probably much more compelling than my own.
Like me, they’d been picked for this internship rather than applying for it;
also like me, they had no idea what we were supposed to be doing there.
We went to a briefing meeting in a long room, where the head of the
program–Dr. Lacey–explained the project to us. The entire project was
an intensive study, backed, of course, by the U.S. government. “This,”
Dr. Lacey declared, “will be the greatest breakthrough in modern
history. We are going to study the nature of the relationship between
artificial intelligences–more specifically, to discover whether a bond
like the ones humans feel can form between AIs.”
We started working the next day, writing the programming for two
highly advanced computerized robots. We were going to name them “Adam”
and “Eve,” but the people from the government equivalent of HR thought
that was too tacky. So we went with Chase and Misha. If the uncanny
valley gives you nightmares, don’t worry–they weren’t remotely
lifelike. They were vaguely human-shaped, but retained the color of
their original metal. We set up communications systems following the Open
Systems Interconnection model, supporting all the basic computer languages
as well as English, Mandarin Chinese, and Spanish. The end result was two
six-foot-tall robots that looked like stone monoliths, each equipped
with a highly advanced supercomputer.
Both robots ran nearly identical, cutting-edge programming. We mostly
used Haskell, filling in some gaps with AIML and
Prolog. The result was two robots who could engage in a conversation
with humans, and also answer almost any question posed to them by
querying their built-in supercomputer. A circadian clock was built into their
systems, and the robots “rested” from 11PM to 7:30AM. We used their
programming to regulate their behavior and instill some semblance of
understanding of human culture and interaction, but we didn’t write
anything about Chase in Misha’s or Misha in Chase’s. Two human
agents–Robert and Maria, I think–acted as their primary caretakers,
engaging the robots for six hours every day. At night the robots
retreated to the room they shared. While their communications were all
run through a third computer in the main office, and video cameras
tracked their movements, they received no direct human interaction at night.
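The rest schedule described above (11 PM to 7:30 AM) amounts to a simple wrap-around time-window check. The story says the team worked in Haskell; purely as an illustration, here is a minimal Python sketch of such a check — the names and logic are my own, not taken from the story.

```python
from datetime import time

# Hypothetical sketch of the robots' circadian "rest" window,
# 11:00 PM to 7:30 AM, as described in the story.
REST_START = time(23, 0)   # 11:00 PM
REST_END = time(7, 30)     # 7:30 AM

def is_resting(now: time) -> bool:
    """Return True if the wall-clock time falls in the overnight rest window.

    The window wraps past midnight, so it is the union of
    [23:00, 24:00) and [00:00, 07:30).
    """
    return now >= REST_START or now < REST_END

print(is_resting(time(2, 0)))    # 2 AM: resting -> True
print(is_resting(time(12, 0)))   # noon: active -> False
```

Because the window crosses midnight, the check uses `or` rather than the `and` you would use for a window contained within a single day.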
Right away, things got weird. We noticed that the robots, by the
second week, exhibited contrasting personality traits. Chasie, as we had
endearingly come to call him, was quiet, obedient (albeit
good-naturedly cantankerous), and a master of the deadpan. (I feel
ridiculous just typing that, but it’s true. Chase the robot could’ve
played the straight man in every SNL skit.) Mimi, our nickname for
Misha, was riotous, outspoken, and funny. Chase also adopted a “big
brother” role over Misha, becoming very protective of her. By the tenth
day, they had adopted noticeable vocal inflections–that is, they
talked like people, emphasizing certain words, increasing or decreasing
their cadence and tone based on what they were saying (e.g. they spoke
more slowly and with a higher voice when they were asking questions). As
they had exactly the same directives and day-to-day experience, we were
thrilled and thought that the personality deviation could be a huge
scientific breakthrough. We would dissect their data and see what had
imprinted on them to create personality.
There was only one problem: when we pored over their output, nothing
accounted for the personality changes. It didn’t make any sense. We
couldn’t see how Chase and Misha had become so anthropomorphic, and
the data held no explanation.