it's funny how so much pop fear about AI is "what if it gets too smart" when the main problem right now is that it's wrong all the time. I guess it's good to look to the future, but

yeah!

AI risk is not about super-intelligent computers, it's about all the people who don't realize that today's computers are very dumb

The greatest risk posed by competent AI is that it gets good enough to make the powerful better at oppression.

So many AI danger stories start with the presumption of success and neutrality, then work from there. We've created intelligence, unbound by the biases and power structures of our society; what next? But this leaves so many unexplored avenues of failure.

What if an AI designed for one specific task is put to work at something superficially similar that it sucks at? What if people want to make an AI do something that can't be quantified, and settle for a quantifiable approximation? What if the metrics of success were designed by the kind of insulated techbro who invents bodega boxes without realizing they're just gentrified vending machines? What if the people running the AI f*k up?

Hell, it's rare for super-advanced computer systems to be depicted as tools used by people rather than as characters in their own right. Someday we might make computer programs that can claim that level of agency, but that day is not today, and it's hard to imagine many situations where that would actually be desirable from the system designer's perspective.