[Image Description: A screenshot of a series of tweets from C.W. Howell (@cwhowell123) Tweet 1: So I followed @GaryMarcus' suggestion and had my undergrad class use chatGPT for a critical assignment. I had them all generate an essay using a prompt I gave them, and then their job was to "grade" it -- look for hallucinated info and critique its analysis. *All 63* essays had Tweet 2: hallucinated information. Fake quotes, fake sources, or real sources misunderstood and mischaracterized. Every single assignment. I was stunned -- I figured the rate would be high, but not that high.
Tweet 3: The biggest takeaway from this was that the students all learned that it isn't fully reliable. Before doing it, many of them were under the impression that it was always right. Their feedback largely focused on how shocked they were that it could mislead them. Probably 50% of them
Tweet 4: were unaware that it could do this. All of them expressed fears and concerns about mental atrophy and the possibility of misinformation/fake news. One student was worried that their neural pathways formed from critical thinking would start to degrade or weaken. One other student
Tweet 5: opined that AI both knows more than us and is dumber than we are, since it cannot think critically. She wrote, "I'm not worried about AI getting to where we are now. I'm much more worried about the possibility of us reverting to where AI is."]
*****
OK, I'm going to try to ask this in the nicest possible way, because clearly I am having an XKCD 2501 moment (https://m.xkcd.com/2501/) and I have massively over-estimated general understanding of what chatGPT does. So I need to correct my biased viewpoint, and for that I need people to explain to me. So.
People who were under the impression that chatGPT is always right, that it's fully reliable ... or who were under the impression that out of 63 essays, you'd expect to get unreliable information in much fewer than 63 cases ... or who were thinking that this unreliability can be easily circumvented by asking chatGPT if its output is accurate ... basically, anyone who is surprised by this thread
this is a genuine and not-condescending question: Why? What experiences or sources or reasoning led you to think that? What is it about chatGPT, or about the way people are talking about chatGPT, that makes you trust it so much more than you would trust your phone's autocorrect function?
Because my industry is clearly not doing its damn job, and I need to understand where the disconnect is. What are we forgetting to explain, or explaining poorly, or naming with terrible terminology, or whatever it is we're screwing up, that left you with the impression you have/had about this technology?
i think it's the fact that these algorithms (all of them, whatever their source or purpose) are commonly called "artificial intelligence", specifically the (mis)use of the word "intelligence".
i'm thinking back to when computers were first used for calculations, i.e. to take large quantities of data and run them through multi-variable equations to produce things like trajectories, or weather maps, or even code-breaking as in WW2. nobody called it "intelligence"; it was just doing complex calculations so much faster than any human ever could that it produced useful results within useful time frames.
now the computers are bigger and faster, but they are still doing the same basic thing, only chewing on millions or billions of pages of language (used without the consent or permission or even knowledge of the original writers, which is often glossed over if it's mentioned at all), and it's being called "intelligence", which it is NOT, but that dresses it up in a way that makes it seem more plausible. in fact it is increasingly regressive and lacking in real insight.
I think the moment I finally actually *got* that AI is not actually intelligent was the 2018 Tumblr purge. I mean yes, I always intellectually understood that computers aren't sentient and software can't think. But I think it's hard to understand that computers' "brains" work in ways completely different from ours until you watch an AI program identify a loaf of bread as porn.
I watched all the ridiculous misfires and somehow that got it through to me: the AI doesn't 'know' anything. It can analyze an image and find images that are roughly analogous, but it doesn't have that thing humans have where you can look at a collection of like 4 distinctive features and go oh, yeah, that's a bunny. Or a sheep, or a train, or whatever. The AI just finds patterns of pixels that are similar to other patterns. One pattern could make a photorealistic portrait of a bunch of onions and the other pattern could be a real photo of a bag of actual severed heads, and the AI would not differentiate between them, because it has no way to perceive the things about an image that matter to human beings. It doesn't know what's a horrifying image and what's a soothing one, and more to the point, it cannot be taught to know the difference because it, as a machine, is utterly indifferent to human wants, needs, or welfare.
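The "patterns of pixels" point can be made concrete with a toy sketch. The numbers below are made up for illustration (a real vision model works very differently and far more elaborately), but the underlying lesson holds: if similarity is computed over pixel values, what the pixels *depict* never enters the computation.

```python
# Toy illustration with hypothetical data, not a real classifier:
# a naive system that compares images by raw pixel values has no
# access to meaning. Two grayscale 3x3 "images" below differ in
# subject but barely in pixels; a third shows the same subject as
# the first, just in dim lighting, yet measures as far away.

def pixel_distance(a, b):
    """Sum of absolute per-pixel differences between two flat images."""
    return sum(abs(x - y) for x, y in zip(a, b))

bread      = [200, 190, 195, 205, 198, 192, 201, 196, 199]  # warm tan tones
skin       = [198, 192, 196, 203, 197, 194, 200, 195, 197]  # nearly identical tones
dark_bread = [100,  90,  95, 105,  98,  92, 101,  96,  99]  # same loaf, dim lighting

print(pixel_distance(bread, skin))        # small: "looks the same" to the machine
print(pixel_distance(bread, dark_bread))  # large: same subject, judged dissimilar
```

To this measure, bread and skin are near-twins while bread and darker bread are strangers, which is roughly the shape of the bread-flagged-as-porn failure.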
That insight was really chilling to me. All these people talk about AI harming humanity as if that's something still to come in the future. No, friends, the robot revolution has happened already. Our social networks have been scaled up to the point that humans can't manage them effectively, so we have the machines do it. We outsource more and more to these algorithms and now the machines are in charge. Which is a problem because the machines don't actually *know* anything and can't be enabled to care about the damage they do.
So it's the same with ChatGPT now. Surely something able to simulate speech at this level must know what it's talking about? No. It turns out a system can 'know' nothing whatsoever about language and still simulate conversation well enough to pass the Turing test. A human wouldn't be able to speak a language without knowing a fucking thing about what it means, but that is exactly what ChatGPT and all its ilk do. They can say anything but they KNOW nothing, precisely because they are machines. A human couldn't do what ChatGPT does; our brains can't use that amount of data. But ChatGPT also can't do what we do, which is know what words actually mean and what their consequences might be.
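One crude way to see how text can come out fluent-ish without any understanding behind it: a toy bigram generator. This is a deliberately simplistic stand-in (real models like ChatGPT are enormously more sophisticated and do much better), but both produce the next word from statistical patterns in text rather than from knowing what the words mean.

```python
import random

# A toy bigram "language model": it records only which word follows
# which in its training text, nothing else. It has no notion of
# meaning -- just statistics over word adjacency.
text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
)

# Build a table mapping each word to the words seen after it.
words = text.split()
successors = {}
for a, b in zip(words, words[1:]):
    successors.setdefault(a, []).append(b)

def generate(start, length, seed=0):
    """Emit up to `length` words by repeatedly sampling a recorded successor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = successors.get(out[-1])
        if not nxt:
            break  # dead end: no recorded successor for this word
        out.append(rng.choice(nxt))
    return " ".join(out)

print(generate("the", 10))
```

The output is locally grammatical-looking ("the dog chased the cat sat on the rug...") because every adjacent pair occurred somewhere in the training text, yet the generator has no idea what a cat or a rug is. Scale that basic move up by many orders of magnitude of data and machinery and you get something far more convincing, but the fluency still isn't evidence of knowing.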