“When initially launched, the letter lacked verification protocols for signing and racked up signatures from people who did not actually sign it, including Xi Jinping and Meta’s chief AI scientist Yann LeCun, who clarified on Twitter he did not support it. Critics have accused the Future of Life Institute (FLI), which is primarily funded by the Musk Foundation, of prioritising imagined apocalyptic scenarios over more immediate concerns about AI – such as racist or sexist biases being programmed into the machines. Among the research cited was “On the Dangers of Stochastic Parrots”, a well-known paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google. Mitchell, now chief ethical scientist at AI firm Hugging Face, criticised the letter, telling Reuters it was unclear what counted as “more powerful than GPT-4”. “By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI,” she said. “Ignoring active harms right now is a privilege that some of us don’t have.””
The framing of “more powerful than GPT-4” is a clear tell. Of course OpenAI would love a six-month breather in which to dominate the market. DALL-E 2 had its lunch eaten by better projects from smaller teams, and six months of “don’t outdo us” would hand every current player a powerful advantage.
But that’s assuming such a block would actually be implemented.
If NFTs were us not learning the lesson of Pogs, then ChatGPT is the return of the Furby. The marketing says it’s an intelligent AI robot, but in actuality it’s a toy that simulates the experience of one. Only they’re advertising to venture capitalists instead of 8-year-olds, so they don’t have to work as hard on the pitch.
When AI image generation first took off, the term used most often was “Dream”, because that’s what these AIs do. They hallucinate seemingly coherent output that has no context, narrative, or reason beyond what the user’s prompt supplies.
In every way that a chat AI can be accurate, it is not functioning as a generative AI but as a search engine. Everything that makes it work as a chat AI involves potentially corrupting the data it pulls from those search functions. The tech isn’t ready for applications where reasoning or accuracy is required.
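To make that distinction concrete, here’s a minimal, hypothetical sketch of the retrieval-plus-generation pattern described above. The corpus, function names, and matching logic are my own illustration, not any vendor’s API: the search step can only return text that already exists in its index, while the generative rewrite step is exactly where the retrieved facts can get corrupted.

```python
# Hypothetical sketch only: the corpus, names, and matching are illustrative.
CORPUS = {
    "furby": "The Furby is a 1998 electronic toy that simulates learning to talk.",
    "gpt-4": "GPT-4 is a large language model released by OpenAI in March 2023.",
}

def retrieve(query: str) -> list[str]:
    """The 'search engine' half: it can only return text already in the index."""
    q = query.lower()
    return [doc for key, doc in CORPUS.items() if key in q]

def generate(sources: list[str]) -> str:
    """The 'generative' half. In a real system this is the language model,
    and it is the step where retrieved facts can be garbled into claims
    the sources never made. Here it just stitches the sources together."""
    if not sources:
        return "A confidently dreamed-up answer with no grounding at all."
    return " ".join(sources)

print(generate(retrieve("What is GPT-4?")))
```

Everything trustworthy in the answer comes out of `retrieve`; anything `generate` adds on top of that is the dreaming.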
But if the tech is so scary that all these “smart, smart people” are calling for a halt, then nobody thinks about its limitations, and all the flamboyant claims about how many people it can replace suddenly seem believable. The venture capitalists come slithering out to get in on the ground floor. Other companies sign deals to integrate your hallucination machine. The stock price goes up.