Students Boo Commencement Speaker After She Calls AI the ‘Next Industrial Revolution’
(www.404media.co)
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
The thing is, broadly these sorts of hiccups happen all the time, but every time one of them escalates to 'meme' status, they can patch over that specific case in pretty short order.
When you use them routinely, you see these hiccups regularly on random things you weren't expecting, but once one of those hiccups goes viral, it stops reproducing.
I got in just barely in time to see the seahorse emoji before the meme became self-defeating.
The viral instances only work briefly to illustrate a behavior, since well-known specific examples get covered. In your case, at one point all the LLMs were suddenly really good at knowing the letters in strawberry, but if you asked about other words they would fall over, because only that specific case had been handled. By now, I suspect most have implemented a scheme that hands letter counting off to a more appropriate mechanism, to spare the embarrassment.
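A scheme like that is easy to imagine: intercept queries that a deterministic routine can answer exactly, and only fall back on generation (or an honest "unavailable") otherwise. A minimal sketch of the idea, with a made-up `answer` dispatcher and query pattern (nothing here reflects how any actual LLM vendor implements it):

```python
import re

def count_letter(word: str, letter: str) -> int:
    """Deterministic letter counting -- trivial in ordinary code."""
    return word.lower().count(letter.lower())

def answer(query: str) -> str:
    """Toy dispatcher: route letter-counting questions to real code
    instead of letting a language model guess token-by-token."""
    m = re.match(r"how many '?(\w)'?s? (?:are )?in (\w+)\??", query.lower())
    if m:
        letter, word = m.groups()
        return f"There are {count_letter(word, letter)} {letter}'s in {word}."
    # For anything else, a real system would defer to the model; here we
    # just decline rather than confabulate.
    return "That answer is unavailable."
```

The point of the sketch is the shape of the fix: the embarrassing class of question gets answered by code that is actually capable of answering it.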
I'm glad you've taken a nuanced approach to the issue. The technology is constantly changing and there are lots of genuine reasons to be concerned about AI. This just isn't one of them anymore.
I wouldn't say the inability to count the 'r's in strawberry was ever a 'concern' so much as a demonstrator. It demonstrated two things.
One, a quirk of how tokens work, which is a pretty benign limitation in and of itself, perhaps a bit amusing. We don't really need GenAI help to do nitty-gritty stuff with the letters.
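To make the token quirk concrete (the segmentation below is hypothetical, not any real tokenizer's output): a model operates on subword pieces, not characters, while ordinary string code sees the letters directly.

```python
# A language model operates on token IDs, not characters. A hypothetical
# subword segmentation of "strawberry" might look like:
tokens = ["str", "aw", "berry"]
assert "".join(tokens) == "strawberry"

# The model never directly "sees" the letters inside those pieces, but
# plain string code counts them with no trouble:
r_count = "strawberry".count("r")
print(r_count)
```

From the token pieces alone, "how many r's" is an indirect inference; from the raw string, it's a one-liner.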
The more troubling facet was that it would spit out something like "There is one r in strawberry" instead of "Due to limitations of the technology, that answer is unavailable." The tendency to produce something that structurally resembles the desired result, with apparent confidence and certainty despite no basis for it being true, is on display there, and that is absolutely still the case broadly. The challenge is that humans aren't used to being bombarded with that baseless certainty, and have a hard time gauging credibility when fact and fiction are presented with equal apparent confidence. Certainly some business leaders and politicians thrive on the confident but dumb answer, but we generally recognize those as bad scenarios, and LLMs firmly share that trait.