The planet-burning text summarisers seem to have found their main purpose as confirmation-bias machines, and I now find myself arguing with the output of an LLM when talking to people.
Now after many years in the world of office jobs my general perception of most people is
- severe inability to be wrong
- severe inability to do anything about being wrong
- egos so weak they'd shatter if you so much as looked at them
- huffing your own farts and doubling down is the name of the game
And so I have to contend with this: people who cannot accept they have made a mistake, arguing with me via an LLM output they managed to wrestle into agreeing with them because they can't accept fault. The number of times I have given learned, educated, provably correct advice only to hear "but Copilot told me this", with the output being some hallucinated drivel because the person who wrote the prompt was more bothered about being correct than about resolving the original issue they were having.
I know a small handful of people who have already gone completely off the rails, getting ChatGPT to confirm basically any delusion they have. When I see them, they'll go on for hours on end about how they broke out of the matrix and see the world for what it is, and how we don't need schools anymore, just give everyone an LLM.
All it reminds me of is the "do your own research" crowd of Mumsnet nazis parroting some random Facebook post about how being vegan gives you autism and the only cure is shooting your child in the head. Except it's worse, because the LLM can keep that delusion going for longer and build on it until most people are living in some AI-generated dreamland of pure unfiltered confirmation bias.
I think we're going to hit some major problems before long as a significant portion of people start offloading their thinking to these corporate models. I already see a decent chunk of people just accepting them as an unbiased authority, and it's scary. It's a completely new and arguably more effective way to deliver even more extreme propaganda, if the companies chose to do so, and very few people even question it.
Oh, and the number of people unknowingly sharing Sora videos is… dire.
To add to this: I never really understood why a lot of people have such a hard time being wrong and such diabolically weak egos. It's logically infeasible to never be wrong, and if you are wrong, just, idk, learn from it? I like being wrong because it means I can fix it and not be wrong later. The only reasonable response to being incorrect is "oops" followed by "thanks".
I've come to believe that there are in fact some limited use cases for chatbots. I don't use them at all, but a friend used one to help navigate a tricky labour issue at work, likely saving his job (for a time, at least). It makes intuitive sense to me that a bullshit machine would be good at helping you navigate bullshit procedural situations. (Of course I would much prefer my friend didn't have to navigate obtuse office politics in the first place, and the job itself kinda sucks, but a W is a W.)
But then a co-worker tells me they use it to draft messages on dating apps, and the urge to destroy rises up again.
The most use I've had from a chatbot/LLM is generating attorney-speak and some official-looking documents to send to a debt collection agency, to get out of a past-due debt I owed. I was surprised at how easy it was.
Yeah, there are use cases for ChatGPT etc. It just gets really concerning when folks use it instead of going to a doctor, for example.
Doctors aren't free and often require month-long waits for appointments...
Using an LLM is worse than doing nothing when it comes to healthcare. If you truly need medical advice and your only option is the internet, use the Mayo Clinic or something.
I say to my friends and colleagues that it's a capitalist solution to capitalist problems. My neurodivergent ass needs it to translate blunt honesty into fluffy corporate speak, and it works wonders. I also use it for my performance goals and all the other HR crap we have to do that benefits nobody.
But yeah, it's a solution to a problem that never needed to exist.
The underlying technology isn't inherently bad; presenting it in a chatbot format definitely is, though.
There's a lawyer who's been using an LLM to parse US labor law (which is almost exclusively case law) and make it much more accessible to regular people. It seems to be pretty good at that.