this post was submitted on 29 Apr 2026
81 points (97.6% liked)

Fuck AI

6917 readers
900 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

Hello all, long-time lurker, sometimes poster. In my line of work some of my co-workers seem eager to turn to the clanker to get an instant answer to any roadblock. I feel it's better to problem-solve the old-fashioned way, with some good old research and finding a blog that is not AI slop LOL. Do those of you in a support role feel any peer pressure to use LLMs?

[–] originalucifer@moist.catsweat.com 15 points 1 week ago (1 children)

old-fashioned for me was combing the newsgroups hoping some poor schlub was in the same boat. painful.

no, i don't particularly think it's necessary for young folk to be tortured because i was. if search tools are better at finding the same obscure reference, then it doesn't matter.

it matters when they don't understand the solution.. sometimes the journey to finding an answer is a training session all its own in whatever context.. if you're just handed an answer you might not care why it works, which hinders growth. i still don't think we should force people to suffer just because we did. there has to be a happy medium.

i'll use the llm tools where they fit and offer efficiency, which for me is a fairly narrow set of cases.

[–] DietCanesSauce@lemmy.world 2 points 1 week ago (1 children)

This is exactly how I feel. I am rather new to the industry, still in an entry-level position, but I have been tasked with building out an AI chatbot for our support team.

I accepted the task hoping I can make the bot point people to references to read further, rather than hand them the answer they seek outright, so that they hopefully understand why one thing worked and another didn't.

My goal is to make it easier to find those obscure references, rather than regurgitate the source in a 2000-word slop response.

[–] originalucifer@moist.catsweat.com 2 points 6 days ago (1 children)

> 2000 word slop response.

omg yes. half the battle is sorting the signal from the noise the llm returns.. most of which appears as 'coloring'.. some attempt to humanize the response. copilot spends more time telling me how awesome i am than spitting out the regex or direct link i want. STFU already.

[–] DietCanesSauce@lemmy.world 1 points 6 days ago

Yeah, I have actually been pleasantly surprised by how much the output can be structured by providing additional instructions that specialize its role.

The ability to control its verbosity to a certain degree means I can cut out the "You are correct, here are 20 bullet points to show you why". I can also more or less turn it into an internal documentation search engine that queries our support ticket DB, codebase, and documentation articles at the same time.
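For anyone curious what that looks like in practice, here is a minimal sketch. Everything in it is illustrative: the system prompt wording, the `SOURCES` dict standing in for the real ticket DB / codebase / docs, and the `search_all` helper are all made up for the example, not our actual setup or any particular agent framework.

```python
# Hypothetical sketch of a terse, reference-first support bot.
# SYSTEM_PROMPT, SOURCES, and search_all are illustrative names only.

SYSTEM_PROMPT = (
    "You are an internal support assistant. Answer in at most two "
    "sentences, then list links to the relevant tickets, code, or docs. "
    "Do not praise the user, do not add filler, and prefer pointing to "
    "a reference over restating its contents."
)

# Toy stand-ins for the ticket DB, codebase, and documentation articles.
SOURCES = {
    "tickets": {
        "TICKET-412": "Login fails with SSO when clock skew exceeds 30s.",
    },
    "code": {
        "auth/sso.py": "Validates SAML assertions; rejects skew > 30 seconds.",
    },
    "docs": {
        "docs/sso-troubleshooting.md": "Check NTP sync before debugging SSO.",
    },
}

def search_all(query: str) -> list[tuple[str, str]]:
    """Naive keyword match across all sources; returns (source, ref) pairs
    so the bot can cite where to read more instead of paraphrasing it."""
    terms = query.lower().split()
    hits = []
    for source, items in SOURCES.items():
        for ref, text in items.items():
            if any(t in text.lower() for t in terms):
                hits.append((source, ref))
    return hits

print(search_all("SSO skew"))
```

The point is just the shape: one verbosity-capping system prompt, plus one search step that fans out over every internal source and hands back references, which the model then summarizes briefly with links.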

Still very new to designing LLM agents and AI in general, but I am glad my team and our department seem willing to do things right and roll it out slowly, even with pressure from the C-suite to ship it right away. I don't trust any LLM with any particular task in my role, but it's decent at gathering information quickly, since that is literally what it's designed to do.

I just wish we stopped getting posters generated by copilot for company events. They creep me out tbh.