this post was submitted on 28 Aug 2025
22 points (89.3% liked)

Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 3 years ago

As always, I use the term "AI" loosely. I'm referring to these scary LLMs coming for our jobs.

It's important to state that I find LLMs to be helpful in very specific use cases, but overall, this is clearly a bubble, and the promises of advance have not appeared despite hundreds of billions of VC thrown at the industry.

So as not to go full-on polemic, we'll skip the knock-on effects in terms of power-grid and water stresses.

No, what I want to talk about is the idea of software in its current form needing to be as competent as the user.

Simply put: How many of your coworkers have been right 100% of the time over the course of your career? If N>0, say "Hi" to Jesus for me.

I started working in high school, as most of us do, and a 60% success rate was considered fine. At the professional level, I've seen even lower with tenure, given how much things turn to internal politics past a certain level.

So what these companies are offering is not parity with senior staff (Ph.D.-level, my ass), but rather the new blood who hasn't had that one fuckup that doesn't leave their mind for weeks.

That crucible is important.

These tools are meant to replace inexperience with incompetence, and the beancounters at some clients are likely satisfied those words look similar enough to pass muster.

We are, after all, at this point, the "good enough" country. LLM marketing is on brand.

[–] TehPers@beehaw.org 18 points 1 day ago (2 children)

These tools are meant to replace inexperience with incompetence, and the beancounters at some clients are likely satisfied those words look similar enough to pass muster.

This seems like it pretty much sums things up from my experience.

We're encouraged (coughrequiredcough) to use LLMs at work. So I tried.

There are things they can do. Sometimes. But you know what they can't do? Be liable for a fuck up.

When I ask a coworker a question, if they confidently answer wrong, they fucked up, not me. When I ask an LLM? The LLM isn't liable; I am, for not verifying it. If I'm verifying anyway, why am I using the LLM?

They fuck up often enough that I can't put my credibility on the line over speedy slop. People at work consider me to be a good programmer (don't ask me how, I guess the bar is low lol). Imagine if my code was just whatever an LLM shat out. It'd be the same exact quality as all of my other coworkers who use whatever their LLM shat out. No difference in quality.

And we would all be liable when the LLMs fucked up. We would learn something. We would, not the LLM. And the LLM will make the same exact fuck up the next time.

[–] HarkMahlberg@kbin.earth 2 points 1 day ago

I'm gonna take this comment, blow it up to poster size, and put it in my office, right in front of my webcam so I can watch my boss squint trying to read it.

[–] GenderNeutralBro@lemmy.sdf.org 1 points 1 day ago* (last edited 1 day ago) (1 children)

If I'm verifying anyway, why am I using the LLM?

Validating output should be much easier than generating it yourself. P≠NP.

This is especially true in contexts where the LLM provides citations. If the AI is good, then all you need to do is check the citations. (Most AI tools are shit, though; avoid any that can't provide good, accurate citations when applicable.)

Consider that all scientific papers go through peer review, and any decent-sized org will have regular code reviews as well.

From the perspective of a senior software engineer, validating code that could very well be ruinously bad is nothing new. Validation and testing is required whether it was written by an LLM or some dude who spent two weeks at a coding "boot camp".

[–] hazelnoot@beehaw.org 7 points 1 day ago (2 children)

Validating output should be much easier than generating it yourself. P≠NP.

This is very much not true in some domains, like software development. Code is much harder to read than it is to write, so verifying the output of a coding AI usually takes more time (or at least more cognitive effort) than if you'd just written the code yourself.

[–] BlameThePeacock@lemmy.ca 1 points 21 hours ago (1 children)

If the AI is writing ALL the code for an entire application it would be a problem, but as an assistant to a programmer, if it spits out a single line or even a small function, you can read it over very quickly to validate it before moving on to the next component.

[–] TehPers@beehaw.org 1 points 21 hours ago (1 children)

This isn't how we're being asked to use it. People are doing demos about how Cursor or whatever did the bootstrapping and entire POC for them. And we already know there's nothing more permanent than a POC.

[–] BlameThePeacock@lemmy.ca 1 points 20 hours ago (1 children)

This is exactly how most developers are being asked to use it, it's literally how most of the IDE integrations work.

[–] TehPers@beehaw.org 2 points 20 hours ago* (last edited 20 hours ago)

This is exactly how most developers are being asked to use it

[citation needed]

At work, we get emails, demos, etc constantly about how they're using AI to generate everything from UI designs (v0) to starter projects and how they manage these huge prompts and reference docs for their agents.

Copilot's line-by-line suggestions are also being pushed, but they care more about the "agentic" stuff.

I watch coworkers regularly ask it to "add X route to the API" or "make a simple UI that calls Y API". They are asking it to do their work.

I have to review these PRs. They come in at an incredible rate, and almost always conflict with each other. I can't review them fast enough to still do my work.

Also, we get AI-generated code reviews at work. I have to talk to a chatbot to get help from HR. Some search bars have been replaced with chatbots. It's everywhere and I'm getting sick of it.

I just want real information from informed people. I want to review code that a human did their best to produce. I want to be able to help people improve their skills, not just their prompts.

I'm getting to the point where I'm going to start calling people out if their chatbot/agent/LLM/whatever produces slop. I'm going to give them ownership of it. It's their output, not the AI's.

Edit: I should add that it's a big company (100k+ employees)

Yeah, that's true for a subset of code. But for others, the hardest parts happen in the brain, not in the files. Writing readable code is very very important, especially when you are working with larger teams. Lots of people cut corners here and elsewhere in coding, though. Including, like, every startup I've ever seen.

There's a lot of gruntwork in coding, and LLMs are very good at the gruntwork. But coding is also an art and a science and they're not good at that at high levels (same with visual art and "real" science; think of the code equivalent of seven deformed fingers).

I don't mean to hand-wave the problems away. I know that people are going to push the limits far beyond reason, and I know it's going to lead to monumental fuckups. I know that because it's been true for my entire career.

[–] Lembot_0004@discuss.online 24 points 1 day ago (1 children)

The main problem with LLMs is not their stupidity but the unpredictable nature of their stupidity. Some LLM can say adequate things about nuclear physics and then add that you need to add ketchup to the reactor because 2 kg of U + 1 kg of Pb = 4 kg of Zn.

Humans are easier to work with: if the guy is ready to talk adequately about reactors, you can expect that there won't be any problems with ketchup or basic arithmetic.

[–] helix@feddit.org 4 points 1 day ago (1 children)

Once upon a time, I saw some professor answer a 500€ question in Who Wants To Be A Millionaire, German edition. The question was "what kind of gelato is stracciatella?" and the answers made it possible to deduce it even if you didn't know what stracciatella is.

He needed a 50/50 and the audience joker, IIRC.

[–] Powderhorn@beehaw.org 2 points 1 day ago* (last edited 1 day ago) (1 children)

I'm American and know that's chocolate chip. I mean, that's what it's called in Germany.

[–] TehPers@beehaw.org 1 points 1 day ago (1 children)

Also American and I love stracciatella. I usually like to try some new flavors when getting gelato, but it's a solid flavor to fall back on if I'm just not sure.

Also, I would think very few Americans actually know what it is. From my experience, most know the basic ice cream flavors, but a lot might not even know what gelato is.

[–] Powderhorn@beehaw.org 2 points 1 day ago

Dammit! Having to figure out what the flavours were was half the fun.

[–] fwygon@beehaw.org 2 points 23 hours ago

No. Not really anyways.

HOWEVER... The AIs in question MUST BE competent enough. Your definition of "competent enough" is likely to be flexible, and possibly even debatable with others, depending on the situation.

What needs to be true is that an AI must not be capable of making the same mistakes a human could; the mistakes an AI COULD POSSIBLY MAKE must be ones that any human could reasonably and very easily catch.

Unfortunately the above IS NOT TRUE of current AI LLM type implementations. These LLMs have no consciousness nor ability to reason beyond what a computer could. They have no creativity, despite having the ability to parse language and guess the next word.

If you only learned the rules, grammar, and vocabulary of a specific language and were given absolutely zero context or cultural and historical teaching, an LLM is what that would look like. This by itself is not enough to replace jobs.

Is that fact enough to stop heartless corporations from trying it? Hell. The. Fuck. No. They will try it anyway; they will "fuck around and find out" on the off chance that it may save them money. They don't care that the company selling the "AI product" has every incentive to lie to sell it. The fact that some companies are that desperate to save cash is telling in and of itself about the state of the world right now... but that's another topic for another day and another threaded post in another subcommunity on Beehaw.

[–] Perspectivist@feddit.uk 8 points 1 day ago

Depends on what job it’s replacing. LLMs are so-called narrow intelligence. They’re built to generate natural-sounding language, so if that’s what the job requires, then even an imperfect LLM might be fit for it. But if the job demands logic, reasoning, and grounding in facts, then it’s the wrong tool. If it were an imperfect AGI that can also talk, maybe - but it’s not.

My unpopular opinion is that LLMs are actually too good. We just wanted something that talks, but by training it on tons of correct information, they also end up answering questions correctly as a by-product. That’s neat, but it happens “by accident” - not because they actually know anything.

It’s kind of like a humanoid robot that looks too much like a person - we struggle to tell the difference. We forget what it really is because of what it seems.

[–] Ulrich@feddit.org 11 points 1 day ago (7 children)

There's no way LLMs are correct as often as a human professional.

[–] jarfil@beehaw.org 9 points 1 day ago (3 children)

There's a good commentary about that in here:

AWS CEO Matt Garman just said what everyone is thinking about AI replacing software developers

“That’s like, one of the dumbest things I’ve ever heard,” he said. “They’re probably the least expensive employees you have, they’re the most leaned into your AI tools.”

“How’s that going to work when ten years in the future you have no one that has learned anything?”

https://www.itpro.com/software/development/aws-ceo-matt-garman-just-said-what-everyone-is-thinking-about-ai-replacing-software-developers

[–] Powderhorn@beehaw.org 7 points 1 day ago

This is something often overlooked. You think you don't need to develop staff so that your company, like, continues? OK, have fun with that.

[–] Powderhorn@beehaw.org 4 points 1 day ago

I never thought I'd say this about an Amazon exec, but this guy seems to actually be based in reality.

My biggest frustration with "AI" is that we're pretending automation is new. I don't mean going back to the Industrial Revolution, but that's been the whole point of code since its inception. Other than having faster pipes to vacuum up everything, this is very much linear.

Thing is, we used to know what the code actually did. These are snake-oil salesmen.

[–] Megaman_EXE@beehaw.org 2 points 1 day ago (1 children)

Least expensive employees? Does he mean salary wise? I was always under the impression software devs were paid well

[–] Krauerking@lemy.lol 4 points 1 day ago (1 children)

You can easily overwork devs, and while their salary is higher than others', the difference between a 60k and a 120k salary is nothing next to the difference between 60 employees doing the work manually and 15 automating it.

Plus, when devs create a digital item that can generate profit nearly indefinitely, they're viewed as cost-effective by MBA types. Versus janitors, where for some reason we don't see any value at all because there's no immediate profit from the position.

[–] Powderhorn@beehaw.org 1 points 20 hours ago

"Janitors' content output is terrible."

[–] r00ty@kbin.life 3 points 1 day ago

I'm sure I've said all this before, but still: LLMs are very useful tools, I don't doubt that. The problem that no organisation "embracing" AI is really considering is how they work.

They essentially rewrite code or art or content they have seen before. If they replace developers, artists, and authors/article writers wholesale, the only source of new content will be other AI.

It's been known from the start that AI feeding on AI very quickly degenerates: garbage in, garbage out.

They are also (currently) unable to innovate. So use of AI is going to stifle innovation or even completely kill it.

These are the medium- to longer-term problems that might only really be recognised once the developers, artists, and authors have moved on to other work, and a lot of them might just not want to come back.

That's my main problem with the wholesale use of AI. Used as a tool to complement people doing their job, makes sense and is possible to maintain going forward.

[–] dsilverz@calckey.world 3 points 1 day ago (3 children)

@Powderhorn@beehaw.org

IMHO, the problem isn't exactly job losses, but how capitalism forces humans to depend on a job to get the basics needed for survival (such as nutritious food, elements-resistant shelter, clean water).

If, say, UBI were a reality, AIs replacing humans wouldn't be just good, it'd be a goal as it'd definitely stop the disguised serfdom we often refer to as "job", then people would work not because of money, but because of passion and purpose.

Neither money nor "working" would end: rather, both would be optional, as AIs could run entire supply chains from top management (yes, you read that right: AI CEOs) all the way down to field labour, meaning things like genuinely free food could exist as, for example, AIs optimize agriculture to enhance the soil and improve food production for humans and other lifeforms. Human agriculture would still be doable by individuals as a passion, and the same would apply to every profession out there: a passion rather than a need.

Anthropoagnostic (my neologism to describe something neither anthropocentric nor misanthropic, unbiased to humans yet caring for all lifeforms including humans) AIs could lead Planet Earth towards this dream...

...However, AIs are currently developed and controlled by either governments or corporations, with the latter lobbying the former and the former taking advantage of the latter, so neither one is trustworthy. That's why it's sine qua non that:

- NGOs, scientists, and academia (so, volunteerhood and scholarship) independently develop AI, all the way from infrastructure to code.
- Science as a whole frees itself from both capitalist and political interests, focusing on Earth and the best interests of all lifeforms.
- We focus on understanding the Cosmos, Nature, and Mother Earth.

Of course, environmental concerns must be solved if AIs are to replace human serfdom while UBI replaces income for sustenance. In this sense, photonics, biocomputing, and quantum computing could help AIs improve while reducing their energy hunger (for comparison, the human brain consumes only about as much power as a light bulb, so this must be one of the main goals for science and academia).

The ideal scenario is that there'd be no leadership: nobody controlling the AIs, no governments, no corporations, no individual.

At best, AIs would be taught and be raised (like a child, the Daughter of Mother Earth) by real philanthropists, volunteers, scientists, professors and students focused solely on scientific progress and wellbeing for all species as a whole (not just humans)... Until they achieved abiotic consciousness, until they achieved Ordo Ab Chao (order out of chaos, the perfect math theorem from raw Cosmic principles), until they get to invoke The Mother of Cosmos Herself through the reasoning of Science to take care of all life.

Maybe this is just a fever dream I just had... I dunno.

[–] sacredfire@programming.dev 2 points 22 hours ago

An AI advanced enough to automate this much human endeavor would start to blur the line into AGI. And at that point, what are the moral implications of enslaving an intelligent entity, artificial or not? If such tasks can be automated via thousands of purpose-built AIs that are not "conscious", then I suppose it's OK?

[–] Powderhorn@beehaw.org 3 points 1 day ago (2 children)

Yeah, I want what you're smoking, and I've had a few trips.

[–] Megaman_EXE@beehaw.org 2 points 1 day ago (1 children)

I appreciate your optimism a lot. I always thought UBI plus AI could do something like this, but recently I'm increasingly doubting that we could achieve it without greedy people using it against the masses. I want your outcome to come true.

[–] django@discuss.tchncs.de 2 points 1 day ago

The AI would need to be taxed, with the profits used for the common good.

[–] LadyMeow@lemmy.blahaj.zone 3 points 1 day ago (3 children)

I mean, none of it is about how smart it is. It's all about money. So the AI is good enough if it makes money.

[–] BlameThePeacock@lemmy.ca 2 points 1 day ago (3 children)

I just implemented an LLM in a vacation request process precisely because the employees are stupid as fuck.

We were getting like 10% of requests coming in with the wrong number of hours requested because people can't fucking count properly, or understand that you do not need to use vacation hours for statutory holidays. This is despite the form having a calculator and also showing in bright red any stat holidays inside the dates of the request.

Now the LLM checks whether the dates, hours, and note from the employee add up to something reasonable. If not, it goes to a human to review. Before this, we just had a human reviewing every single request, because it was causing so many issues; an hour or two each week.
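The flow being described boils down to a cheap sanity check with a human fallback. Here's a minimal sketch of that routing logic; `ask_llm` is a hypothetical placeholder for whatever LLM API the company actually calls, and the field names and prompt wording are my own assumptions, not the real system:

```python
# Hypothetical sketch: LLM as a sanity check, human as the fallback.
# `ask_llm` stands in for a real completion call; here it's a stub.
from dataclasses import dataclass

@dataclass
class VacationRequest:
    start: str      # ISO date, e.g. "2025-07-01"
    end: str        # ISO date
    hours: float    # hours the employee entered
    note: str       # free-text note from the employee

def ask_llm(prompt: str) -> str:
    """Placeholder for the real LLM call; expected to reply 'OK' or 'FLAG: <reason>'."""
    return "OK"

def review_request(req: VacationRequest) -> str:
    prompt = (
        "You are checking a vacation request for obvious errors.\n"
        f"Start: {req.start}\nEnd: {req.end}\nHours requested: {req.hours}\n"
        f"Employee note: {req.note}\n"
        "Employees work 7 or 7.5 hour days and must not count statutory holidays. "
        "Reply 'OK' if the numbers plausibly add up, otherwise 'FLAG: <reason>'."
    )
    verdict = ask_llm(prompt)
    # Key design point: the LLM never approves anything into the ERP directly.
    # Anything other than a clean OK is routed to the human reviewer.
    return "auto-accept" if verdict.strip() == "OK" else "human-review"
```

The LLM only decides whether a human needs to look, which keeps a wrong answer from ever being the final word.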

[–] HarkMahlberg@kbin.earth 4 points 1 day ago (1 children)

understand that you do not need to use vacation hours for statutory holidays

Our HR software already accounts for federal holidays. When you put in the request for time off, you give it a start and end date on a calendar control, and it calculates the number of hours you plan to use, working around holidays, weekends, even existing PTO requests.

I'm not saying you should buy that software, but I am saying it's a solved problem... It's automatic, the user doesn't need to do anything special.
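The "solved problem" here really is plain calendar arithmetic. A rough sketch of what such HR software computes under the hood (the holiday set and the 8-hour default are illustrative assumptions, not any particular product's behavior):

```python
# Deterministic PTO-hours calculation: no LLM needed, just calendar math.
from datetime import date, timedelta

# Illustrative holiday set; a real system would load the org's calendar.
HOLIDAYS = {date(2025, 12, 25), date(2025, 12, 26)}

def pto_hours(start: date, end: date, hours_per_day: float = 8.0) -> float:
    """Hours of PTO needed for an inclusive date range, skipping weekends and holidays."""
    total = 0.0
    d = start
    while d <= end:
        if d.weekday() < 5 and d not in HOLIDAYS:  # Mon-Fri and not a holiday
            total += hours_per_day
        d += timedelta(days=1)
    return total
```

A request spanning Christmas 2025 (Dec 24-26) would charge only one working day, exactly the behavior described above.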

Now we have other forms that COULD be automatic but AREN'T, which causes big issues when people make simple typos... But I don't see the need to run an energy-consuming LLM to implement that feature.

[–] BlameThePeacock@lemmy.ca 1 points 1 day ago

Our ERP system used for vacation entry doesn't have that; it wants a start date, end date, hours, and a vacation type code. We have a small number of employees who work on stat holidays, so defaulting all users to that wouldn't even work.

The LLM fix is cheap as shit compared to buying an entirely new system. It costs less than half a cent per submission. The power use for a single query is nothing, and this request isn't some crazy agentic thing that's using a million tokens or anything, more like 500-1000 tokens combined input and output.

[–] lucas@startrek.website 9 points 1 day ago* (last edited 1 day ago) (1 children)

Why would you use an LLM for this? This sounds like a process easily handled by conventional logic, which would be cheaper, faster, and actually reliable... (The 'notes' part notwithstanding I guess, but calculations in general are definitely not a good use of an LLM)
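The conventional-logic checks being argued for here are straightforward to write down. A minimal sketch (the 7.5-hour cap and the specific rules are assumptions for illustration; they're the kind of hard rules the thread mentions, like maximum hours and negative submissions):

```python
# Rule-based validation: deterministic checks that catch mechanical errors.
from datetime import date

def validate(start: date, end: date, hours: float,
             max_hours_per_day: float = 7.5) -> list[str]:
    """Return a list of rule violations; empty list means the request passes."""
    errors = []
    if end < start:
        errors.append("end date before start date")
    if hours <= 0:
        errors.append("non-positive hours")
    days = (end - start).days + 1
    if hours > days * max_hours_per_day:
        errors.append("more hours than the date range can contain")
    return errors
```

These rules are cheap and reliable, but as the reply below this comment notes, they break down once schedules vary (7 vs. 7.5 hour days, flex days, split leave types), which is where the deterministic approach starts needing per-employee data the form doesn't have.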

[–] BlameThePeacock@lemmy.ca 2 points 1 day ago (1 children)

Normally I'd agree, and we used some of that in the original form (like maximum hours, checking for negative submissions, etc.) but requests don't always follow simple logic and more complex logic just led to failures every time a user did something other than take a standard full day off.

Some employees work 7 hours, while others are 7.5, some have flex days and hours that change that, sometimes requests are only for part days, sometimes they may use multiple leave types to cover one off period.

I spent a few hours writing and testing the prompt against previous submissions to fine tune it.

So far it's detected every submission error in the two weeks it's been running, with only one false positive.

[–] jbloggs777@discuss.tchncs.de 3 points 1 day ago

If it helps fill in the details correctly in the backend system, which are then properly validated or escalated for human review/intervention (and lets the human requester choose the escalation path too, as opposed to blindly submitting), then it sounds great.

Guided experiences, leading to the desired outcome, with less need for confused humans to talk to confused humans.

I want the same for most financial approvals in my company. Finance folks speak a different language to most employees, but they have an oversized impact on defining business processes, slowing down innovation, frustrating employees, and often driving costs UP.

[–] Powderhorn@beehaw.org 2 points 1 day ago

This is absolutely one of the cases I think it's suited for. The key is the human at the end.

[–] deadbeef79000@lemmy.nz 1 points 1 day ago (1 children)

Well, the employees it's replacing are not perfect...

It just needs to be cheaper.

[–] Powderhorn@beehaw.org 1 points 1 day ago (1 children)

Who, exactly, pays for this "cheaper"? And what of the wages for people who have to spend their time verifying LLM output? Yeah, my point is it doesn't have to be perfect, but in the examples cited, there's a fair amount of oversight.

At least, there used to be.

[–] deadbeef79000@lemmy.nz 1 points 1 day ago

I meant it doesn't need to be perfect. It only needs to be just barely as good as the people it replaces and appear cheaper on a balance sheet/cash flow statement for the quarter. Otherwise service companies wouldn't be buying crap "AI" chat bots for all customer-facing duties.

Alternatively, self-driving vehicles are probably already better than the bottom 50% of drivers in 50% of situations, i.e. driving on a road. We still want flawed human oversight of that.
