this post was submitted on 26 Mar 2026
116 points (97.5% liked)

Ask Lemmy


A Fediverse community for open-ended, thought provoking questions



I see a lot of discussion here about over-hyped AI, and then I see the huge AI bubble at my workplace, in the news, in PR statements, etc.

Are there folks who work at companies -- I'm especially interested in those in tech -- that have a reasonable handle on AI's practical uses and its limitations?

Where I work, there's:

  • a dashboard of AI usage by team and individual, which will definitely not affect performance reviews in any way
  • a mandate last month to use one AI tool, and a new mandate this month to abandon that tool and adopt a different one
  • quarterly goals, almost every one of which has some amount of "with AI" in it
  • letters from the CEO asking which teams are using AI to implement features straight from ticket descriptions, or (inspired by the news) to run flocks of agents, asking only for positives and never for negatives
  • a team creating a review pipeline for AI-generated output in our product, planning to review the quality of the output... using AI
  • teammates writing code and designs and sending them for review without verifying functionality or pruning irrelevant portions, despite a statement that everyone is responsible for reviewing AI output

Is all the resistance to overuse of AI grassroots, and is the pressure for rampant adoption uniform among executives/investors? Or are some companies or verticals not drinking the Kool-Aid?

[–] Tar_alcaran@sh.itjust.works 65 points 3 days ago (6 children)

Not in tech, but LLMs have been great for my safety and compliance consulting business.

Before LLMs, I would spend quite a bit of my regular workday creating safety plans and coming up with systems to improve conditions and ensure compliance.

Now, with the power of LLMs, management can generate those plans themselves. So instead of me spending my normal workday on it, I get to bill my emergency rate when the hallucinated slop gets rejected and they need something at the last minute.

I can honestly say LLMs have made me thousands of euros.

[–] pntha@lemmy.world 23 points 3 days ago (1 children)

urge to downvote rising… rising…

…calm

[–] hperrin@lemmy.ca 16 points 3 days ago (2 children)

AI slop clean up is the new highest paying job.

[–] CanadaPlus@lemmy.sdf.org 9 points 3 days ago

And probably a lot of meh paying ones too, eventually, when the bubble bursts and people realise they'll never actually be able to trust LLMs.

[–] grepe@lemmy.world 2 points 2 days ago

oh, got it! going to found a startup for AI slop cleanup. we could use LLM to automate...

[–] TarnFan@lemmy.world 1 points 2 days ago

You had me in the first half

[–] ExtremeDullard@piefed.social 67 points 3 days ago* (last edited 3 days ago) (1 children)

My company is approaching AI the way it has approached everything else for the past 40 years: with extreme caution. It's coming all right, but the engineers are carefully evaluating it for coding, and it certainly isn't being rolled out recklessly.

I'm one of several die-hards who flat-out refuse to use it - not so much because it's AI, but because it's provided by an American company - and my choice is respected. Our CEO sees old-timers like me as the fallback if AI ends up shitting the company's bed.

[–] starlinguk@lemmy.world 33 points 3 days ago* (last edited 3 days ago) (2 children)

I work at a renowned tech company that frequently reminds its employees that AI hallucinates. We do a lot of work for the army; a mistake caused by hallucinating AI would be a disaster.

[–] EvilBit@lemmy.world 16 points 3 days ago

Meanwhile we’re just waiting until Hegseth accidentally turns a Bethesda-area Target into a smoking crater because he was drunk-Grokking and fucks up ordering an airstrike to cheer himself up after the mainstream librul media hurt his fee-fees.

[–] redsand@infosec.pub 3 points 2 days ago

Like blowing up a girls' school, or worse, like the 9/11 sequel John has planned?

[–] daychilde@lemmy.world 5 points 2 days ago (1 children)

I'm too old for this shit - too old for the original show, I mean, but for some reason, my brain wants to make that title work:

Who works at a (tech) company that's not delirious about AI?

SPONGE! BOB! SQUARE! PANTS!

It completely doesn't work.

[–] ozymandias117@lemmy.world 1 points 2 days ago (2 children)

I'm not a lyricist, but this is at least closer...

Who works for a place that licks AI's taint

[–] Widdershins@lemmy.world 2 points 2 days ago

I haven't seen the whole show but I have been under the impression that SpongeBob and intelligence don't cross paths very often.

[–] daychilde@lemmy.world 1 points 2 days ago

Well, you put way more into it than I had, so I feel I have a refinement to give back as thanks - it just needs a single extra syllable. Perhaps:

Who works for a place that just licks AI's taint

Now it scans. :)

[–] hperrin@lemmy.ca 19 points 3 days ago (1 children)

I run a tech company that doesn’t use any AI:

https://sciactive.com/human-contribution-policy/

We make an email service, and we have a hard stance against any AI in our product:

https://sciactive.com/2026/01/21/our-stance-on-ai-in-email/

[–] gwl@lemmy.blahaj.zone 2 points 2 days ago (2 children)

Y'all hiring? I'm tired of my place being like "AI IS BEST, YOU SHOULD ALL USE IT"

[–] clif@lemmy.world 5 points 2 days ago (2 children)

The one I work at went "all in" about a month ago. I started noticing a dramatic increase in garbage/nonsensical code at the end of last week. I didn't make the connection between the two until Tuesday.

I've got a manager that usually listens and they asked me to try it and take notes because they know I'll tell them the truth. ... I've got a lot of examples prepped for our next meeting.

The hard part is definitively blaming LLMs, because I don't have time to track down every single commit and analyze it for LLM usage, but there's 100% a correlation.

[–] pageflight@lemmy.world 4 points 2 days ago (1 children)

Yeah, I wish git blame could highlight the lines written by Claude/Codex. Usually when I ask my colleagues 'so did you use AI much for this one' they will say yes. But it makes code review that much harder, especially when they then take my PR comments and feed them to the LLM, so I'm coding by playing telephone with a bot.

Unfortunately they'll never do that, because they're owned by Microslop and they can't allow any marring of AI's reputation.
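For what it's worth, there's no built-in git blame flag for that, but you can roughly approximate it if AI-assisted commits carry some marker in the message (Claude Code, for example, can add a Co-authored-by trailer). A minimal sketch assuming such a convention -- the marker keywords and default path are made up for illustration, not anything git or GitHub provides:

```python
#!/usr/bin/env python3
"""Rough sketch: flag lines whose blame commit looks AI-assisted.

Relies on a team convention of tagging AI-assisted commits (e.g. a
Co-authored-by trailer or a keyword in the message); git itself records
nothing about which tool typed the code.
"""
import subprocess
import sys
from functools import lru_cache

AI_MARKERS = ("claude", "copilot", "codex")  # illustrative keywords, adjust to taste


@lru_cache(maxsize=None)
def is_ai_commit(sha: str) -> bool:
    """True if the commit message mentions one of the AI markers."""
    msg = subprocess.run(
        ["git", "show", "-s", "--format=%B", sha],
        capture_output=True, text=True, check=True,
    ).stdout.lower()
    return any(marker in msg for marker in AI_MARKERS)


def annotate(path: str) -> None:
    """Print the file with an [AI] prefix on lines blamed to tagged commits."""
    out = subprocess.run(
        ["git", "blame", "--line-porcelain", path],
        capture_output=True, text=True, check=True,
    ).stdout
    sha = None
    for line in out.splitlines():
        if line.startswith("\t"):  # porcelain format: source lines start with a tab
            prefix = "[AI] " if sha and is_ai_commit(sha) else "     "
            print(prefix + line[1:])
        else:  # header lines start with the 40-char commit hash
            first = line.split(" ", 1)[0]
            if len(first) == 40 and all(c in "0123456789abcdef" for c in first):
                sha = None if set(first) == {"0"} else first  # all-zero hash = uncommitted


if __name__ == "__main__":
    annotate(sys.argv[1] if len(sys.argv) > 1 else "README.md")
```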

[–] Butterpaderp@lemmy.world 1 points 2 days ago

We have offshore devs that I think found the copilot button in vscode recently...seeing lots of em dashes in code review today 🫠

[–] gwl@lemmy.blahaj.zone 4 points 2 days ago* (last edited 2 days ago) (1 children)

who works at a tech company that's not delirious about AI?

-- OP

I work at Tech Company that loves AI

-- people with poor reading comprehension replying to this thread

[–] clif@lemmy.world 3 points 2 days ago (1 children)

I required an outlet to bitch regardless of my ability to reed werds gud.

I'm sure I'm not the only one : D

[–] gwl@lemmy.blahaj.zone 1 points 2 days ago

Honestly, fair

[–] Korhaka@sopuli.xyz 12 points 3 days ago (1 children)

I just use AI to fill in the stupid forms HR make us do and don't verify its output because I don't respect it. Kills 2 birds with 1 stone.

[–] apftwb@lemmy.world 4 points 2 days ago (2 children)

Please God, give me an AI agent that can watch the video and do the quiz for the yearly mandatory HR training.

[–] kersploosh@sh.itjust.works 12 points 3 days ago (1 children)

Medical device industry here. Some of our software and electrical engineers are using Claude as a sounding board for ideas, or as a starting point to find possible paths forward when they get stuck with a hard problem. Nobody trusts the model to give an accurate answer. At the end of the day, all work committed to a project is done by real humans with the normal review processes.

Management is cautiously looking at potential uses for AI in our products, but there is a healthy dose of skepticism all around. If your machine is displaying diagnostic data to a doctor there cannot be any question as to whether the machine is hallucinating.

Honestly, this is probably the best use case for LLMs.

Tom Scott did something similar 2-3 years ago, where he fed a bunch of his video titles into an LLM and had it come up with 100 new names in a similar style. Most of the output sucked, a handful he had already done, and a few more sounded plausible but didn't exist. But he got 8-10 that he could have turned into actual videos (doing all the work himself) and even did so for a couple.

The hallucination of AI can be used to help a human (artist, programmer, designer, scientist, etc.) make a new connection they couldn't before, and they can then use that new connection to implement their new idea. But LLMs generally suck for anything more than that, and over-reliance on them slowly erodes people's ability to think and create over time.
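For what it's worth, that workflow is simple to sketch: feed existing titles in as examples, ask for more in the same style, and filter out anything already made. A minimal sketch assuming an OpenAI-style chat API -- the example titles, model name, and prompt are made up, not Tom Scott's actual setup:

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment; any chat-style API would do

# Stand-in examples; the real experiment used the channel's actual back catalogue.
existing_titles = [
    "The Town That Banned Maps",
    "Why This Bridge Plays Music When It Rains",
    "I Tried To Outrun A Speed Camera",
]

prompt = (
    "Here are some existing video titles:\n"
    + "\n".join(f"- {t}" for t in existing_titles)
    + "\n\nSuggest 100 new titles in the same style, one per line."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

# Strip list numbering/bullets from each returned line.
lines = response.choices[0].message.content.splitlines()
candidates = [l.lstrip("0123456789.-• ").strip() for l in lines if l.strip()]

# Drop anything that already exists; the rest is raw material for a human to vet.
already_made = set(existing_titles)
ideas = [t for t in candidates if t not in already_made]
print("\n".join(ideas))
```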

[–] taiyang@lemmy.world 8 points 3 days ago

My wife's at a major video game company that, oddly enough, hasn't gone crazy over AI. Since she's in localization, she uses DeepL, which has some machine learning but isn't really an LLM, and LLMs aren't really being pushed on her since they'd be a downgrade. From what I can tell, their dev team is also just keeping things human-made, although they're in Japan so that might contribute.

They aren't saints -- they did try to union bust a few years back -- but their stance on AI, as well as their creativity-first mentality and recent pay raise guarantees and whatnot, kinda shows they're paying attention.

[–] jtrek@startrek.website 8 points 3 days ago

I work in a big multinational company. Not a software company, but I'm on an engineering team.

Leadership makes a lot of noise about AI.

The engineers can't even use git competently. I've quietly suggested that maybe we should focus on learning software fundamentals instead of chasing dreams, but no one here listens to me.

[–] Semi_Hemi_Demigod@lemmy.world 9 points 3 days ago

Every time I hear stories like this I’m glad I work at a startup where everyone’s too busy to worry about shit like AI usage dashboards

[–] greybeard@feddit.online 3 points 2 days ago (1 children)

Software company here. There's a strong external push for us to shove AI into every corner of our UI, but so far we've largely kept it out.

The one place we are using it is a pretty strong use case (essentially sentiment analysis). We've had a chatbot in dev for a while, but are struggling to find a valid use case for it. I think most of us are hoping the AI craze dies down and suddenly our lack of AI is no longer a marketing point our competitors use against us.

[–] BrickEater@lemmy.world 2 points 2 days ago

Advertise your lack of AI; it will draw customers who are sick of the slop.

[–] mlg@lemmy.world 2 points 2 days ago (1 children)

I worked at one that actually wasn't too bad, except we had a peer review system for client reports, and I was horrified to see how many people had such a poor grasp of English grammar that they just assumed the AI's output was always correct and better than a human's.

And I don't mean people whose second language was English; I mean native English speakers were giving me AI feedback to change sentences in ways that would completely change the context, or horribly maim phrases into past tense where the tense of the subject was very much important.

I could easily ignore the changes from coworkers, but a handful of managers would then give performance feedback telling me to utilize AI and Grammarly to improve my report quality, even though all of their report feedback was utter garbage lol.

On a related note, Grammarly can also go screw itself. That joke of a software suite still doesn't hold a candle to Word 2007's editor.

[–] Crozekiel@lemmy.zip 2 points 2 days ago

I fucking hate Grammarly. And the modern Outlook webmail suggestions can go eat a bag of dicks as well.

[–] ExLisper@lemmy.curiana.net 2 points 2 days ago

I work at a small software company. There is a push to use AI, but I would say in a reasonable way. It does speed up some tasks, but no one is vibe coding and pushing things without proper review. So far no one is tracking the usage or pushing us to use it more. It's just a new tool we're encouraged to be familiar with and use reasonably.

[–] bayta@lemmy.world 6 points 3 days ago* (last edited 3 days ago) (1 children)

I run a small (5-employee) tech firm. We ignored AI for the first couple of years. Last year we started paying for the basic Cursor subscription for our employees. We encouraged them to try it out for a couple of weeks, however they saw fit, to evaluate whether they found it useful for their workflows, but we said we didn't mind at all whether they ended up adopting it long term or not. We also stressed that we would continue reviewing code the same way, so they would have to take responsibility for reviewing the AI's output.

I started as the only coder in the company and I review every PR, so I am extremely familiar with all of our codebase, and I haven't found it very useful personally. The people who joined more recently say it can be useful for pointing them towards parts of the code they are not familiar with yet. Right now each person uses it freely as a tool however they prefer, and I don't usually ask them about it, the same way I don't ask how often they use the "find and replace" function in VS Code.

[–] hperrin@lemmy.ca 4 points 3 days ago (1 children)

That could potentially backfire on you:

https://sciactive.com/human-contribution-policy/#Reasoning

  1. You could be including copyrighted code and not complying with its license.
  2. You don’t own the copyrights to AI generated code.
  3. The bugs and vulnerabilities AIs introduce are much harder to spot than in human-authored code.
  4. Your team might not understand the code that they’re submitting.

Etc.

[–] Lexam@lemmy.world 6 points 3 days ago

Did your CEO have a "Fireside Chat" about how great AI is?

[–] hansolo@lemmy.today 1 points 2 days ago

If you look at the comments on YC Hacker News, it's a relatively sane group of people re: AI. Usually skeptical early adopters with experience in the industry. It's worth your time.

[–] neidu3@sh.itjust.works 6 points 3 days ago (7 children)

Not a tech company, but a petroleum exploration company, which involves a lot of tech. The petroleum industry in general is extremely conservative in terms of tech, in that older and proven technologies tend to stick around. For example, I often write data to magnetic tape.

However, the industry doesn't shy away from newer technologies where they do make sense. There is some AI at play, but it is limited in scope, and only deployed where it makes sense. Most of it is done on the processing side, so I don't know much about it, but I get the impression it's used in a similar manner to those headlines you see from time to time about AI predicting rectal cancer 99% correctly. Interpreting seismic survey data involves some geophysical wizardry that I've never quite understood - I just make sure the production servers offshore work.

[–] Nighed@feddit.uk 1 points 2 days ago (1 children)

For the size of data that oil exploration requires, tapes still make a lot of sense.

They have higher density, and they are more shockproof. When you need to move masses of data around the world, writing it to tape and then sticking it on a plane is still the fastest way to move it (probably; that may have changed, I guess).

[–] neidu3@sh.itjust.works 2 points 2 days ago* (last edited 2 days ago)

Yup, I 100% agree. Tapes are often viewed as obsolete, but there is no safer or more cost-effective way to store data in the petabytes than tape.

Hell, at work I have a few live storage clusters measured in petabytes, and being responsible for them can be pretty stressful at times. Data loss isn't just bad, it is fucking terrifying when it's data that costs hundreds of thousands of dollars per day to collect.

I have yet to experience data loss, but I breathe a sigh of relief for every batch of data that has been confirmed written to tape. Because once it is, I know that it is safe and no longer my responsibility.

It's written to two sets of tape at a time, both of which are read back to confirm data integrity, and once that's confirmed, I know my live copy officially isn't supposed to be the backup anymore.

One set of tapes is stored on board in case something stupid happens with the other set during transport to a literal mountain for storage. There it is re-read and checksummed, confirming that the other set of tapes can be rewritten with the next dataset. (Yes, every tape cartridge is written to twice).
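That read-back step is basically just hashing both copies and refusing to move on until they match. A minimal sketch of the idea -- the paths and file names are made up, and this obviously isn't their actual tape tooling:

```python
import hashlib


def sha256_of(path: str, block_size: int = 1 << 20) -> str:
    """Stream a file (or tape read-back copy) and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(block_size), b""):
            h.update(block)
    return h.hexdigest()


# Hypothetical paths: the live copy on disk and the data read back from each tape set.
live = sha256_of("/data/survey_batch_042.segy")
tape_a = sha256_of("/mnt/tape_readback/set_a/survey_batch_042.segy")
tape_b = sha256_of("/mnt/tape_readback/set_b/survey_batch_042.segy")

# Only once both tape copies match the live copy is the batch considered safe.
assert live == tape_a == tape_b, "checksum mismatch: do NOT release the live copy"
print("both tape sets verified; the live copy is no longer the only good copy")
```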


We have AI built into some tools, I believe, but I have never been told I had to use them. The truth is they don't work all the time for every situation, and the client is more worried about user data accidentally getting scooped up. They spend time warning us never to enter any user's information anywhere; even so much as noting that a user said they have a limitation that explains why we performed a task in a non-standard fashion is completely not happening.

So if someone said, "I am vision impaired," someone reading our notes would probably be wondering: why the f didn't they just do a, b, c, it would have been much easier? But they are worried that if those notes get integrated into something the AI gobbles up in the future, they could get sued if that user information somehow gets linked back to them, as that could be considered medical data, I guess.

The funny part is, if an AI does use that data for learning now, it may start trying to instruct or perform tasks based on highly inefficient solutions designed to assist a specific disability.

[–] rimu@piefed.social 4 points 3 days ago

I am employed by a tiny software dev shop that develops a few apps used in education. No AI at all, unless I proactively choose to and pay for it out of my own pocket.

[–] Unleaded8163@fedia.io 4 points 3 days ago (1 children)

The company I work for builds a product that uses AI extensively. The product would not be possible without AI, like the one main thing the product does is only possible because of AI. That said, AI use for coding is quite limited. We talk about it, some people do develop with AI, but there is no push for it. I feel like building a product on it has made developers acutely aware of just how flakey and unreliable AI is.

[–] tiredofsametab@fedia.io 3 points 3 days ago

My company uses copilot for code reviews. They encourage at least trying a number of other tools but do not require it. Some of our product does use LLMs for various things, though I don't personally work on those.

I do worry about the environmental impacts and ethical concerns around training data (especially pirated data used with neither consent nor compensation) so I don't use anything personally (aside from where some company has shoved it in somewhere).

I think that local models trained ethically can have a number of uses such as classification, data cleanup, and perhaps even checking code for security issues and exploits (I'm not sure if local models can do that yet or well).

[–] yuliyan@nahe.social 4 points 3 days ago

@pageflight Small design company. We experiment with LLMs in different areas, but so far there are marginal improvements and very few work-safe use cases. Totally not up to the hype.
