this post was submitted on 12 Dec 2025
175 points (99.4% liked)

Asklemmy


I have a boss who tells us weekly that everything we do should start with AI. Researching? Ask ChatGPT first. Writing an email or a document? Get ChatGPT to do it.

They send me documents they "put together" that are clearly ChatGPT-generated, with no shame. They tell us that if we aren't doing these things, our careers will be dead. And their boss is just as bought into AI, and so on.

I feel like I am living in a nightmare.

top 50 comments
[–] smallredearth@lemmy.zip 1 points 4 hours ago

My workplace is very cautious with AI. It's very conscious of the risks of putting private client data into an AI model that will transfer it, use it, or spit it out somewhere else, and of the legal consequences of mistakes caused by AI models making things up.

AI is useful in my work (we use copilot) but it’s limited to the most basic stuff. “Here’s a document, give me three key points of about 20 words each” works pretty well. It’s also ok at rewriting things in a different style but it’s a bit hit and miss.

It's crazy to me that people will use these things as some all-knowing oracle when they're mainly just systems that imitate human language, or jumped-up search engines. I think it goes to show how convincing things that imitate human communication can be, and how strong the herd mentality is to adopt new technology out of fear of being left behind, often among people who don't know anything about the technology.

[–] Ephera@lemmy.ml 4 points 23 hours ago

I find it annoying, because the hype means that if you're not building a solution that involves AI in some way, you practically can't get funding. Many vital projects are being cancelled due to a lack of funding and tons of bullshit projects get spun up, where they just slap AI onto a problem for which the current generation of AI is entirely ill-suited.

Basically, if you don't care for building useful stuff, if you're an opportunistic scammer, then the hype is fucking excellent. If you do care, then prepare for pain.

[–] Mrkawfee@lemmy.world 6 points 1 day ago* (last edited 1 day ago)

We have Copilot at my corporate job and I use it every day. Summarising email chains, reviewing documents and research. It's a huge time saver.

It's good, not perfect. It makes mistakes, and because of hallucination risks I have to double-check sources. I don't see it taking my job, as it's more like having an assistant whose output you have to sense-check. It's made me much more productive.

[–] BanaramaClamcrotch@lemmy.zip 2 points 1 day ago

Idk, my boss sometimes uses ChatGPT to generate goofy (and appropriate) memes about the workplace... That sort of seems to be about it.

[–] Brutticus@midwest.social 12 points 1 day ago (1 children)

I work in social work; I would say about 60 percent of what I do is paperwork. My agency has told us not to use LLMs, as that would be a massive HIPAA nightmare. That being said, we use "secure" corporate email through the Microsoft 365 office suite, which is Copilot-enabled. That gets us TL;DRs at the top before you even look at the email, predictive text... and not much else.

Would I love a bot that could spit out a Plan based on my notes or specifications? Absolutely. Do I trust them not to make shit up? Absolutely not.

[–] Apytele@sh.itjust.works 5 points 1 day ago (1 children)

Apparently a hospital in my network is trialing a tool that generates assessment flowsheets from an audio recording of a nurse talking aloud while doing a head-to-toe assessment. So if they say you've got a little swelling in your legs, it'll mark down bilateral edema under the peripheral vascular section. You have to review it before submitting, but it seems nice.

[–] Brutticus@midwest.social 2 points 1 day ago

You're right, that does seem very nice.

[–] owsei@programming.dev 4 points 1 day ago

A higher-up really likes AI for simple proofs of concept, but at times they get so big they're unusable. With the people on my team, however, bad code is synonymous with AI usage or stupidity (same thing).

[–] djsoren19@lemmy.blahaj.zone 6 points 1 day ago

Everyone in my office hates it, including my director, who occasionally goes on a rant because Microsoft and Adobe keep pushing their AI on us when we don't want it.

Sometimes I'm very thankful I work for a nonprofit. They're still p shitty to us employees, but our focus is first and foremost on doing the job right, something AI has no chance at.

[–] yogthos@lemmy.ml 3 points 1 day ago (1 children)

We had a discussion about AI at work. Our consensus was that it doesn't matter how you want to do your work. What matters is the result, not the process. Are you writing clean code and finishing tasks on time? That's the metric. How you get there is up to you.

[–] mavu@discuss.tchncs.de 3 points 1 day ago (1 children)

While leaving these decisions to individuals sounds like a good idea, in the long term it is quite dumb.

  • If you let an LLM solve your software dev problems, you learn nothing. You don't get better at handling the problem, you don't get faster, and you don't build the experience of spotting the same problem again and having a solution ready.

  • You don't train junior devs this way, and in 20 years there will be (or would be, without the bubble popping) a massive need for skilled software developers (and other specialists in other fields; better pray that medical doctors handle their profession differently...).

  • Do you really enjoy tweaking prompts, dealing with "lying" LLMs, and the occasional deleted hard drive? Is that really what you want to do as a job?

  • (Bonus point) Would your company be okay with someone paying a remote worker to do their tasks for a fraction of the salary, and then doing nothing themselves? I doubt it. So apparently it does matter how the work gets done.

[–] yogthos@lemmy.ml 2 points 1 day ago* (last edited 1 day ago) (2 children)

Old enough to remember how people made these same arguments about writing in anything but assembly, about using garbage collection, and so on. Technology moves on, and every time there's a new way to do things, people who invested time into doing things the old way end up being upset. You're just engaging in moral panic here.

It's also very clear that you haven't used these tools yourself, and you're just making up a straw man workflow that is divorced from reality.

Meanwhile, your bonus point has nothing to do with technology itself. You're complaining about how capitalism works.

[–] mavu@discuss.tchncs.de -1 points 1 day ago (1 children)

> Old enough to remember how people made these same arguments about writing in anything but assembly, about using garbage collection, and so on. Technology moves on, and every time there's a new way to do things, people who invested time into doing things the old way end up being upset. You're just engaging in moral panic here.

If this is an example of your level of reading comprehension, then I guess it's no surprise that you find LLMs work well for you. Your answer addresses none of the points I made, and just tries to do the Jedi-mind-trick handwave, which unfortunately doesn't work in real life.

[–] yogthos@lemmy.ml 2 points 1 day ago (1 children)

Correct, my answer does not address obvious straw man points of scenarios that don't exist in the real world.

[–] mavu@discuss.tchncs.de 1 points 1 day ago (1 children)

> that don't exist in the real world.

A bit like your ability to reason and provide arguments. But I guess that happens when you have used LLMs for too long.

[–] yogthos@lemmy.ml 2 points 23 hours ago (1 children)

I guess using personal attacks like a child is all you can do when you don't have any actual point to make.

[–] mavu@discuss.tchncs.de 1 points 14 hours ago (1 children)

I'm sorry?

You have the gall to tell that to me, when the first thing you did was falsely accuse me of using straw man arguments and making things up.

And then you come here, after providing zero actual counterpoints, and tell me I am acting like a child?

Incredible.

[–] yogthos@lemmy.ml 1 points 10 hours ago

You haven't made any arguments that warrant counterpoints. Go do your trolling somewhere else.

[–] zbyte64@awful.systems 1 points 1 day ago (9 children)

All the technologies you listed behave deterministically, or at least predictably enough that we generally don't have to worry about surprises from that abstraction layer. Technology does not just "move on"; practitioners need to actually find it practical beyond their next project that satisfies the shareholders.

[–] theparadox@lemmy.world 11 points 2 days ago

I am very, very concerned at how widely it is used by my superiors.

We have an AI committee. When ChatGPT went down, I overheard people freaking out about it. When our paid subscription had a glitch, IT sent out emails very quickly to let them know they were working to resolve it ASAP.

It's a bit upsetting because many of them are using it to basically automate their job (writing reports and emails). I do a lot of work to ensure that our data, entered manually by a lot of people, is accurate... and they just toss it into an LLM to convert it into an email... and they make like 30k more than me.

[–] Bunbury@feddit.nl 6 points 1 day ago

I'm in an environment with various levels of sensitive data, including very sensitive data. Think GDPR-type stuff you really don't want to accidentally leak.

One day, when we started up our company laptops, Copilot was just installed and auto-launched on startup. Nobody warned us. No indication about how to use it or not use it. That lasted about 3 months. Eventually they limited some ways of using it and gave a little bit of guidance on not putting the most sensitive data in there. Then they enabled Copilot in most of the apps that we use to actually process the sensitive data. It's in everything. We are actively encouraged to learn more about it and use it.

I overheard a colleague recently trying to get it to create a whole PowerPoint presentation. From what I heard, the results were less than ideal.

The scary thing is that I'm in a notoriously risk-averse industry. Yet they do this. It's… a choice.

[–] TheImpressiveX@lemmy.today 82 points 3 days ago (4 children)

I am reminded of this article.

The future of web development is AI. Get on or get left behind.

5/5/2025

Editor’s Note: previous titles for this article have been added here for posterity.

~~The future of web development is blockchain. Get on or get left behind.~~

~~The future of web development is CSS-in-JS. Get on or get left behind.~~

~~The future of web development is Progressive Web Apps. Get on or get left behind.~~

~~The future of web development is Silverlight. Get on or get left behind.~~

~~The future of web development is XHTML. Get on or get left behind.~~

~~The future of web development is Flash. Get on or get left behind.~~

~~The future of web development is ActiveX. Get on or get left behind.~~

~~The future of web development is Java applets. Get on or get left behind.~~

If you aren’t using this technology, then you are shooting yourself in the foot. There is no future where this technology is not dominant and relevant. If you are not using this, you will be unemployable. This technology solves every development problem we have had. I can teach you how with my $5000 course.

[–] HurlingDurling@lemmy.world 8 points 2 days ago (1 children)

Our company is forcing us to do everything with AI. Hell, they developed a "tool" that uses AI to generate simple apps our customers can use for their enterprise applications, and we are forced to generate 2 a week minimum to "experiment" with the constant new features being added by the dev teams behind it (but we're basically training it).

The director uses AI spy bots to tell him who read and who didn't read his emails.

Can't even commit code to our corporate GitHub unless Copilot gives it the thumbs up, and it's constantly nitpicking things like how we wrote our comments and asking us to replace pronouns or word them a different way, which I always reply to with "no" because the code is what matters here.

We are told to be positive about AI and to push the use of AI into every facet of our work lives, but I just feel my career as a developer ending because of AI. We're a bespoke software company, and our customers are starting to ask if they should use AI to build their software instead of paying us, and then I have to spend hours explaining to them why it would be a disaster due to the sheer complexity of what they want to build.

Most if not all executives I talk to are idiots who don't understand web development; shit, some don't even understand the basics of technology but think they can design an app.

After being a senior dev and writing code for 15 years I'm starting to look at other careers to switch to... Maybe becoming an AI evangelist? I hear companies are desperately looking for them... Lol, what a fucking disaster this shit is becoming.

That work environment sounds like hell. Literally. If I woke up one day and had to work like this, I would think I never woke up at all and Lucifer finally started torturing me.

AI is ruining my ability to think and sucks the fun out of writing code. I am so happy our boss doesn't force us to use it.

[–] Crotaro@beehaw.org 2 points 1 day ago* (last edited 1 day ago)

Disclaimer: I only started working at this company about three weeks ago, so this info may not be as accurate as I currently think it is.

I work in quality management and recently asked my boss what the current stance on AI is, since he mentioned quite early that he and his colleagues sometimes use ChatGPT and Copilot in conjunction to write up some text for process descriptions or info pages. They use it for research tasks, or, for example, to summarize large documents like government regulations, and they very often use it to rephrase texts when they can't think of a good way to word something. From his explanation, the company consensus seems to be that everyone has access to Copilot via our computers, and if someone has, for example, a Kagi or Gemini or whatever subscription, we are absolutely allowed and encouraged to utilize it to its full potential.

The only rules seem to be to never blindly trust the AI output and to not feed it sensitive information about the company (or our suppliers and customers).

[–] rayquetzalcoatl@lemmy.world 8 points 2 days ago* (last edited 2 days ago) (1 children)

The head of my agency is a gullible rube who is terrified of being "left behind", and the head of my department is a grown-up with a family and a career who spends his days off sending AI videos and memes into the work chat.

I've been called into meetings and told I have to be positive about AI. I've been told to stop coding and generate (very important) things with AI.

It's disheartening. My career is over, because I have no interest in generating mountains of no-intention code rather than putting in the effort to build reliable, good, useful things for our clients. AI dorks can't fathom human effort and hard work being important.

I'm working to pay off my debts, and then I'm done. I strongly want to get a job that allows me to be offline.

[–] HurlingDurling@lemmy.world 4 points 2 days ago

It almost sounds like we're both in the same company.

[–] pebbles@sh.itjust.works 45 points 3 days ago (1 children)

My company is doing small trial runs and trying to get feedback on whether the stuff is helpful. They are obviously pushing things because they are hopeful, but most people report that AI is helpful about 45% of the time. I'm sorry your leadership just dove in head first. That sounds like such a pain.

[–] rain_lover@lemmy.ml 26 points 3 days ago (1 children)

Sounds like your company is run by people who are a bit more sensible and not driven by hype and FOMO.

[–] Foofighter@discuss.tchncs.de 3 points 1 day ago

They just hopped onto the bandwagon pushing for Copilot and SharePoint. Just in time, as some states are switching to open source.

My company added an AI chatbot to our web site, but beyond that we generally are anti-AI.

[–] paequ2@lemmy.today 9 points 2 days ago* (last edited 2 days ago) (1 children)

Uff. That sounds like a nightmare. I'm glad my job doesn't force us to use AI. It's encouraged, but my managers also say "Use whatever makes you the most productive." AI makes me slower because I'm experienced and already know what I want and how I want it. So instead of fighting with the AI or fact-checking it, I can just do shit right the first time.

For tasks that I don't have experience in, a web search is just as fast. Search, click the first link. Or sure, I'll click and read a few pages, but that's not wasted time. That's called learning.

I have a friend who works at a company where they have AI usage quotas that affect their performance review. I would fucking quit that job immediately. Not all jobs are this crazy.

AI tends to generate tech debt. I have some coworkers who submit nasty, tech-debt-ridden, AI-slop merge requests for review. My policy is: if you're not gonna take the time to use your brain and write something, then I'm not gonna waste my time reviewing your slop. In those cases, I use AI to "review" the code and decide whether to approve or not. IDGAF.

[–] lichtmetzger@discuss.tchncs.de 4 points 2 days ago (3 children)

I work for a small advertising agency as a web developer. I'd say it's mixed. The writing team is pissed about AI because of the SEO-optimized slop garbage that is ruining enjoyable articles on the internet. The video team enjoys it, because it's really easy to generate good (enough) looking VFX with it. I use it rarely, mostly for mundane tasks and boilerplate code. I enjoy using my actual brain to solve coding problems.

Customers don't have a fucking clue, of course. If we told them that they need AI for some stupid reason, they would probably believe us.

The boss is letting us decide and not forcing anything upon us. If we believe our work is done better with it, we can go for it, but we don't have to. Good boss.

[–] Godnroc@lemmy.world 9 points 2 days ago

So far it's a glorified search engine, and it's mildly competent at that. It just speeds up collecting the information I would have gathered anyway, and then I can get to sorting the useful from the useless faster.

That said, I've seen emails from people that were written with AI, and it instantly makes me less likely to take them seriously. Just tell me what the end goal is and we can discuss how to best get there, instead of regurgitating some slop that wouldn't get us there in the first place!

[–] morgan_423@lemmy.world 3 points 2 days ago

I use Excel at work, not in a traditional accounting sense; my company uses it as an interface to one of the systems I frequently work with.

Rather than tediously searching the main Excel sheets that get fed into that system for all of the data fields I have to fill in, I made separate Excel tools that consolidate all of that data, then use macros to put the data into the correct fields on the main sheets for me.

Occasionally I'll have to add new functionality to that sheet, so I'll ask AI to write the macro code that does what I need it to do.

Saves me from having to learn obscure VBA programming to perform a function that I do during .0001% of my work time, but that's about the extent of it. For now.
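For a sense of scale, the kind of macro involved is roughly this minimal sketch (the "Consolidated"/"MainSheet" names and the named-range layout are invented for illustration, not taken from the actual workbook):

```vba
' Minimal sketch: copy consolidated values into named fields on the main sheet.
' Assumes a "Consolidated" sheet where column A holds the name of a named range
' on "MainSheet" and column B holds the value to write there. Purely illustrative.
Sub FillMainSheetFields()
    Dim src As Worksheet, dst As Worksheet
    Set src = ThisWorkbook.Worksheets("Consolidated")
    Set dst = ThisWorkbook.Worksheets("MainSheet")

    Dim lastRow As Long, r As Long
    lastRow = src.Cells(src.Rows.Count, "A").End(xlUp).Row

    For r = 2 To lastRow ' row 1 holds headers
        ' Column A names a field (named range) on the main sheet; column B holds its value.
        dst.Range(src.Cells(r, "A").Value).Value = src.Cells(r, "B").Value
    Next r
End Sub
```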

Of course most of what I do is white collar computer work, so I'm expecting that my current job likely has a two-year-or-less countdown on it before they decide to use AI to replace me.
