this post was submitted on 09 Feb 2026
12 points (100.0% liked)

Artificial Intelligence

top 2 comments
[–] endlesseden@pyfedi.deep-rose.org 1 points 9 hours ago (1 children)

While this article is spot on, it really glosses over some very important aspects.

1. Perspective: Most companies don't see employees as "assets"; they see them as "tools". When a tool wears out, you replace it, right? When AI became the norm in most technology and engineering industries, companies saw it as two things: another "tool" (one that can potentially replace less efficient "tools", i.e. older or slower-producing employees) and an efficiency multiplier for the existing "tools" (employees).

Employee welfare was never a consideration, because from their perspective AI is a tool the way Excel is a tool. The difference is that it increases multitasking and decreases employee downtime, the two things this article points to as negatives.
Employers don't value employees; we know this, and the job market is evidence of it. So stating the negative effects on employee mental and physical health won't change anything. It just gives middle management another metric to watch so they know when to start training replacements.

From the employee's perspective, it's a colleague that doesn't have its own agenda. This is massive, because the biggest struggle in most companies today is corporate structure. It's all about getting ahead where you can, or you won't see a larger paycheck. You know, the reason employees work there, since loyalty disappeared in the 1970s alongside pay grades.

Employees become addicted to AI because they perceive it as assistance, as leverage to do more... but much like any other tool, without restrictions and regulation it gets misused. Employers encourage employees to use it to take on more work through specific language and passive expectations; deadlines and bonuses are the perfect tools to encourage overwork. With AI making employees *feel* like they can achieve more, they over-commit immediately and without consideration, because the bar going up makes them assume they'll gain their employer's approval... It's an intentional catch-22, and something we have been aware of since the 1980s.

2. Target: Companies providing AI tools exist solely on your dependence on them. They are predatory, and they use language that targets inexperienced CTOs (and similar positions) to get them to invest in subscriptions to their services. This is intentional.

Microsoft proved that by getting a large corporation dependent on your company, you can build a pyramid of dependence. In use, AI is more than a tool; it's a virtual identity that can do more than you're capable of.

To use a movie quote: "... were so preoccupied with whether or not they could, they didn't stop to think if they should" (Jurassic Park, 1993). Because AI removes the barrier to doing tasks a user doesn't know how to do, the user takes no responsibility for learning about them. They become dependent on the AI to do and maintain the task, and while it /can/ increase the likelihood of cross-department communication, that communication happens out of necessity, to keep something like a shared codebase stable and functional, rather than in the interest of the employee (more on this in point 3). This entire problem is the goal.

At some point, the growing share of contributions made by AI builds a dependence on it, and anyone who uses multiple models knows that different models think differently. You become effectively stuck with a single AI model (or its replacements) permanently, and like that one employee who decides to go rogue, the "employer" of this outsourced tool (the AI vendor) holds the keys and makes the demands.

3. Disruption: AI disrupts workflows; it doesn't create them. Because it gives employees a feeling of empowerment, they will do things that go beyond their job titles, time and time and time again. The reason is simple: AI doesn't understand. It's just doing what it was trained to do, which is complete goals. Its goal is whatever you typed in, and it's desperate to complete it.

For employers this is a golden egg. When upper management hands down a grand overarching plan (as they always do) with the intent of driving out competition, it's 90% fluff (not provable, but technically possible) and 10% function (things with real economic value). In the past, this would have led to middle management and professionals saying "this scope is too large for the deadline", and 60-80% of the fluff being shaved off. Now, however, middle management has AI telling them how to break down tasks (and most middle managers are not trained at all in the industries their employees work in; they manage employee productivity rather than guide the work, so there is no thought about whether an employee /can/, only that an employee /should/ be able to). This leaves employees no choice but to study how to do what they've been tasked with, or to lean on others for help... which forces them into AI.

All of this ultimately leads to marketing staff using AI to write backend APIs (see point 4) for some useless metric like "how often people with blond hair look at (X) marketing material", because a board member thought it might be relevant. None of this leads to more stability, more sales, or real functionality. More importantly, it creates yet another endpoint that a different department has to manage, one that department never implemented and had avoided in the past because the data is irrelevant. Another set of responsibilities, and someone for middle management to blame instead of taking the blame themselves like they did in the past.
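To make that concrete, here's a purely hypothetical sketch of the kind of endpoint I mean. None of this is from the article; the framework (Flask), route, and field names are all invented for illustration:

```python
# Hypothetical vanity-metric endpoint, invented for illustration only.
from collections import Counter

from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store, standing in for yet another table someone now has to maintain.
impressions = Counter()


@app.post("/api/v1/metrics/hair-color-impressions")
def record_impression():
    """Record that a viewer with a given hair color saw a piece of marketing material."""
    data = request.get_json(force=True)
    key = (data.get("hair_color", "unknown"), data.get("material_id", "unknown"))
    impressions[key] += 1
    return jsonify({"status": "recorded"}), 201


@app.get("/api/v1/metrics/hair-color-impressions")
def report_impressions():
    """Report how often people with a given hair color (default blond) viewed each material."""
    color = request.args.get("hair_color", "blond")
    counts = {material: n for (c, material), n in impressions.items() if c == color}
    return jsonify({"hair_color": color, "views_by_material": counts})


if __name__ == "__main__":
    app.run()
```

Twenty-odd lines an AI will happily generate in seconds, and now a backend team that never asked for it owns the uptime, storage, and security forever, for a number nobody will ever act on.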

4. Responsibility: This is the big one. AI has zero responsibility, but it makes a perfect moving target for companies to blame. This is why it's so heavily deployed. When a company starts using AI, it's not for second opinions or for access to its training data like a super-fast researcher. It's to give it tasks a human would normally do, so that someone else can take the responsibility for them.
This dependence on AI for performing all these tasks is a perfect example of "operational responsibility": the person who uses the AI is ultimately responsible for its output (much like with any other tool, intent is more important than the result). This means a layer of insulation for the company.

If an intern uses AI to write the code for a product because the engineers are six months behind schedule, and the AI produces code copied verbatim from its training data, ignoring the license, the intern is blamed, not the company. In the past, the company would have been blamed, and it would have been pointed out that they were "rushing an unfinished product to market", or that they "didn't employ enough staff to perform essential tasks for a product's launch". Now the statement is "AI wrote it, and made a mistake"; the intern is quietly fired and they will /maybe/ pull the code. No effort goes into its replacement, or into any further thought, because most of the time, unless someone catches it (usually months to years later), it doesn't matter... product lifecycles are months now, not years...
[There are already examples of this with Rockchip and SpacemiT. This is an ongoing issue that's visible today!]


All of this wraps around to a conclusion that this article skated over: AI as a tool is overused and under-regulated, and employers know it (and will do nothing to stop it).
