Unpopular Opinion
Welcome to the Unpopular Opinion community!
How voting works:
Vote the opposite of the norm.
If you agree that the opinion is unpopular, give it an arrow up. If it's something that's widely accepted, give it an arrow down.
Guidelines:
Tag your post, if possible (not required)
- If your post is a "General" unpopular opinion, start the subject with [GENERAL].
- If it is a Lemmy-specific unpopular opinion, start it with [LEMMY].
Rules:
1. NO POLITICS
Politics is everywhere. Let's make this about [GENERAL]- and [LEMMY]-specific topics, and keep politics out of it.
2. Be civil.
Disagreements happen, but that doesn’t provide the right to personally attack others. No racism/sexism/bigotry. Please also refrain from gatekeeping others' opinions.
3. No bots, spam or self-promotion.
Only approved bots, which follow the guidelines for bots set by the instance, are allowed.
4. Shitposts and memes are allowed but...
Only until they prove to be a problem. They can and will be removed at moderator discretion.
5. No trolling.
This shouldn't need an explanation. If your post or comment is made just to get a rise out of people, with no real value, it will be removed. If you do this too often, you will get a vacation to touch grass, away from this community, for 1 or more days. Repeat offenses will result in a perma-ban.
6. Defend your opinion
This is a bit of a mix of rules 4 and 5 to help foster higher quality posts. You are expected to defend your unpopular opinion in the post body. We don't expect a whole manifesto (please, no manifestos), but you should at least provide some details as to why you hold the position you do.
Instance-wide rules always apply. https://legal.lemmy.world/tos/
You'll get downvoted to hell and so will I, but I'll share my personal observations from working at a large IT company in the AI space.
Everybody at my company and our competitors is automating the shit out of everything we can. In some cases it's stuff we could have automated with regular cloud automation tooling; there just wasn't the organizational focus. But in ~75% of cases it's automating things that used to require an engineer doing some Brain Work.
Simple build breaks or bug fixes now get auto-fixed and reviewed later. Not at a 100% success rate, but it started at like 15%, then 25%, and ...
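To make the shape of that concrete, here's a minimal "auto-fix now, review later" sketch. Everything in it is illustrative: the CI log helper, the model call, and the branch/label names are stubs, not our actual internal tooling.

```python
# Illustrative sketch only: the CI log helper, the model call, and the branch/label
# names are placeholders, not a real internal pipeline.
import subprocess

def fetch_failed_build_log() -> str:
    # Placeholder: pull the tail of the failing CI job's log from your CI system.
    raise NotImplementedError("wire this up to your CI system")

def ask_model_for_patch(build_log: str) -> str:
    # Placeholder: ask whatever approved model your org uses for a unified diff.
    raise NotImplementedError("wire this up to your model provider")

def open_review_pr(patch: str) -> None:
    # Apply the suggested patch on a branch and open a PR flagged for human review.
    subprocess.run(["git", "checkout", "-b", "auto-fix/build-break"], check=True)
    subprocess.run(["git", "apply", "-"], input=patch, text=True, check=True)
    subprocess.run(["git", "add", "-A"], check=True)
    subprocess.run(["git", "commit", "-m", "auto-fix: build break (needs review)"], check=True)
    subprocess.run(["git", "push", "-u", "origin", "auto-fix/build-break"], check=True)
    subprocess.run(["gh", "pr", "create", "--fill", "--label", "auto-fix"], check=True)

if __name__ == "__main__":
    open_review_pr(ask_model_for_patch(fetch_failed_build_log()))
```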
Whoops, some problem in the automation scripts, and the only engineer on call right now is a junior who doesn't know Groovy syntax? No problem, not knowing the language isn't a blocker anymore. The engineer just needs to tweak the AI's suggestions.
Code reviews? Well, the AI already caught a lot of the common stuff from our org standards before the PR was submitted, so engineers are focusing on the tricky issues, not the common, easy-to-find ones.
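Again, just to illustrate the pattern (the standards-doc path and the model call are made up, not what we actually run): a pre-submit check that feeds the staged diff plus the org standards doc to a model and surfaces the boring findings before a human ever sees the PR.

```python
# Illustrative pre-submit check: the standards doc path and the model call are
# placeholders. The point is just "cheap AI pass first, humans review what's left".
import subprocess
import sys

def staged_diff() -> str:
    # Diff of what's about to go into the PR.
    return subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True, check=True
    ).stdout

def review_with_model(diff: str, standards: str) -> list[str]:
    # Placeholder: send the diff and standards to whatever model your org allows
    # and return a list of findings ("missing null check", "naming convention", ...).
    raise NotImplementedError("wire this up to your model provider")

if __name__ == "__main__":
    standards = open("docs/coding-standards.md").read()  # illustrative path
    findings = review_with_model(staged_diff(), standards)
    for finding in findings:
        print(f"AI pre-review: {finding}")
    sys.exit(1 if findings else 0)
```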
Management wants quantifiable numbers. Sometimes that's easy ("X% of bugs fixed automatically, saving ~Y person-hours"); sometimes, like with code reviews, it's a quality thing that will only show up over time.
But we're all scrambling like fuck, knowing full well that
a) everything is up for change right now and nobody knows where this is going
b) we coders are like the horseshoe makers; we'd better figure out how the fuck to get in front of this
c) just like the Internet -- the companies that Figure It Out will be so much more efficient that their competitors will Just Die
I can only speak for large corporate IT. But AFAICT, it's exactly like the Internet -- just even more disruptive.
To quote Jordan Peele: Stay woke, bitches!
I'm just amazed whenever I hear people say things like this, as I can't get any model to spit out working code most of the time. And even when I can, it's inconsistent and/or questionable-quality code.
Is it because most of your work is small iterations on an existing code base? Are you only working with the most popular tools that are better supported by models?
- Llama 4 sucked, but with scaffolding it could solve some common problems.
- o1/o3 were way better, with less gaslighting.
- Grok 4 kicked it up a notch; more like a pro coder.
- GPT-5 and Claude can solve real problems and implement simple features.
A lot depends not just on the codebase but on context, aka prompt engineering. Does the AI have access to the relevant design docs? Interface definitions? A clearly written, well-formed bug report? ... but not so much context that the model is overwhelmed and stops working well.
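Roughly the idea, as an illustrative sketch: pack the bug report plus the most relevant docs into the prompt and stop before the context gets so big it hurts. The token budget, file paths, and the assumption that the docs are already ranked by relevance are all made up for the example.

```python
# Illustrative context-packing sketch; budget, paths, and ranking are placeholders.
from pathlib import Path

TOKEN_BUDGET = 8_000  # made-up cap so the prompt doesn't drown the model

def rough_tokens(text: str) -> int:
    # Crude approximation: ~4 characters per token.
    return len(text) // 4

def build_prompt(bug_report: str, ranked_docs: list[Path]) -> str:
    # Pack the bug report first, then docs in relevance order until the budget runs out.
    parts = [f"## Bug report\n{bug_report}"]
    used = rough_tokens(bug_report)
    for doc in ranked_docs:
        text = doc.read_text()
        if used + rough_tokens(text) > TOKEN_BUDGET:
            break  # too much context can hurt as much as too little
        parts.append(f"## {doc.name}\n{text}")
        used += rough_tokens(text)
    return "\n\n".join(parts)
```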
Okay, that's more or less what I was expecting. A lot of my work is on smaller problems with more open-ended solutions, and in those scenarios I find the AI only really helps with boilerplate stuff. Most of the packages I work with, it only ever has a fleeting understanding of, or it mixes up versioning so badly that it's really hard to trust it.
There is only one thing for certain: the people who hold the purses dictate the policies.
I sympathize with the IT workers who feel like they're engineering their replacements. Eventually, only a fraction of those jobs will survive.
I believe hardware and market limitations will curb AI growth in the near future. Hopefully the dust will start to settle, and the real people who need to feed their families will find a way through. I think one way or another, there will be a serious need for social safety net programs to offset the IT labor surplus, which, hopefully, could create a (Socialist) Red Wave.
Partly true. They are engaged in a dance with the technologists and researchers as we collectively figure out how exactly this is all going to work.
I know some who feel that way. But anecdotally, most of the people I know feel like we're racing in front of technology and history more than against plutocracy.