I mean, there's plenty of shit code written by humans, and many libraries are of very poor quality. This is especially true in languages like JS. The assertion that somebody applied diligence at every level just isn't true. Even when we do code reviews we often miss really basic things. The way we get around that is by building test harnesses and using tools like type systems and linters. We don't just trust that code works because a human wrote it and looked at it.
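To make that concrete, here's a minimal sketch (hypothetical `median` function, Node's built-in test runner) of a harness catching exactly the kind of basic bug a review can miss, no matter who wrote the implementation:

```typescript
// median.test.ts (hypothetical example; run with a TS-aware test runner)
import { test } from "node:test";
import assert from "node:assert/strict";

// A buggy implementation. Whether a human or an LLM wrote it is irrelevant;
// the harness catches the mistake either way.
function median(xs: number[]): number {
  const sorted = [...xs].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length / 2)]; // wrong for even-length input
}

test("median of an even-length list averages the middle pair", () => {
  assert.equal(median([1, 2, 3, 4]), 2.5); // fails: the buggy version returns 3
});
```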
Who wrote the code isn't really that important if you have a formal contract. You specify what the inputs and outputs are, and as long as the code meets the contract, you know what it's doing. That's basically been the whole promise of static typing. Whether a human or a model writes the code doesn't really matter.
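As a hypothetical illustration, the signature and stated invariants below are the contract; any body that satisfies them is interchangeable, regardless of author:

```typescript
// The contract: given an ascending sorted array, return the index of
// `target`, or -1 if absent. Any implementation meeting this is acceptable,
// whether a person or a model wrote the body.
function binarySearch(sorted: readonly number[], target: number): number {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = lo + ((hi - lo) >> 1);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}
```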
Also worth noting that people made these exact kinds of arguments when high-level programming languages first appeared. People claimed you had to write assembly by hand so you'd know what the code was doing, that you couldn't trust the compiler, and so on. Then the same arguments were made about GC: that you have to manage memory by hand, that GC is too unpredictable, and so on. We've been here many times before.
This isn't what I suggested at all. What I said was that the programmer would focus on the specification and on understanding what the code is doing semantically. The LLM handles the implementation details: the human focuses on what the code does while the LLM focuses on how it does it.
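Here's a rough sketch of that division of labour, assuming the fast-check property-testing library and hypothetical names. The human states the spec; any implementation that passes it is acceptable:

```typescript
import fc from "fast-check";

// The human writes the specification: what `dedupe` must do, semantically.
// Here: the output contains no duplicates, invents nothing, and loses nothing.
function specHolds(dedupe: (xs: number[]) => number[]): void {
  fc.assert(
    fc.property(fc.array(fc.integer()), (xs) => {
      const out = dedupe(xs);
      const seen = new Set(out);
      return (
        seen.size === out.length &&         // no duplicates remain
        out.every((x) => xs.includes(x)) && // nothing invented
        xs.every((x) => seen.has(x))        // nothing lost
      );
    }),
  );
}

// The LLM fills in the "how"; this one-liner passes, but so would any
// other implementation that satisfies specHolds.
const dedupe = (xs: number[]): number[] => [...new Set(xs)];
specHolds(dedupe);
```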
Again, you use the exact same tools and processes to evaluate software whether it's written by a human or a model. The reality is that people make mistakes all the time and write code that's as bad as anything an LLM produces. We've developed practices to evaluate code and catch problems. You're acting as if human-written code didn't already have all the same problems we deal with on a daily basis.
Yes, every programming abstraction comes with trade-offs. Yet it's pretty clear that most people prefer the trade-offs that let them write more declarative code. That said, approaches like genetic algorithms coupled with agents could actually automate a lot of the optimization we currently don't bother with because it's too tedious.
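To sketch what I mean (purely hypothetical: `agentMutate` and `benchmark` stand in for an LLM agent and a real timing harness), the loop is just mutate, measure, select:

```typescript
type Candidate = { source: string; score: number };

// Placeholder: in practice an LLM agent would propose a rewritten variant.
async function agentMutate(source: string): Promise<string> {
  return source;
}

// Placeholder: in practice, compile the candidate and time it on a fixed
// workload (lower score is better).
async function benchmark(source: string): Promise<number> {
  return source.length;
}

async function optimize(seed: string, generations: number, popSize: number): Promise<Candidate> {
  let population: Candidate[] = [{ source: seed, score: await benchmark(seed) }];
  for (let g = 0; g < generations; g++) {
    // Mutation: the agent proposes a variant of each survivor.
    const variants = await Promise.all(
      population.map(async (c) => {
        const source = await agentMutate(c.source);
        return { source, score: await benchmark(source) };
      }),
    );
    // Selection: keep the fittest (lowest-scoring) candidates.
    population = [...population, ...variants]
      .sort((a, b) => a.score - b.score)
      .slice(0, popSize);
  }
  return population[0];
}
```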
It's relevant because the key skill is being able to understand the problem and then knowing how to represent it formally. That's the skill you need whether you have agents fill in the blanks or you do it yourself. There's a reason you do a lot of math and algorithm work on paper in university (or at least I did back in my program). The focus was on understanding how algorithms work conceptually and writing pseudocode. The specific language used to implement them was never the focus.
What you're talking about is a specific set of skills degrading because they're becoming automated. This is no different from people losing skills like writing assembly by hand because the compiler can do it now.
There's always a moral panic every time new technology emerges that automates something that displaces a lot of skills people invested a lot of time into. And the sky never comes crashing down. We end up making some trade offs, we settle on practices that work, and the world moves on.
I don't see how that follows. You appear to be conflating the skill of learning the specific syntax of a language with the skill of designing algorithms and writing contracts. These are two separate things. A developer will simply do work that's more akin to what a mathematician or a physicist does.
LLMs don't allow the programmer to bypass every stage of the learning process, in either an academic or a professional capacity. That's simply a false statement. You cannot create quality software with the current generation of LLMs without understanding how to structure code, how algorithms work, and so on. These are absolutely necessary skills for using these tools effectively, and they will continue to be needed.
Again, this was said many times before, every time development got easier. What's really happening is that the barrier to entry is being lowered, and a lot more people can now produce code, many of whom would simply have been shut out of this field before.
I see no basis for this assertion myself, but I guess we'll just wait and see.
Nobody is suggesting trusting LLMs here. In fact, I've repeatedly pointed out that trust shouldn't be part of the equation for any kind of code, whether it's written by a human or a machine. We have proven techniques for verifying that code does what was intended, and that's how we write professional software.
Even if this technology stopped improving today, which there is no reason to expect it will, it is already a huge quality-of-life improvement for software development. There are plenty of legitimate real-world use cases for this tech already, and it's not going away.