No, you define what you want in project planning and briefs; coding is the interpretation of that definition. It is quickly becoming far easier, and definitely faster, for a machine to interpret what we define than for us to translate our definition into something a machine can interpret.
And 640K oughta be enough for everyone.
You switch to LLMs at your convenience, but you tripped over the term "AI". We've been over this a few times already, and I hate repeating myself.
We can boil the issue down to a very simple question: do you think AI will, in time, play a significant role in how we generate code?
If the answer is no, then I'll see you in ten years. If the answer is yes, then you should admit that GitHub choosing that term is not out of place, and it is only self-evident that they use what is currently the best approach to producing code and assistance while putting it under the "AI" banner, both for their long-term vision and because they want and need to ride the hype train.
All the arguments I hear are largely pedantry and contrarianism. You see this every time something new and exciting pops up: people huff and puff about small issues while losing track of the larger picture. The way you choose your words makes it obvious that this is just another case of that. No nuance, just "this is trash", as if completely oblivious to the fact that in the time it took you to type those three words, a million people received an answer from an LLM that would otherwise have taken them five minutes to Google.
But you have no idea whether the generated code is of such low quality that it offsets the time saved in producing it. That is just another assertion. For someone who is so adamant about the precision of code, you sure do throw around a lot of unfounded beliefs.
Take this gem, for instance. Not only does it build on the unfounded premise that AI generates bad code, it also assumes a human coder does not. On top of that, how much bad code? How many more times? Shouldn't there be some quantification in all this rhetoric?