this post was submitted on 24 Apr 2026
25 points (100.0% liked)
Technology
1420 readers
10 users here now
A tech news sub for communists
founded 3 years ago
Unfortunately, DeepSeek V4 still isn't a full frontier model that can beat OpenAI's or Anthropic's latest, so there's still room to improve.
I'm not really noticing much difference from Claude for coding so far. And I'd argue Claude 4.7 was actually a regression in a lot of ways.
Fair, I've heard similar complaints about GPT 5.5. I hope DeepSeek reinforcement-trains V4 a bit harder and, in two to three months, comes out with an earth-shattering V4.1.
The quality of the training is really what it comes down to. I saw one approach that was actually kind of obvious in retrospect: a model was trained on the actual git history instead of repository snapshots, which taught it how code evolves over time. I think these kinds of tricks will add a lot of polish and make for really competent coding models.
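The git-history idea is easy to sketch. Here's a rough, hypothetical version of the data-extraction step: walk a repo's commits oldest-first and yield (commit message, diff) pairs as raw text. This is just an illustration of the idea, not how any lab actually built their pipeline; a real pipeline would also filter merge commits, cap diff sizes, and tokenize.

```python
import subprocess

def commit_training_pairs(repo_path, max_commits=100):
    """Yield (commit_message, diff) pairs from a repo's history, oldest
    first, as raw text for a code-evolution training set (sketch only)."""
    # List commit hashes, oldest first
    hashes = subprocess.run(
        ["git", "-C", repo_path, "log", "--reverse", "--format=%H"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    for h in hashes[:max_commits]:
        # Full commit message (subject + body)
        msg = subprocess.run(
            ["git", "-C", repo_path, "show", "-s", "--format=%B", h],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        # Patch showing how the code changed in this commit
        diff = subprocess.run(
            ["git", "-C", repo_path, "show", "--format=", h],
            capture_output=True, text=True, check=True,
        ).stdout
        yield msg, diff
```

The point is that each sample carries the *edit* plus the intent behind it (the commit message), rather than a frozen snapshot of the final code.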