this post was submitted on 01 May 2026
30 points (100.0% liked)

Technology

A tech news sub for communists
top 6 comments
[–] bennieandthez@lemmygrad.ml 7 points 11 hours ago (1 children)

It will never cease to amaze me that the first DeepSeek drop was just a side project lol

[–] yogthos@lemmygrad.ml 4 points 11 hours ago

The best part is that it's the result of China cracking down on quant trading. It really shows how much impact proper governance has on a country's overall development: by cracking down on a harmful economic pattern, they forced a shift of labour and resources towards actually productive purposes.

[–] chesmotorcycle@lemmygrad.ml 12 points 22 hours ago (1 children)

OpenAI is spending $500 billion on data centers. Google has entire campuses of supercomputers. Meta hired every genius on the planet. And then a hedge fund guy from Hangzhou with 200 kids fresh out of university just casually dropped a model that beats them all — and then open sourced it with the full recipe.

Let that sink in. Not “almost as good.” Not “competitive.” Beats them — on math olympiads, on coding, on long-context retrieval — while using a fraction of the compute. And then they uploaded it to Hugging Face for free. For. Free.

The model is DeepSeek V4, available here.

[–] PoY@lemmygrad.ml 6 points 12 hours ago (2 children)

I dunno where the claim that it beats all the other models comes from, when their own stats show it does worse across the board against the other big models. They are close, though.

[–] CriticalResist8@lemmygrad.ml 4 points 10 hours ago

I would have said the same thing as the author if I'd written that, counting "beating" as being vastly cheaper while delivering comparable results, and doing it under sanctions. DeepSeek makes a lot of sense for hobby projects because of the price, though I'm hearing about professional devs ditching Claude for V4-pro. Before DeepSeek there was no reasonable option for agentic coding at home; you were stuck debugging in the web interface.

Mind you, speaking of benchmarks, I have no idea what these things are actually supposed to represent lol. I found V4 good at recall and memory, but when talking to it (doing research, asking clarifying questions, etc., as opposed to just having it code), I found its overall output pretty diminished, like an old GPT 3.5 "you're so right, and here's why you are". You can gloss over it, but they had found a great mix by late 3.2 imo.

[–] chesmotorcycle@lemmygrad.ml 4 points 10 hours ago

I thought that claim wasn't quite right. Maybe the author was just cherry-picking certain stats or benchmarks.