I mean, maybe? But the amount of trust I put in a description from "GeekWire" written by "an investor at Madrona Venture Group and a former leader at Amazon Web Services" who uncritically declares that spicy autocomplete "achieved strong reasoning capabilities" is ... appropriately small.
I've previously discussed the concept of model collapse, and how feeding synthetic data (training data created by an AI, rather than a human) to an AI model can end up teaching it bad habits. But it seems that DeepSeek succeeded in training its models on synthetic data, specifically for subjects (to quote GeekWire's Jon Turow) "...like mathematics where correctness is unambiguous."
That sound you hear is me pressing F to doubt. Checking the correctness of mathematics written as prose interspersed with equations is, shall we say, not easy to automate.
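To be fair, the automatable part is much narrower than "mathematics": what a machine can actually check is a final answer against a known ground truth, not the correctness of the prose in between. A minimal sketch of that kind of verifiable reward, assuming the model is prompted to emit a \boxed{...} final answer (the function name and setup are mine, not DeepSeek's actual pipeline):

```python
import re

def verifiable_math_reward(model_output: str, ground_truth: str) -> float:
    r"""Toy reward: 1.0 iff the final \boxed{...} answer matches the
    known ground truth exactly, 0.0 otherwise. Note what this does
    NOT check: any of the reasoning prose leading up to the answer."""
    match = re.search(r"\\boxed\{([^}]*)\}", model_output)
    if match is None:
        return 0.0
    return 1.0 if match.group(1).strip() == ground_truth.strip() else 0.0

print(verifiable_math_reward(r"... and so the answer is \boxed{42}", "42"))  # 1.0
print(verifiable_math_reward("a beautiful but unverifiable proof", "42"))    # 0.0
```

Anything that survives a string comparison like that gets called "unambiguous correctness"; an actual proof written as prose interspersed with equations does not.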
Wait, the splinter group from the cult whose leader wants to bomb datacenters might be violent?
I mean, "downvotes are proof that the commies are out to get me" is an occasion not just to touch grass, but to faceplant into an open field of wildflowers.
Enjoy your trip to the egress.
yeah, DeepSeek LLMs are probably still an environmental disaster for the same reason most supposedly more efficient blockchains are — perverse financial incentives across the entire industry.
- the waste generation will expand to fill the available data centers
- oops all data centers are full, we need to build more data centers
Perhaps the most successful "sequel to chess" is actually the genre of chess problems, i.e., the puzzles about how Black can achieve mate in 3 (or whatever) from a contrived starting position that couldn't be seen in ordinary ("real") gameplay.
There are also various ways of randomizing the starting position (Chess960, a.k.a. Fischer Random, is the best known; see the sketch below) in order to make memorized opening theory irrelevant.
Oh, and Bughouse.
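For the curious, the Chess960 shuffle isn't a free-for-all: the bishops must land on opposite-colored squares and the king must sit between the rooks. A minimal sketch of one common way to generate a legal back rank (function name is my own):

```python
import random

def chess960_back_rank() -> str:
    """Generate a random Chess960 (Fischer Random) back rank."""
    rank = [None] * 8
    # Bishops: one on a light square (even index), one on a dark square (odd).
    rank[random.choice(range(0, 8, 2))] = "B"
    rank[random.choice(range(1, 8, 2))] = "B"
    # Queen and both knights go on random remaining squares.
    for piece in ["Q", "N", "N"]:
        rank[random.choice([i for i, p in enumerate(rank) if p is None])] = piece
    # Rook, king, rook fill the last three gaps left to right,
    # which guarantees the king sits between the rooks.
    for i, piece in zip([i for i, p in enumerate(rank) if p is None], ["R", "K", "R"]):
        rank[i] = piece
    return "".join(rank)

print(chess960_back_rank())  # the classical "RNBQKBNR" is one of the 960 possibilities
```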
Pouring one out for the local-news reporters who have to figure out what the fuck "timeless decision theory" could possibly mean.
The big claim is that R1 was trained on far less computing power than OpenAI’s models at a fraction of the cost.
And people believe this ... why? I mean, shouldn't the default assumption about anything anyone in AI says be that it's a lie?
Altman: Mr. President, we must not allow a bullshit gap!
Musk: I have a plan... Mein Führer, I can walk!
I would appreciate this too, frankly. The rabbit hole is deep, and full of wankers.
Kelsey Piper bluechecks thusly: