blakestacey

joined 2 years ago
[–] blakestacey@awful.systems 10 points 4 months ago (1 children)

Kelsey Piper bluechecks thusly:

James Damore was egregiously wronged.

[–] blakestacey@awful.systems 11 points 4 months ago

I mean, maybe? But the amount of trust I put in a description from "GeekWire" written by "an investor at Madrona Venture Group and a former leader at Amazon Web Services" who uncritically declares that spicy autocomplete "achieved strong reasoning capabilities" is ... appropriately small.

[–] blakestacey@awful.systems 15 points 4 months ago (2 children)

I've previously discussed the concept of model collapse, and how feeding synthetic data (training data created by an AI, rather than a human) to an AI model can end up teaching it bad habits, but it seems that DeepSeek succeeded in training its models using generative data, but specifically for subjects (to quote GeekWire's Jon Turow) "...like mathematics where correctness is unambiguous,"

That sound you hear is me pressing F to doubt. Checking the correctness of mathematics written as prose interspersed with equations is, shall we say, not easy to automate.
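To make the doubt concrete: automated grading is trivial only in the narrow case the quote gestures at, where a problem has a single canonical final answer. A minimal sketch (the `grade` function is hypothetical, purely for illustration) shows what "correctness is unambiguous" buys you, and what it doesn't:

```python
def grade(answer: str, expected: str) -> bool:
    """Compare a model's final answer to a known-good one.

    This only works when the problem has one canonical answer,
    e.g. exact-answer arithmetic.
    """
    return answer.strip() == expected.strip()

# Easy case: a single unambiguous value.
grade("42", "42")  # True

# Hard case: a proof written as prose interspersed with equations.
# String comparison says nothing about whether the reasoning is
# valid, which is exactly the part that resists automation.
```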

[–] blakestacey@awful.systems 16 points 4 months ago (1 children)

Wait, the splinter group from the cult whose leader wants to bomb datacenters might be violent?

[–] blakestacey@awful.systems 23 points 5 months ago (3 children)

I mean, "downvotes are proof that the commies are out to get me" is an occasion not just to touch grass, but to faceplant into an open field of wildflowers.

[–] blakestacey@awful.systems 12 points 5 months ago (4 children)

Enjoy your trip to the egress.

[–] blakestacey@awful.systems 6 points 5 months ago

yeah, DeepSeek LLMs are probably still an environmental disaster for the same reason most supposedly more efficient blockchains are — perverse financial incentives across the entire industry.

  1. the waste generation will expand to fill the available data centers

  2. oops all data centers are full, we need to build more data centers

[–] blakestacey@awful.systems 7 points 5 months ago (3 children)

Perhaps the most successful "sequel to chess" is actually the genre of chess problems, i.e., puzzles about how Black can achieve mate in 3 (or whatever) from a contrived starting position that would never arise in ordinary ("real") gameplay.

There are also various ways of randomizing the starting positions in order to make the memorized knowledge of opening strategies irrelevant.

Oh, and Bughouse.
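The best-known of those randomized variants is Chess960 (Fischer Random), whose constraints are simple enough to sketch. This is a minimal illustration, not any official implementation: bishops must land on opposite-coloured squares, and the king must sit between the rooks.

```python
import random

def chess960_back_rank() -> str:
    """Generate one Chess960 starting back rank.

    Constraints: bishops on opposite-coloured squares, king
    between the two rooks.
    """
    squares = [None] * 8
    # Bishops on one even-indexed and one odd-indexed square
    # (i.e., opposite colours).
    squares[random.choice(range(0, 8, 2))] = "B"
    squares[random.choice(range(1, 8, 2))] = "B"
    # Queen and knights go on any remaining squares.
    for piece in "QNN":
        empty = [i for i, s in enumerate(squares) if s is None]
        squares[random.choice(empty)] = piece
    # Rook, king, rook fill the last three gaps left to right,
    # which guarantees the king sits between the rooks.
    for piece, i in zip("RKR", (i for i, s in enumerate(squares) if s is None)):
        squares[i] = piece
    return "".join(squares)
```

With 960 legal starting arrays, memorized opening theory is effectively worthless, which is the point of the exercise.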

[–] blakestacey@awful.systems 21 points 5 months ago (2 children)

Pouring one out for the local-news reporters who have to figure out what the fuck "timeless decision theory" could possibly mean.

[–] blakestacey@awful.systems 17 points 5 months ago* (last edited 5 months ago) (7 children)

The big claim is that R1 was trained on far less computing power than OpenAI’s models at a fraction of the cost.

And people believe this ... why? I mean, shouldn't the default assumption about anything anyone in AI says be that it's a lie?

[–] blakestacey@awful.systems 14 points 5 months ago

Altman: Mr. President, we must not allow a bullshit gap!

Musk: I have a plan... Mein Führer, I can walk!

[–] blakestacey@awful.systems 3 points 5 months ago (1 children)

I would appreciate this too, frankly. The rabbit hole is deep, and full of wankers.
