this post was submitted on 06 Nov 2023
94 points (100.0% liked)


I HATE HEINLEIN


[–] drhead@hexbear.net 4 points 1 year ago (1 children)

In the context of AI, people tend to use "grok" to describe what can sometimes happen if you overtrain the living shit out of a model: it goes from being trained appropriately and generalizing well -> overfitted and only working well on the training data, with shit performance everywhere else -> somehow working again and generalizing, even better than before. Example in a paper: https://arxiv.org/abs/2201.02177
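To make the three phases concrete, here's a toy sketch (not from the paper; the function name, thresholds, and the stylized accuracy numbers are all my own invention) that labels snapshots of a training run. The hallmark of grokking is that validation accuracy stays near chance long after training accuracy saturates, then abruptly jumps:

```python
# Hypothetical helper: label logged (train_acc, val_acc) snapshots with the
# three regimes described above. Thresholds are illustrative, not canonical.
def label_phases(history, high=0.95, low=0.5):
    """history: list of (train_acc, val_acc) tuples, one per eval step."""
    labels = []
    for train_acc, val_acc in history:
        if train_acc < high:
            labels.append("fitting")          # still learning the training set
        elif val_acc < low:
            labels.append("overfit (memorized)")  # perfect on train, useless elsewhere
        else:
            labels.append("grokked (generalizing)")  # the delayed jump
    return labels

# A stylized run: train accuracy saturates early, validation accuracy
# only catches up much later -- that delayed jump is the "grokking".
run = [(0.6, 0.55), (0.99, 0.30), (0.99, 0.31), (0.99, 0.97)]
print(label_phases(run))
```

In the actual paper's experiments (small transformers on modular-arithmetic tasks), the overfit plateau can last orders of magnitude longer than the initial fitting phase before the transition happens.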

OpenAI really wants a monopoly and is trying to present itself as a "safe" AI company while also lobbying for regulation of "unsafe" AI companies (everyone else, and especially open-source development). So pretty much half of all the man-hours spent developing models at OpenAI seem to go toward stopping them from generating anything that would get the company the wrong kind of press. Sometimes they are moderately successful at this, but someone always eventually finds a way to get something on the level of "gender reveal 9/11" out of their models.

Elon co-founded and bankrolled OpenAI at one point but walked away because, as we all know, he makes a lot of extremely poor financial decisions.

trained appropriately and generalizing well -> overfitted and only working well on the training data, with shit performance everywhere else -> somehow working again and generalizing, even better than before

That's fascinating, I've never heard of that before.