this post was submitted on 08 Jun 2025
1333 points (97.1% liked)
Microblog Memes
Can you go into a bit more detail on why you think these papers are such a home run for your point?
Where do you get 95% from? These papers don't really go into much detail on human performance, and 95% isn't mentioned in either of them
These papers are for transformer architectures trained with next-token loss. There are other architectures (spiking, Tsetlin, graph, etc.) and other losses (contrastive, RL, flow matching) to which these particular curves do not apply
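For anyone unfamiliar with the term: "next-token loss" is just the mean cross-entropy between the model's predicted next-token distribution and the actual next token at each position. A minimal sketch in plain NumPy (toy shapes and numbers, not any particular paper's setup):

```python
import numpy as np

def next_token_loss(logits, targets):
    """Mean cross-entropy of predicted next-token distributions.

    logits: (seq_len, vocab_size) array of raw scores
    targets: (seq_len,) array of correct next-token ids
    """
    # softmax over the vocabulary, numerically stabilized
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    # negative log-likelihood of each correct next token
    nll = -np.log(probs[np.arange(len(targets)), targets])
    return nll.mean()

# toy example: 3 positions, vocabulary of 5 tokens
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 5))
targets = np.array([1, 0, 4])
loss = next_token_loss(logits, targets)
```

The scaling curves in question describe how this one quantity falls with parameters/data/compute; a contrastive or RL objective optimizes something else entirely, so there's no reason the same exponents should carry over.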
These papers assume early stopping. Have you heard of the grokking phenomenon? (Not to be confused with the Twitter bot)
These papers only consider finite-size datasets, and relatively small ones at that. E.g., how many "tokens" would a four-year-old have processed? I imagine that question should be somewhat quantifiable
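A back-of-envelope version of that question (every number below is a rough assumption for illustration, not a measured figure):

```python
# How many word "tokens" might a four-year-old have heard?
# All constants are loose assumptions, chosen only for a rough order of magnitude.
words_per_day = 15_000        # assumed average words of speech heard per day
days = 4 * 365                # four years
tokens_per_word = 1.3         # assumed subword tokens per English word

words_heard = words_per_day * days
tokens_heard = int(words_heard * tokens_per_word)
# on the order of 10^7 tokens -- several orders of magnitude less text
# than typical LLM pretraining corpora (trillions of tokens)
```

Even if the daily figure is off by 2-3x in either direction, the conclusion that children see vastly less linguistic data than these scaling studies assume seems robust.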
These papers do not consider multimodal systems.
You talked about permanence; does a RAG solution not overcome this problem?
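The core of the RAG idea is small enough to sketch: retrieve relevant text at query time and prepend it to the prompt, so new or corrected facts don't require retraining the model. Toy bag-of-words scoring here; real systems use dense embeddings and a vector index:

```python
# Minimal RAG sketch: retrieve documents by word overlap, build a prompt.
# The documents and scoring scheme are illustrative only.
from collections import Counter

docs = [
    "grokking: delayed generalization long after training loss converges",
    "scaling laws relate model loss to parameters, data, and compute",
    "retrieval augmented generation grounds answers in external documents",
]

def score(query, doc):
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())  # number of shared words

def retrieve(query, k=1):
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query):
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("what are scaling laws about?")
```

Updating `docs` is all it takes to change what the model is grounded in, which is the sense in which retrieval sidesteps the permanence problem.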
I think there is a lot more we don't know about these things than what we do know. To say we solved it all 2-5 years ago is, perhaps, optimistic
You claim to be some kind of expert but you can't even read the paper? Lmao.
y rude pls
(not parent commenter)