this post was submitted on 26 Apr 2025
582 points (97.7% liked)
Microblog Memes
you are viewing a single comment's thread
But LLMs don't do math, do they? Don't they just look at how often tokens show up next to each other? It's not actually doing any prime-number math over there, I don't think.
If I fed it a big enough number, it would report back to me that a particular Python math library failed to complete the task, so it must be neural-ing its answer AND crunching the numbers using sympy on its big supercomputer.
Is it running arbitrary python code server side? That sounds like a vector to do bad things. Maybe they constrained it to only run some trusted libraries in specific ways or something.
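Nobody outside the provider knows exactly how it's sandboxed, but a minimal sketch of the "trusted libraries in specific ways" idea might look like this: run submitted code in a child process with an allowlisted namespace and a wall-clock budget. Everything here (`run_tool`, `ALLOWED`, the timeout) is a made-up illustration, not a real sandbox, and not how any actual provider does it:

```python
# Hypothetical sketch: constrained server-side code execution via an
# allowlist of names plus a timeout. NOT a real security boundary --
# a genuine sandbox needs OS-level isolation (containers, seccomp, etc.).
import math
import multiprocessing


# Only these names are visible to the submitted code.
ALLOWED = {
    "math": math,
    "__builtins__": {"range": range, "len": len, "abs": abs},
}


def _worker(src, q):
    env = dict(ALLOWED)
    exec(src, env)  # code can only touch the allowlisted names
    q.put(env.get("result"))


def run_tool(src, timeout=2.0):
    """Run code in a child process; return its `result`, or None on timeout."""
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=_worker, args=(src, q))
    p.start()
    p.join(timeout)
    if p.is_alive():
        p.terminate()
        p.join()
        return None  # mirrors "the library failed to complete the task"
    return q.get()


if __name__ == "__main__":
    print(run_tool("result = math.gcd(48, 60)"))  # a quick, in-budget task
```

A timeout like this would also explain the behavior described above: feed it a big enough number and the tool call simply gets killed before finishing.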
They do math, just in a very weird (and obviously not super reliable) way. There's a recent paper by Anthropic that explains it; I can track it down if you'd be interested.
Broadly speaking, the weights in a model form "circuits" of sorts that can perform certain tasks. On something hard like factoring numbers the performance is probably abysmal, but I'd guess the model is still trying to approximate the task somehow.
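For contrast with that fuzzy circuit-style approximation: exact factoring is trivially deterministic for a code tool. This is just plain trial division as an illustration (sympy's `factorint` does something far more sophisticated), not what any particular model actually calls:

```python
def factorize(n):
    """Exact prime factorization by trial division -- the kind of
    deterministic arithmetic a code tool performs, and which a bare
    LLM can only approximate token by token."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d  # strip out each factor of d completely
        d += 1
    if n > 1:  # whatever is left is itself prime
        factors.append(n)
    return factors


if __name__ == "__main__":
    print(factorize(2025))  # 2025 = 3^4 * 5^2
```

Trial division like this blows up on genuinely large inputs, which is exactly why a tool-call timeout (rather than the model "doing the math") is the plausible failure mode.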