Most of what's known as AI in the current environment is really a set of very powerful inference engines. And understanding the limits of inference (see, for example, Hume's Problem of Induction) is an important part of understanding where these tools are actually useful and where they're actively misleading or dangerous.
So, take the example of filling in unknown details in a low-resolution image. We might double the number of pixels and fill in our best guesses for the in-between pixels that weren't in the original image. That's probably a pretty good use of inference.
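For the curious, here's roughly what that looks like as code: a minimal sketch in Python with NumPy, using plain bilinear interpolation as the simplest stand-in for an inference engine (the 2x2 image and the doubling factor are purely illustrative assumptions).

```python
import numpy as np

# A tiny 2x2 grayscale "image" (values are illustrative).
img = np.array([[0.0, 1.0],
                [1.0, 0.0]])

# Double the resolution: every new pixel is a weighted average
# of its nearest known neighbours (bilinear interpolation).
h, w = img.shape
rows = np.linspace(0, h - 1, 2 * h)   # new row coordinates in old-image space
cols = np.linspace(0, w - 1, 2 * w)   # new column coordinates
r0 = np.floor(rows).astype(int)
c0 = np.floor(cols).astype(int)
r1 = np.minimum(r0 + 1, h - 1)
c1 = np.minimum(c0 + 1, w - 1)
fr = (rows - r0)[:, None]             # fractional distance to the next row
fc = (cols - c0)[None, :]             # fractional distance to the next column

upscaled = (img[np.ix_(r0, c0)] * (1 - fr) * (1 - fc)
            + img[np.ix_(r1, c0)] * fr * (1 - fc)
            + img[np.ix_(r0, c1)] * (1 - fr) * fc
            + img[np.ix_(r1, c1)] * fr * fc)
print(upscaled)  # 4x4 image; the in-between pixels are sensible blends
```

Every guessed pixel is bracketed by known pixels on all sides, which is exactly what keeps this kind of inference well-behaved.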
But guessing what's off the edge of the picture is a much less stable and predictable process, far less grounded in what is probably true.
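The contrast is easy to demonstrate with a toy sketch, again in Python with NumPy, using a polynomial fit as a stand-in inference engine (the sine signal, sample count, and polynomial degree are all arbitrary assumptions): inside the sampled range the guesses stay close to the truth; off the edge they fall apart.

```python
import numpy as np

# Known data: a handful of samples from a smooth signal.
x_known = np.linspace(0.0, 1.0, 8)
y_known = np.sin(2 * np.pi * x_known)

# Fit a polynomial "inference engine" to the known samples.
model = np.poly1d(np.polyfit(x_known, y_known, deg=5))

def max_error(x):
    """Worst-case gap between the model's guess and the true signal."""
    return np.max(np.abs(model(x) - np.sin(2 * np.pi * x)))

# Interpolation: guessing between the known samples.
print(f"inside the data: {max_error(np.linspace(0.0, 1.0, 200)):.3f}")  # small

# Extrapolation: guessing off the edge of the data.
print(f"off the edge:    {max_error(np.linspace(1.0, 1.5, 200)):.3f}")  # blows up
```

Nothing about the model changes between the two calls; the only difference is whether the question falls inside or outside what the data can support.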
When we use these technologies, we need domain-specific expertise to tell which problems are the interstitial type, where inference engines are good at filling things in, and which venture beyond the frontier of what is known or proven, where they're susceptible to "hallucination."
That's why the explosion of AI capabilities will likely improve some things and worsen others, and it'll be up to us to know which is which.