[–] Godort@lemmy.ca 37 points 4 weeks ago* (last edited 4 weeks ago) (10 children)

It would be great, but it could never happen. All the marketing around AI is built on speculation about what it could do.

Investors know what a train is, what it does and how much it costs. They don't know any of those things when it comes to AI, so they're willing to spend a lot, because they were promised a lot.

[–] HakFoo@lemmy.sdf.org 21 points 4 weeks ago (6 children)

But what about this promise makes it so uniquely seductive?

There are a million guys with ideas for cars that will go 750 km on a thimbleful of Fresca, robot butlers that can't turn evil because they don't have red LEDs in the eye positions, and 200:1 data compression as long as you never have to decompress it. They must all be looking at Altman and company and asking where their bubbles are.

I sadly suspect the charm is "we can sack some huge percentage of workers if it delivers."

> But what about this promise makes it so uniquely seductive?

Part of it is, as you pointed out, just the elimination of costly labor. That's a capitalist's wet dream. But the main thing that makes it attractive as a slick, highly marketable investment vehicle is that AI models are inherently black boxes.

There are ways to examine how they work (for example, researchers have found that the parts of an LLM that "understand" one topic, like money, can simultaneously "understand" related concepts like value and credit), but we can't truly comprehend everything about them. It would be like staring at a system of billions of equations and expecting to hold the whole thing in our heads and solve it mentally. We can't.
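
To make that interpretability point concrete, here's a minimal toy sketch of the geometry involved. Everything in it is invented for illustration: the concept names, the vectors, and the shared "finance" direction are stand-ins, not real model activations. The actual research probes a trained model's internals, but the underlying intuition, related concepts sharing directions in a high-dimensional space, is the same.

```python
# Toy illustration only: fake "activation" vectors where related concepts
# share a common direction, so they look similar under cosine similarity.
import numpy as np

rng = np.random.default_rng(0)
dim = 512

# Invented shared "finance" direction; in a real model this would be a
# learned feature direction recovered by interpretability tooling.
finance_direction = rng.normal(size=dim)

def concept_vector(shares_direction: bool) -> np.ndarray:
    """Build a fake activation: shared direction plus noise, or noise alone."""
    noise = rng.normal(size=dim)
    vec = 2.0 * finance_direction + noise if shares_direction else noise
    return vec / np.linalg.norm(vec)  # unit length, so dot product = cosine

concepts = {
    "money":  concept_vector(True),
    "value":  concept_vector(True),
    "credit": concept_vector(True),
    "banana": concept_vector(False),
}

for name, vec in concepts.items():
    print(f"money vs {name}: {concepts['money'] @ vec:+.2f}")
```

The three finance-adjacent vectors score high against each other while the unrelated one sits near zero. Spotting that kind of structure is roughly what the probing research does, and it's also why "we can inspect the model" is a far weaker claim than "we understand the model."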

That means that instead of seeing "here's our robot that is currently capable of this, these are the components that could be upgraded or replaced, and X is an issue it faces because of Y," you get "it's not good at this yet, but it will be if you just throw a few billion dollars more of compute at it, we promise this time."

Problems are abstracted away into "something that will fix itself later" or "something that just happens, but we'll find a way to fix it," never any kind of mechanical constraint a VC fund manager might be able to understand.
