this post was submitted on 23 Jul 2025
810 points (99.2% liked)

Microblog Memes


A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

[–] Sadbutdru@sopuli.xyz 51 points 5 days ago (10 children)

Right, I'm no expert (and very far from an AI fanboi), but not all "AI" are LLMs. I've heard there are good use cases in protein folding and in recognising diagnostic patterns in medical images.

It fits with my understanding that you could train a similar model on more constrained datasets than 'all the English language text on the Internet' and it might be good at certain jobs.

Am I wrong?
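The idea in the comment above — a model trained on a narrow, labelled dataset for one specific job, rather than on "all the English language text on the Internet" — can be sketched as follows. This is a toy illustration (assuming scikit-learn is available), not anything any commenter actually built:

```python
# A minimal sketch of a "specialized" model: a classifier trained on a small,
# constrained, labelled dataset (a built-in breast-cancer dataset here) for a
# single diagnostic-style task. It is useless outside that task, but good at it.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # 569 samples, 30 numeric features
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy on held-out data
```

Nothing here generates text or "makes up" anything; it only maps 30 measurements to a yes/no label, which is the kind of constrained job the comment is pointing at.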

[–] alk@lemmy.blahaj.zone 76 points 5 days ago

You are correct. However, more often than not it's just like the image describes, and people are actually applying LLMs en masse to random problems.

[–] not_IO@lemmy.blahaj.zone 39 points 5 days ago (2 children)

What AI, apart from language generators, "makes up studies"?

[–] jonne@infosec.pub 28 points 5 days ago

Hallucinating studies is, however, very on brand for LLMs, as opposed to other types of machine learning.

[–] jaredwhite@piefed.social 16 points 5 days ago (1 children)

Technically, LLMs as used in Generative AI fall under the umbrella term "machine learning"…except that until recently machine learning was mostly known for "the good stuff" you're referring to (finding patterns in massive datasets, classifying data entries like images, machine vision, etc.). So I feel like continuing to use the term ML for the good stuff helps steer the conversation away from what is clearly awful about genAI.

[–] peoplebeproblems@midwest.social 3 points 5 days ago (2 children)

There is no generative AI. It's just progressively more complicated chatbots. The goal is to fool the human into believing it's real.

It's what Frank Herbert was warning us all about in 1965.

[–] shalafi@lemmy.world 1 points 4 days ago

What was Frank on about? The Butlerian Jihad, I assume? I've read the book 8 times and don't remember why thinking machines had gone rogue.

[–] fushuan@piefed.blahaj.zone 1 points 5 days ago

Chatbots are genAI. Any artificial intelligence like NPCs, autopilot, playing games against the machine, playing chess against the machine... all of those have been called AI.

GenAI is a subset where what the AI does is generate text or images instead of taking a deterministic action. GenAI describes pretty well what it does: generate text or image output, no matter the accuracy of that text. The AI is optimised to generate output that looks like what you would expect given the input, and generally it does exactly that, even if it hallucinates facts to fit the shape of the response it's supposed to give.
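The point above — that a generative model is optimised to produce output that merely *looks like* its training data, with no notion of factual accuracy — can be illustrated with a deliberately tiny toy (this is a hand-rolled bigram chain, nothing like a real LLM, and the corpus is made up for the example):

```python
import random

# Toy bigram "language model": record which word follows which in a tiny corpus,
# then sample a chain. The output resembles the training text, but the model
# has no concept of whether what it emits is true -- it only imitates form.
corpus = ("the study found the drug was safe . "
          "the study found the drug worked .").split()

bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, []).append(b)

random.seed(0)          # fixed seed so the sketch is repeatable
word, out = "the", ["the"]
for _ in range(8):      # generate 8 more tokens
    word = random.choice(bigrams[word])
    out.append(word)
print(" ".join(out))
```

Every generated token is a plausible continuation given the previous one, which is exactly why the output can read fluently while asserting things no study ever found.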

[–] Tomassci@sh.itjust.works 7 points 4 days ago

The problems with AI we talk of here are mostly with generative AI. Protein folding, diagnostic pattern recognition and weather prediction work a bit differently than image-making or text-writing services.

[–] baggachipz@sh.itjust.works 14 points 5 days ago

That’s because “AI” has come to mean anything with an algorithm and a training set. Technologies under this umbrella are vastly different, but nontechnical people (especially the press) don’t understand the difference.

[–] minnow@lemmy.world 7 points 5 days ago

Right. You're talking about specialized AI that are programmed and trained to perform very specific tasks, and are absolutely useless outside of those tasks.

LLMs are generalized AI which can't do any of those things. The problem is that what they're good at, really REALLY good at, is giving the appearance of specialized AI. Of course this is only a problem because people keep getting fooled into thinking that generalized AI can do all the same things that specialized AI does.

[–] Sadbutdru@sopuli.xyz 6 points 5 days ago

Obviously that should be in an advisory capacity, not making decisions (like approving drugs for human use [which I heavily doubt was actually happening]).

[–] takeda@lemmy.dbzer0.com 1 points 5 days ago

Yeah, AI (not LLMs) can be a very useful tool in doing research, but this thread is about deciding whether a drug should be approved or not.