this post was submitted on 06 Mar 2026
129 points (98.5% liked)

science

25839 readers
327 users here now

A community to post scientific articles, news, and civil discussion.


rule #1: be kind

founded 2 years ago
[–] Spacehooks@reddthat.com 21 points 5 days ago (2 children)

My SO was complaining that the boss (LLM'd that sucker) put all the meeting notes into an LLM and then asked it to make a presentation from them. Then my SO had to redo 90% of it because it was trash. So yay, it saved 10% of the time. Oh but wait, it took time to read it all and run it through the AI, sooo no, it didn't.

[–] frongt@lemmy.zip 15 points 5 days ago (1 children)

Why redo it? Clearly the boss wanted a presentation on garbage.

[–] Doomsider@lemmy.world 5 points 5 days ago

For real, now the boss will be like that AI isn't half bad after all.

[–] vivalapivo@lemmy.today 2 points 5 days ago (1 children)

Tbh note taking is something LLMs are good at

[–] snooggums@piefed.world 11 points 5 days ago* (last edited 5 days ago) (2 children)

Transcriptions, mostly decent.

Notes and summaries? Not if you care about accuracy.

[–] cravl@slrpnk.net 6 points 5 days ago (1 children)

And transcriptions usually aren't really even AI; speech-to-text has been around a while.

[–] snooggums@piefed.world 10 points 5 days ago (2 children)

Speech to text is AI and always has been.

It wasn't always the current LLM slop bots that co-opted the name, sure.

[–] melfie@lemy.lol 6 points 5 days ago

Yep, that’s a fact. Hidden Markov Models, LSTMs, and LLMs are all ML models, and ML is a branch of AI.
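For the curious, the HMM end of that lineage is simple enough to sketch. Below is a toy Viterbi decode, the classic dynamic-programming core of pre-LLM speech recognizers. The two "phoneme" states and all the probabilities are made up purely for illustration, not taken from any real acoustic model.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden state path for an observation sequence."""
    # V[t][s] = (best probability of reaching state s at time t, predecessor state)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    path = [max(V[-1], key=lambda s: V[-1][s][0])]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

# Hypothetical two-phoneme model "decoding" a tiny acoustic feature sequence.
states = ["s", "t"]
start_p = {"s": 0.6, "t": 0.4}
trans_p = {"s": {"s": 0.7, "t": 0.3}, "t": {"s": 0.4, "t": 0.6}}
emit_p = {"s": {"hiss": 0.9, "tap": 0.1}, "t": {"hiss": 0.2, "tap": 0.8}}
print(viterbi(["hiss", "tap", "tap"], states, start_p, trans_p, emit_p))  # → ['s', 't', 't']
```

No neural network, no training run, just probabilities and a max: AI in the textbook sense long before anyone said "LLM".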

[–] cravl@slrpnk.net 2 points 5 days ago

You're 100% right, and I should know that too. “Not LLM-based” is what I was trying to say.

It gets hard to remember the (correct) broader definition when slop is being shoved into your brain through every possible orifice. Even for those of us who vehemently disagree, it still subconsciously molds the frameworks and language we use. It's insidious, really.

See this article by a fellow lemming which I highly recommend.

[–] vivalapivo@lemmy.today 3 points 5 days ago (1 children)

> Transcriptions, mostly decent.

Yup, quite good.

> Notes and summaries?

Summarizing mostly works by shrinking the meaning, something LLMs are trained for in the first place.

And that's all: two separate steps that are each good and reliable to an extent. One of the best applications of AI so far.

[–] snooggums@piefed.world 4 points 5 days ago

There was someone at work using read.ai for technical discussions, and the few summaries I read were like the work of someone who didn't understand the topic and couldn't tell which details were important. We would summarize the decisions and next steps ourselves, and each AI version had at least one really important thing changed or left out.

A transcription getting words wrong but still phonetically right is still more helpful than a misleading summary.