this post was submitted on 25 Jun 2024
118 points (100.0% liked)

technology


On the road to fully automated luxury gay space communism.

Spreading Linux propaganda since 2020

founded 5 years ago

The Enshittification continues. Slightly better than Google because you can at least turn it off, but still on by default! Turn that shit off if you use DDG!

[–] JCreazy@midwest.social 5 points 2 years ago (1 children)

The AI chat is helpful though

[–] dat_math@hexbear.net 17 points 2 years ago (1 children)

please demonstrate a case where this has been useful to you

[–] JCreazy@midwest.social 6 points 2 years ago* (last edited 2 years ago) (5 children)

I asked it the difference between soy sauce and tamari and it told me.

[–] dat_math@hexbear.net 19 points 2 years ago* (last edited 2 years ago)

oooh tamari

so like, could you have answered that question without spinning up a 200W gpu somewhere to do the llm "inference"?

[–] Black_Mald_Futures@hexbear.net 18 points 2 years ago (1 children)

literally just google "wikipedia tamari"

[–] blobjim@hexbear.net 12 points 2 years ago

and essentially all it's doing is plagiarizing a dozen other answers from various websites.

[–] dat_math@hexbear.net 11 points 2 years ago (1 children)

tamarind the fruit or tamarin the genus?

[–] Findom_DeLuise@hexbear.net 10 points 2 years ago

[horror emote] [soypoint-2 emote]

[is-this emote] Are these the same?

[–] booty@hexbear.net 2 points 2 years ago (1 children)

and what part of that required an AI?

[–] JCreazy@midwest.social 1 points 2 years ago (1 children)

I never said it did. It was just faster than searching through multiple search results and reading through multiple paragraphs.

[–] dat_math@hexbear.net 1 points 2 years ago (1 children)

How did you know the answer was correct?

[–] JCreazy@midwest.social 1 points 2 years ago* (last edited 2 years ago) (1 children)

How do you know any information is correct?

[–] dat_math@hexbear.net 1 points 2 years ago (1 children)

Do you really not see why I asked my rhetorical question or do you just want to bicker?

[–] JCreazy@midwest.social 0 points 2 years ago (1 children)

I wasn't bickering. You're the one trying to argue. It sounds like you're implying that information from AI is inherently incorrect which simply isn't true.

[–] dat_math@hexbear.net 1 points 2 years ago* (last edited 2 years ago)

First, at the risk of being a pedant, bickering and arguing are distinct activities. Second, I didn't imply LLMs' results are inherently incorrect. However, it is undeniable that they sometimes make shit up. Thus, without corroboration from a more trustworthy source, an LLM's outputs can't be trusted.