Technology
Which posts fit here?
Anything that is at least tangentially connected to technology, social media platforms, information technology, and tech policy.
Post guidelines
[Opinion] prefix
Opinion (op-ed) articles must use the [Opinion] prefix before the title.
Rules
1. English only
The title and associated content have to be in English.
2. Use original link
The post URL should be the original link to the article (even if paywalled), with archived copies left in the post body. This helps avoid duplicate posts when cross-posting.
3. Respectful communication
All communication has to be respectful of differing opinions, viewpoints, and experiences.
4. Inclusivity
Everyone is welcome here regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
5. Ad hominem attacks
Personal attacks of any kind are expressly forbidden. If you can't argue your position without attacking a person's character, you have already lost the argument.
6. Off-topic tangents
Stay on topic. Keep it relevant.
7. Instance rules may apply
If something is not covered by the community rules but violates lemmy.zip instance rules, the instance rules will be enforced.
Companion communities
!globalnews@lemmy.zip
!interestingshare@lemmy.zip
If someone is interested in moderating this community, message @brikox@lemmy.zip.
AI does not lie. People using untrustworthy AI lie when they promote it as their own work.
Edit, a day later, to clarify: AI gets facts wrong and cannot be trusted or used reliably. But machines don’t lie; people who promote tools that create misinformation lie. I wrote this to mimic the “guns kill” argument because I thought it would be fun to see the reactions.
I learned a lot from this
I’m pretty sure it lies
Saying Generative AI lies is attributing the ability to reason to it. That's not what it's doing. It can't think. It doesn't "understand".
So at best it can fabricate information by choosing the statistically most likely next word based on its training set. That's why there is a distinction between Generative AI hallucinations and actual lying. Humans lie; they tell untruths because they have a motive to. Generative AI can't have a motive.
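To make the "statistically most likely next word" idea concrete, here is a minimal illustrative sketch in Python (a toy bigram model, not how any real chatbot is actually built): it completes text purely from frequency counts, with no representation of whether the output is true.

```python
# Toy illustration only: a bigram "language model" that picks the next
# word purely by how often it followed the previous word in training text.
from collections import Counter, defaultdict

corpus = "the pizza is tasty . the pizza is cheesy . the topping is cheese .".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Return the statistically most likely word after `prev`."""
    return following[prev].most_common(1)[0][0]

# The "model" completes text by statistics alone; truth never enters into it.
word, sentence = "the", ["the"]
for _ in range(4):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the pizza is tasty ."
```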
People made AI to lie. When companies make something that does not work and promote it as reliable, that’s on the people doing that.
When faulty products are used by people, that’s on them.
I can no more blame AI than I could a car used during a robbery. Both are tools.
but what if the car lied though.
The car's AI lied.
S[ai] be[lie]ve[d]
It’s exactly like the “guns kill people” arguments. I would like all this AI stuff to go away; the tech is not ready to be used.
Last year AI claimed "bleach" is a popular pizza topping. Nobody claimed this as their own work. It's just what a chatbot said.
Are you saying AI didn't lie? Is bleach a popular pizza topping?
To be able to lie, you need to know what truth is. AI doesn't know that; these tools have no concept of right vs. wrong, nor of truth vs. lies.
What they do is assemble words based on statistical patterns of language.
"Bleach is a popular pizza topping", from the "perspective" of the AI, is just a sequence of words that works in the English language; it has no meaning to the model.
Being designed to produce language patterns statistically is why these tools hallucinate, but you can't call those hallucinations "lies", because they have no such concept.
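As a rough illustration of the point that "a sequence of words that works in the English language" is not the same thing as a true statement, here is a toy Python scorer (purely hypothetical, not any real system): it rates a sentence only by how familiar its word pairs are, so a false sentence built from common patterns scores exactly as well as a true one.

```python
# Toy illustration only: score a sentence by how many of its adjacent word
# pairs appeared in the "training" text. Truth plays no role in the score.
from collections import Counter

corpus = (
    "cheese is a popular pizza topping . "
    "bleach is a popular cleaning product ."
).split()

pair_counts = Counter(zip(corpus, corpus[1:]))

def fluency_score(sentence):
    """Count how many adjacent word pairs were seen in the training text."""
    words = sentence.split()
    return sum(pair_counts[(a, b)] > 0 for a, b in zip(words, words[1:]))

print(fluency_score("cheese is a popular pizza topping"))  # 5 (true)
print(fluency_score("bleach is a popular pizza topping"))  # 5 (false, but just as "fluent")
```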
What it did was assemble words based on a statistical probability model. It's not lying because it doesn't want to deceive, because it has no wants and no concept of truth or deception.
Of course, it sure looks like it's telling the truth. Google engineered it that way, putting it in front of actual search results. IMO the head liar is Sundar Pichai, the man who decided to show it to people.
AI has a high rate of hallucinations…