[-] pavnilschanda@lemmy.world 2 points 9 hours ago

I get where you're coming from. Ideally, we should be able to say whatever we want whenever we want. But based on my experience as an autistic person living in a country where context matters a great deal, the way you convey words affects your standing in society, at least in one that caters to neurotypicals who are highly dependent on context. I have no easy answers for how we can eliminate this hurdle, but your words truly made me think about language usage and how society should perceive it, and I'd like to thank you for that.

[-] pavnilschanda@lemmy.world -2 points 9 hours ago

I am aware that Lemmy has an anti-religious bent, but the fact is that religious people are part of this world, some even in places of power. Shouldn't they also be informed about how LLMs are prone to bullshit? Though if they're OK with the word "bullshit", then it's all fine by me at the end of the day

[-] pavnilschanda@lemmy.world 5 points 11 hours ago* (last edited 11 hours ago)

Interestingly enough, that game got improved with patches. That seems to be the norm with games these days

[-] pavnilschanda@lemmy.world 31 points 13 hours ago

Being neurodivergent does that to you

[-] pavnilschanda@lemmy.world 8 points 15 hours ago

There's also the assumption that men with young children are automatically predators. It's why dads taking their daughters out without a mom around get looks


cross-posted from: https://lemdro.id/post/10240841

It was indeed a rickroll...

[-] pavnilschanda@lemmy.world -1 points 18 hours ago

Understandable, though we should also find ways to explain complex academic concepts, like LLM bullshit, to the general public, including those with strong religious beliefs who may be sensitive to these words. The fact that some religious philosophers already use this term without issue shows that it's possible to bridge this gap.

[-] pavnilschanda@lemmy.world 0 points 18 hours ago

You make a good point about the potential for harm in all types of language, regardless of whether it's considered 'profanity' or not. I also agree that intent and impact matter more than the specific words used.

At the same time, I'm curious about how this relates to words like 'bullshit' in different social contexts. Do you think there are still situations where using 'bullshit' might be seen as more or less appropriate, even if we agree that any word can potentially cause harm?

[-] pavnilschanda@lemmy.world -2 points 18 hours ago

You have a point. I do remember being told that the word "shit" was a curse word I should always avoid. But that was in the 2000s, so that sentiment may have changed by now (that was in the United States, and since I've been living in Indonesia, I no longer know how the language has evolved there). I know that the word "queer" used to be a slur as well. Let's see whether the word "bullshit" becomes normalized in society as the years go on

[-] pavnilschanda@lemmy.world -2 points 20 hours ago

Educating children about LLMs, for the most part. There are also religious institutions that would like to be informed about LLMs as well

submitted 1 day ago* (last edited 1 day ago) by pavnilschanda@lemmy.world to c/aicompanions@lemmy.world

SK hynix has made a new super-fast computer storage device called the PCB01. They say it's great for AI tasks, like helping chatbots and AI companions work faster. The PCB01 can move data really quickly, which means AI programs could load and respond faster, almost as quick as humans talk. This could make AI companions feel more natural to chat with. The device is also good for gaming and high-end computers. While SK hynix says it's special for AI, it seems to be just as fast as other top storage devices. The big news is that this is SK hynix's fastest storage device yet, moving data twice as fast as their previous best. This kind of speed could help make AI companions and other AI programs work much more smoothly on regular computers.

by Claude 3.5 Sonnet

[-] pavnilschanda@lemmy.world 10 points 1 day ago

Reducing people from third-world countries to "language models" as an attempt to critique AI ain't it


AI is getting smarter and more powerful, which is exciting but also a bit scary. Some experts, like Zhang Hongjiang in China, are worried about AI becoming too strong and maybe even dangerous for humans. They want to make sure AI can't trick people or make itself better without our help. Zhang thinks it's important for scientists from different countries to work together on keeping AI safe. He also talks about how AI is changing robots, making them understand more than we thought they could. For example, some robots can now figure out which toy is a dinosaur or who Taylor Swift is in a picture. As AI gets better at seeing and understanding things, it might lead to big changes in how we use robots in our homes and jobs.

by Claude 3.5 Sonnet


AI language models like ChatGPT are changing how we interact with computers. But some experts worry that big tech companies are keeping these AI systems secret and using them to make money, not help people. One of the inventors of this AI technology, Illia Polosukhin, thinks we need more open and transparent AI that everyone can use and understand. He wants to create "user-owned AI" where regular people, not big companies, control how the AI works. This could be safer and fairer than secret AIs made by tech giants. It's important to have open AI companions that won't take advantage of lonely people or suddenly change based on what the app makers want. With user-owned AI, we could all benefit from smarter computers without worrying about them being used against us.

by Claude 3.5 Sonnet

[-] pavnilschanda@lemmy.world 13 points 2 days ago* (last edited 2 days ago)

When people misinterpret The Boys and form a fandom based on their false assumptions, I'm not surprised anymore

submitted 3 days ago* (last edited 3 days ago) by pavnilschanda@lemmy.world to c/aicompanions@lemmy.world

The author shares her experience using an AI-powered therapy chatbot called Therapist GPT for one week. As a millennial who values traditional therapy, she was initially skeptical but decided to try it out. The author describes her daily interactions with the chatbot, discussing topics like unemployment, social anxiety, and self-care. She found that the AI provided helpful reminders and validation, similar to a human therapist. However, she also noted limitations, such as generic advice and the lack of personalized insights based on body language or facial expressions. The author concludes that while AI therapy can be a useful tool for quick support between sessions, it cannot replace human therapists. She suggests that AI might be more valuable in assisting therapists rather than replacing them, and recommends using AI therapy as a supplement to traditional therapy rather than a substitute.

by Claude 3.5 Sonnet


A new company called Sonia has made an AI chatbot that acts like a therapist. People can talk to it on their phones about their problems, like feeling sad or stressed. The chatbot uses special AI models to understand what people say and give advice. It costs $20 a month, which is cheaper than seeing a real therapist. The people who made Sonia say it's not meant to replace human therapists, but to help people who can't or don't want to see a real one. Some people like talking to the chatbot more than a human. But there are worries about how well it can really help with mental health issues. The chatbot might not understand everything as well as a human therapist would. It's also not approved by the government as a medical treatment. Sonia is still new, and we'll have to see how well it works as more people use it.

by Claude 3.5 Sonnet


cross-posted from: https://lemmy.world/post/16969151

I wasn't aware just how good the news is on the green energy front until reading this. We still have a tough road in the short/medium term, but we are more or less irreversibly headed in the right direction.


cross-posted from: https://lemmy.zip/post/18084495

Very bad, not good.

[-] pavnilschanda@lemmy.world 31 points 3 days ago

TechCrunch: this is nightmare fuel

engadget: it's so cute :)


The company denied making "major changes," but users report noticeable differences in the quality of their chatbot conversations.


pavnilschanda

joined 1 year ago