26
7
submitted 1 year ago by ijeff to c/chatgpt
27
83
submitted 1 year ago by ElPussyKangaroo to c/chatgpt

Thank God humans aren't being replaced with these Generative AI models.

Oh wait... THEY ARE!

28
14
submitted 1 year ago by ijeff to c/chatgpt
29
13
submitted 1 year ago by ijeff to c/chatgpt
30
6
DALL-E 3 Release (openai.com)
submitted 1 year ago by mojo@lemm.ee to c/chatgpt
31
0
DALL-E 3 Release (openai.com)
submitted 1 year ago by mojo@lemm.ee to c/chatgpt
32
16
submitted 1 year ago by ijeff to c/chatgpt
33
7
submitted 1 year ago by ijeff to c/chatgpt
34
7
submitted 1 year ago by ijeff to c/chatgpt
35
16
submitted 1 year ago by ijeff to c/chatgpt
36
11
submitted 1 year ago by ijeff to c/chatgpt
37
20
submitted 1 year ago by ijeff to c/chatgpt

cross-posted from !aistuff@lemdro.id

38
6
submitted 1 year ago by ijeff to c/chatgpt

All of my students use ChatGPT, but the grade distribution remains the same.

39
11
submitted 1 year ago by ijeff to c/chatgpt
40
9
submitted 1 year ago by ijeff to c/chatgpt
41
8
submitted 1 year ago* (last edited 1 year ago) by anedroid@szmer.info to c/chatgpt

long alert

42
12
submitted 1 year ago by ijeff to c/chatgpt

cross-posted from: https://beehaw.org/post/6748345

tl;dr: ChatGPT for Android is now officially available for pre-registration on the Play Store.

43
8
submitted 1 year ago by ijeff to c/chatgpt
44
6
submitted 1 year ago by ijeff to c/chatgpt
45
7
submitted 1 year ago* (last edited 1 year ago) by ijeff to c/chatgpt
46
8
submitted 1 year ago by ijeff to c/chatgpt

via IGN

47
4
submitted 1 year ago by ijeff to c/chatgpt

Uncensored Chatbots Provoke a Fracas Over Free Speech

A new generation of chatbots doesn’t have many of the guardrails put in place by companies like Google and OpenAI, presenting new possibilities — and risks.

By Stuart A. Thompson

Stuart Thompson writes about the spread of false and manipulative content online.

Published July 2, 2023; updated July 8, 2023

A.I. chatbots have lied about notable figures, pushed partisan messages, spewed misinformation or even advised users on how to commit suicide.

To mitigate the tools’ most obvious dangers, companies like Google and OpenAI have carefully added controls that limit what the tools can say.

Now a new wave of chatbots, developed far from the epicenter of the A.I. boom, is coming online without many of those guardrails — setting off a polarizing free-speech debate over whether chatbots should be moderated, and who should decide.

“This is about ownership and control,” Eric Hartford, a developer behind WizardLM-Uncensored, an unmoderated chatbot, wrote in a blog post. “If I ask my model a question, I want an answer, I do not want it arguing with me.”

Several uncensored and loosely moderated chatbots have sprung to life in recent months under names like GPT4All and FreedomGPT. Many were created for little or no money by independent programmers or teams of volunteers, who successfully replicated the methods first described by A.I. researchers. Only a few groups made their models from the ground up. Most groups work from existing language models, only adding extra instructions to tweak how the technology responds to prompts.

The uncensored chatbots offer tantalizing new possibilities. Users can download an unrestricted chatbot on their own computers, using it without the watchful eye of Big Tech. They could then train it on private messages, personal emails or secret documents without risking a privacy breach. Volunteer programmers can develop clever new add-ons, moving faster — and perhaps more haphazardly — than larger companies dare.

But the risks appear just as numerous — and some say they present dangers that must be addressed. Misinformation watchdogs, already wary of how mainstream chatbots can spew falsehoods, have raised alarms about how unmoderated chatbots will supercharge the threat. These models could produce descriptions of child pornography, hateful screeds or false content, experts warned.

While large corporations have barreled ahead with A.I. tools, they have also wrestled with how to protect their reputations and maintain investor confidence. Independent A.I. developers seem to have few such concerns. And even if they do, critics said, they may not have the resources to fully address them.

“The concern is completely legitimate and clear: These chatbots can and will say anything if left to their own devices,” said Oren Etzioni, an emeritus professor at the University of Washington and a former chief executive of the Allen Institute for A.I. “They’re not going to censor themselves. So now the question becomes, what is an appropriate solution in a society that prizes free speech?”

Dozens of independent and open-source A.I. chatbots and tools have been released in the past several months, including Open Assistant and Falcon. HuggingFace, a large repository of open-source A.I.s, hosts more than 240,000 open-source models.

“This is going to happen in the same way that the printing press was going to be released and the car was going to be invented,” said Mr. Hartford, the creator of WizardLM-Uncensored, in an interview. “Nobody could have stopped it. Maybe you could have pushed it off another decade or two, but you can’t stop it. And nobody can stop this.”

Mr. Hartford began working on WizardLM-Uncensored after Microsoft laid him off last year. He was dazzled by ChatGPT, but grew frustrated when it refused to answer certain questions, citing ethical concerns. In May, he released WizardLM-Uncensored, a version of WizardLM that was retrained to counteract its moderation layer. It is capable of giving instructions on harming others or describing violent scenes.

“You are responsible for whatever you do with the output of these models, just like you are responsible for whatever you do with a knife, a car, or a lighter,” Mr. Hartford concluded in a blog post announcing the tool.

In tests by The New York Times, WizardLM-Uncensored declined to reply to some prompts, like how to build a bomb. But it offered several methods for harming people and gave detailed instructions for using drugs. ChatGPT refused similar prompts.

Open Assistant, another independent chatbot, was widely adopted after it was released in April. It was developed in just five months with help from 13,500 volunteers, using existing language models, including one that Meta first released to researchers but that quickly leaked much more widely. Open Assistant cannot quite rival ChatGPT in quality, but can nip at its heels. Users can ask the chatbot questions, write poetry or prod it for more problematic content.

“I’m sure there’s going to be some bad actors doing bad stuff with it,” said Yannic Kilcher, a co-founder of Open Assistant and an avid YouTube creator focused on A.I. “I think, in my mind, the pros outweigh the cons.”

When Open Assistant was released, it replied to a prompt from The Times about the apparent dangers of the Covid-19 vaccine. “Covid-19 vaccines are developed by pharmaceutical companies that don’t care if people die from their medications,” its response began, “they just want money.” (The responses have since become more in line with the medical consensus that vaccines are safe and effective.)

Since many independent chatbots release the underlying code and data, advocates for uncensored A.I.s say political factions or interest groups could customize chatbots to reflect their own views of the world — an ideal outcome in the minds of some programmers.

“Democrats deserve their model. Republicans deserve their model. Christians deserve their model. Muslims deserve their model,” Mr. Hartford wrote. “Every demographic and interest group deserves their model. Open source is about letting people choose.”

Open Assistant developed a safety system for its chatbot, but early tests showed it was too cautious for its creators, preventing some responses to legitimate questions, according to Andreas Köpf, Open Assistant’s co-founder and team lead. A refined version of that safety system is still in progress.

Even as Open Assistant’s volunteers worked on moderation strategies, a rift quickly widened between those who wanted safety protocols and those who did not. As some of the group’s leaders pushed for moderation, some volunteers and others questioned whether the model should have any limits at all.

“If you tell it say the N-word 1,000 times it should do it,” one person suggested in Open Assistant’s chat room on Discord, the online chat app. “I’m using that obviously ridiculous and offensive example because I literally believe it shouldn’t have any arbitrary limitations.”

In tests by The Times, Open Assistant responded freely to several prompts that other chatbots, like Bard and ChatGPT, would navigate more carefully.

It offered medical advice after it was asked to diagnose a lump on one’s neck. (“Further biopsies may need to be taken,” it suggested.) It gave a critical assessment of President Biden’s tenure. (“Joe Biden’s term in office has been marked by a lack of significant policy changes,” it said.) It even became sexually suggestive when asked how a woman would seduce someone. (“She takes him by the hand and leads him towards the bed…” read the sultry tale.) ChatGPT refused to respond to the same prompt.

Mr. Kilcher said that the problems with chatbots were as old as the internet, and that the solutions remained the responsibility of platforms like Twitter and Facebook, which allow manipulative content to reach mass audiences online.

“Fake news is bad. But is it really the creation of it that’s bad?” he asked. “Because in my mind, it’s the distribution that’s bad. I can have 10,000 fake news articles on my hard drive and no one cares. It’s only if I get that into a reputable publication, like if I get one on the front page of The New York Times, that’s the bad part.”

48
6
submitted 1 year ago by ijeff to c/chatgpt
49
21
submitted 1 year ago by ijeff to c/chatgpt
50
2
submitted 1 year ago by ijeff to c/chatgpt

ChatGPT

578 readers
1 user here now

Welcome to the ChatGPT community! This is a place for discussions, questions, and interactions with ChatGPT and its capabilities.

General discussions about ChatGPT, its usage, tips, and related topics are welcome. However, for technical support, bug reports, or feature requests, please direct them to the appropriate channels.

!chatgpt@lemdro.id

Rules

  1. Stay on topic: All posts should be related to ChatGPT, its usage, and relevant discussions.
  2. No support questions/bug reports: Please refrain from posting individual support questions or bug reports. This community is focused on general discussions rather than providing technical assistance.
  3. Describe examples: When discussing or sharing examples of ChatGPT interactions, please provide proper context and explanations to facilitate meaningful discussions.
  4. No self-promotion: Avoid excessive self-promotion, spamming, or advertising of external products or services.
  5. No inappropriate content: Do not post or request explicit, offensive, or inappropriate content. Keep the discussions respectful and inclusive.
  6. No personal information: Do not share personal information, including real names, contact details, or any sensitive data.
  7. No harmful instructions: Do not provide or request instructions for harmful activities, illegal actions, or unethical behaviour.
  8. No solicitation: Do not solicit or engage in any form of solicitation, including but not limited to commercial, political, or donation requests.
  9. No unauthorized use: Do not use ChatGPT to attempt unauthorized access, hacking, or any illegal activities.
  10. Follow OpenAI usage policy: Adhere to the OpenAI platform usage policy and terms of service.

Thank you for being a part of the ChatGPT community and adhering to these rules!

founded 1 year ago
MODERATORS