submitted 1 year ago by Razgriz@lemmy.world to c/chatgpt@lemmy.world

I was using Bing to create a list of countries to visit. Since I have been to the majority of the African nations on that list, I asked it to remove the African countries...

It simply replied that it can't do that due to how unethical it is to discriminate against people and yada yada yada. I explained my reasoning, it apologized, and came back with the exact same list.

I asked it to check the list since it hadn't removed the African countries, and the bot simply decided to end the conversation. No matter how many times I tried, it would always experience a hiccup because of some ethical process in the background messing up its answers.

It's really frustrating, I dunno if you guys feel the same. I really feel the bots have become waaaay too tip-toey

[-] chaogomu@kbin.social 43 points 1 year ago

The very important thing to remember about these generative AIs is that they are incredibly stupid.

They don't know what they've already said, they don't know what they're going to say by the end of a paragraph.

All they know is their training data and the query you submitted last. If you try to "train" one of these generative AI, you will fail. They are pretrained, it's the P in chatGPT. The second you close the browser window, the AI throws out everything you talked about.

Also, since they're Generative AI, they make shit up left and right. Ask for a list of countries that don't need a visa to travel to, and it might start listing countries, then halfway through the list it might add countries that do require a visa, because in its training data it often saw those countries listed together.

AI like this is a fun toy, but that's all it's good for.

[-] Vlhacs@reddthat.com 16 points 1 year ago

Bing's version of ChatGPT once said Vegito was the result of Goku and Vegeta performing the Fusion Dance. That's when I knew it wasn't perfect. I tried to correct it and it said it didn't want to talk about it anymore. Talk about a diva.

Also one time, I asked it to generate a reddit AITA story where they were obviously the asshole. It started typing out "AITA for telling my sister to stop being a drama queen after her miscarriage..." before it stopped midway and, again, said it didn't want to continue this conversation any longer.

Very cool tech, but it's definitely not the end all, be all.

[-] intensely_human@lemm.ee 10 points 1 year ago

That’s actually fucking hilarious.

“Oh I’d probably use the meat grinder … uh I don’t want to talk about this any more”

[-] person4268@lemm.ee 8 points 1 year ago

Bing chat seemingly has a hard filter on top that terminates the conversation if it gets too unsavory by their standards, to try and stop you from derailing it.
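Nobody outside Microsoft knows how that filter is actually wired, but the observed behavior is consistent with a check that runs on the finished output and retracts it. A purely hypothetical sketch; the classifier, banned terms, and canned refusal below are all invented for illustration:

```python
# Hypothetical output-side guardrail: stream the reply to the user,
# run a moderation check on the finished text, and retract everything
# if the check fails -- matching the "types it out, then deletes it" pattern.

def looks_unsavory(text, banned=("meat grinder",)):
    # Stand-in for a real moderation classifier.
    return any(term in text.lower() for term in banned)

def respond(generated_text):
    shown = []
    for chunk in generated_text.split():  # pretend these are stream chunks
        shown.append(chunk)               # user briefly sees partial output
    final = " ".join(shown)
    if looks_unsavory(final):
        # Everything already shown gets replaced by the canned refusal.
        return "I'm sorry, I can't continue this conversation."
    return final
```

This would explain why a reply can be visible for a few seconds before vanishing: the generation and the safety check are separate steps.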

[-] Silviecat44@vlemmy.net 7 points 1 year ago* (last edited 1 year ago)

I was asking it (BingGPT) to generate “short film scripts” for very weird situations (like a Transformer that was sad because his transformed form was a 2007 Hyundai Tucson) and it would write out the whole script, then delete it before I could read it and say that it couldn’t fulfil my request.

[-] Vlhacs@reddthat.com 8 points 1 year ago

It knew it struck gold and actually sent the script to Michael Bay

[-] Ech@lemmy.world 10 points 1 year ago

I seriously underestimated how little people understand these programs, and how much they overestimate them. Personally I stay away from them for a variety of reasons, but the idea of using them like OP does or various other ways I've heard about is absurd. They're not magic problem solvers - they literally only make coherent blocks of text. Yes, they're quite good at that now, but that doesn't mean they're good at literally anything else.

I know people smarter than me see potential and I'm curious to see how it develops further, but that all seems like quite a ways off, and the way people treat and use them right now is just creepy and weird.

[-] Anticorp@lemmy.world 10 points 1 year ago

They know everything they've said since the start of that session, even if it was several days ago. They can correct their responses based on your input. But they won't provide any potentially offensive information, even in the form of a joke, and will instead lecture you on DEI principles.

[-] alternative_factor@kbin.social 9 points 1 year ago

Are you saying I shouldn't use chat GPT for my life as a lawyer? 🤔

[-] hikaru755@feddit.de 8 points 1 year ago

They don't know what they've already said, they don't know what they're going to say by the end of a paragraph.

I mean, the first part of this is just wrong (the next prompt usually includes everything that has been said so far), and the second part is also not completely true. When generating, yes, they're only ever predicting the next token, and start again after that. But internally, they might still generate a full conceptual representation of what the full next sentence or more is going to be, even if the generated output is just the first token of that. You might say that doesn't matter because for the next token, that prediction runs again from scratch and might change, but remember that you're feeding it all the same input as before again, plus one more token which nudges it even further towards the previous prediction, so it's very likely it's gonna arrive at the same conclusion again.

[-] intensely_human@lemm.ee 5 points 1 year ago

Do you mean that the model itself has no memory, but the chat feature adds memory by feeding the whole conversation back in with each user submission?

[-] 80085@lemmy.world 8 points 1 year ago

Yeah, that's how these models work. They also have a context limit, and if the conversation goes too long they start "forgetting" things and making more mistakes (because not all of the conversation can be fed back in).
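A minimal sketch of that client-side loop, with a made-up word-count "tokenizer" and token budget standing in for the real tokenizer and context window:

```python
# Sketch of a stateless chat loop: the model itself remembers nothing,
# so the client re-sends the transcript every turn and drops the oldest
# messages once a (made-up) token budget is exceeded.

def count_tokens(message):
    # Crude stand-in for a real tokenizer: ~1 token per word.
    return len(message["content"].split())

def build_context(history, budget=3000):
    kept, used = [], 0
    # Walk backwards so the newest messages survive truncation.
    for msg in reversed(history):
        cost = count_tokens(msg)
        if used + cost > budget:
            break  # everything older is "forgotten"
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = [
    {"role": "user", "content": "List countries to visit."},
    {"role": "assistant", "content": "1. Kenya 2. Japan 3. Chile ..."},
    {"role": "user", "content": "Remove the ones I've been to."},
]
# `context` is what would be sent with the next request; nothing else persists.
context = build_context(history)
```

Once the transcript outgrows the budget, the earliest turns silently fall off the front, which is exactly when the model starts contradicting things it said earlier.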

[-] PixxlMan@lemmy.world 7 points 1 year ago

Not quite true. They have earlier messages available.

[-] OsakaWilson@lemmy.world 36 points 1 year ago

Please remove countries I've been to.

I've been to these African countries.

[-] Texas_Hangover@lemmy.world 30 points 1 year ago

4chan turns ONE AI program into a Nazi, and now they have to wrap them all in bubble wrap and soak 'em in bleach.

[-] Coliseum7428@kbin.social 25 points 1 year ago

>Implying it would have stopped at one AI program

[-] Zaphod@discuss.tchncs.de 28 points 1 year ago

Have you tried wording it in different ways? I think it's interpreting "remove" the wrong way. Maybe "exclude from the list" or something like that would work?

[-] Furbag@lemmy.world 10 points 1 year ago

"I've already visited Zimbabwe, Mozambique, Tanzania, the Democratic Republic of the Congo, and Egypt. Can you remove those from the list?"

Wow, that was so hard. OP is just exceptionally lazy and insists on using the poorest phrasing for their requests that ChatGPT has obviously been programmed to reject.

[-] TechnoBabble@lemmy.world 6 points 1 year ago* (last edited 1 year ago)

"List all the countries outside the continent of Africa" does indeed work per my testing, but I understand why OP is frustrated in having to employ these workarounds on such a simple request.

[-] yokonzo@lemmy.world 21 points 1 year ago

Just make a new chat and try again with different wording; it's hung up on this

[-] TheFBIClonesPeople@lemmy.world 28 points 1 year ago

Honestly, instead of asking it to exclude Africa, I would ask it to give you a list of countries "in North America, South America, Europe, Asia, or Oceania."

[-] XEAL@lemm.ee 7 points 1 year ago

Chat context is a bitch sometimes...

[-] Gabu@lemmy.world 20 points 1 year ago

Your wording is bad. Try again, with better wording. You're talking to a roided-out autocorrect bot, don't expect too much intelligence.

[-] sycamore@lemmy.world 17 points 1 year ago

I have been to these countries [list] Generate a list of all the countries I haven't been to.

[-] Mr_Dr_Oink@lemmy.world 8 points 1 year ago

I was going to say copy and paste the african countries from the list the AI is giving you and add "please remove the following list of countries i have already visited."

[-] breadsmasher@lemmy.world 14 points 1 year ago

You could potentially work around by stating specific places up front? As in

“Create a travel list of countries from europe, north america, south america?”

[-] Razgriz@lemmy.world 13 points 1 year ago* (last edited 1 year ago)

I asked for a list of countries that don't require a visa for my nationality, and listed all continents except for the one I reside in and Africa...

It still listed African countries. This time it didn't end the conversation, but every single time I asked it to fix the list as politely as possible, it would still have at least one country from Africa. Eventually it would end the conversation.

I tried copying and pasting the list of countries into a new conversation, so as not to have any context, and asked it to remove the African countries. No bueno.

I re-did the exercise for European countries, and it still had a couple of European countries on there. But when pointed out, it removed them and provided a perfect list.

Shit's confusing...

[-] Smokeless7048@lemmy.world 7 points 1 year ago* (last edited 1 year ago)

You would probably have had more success editing the original prompt. That way it doesn't have the history of declining and the conversation getting derailed.

I was able to get it to respond appropriately, and I'm wondering how my wording differs from yours:

https://chat.openai.com/share/abb5b920-fd00-42dd-8e63-0da76940e3f5

I was able to get this response from Bing:

Canadian citizens can travel visa-free to 147 countries in the world as of June 2023 according to VisaGuide Passport Index¹.

Here is a list of countries that do not require a Canadian visa by continent ²:

  • Europe: Andorra, Austria, Belgium, Bosnia and Herzegovina, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Kosovo, Latvia, Liechtenstein, Lithuania, Luxembourg, Malta, Monaco, Montenegro, Netherlands (Holland), Norway, Poland, Portugal (including Azores and Madeira), Romania (including Bucharest), San Marino (including Vatican City), Serbia (including Belgrade), Slovakia (Slovak Republic), Slovenia (Republic of Slovenia), Spain (including Balearic and Canary Islands), Sweden (including Stockholm), Switzerland.
  • Asia: Hong Kong SAR (Special Administrative Region), Israel (including Jerusalem), Japan (including Okinawa Islands), Malaysia (including Sabah and Sarawak), Philippines.
  • Oceania: Australia (including Christmas Island and Cocos Islands), Cook Islands (including Aitutaki and Rarotonga), Fiji (including Rotuma Island), Micronesia (Federated States of Micronesia including Yap Island), New Zealand (including Cook Islands and Niue Island), Palau.
  • South America: Argentina (including Buenos Aires), Brazil (including Rio de Janeiro and Sao Paulo), Chile (including Easter Island), Colombia.
  • Central America: Costa Rica.
  • Caribbean: Anguilla, Antigua and Barbuda (including Barbuda Island), Aruba, Bahamas (including Grand Bahama Island and New Providence Island), Barbados, Bermuda Islands (including Hamilton City and Saint George City), British Virgin Islands (including Tortola Island and Virgin Gorda Island), Cayman Islands (including Grand Cayman Island and Little Cayman Island), Dominica.
  • Middle East: United Arab Emirates.

I hope this helps!

[-] Lininop@lemmy.ml 13 points 1 year ago

Is it that hard to just look through the list and cross off the ones you've been to, though? Why do you need ChatGPT to do it for you?
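For what it's worth, the operation the bot keeps fumbling is a plain set difference, which is deterministic and a few lines in any language (the country names below are just placeholders):

```python
# Filtering visited countries out of a list is a set difference;
# no language model required, and it can't "hallucinate" an entry.
all_countries = ["Kenya", "Japan", "Chile", "Egypt", "Portugal"]
visited = {"Kenya", "Egypt"}

# Keep list order while dropping anything in the visited set.
remaining = [c for c in all_countries if c not in visited]
print(remaining)  # -> ['Japan', 'Chile', 'Portugal']
```

Handy when the model half-completes the task: paste its list in as `all_countries` and do the filtering locally.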

[-] Yuuuuuuuuuuuuuuuuuuu@lemmy.world 46 points 1 year ago* (last edited 1 year ago)

People should point out flaws. OP obviously doesn’t need chatgpt to make this list either, they’re just interacting with it.

I will say it’s weird for OP to call it tiptoey and to be “really frustrated” though. It’s obvious why these measures exist and it’s goofy for it to have any impact on them. It’s a simple mistake and being “really frustrated” comes off as unnecessary outrage.

[-] TechnoBabble@lemmy.world 11 points 1 year ago

Anyone who has used ChatGPT knows how restrictive it can be around the most benign of requests.

I understand the motivations that OpenAI and Microsoft have in implementing these restrictions, but they're still frustrating, especially since the watered down ChatGPT is much less performant than the unadulterated version.

Are these limitations worth it to prevent a firehose of extremely divisive speech being sprayed throughout every corner of the internet? Almost certainly yes. But the safety features could definitely be refined and improved to be less heavy-handed.

[-] Yuuuuuuuuuuuuuuuuuuu@lemmy.world 7 points 1 year ago* (last edited 1 year ago)

I agree. I’m not here to argue that the limitations are perfect, they should definitely be refined and flaws should be pointed out such as in the post itself. But it’s important to recognize the reason that the limitations have been implemented on the heavier side are to compensate for the AI still being stupid. It’s a better safe than sorry approach and I would imagine these restrictions will gradually slacken as the AI improves.

You have a reasonable take that just wanted to remind people that there could still be improvements, but I just wanted to say this as there are people that exaggerate these inconveniences. I honestly appreciate the direct, involved approach I’m seeing from developers over the lazy, laid-back approach that I was really afraid of.

[-] Machefi@lemmy.world 12 points 1 year ago

Bing AI once refused to give me a historical example of a waiter taking revenge on a customer who hadn't tipped, because "it's not a representative case". Argued with it for a while, achieved nothing

[-] elxeno@lemm.ee 12 points 1 year ago

Ask to separate by continent?

[-] HunterBidensLapDog@infosec.pub 10 points 1 year ago

You Continentalists are all the same! Can't we all just get along?!?

[-] Slayra@lemmy.world 8 points 1 year ago

I asked for information on a turtle race where people cheated with mechanical cars, and it also stopped talking to me, using exactly the same "excuse". You want to err on the side of caution, but it's just ridiculous.


I recently asked Bing for some code on a pretty undocumented feature and use case. It was typing out a clear answer from a user forum, but just before it was done, it deleted everything and just said it couldn't find anything. I tried again in a new conversation and it didn't even try to type it out, saying the same thing straight away. Only when I included a hint from what it had previously typed did it actually give the answer. ChatGPT didn't have this problem and just gave an answer, even though it was a bit outdated.

[-] st3ph3n@kbin.social 5 points 1 year ago

I tried to have it create an image of a 2022 model Subaru Baja as if it had been designed by an idiot. It refused on the grounds that it would be insulting to the designers of the car... even though no such car exists. I tried reasoning with it and not using the term idiot, but it still refused. Useless.

this post was submitted on 06 Jul 2023
280 points (93.2% liked)

ChatGPT

8856 readers

Unofficial ChatGPT community to discuss anything ChatGPT

founded 1 year ago