this post was submitted on 07 Jul 2023
2804 points (97.4% liked)

Mildly Infuriating

35457 readers

Home to all things "Mildly Infuriating". Not infuriating, not enraging. Mildly Infuriating. All posts should reflect that.

I want my day mildly ruined, not completely ruined. Please remember to refrain from reposting old content. If you repost something from Reddit, it is good practice to include a link and credit the OP. I'm not about stealing content!

It's just good to get something on this website for casual viewing while fresh original content is added over time.


Rules:

1. Be Respectful


Refrain from using harmful language pertaining to a protected characteristic: e.g. race, gender, sexuality, disability or religion.

Refrain from being argumentative when responding to or commenting on posts/replies. Personal attacks are not welcome here.

...


2. No Illegal Content


Do not post content that violates the law. Any post/comment found to be in breach of the law will be removed and reported to the authorities if required.

That means:

-No promoting violence/threats against any individuals

-No CSA content or Revenge Porn

-No sharing private/personal information (Doxxing)

...


3. No Spam


Posting the same post, no matter the intent, is against the rules.

-If you have posted content, please refrain from re-posting said content within this community.

-Do not spam posts with intent to harass, annoy, bully, advertise, scam or harm this community.

-No posting Scams/Advertisements/Phishing Links/IP Grabbers

-No bots; bots will be banned from the community.

...


4. No Porn/Explicit Content


-Do not post explicit content. Lemmy.World is not the instance for NSFW content.

-Do not post Gore or Shock Content.

...


5. No Inciting Harassment, Brigading, Doxxing or Witch Hunts


-Do not Brigade other Communities

-No calls to action against other communities/users within Lemmy or outside of Lemmy.

-No Witch Hunts against users/communities.

-No content that harasses members within or outside of the community.

...


6. NSFW should be behind NSFW tags.


-Content that is NSFW should be behind NSFW tags.

-Content that might be distressing should be kept behind NSFW tags.

...


7. Content should match the theme of this community.


-Content should be mildly infuriating.

-At this time we permit content that is infuriating until an infuriating community is made available.

...


8. Reposting of Reddit content is permitted; try to credit the OC.


-Please consider crediting the OC when reposting content. The name of the user or a link to the original post is sufficient.

...

...


Also check out:

Partnered Communities:

1. Lemmy Review

2. Lemmy Be Wholesome

3. Lemmy Shitpost

4. No Stupid Questions

5. You Should Know

6. Credible Defense


Reach out to LillianVS for inclusion on the sidebar.

All communities included on the sidebar must comply with the instance rules.

founded 1 year ago
you are viewing a single comment's thread
[–] Wololo@lemmy.world 46 points 1 year ago (3 children)

I literally broke down into tears doing this one night. Was running something that would take hours to complete and noticed an issue at maybe 11pm. Tried to troubleshoot and could not for the life of me figure it out. Thought to myself, surely ChatGPT can help me figure this out quickly. Fast forward to 3am, on a work night: "no, as stated several times prior, this will not resolve the issue, it causes it to X, Y, Z, when it should be A, B, C. Do you not understand the issue?"

"I apologize for any misunderstanding. You are correct, this will not cause the program to A, B, C. You should... Inserts the same response it's been giving me for several hours"

It was at that moment that I realized these large language models might not currently be as advanced as people make them out to be.

[–] tweeks@feddit.nl 5 points 1 year ago (2 children)

Might I ask if you were using ChatGPT 3 or 4? I had this as well; I got sent in circles for hours with 3. Then I used 4.

Only two bloody messages back and forth and I got my solution.

[–] Wololo@lemmy.world 2 points 1 year ago (2 children)

If I remember correctly it should have been GPT-4; of course, there is always a chance it was 3.5.

Since then I've learned much better ways to kind of manipulate it into answering my questions more precisely, and that seems to do the trick.

[–] Shush@reddthat.com 2 points 1 year ago (1 children)

IIRC, 3.5 is the free version and 4 requires a subscription.

[–] tweeks@feddit.nl 1 points 1 year ago

Yes, and 4 has access to several custom plugins, live web browsing (temporarily disabled, though) and a Python Interpreter (soft launch, as I can use it but haven't seen a release post yet). All in beta, though.

[–] tweeks@feddit.nl 1 points 1 year ago

One might wonder who is training whom.

[–] musicworld@lemmy.world 1 points 1 year ago (2 children)

Is 4 trained on newer data than Sept 2021?

[–] tweeks@feddit.nl 1 points 1 year ago

Just a little bit: the main training set still only goes up to September 2021, but some specific additional data (and other modifications) has been used to further improve the model.

[–] twitterfluechtling@lemmy.pathoris.de 3 points 1 year ago (1 children)

They are trained to give answers which sound convincing at first glance; for simple questions in most fields, that strongly correlates with the correct answer. So asking something simple about a topic I have no clue about has a high likelihood of yielding the answer I'm looking for.

The problem is, if I have no clue, the only way to know whether I've exceeded the "really simple" realm is by trying the answer and failing, because ChatGPT has no concept of verifying its own answers or identifying its own limitations, or even of "learning" from its mistakes, as such.

I do know some very similar humans, though: very assertive, selling guesses and opinions as facts, overestimating themselves, never backing down. ChatGPT might replace tech CEOs or politicians 😁

[–] Wololo@lemmy.world 1 points 1 year ago (2 children)

It's entirely possible! I remember listening to a podcast on AI where they mentioned someone once asked the question "which mammal lays the largest eggs", to which the AI responded with elephants, and proceeded to argue with the user that it was right and he was wrong.

It has become a lot easier as I've learned how to kind of coach it in the direction I want, pointing out obvious errors and showing it what I'm really looking to do.

AI is a great tool, when it works. As the technology improves, I'm sure it will rapidly get better.

[–] Trainguyrom@reddthat.com 2 points 1 year ago

Another fun example is "how many giraffes have landed on the moon?", because it's a question that lends itself to a creative answer, but obviously the answer is zero: no giraffes have been flown into space.

[–] blargh1111@lemmy.one 1 points 1 year ago

So is the answer a platypus? I think that's the only mammal that lays eggs, but now I'm wondering about echidnas.

[–] kicksystem@lemmy.world 2 points 1 year ago (1 children)

Oh yeah. I was learning some Haskell with the "help" of GPT-4. It sent me down a super frustrating rabbit hole, where in the end I concluded that I knew Haskell better than GPT-4 and it had been wrong from the very start 🤷‍♂️

[–] Wololo@lemmy.world 2 points 1 year ago

When you end up resorting to saying things like "wow, this is wonderful, but... it breaks my code into a million tiny pieces" or "for the love of God, do you have any idea what you're actually doing?", it's a sign that perhaps Stack Overflow is still your best (and only) ally.