It wasn't wrong. All mushrooms are edible at least once.
The single most important thing (IMHO), but one that isn't really widely talked about, is that the error distribution of LLMs in terms of severity is uniform: in other words, LLMs are equally likely to make a minor mistake of little consequence as they are to make a deadly mistake.
This is not so with humans: even the most ill-informed person avoids certain mistakes because they're obviously wrong (say, don't use glue as an ingredient for pizza, or don't tell people voicing suicidal thoughts to "kill yourself"). Beyond that, people pay a lot more attention to avoiding mistakes in important things than in smaller things, so the distribution of human mistakes in terms of consequence is not uniform.
People simply focus their attention and learning on the "really important stuff" ("don't press the red button"), whilst LLMs just spew whichever next word has the highest probability, with zero consideration of error, since they don't have the capability to consider anything.
This by itself means that LLMs are only suitable for things where even the worst kind of mistake is not a problem, for example when the LLM's output is reviewed by a domain specialist before being used, or when it is simply mindless entertainment.
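To make that concrete, here's a toy sketch of greedy next-word selection (the candidate words and probabilities are invented; real decoders add sampling and more machinery). Note what is absent: nothing weighs how costly a wrong answer would be.

```python
# Toy illustration: next-word selection has no notion of "consequence".
# The candidate words and their probabilities are made up.
next_word_probs = {
    "edible": 0.46,        # harmless if right, deadly if wrong
    "poisonous": 0.41,
    "unidentifiable": 0.13,
}

def greedy_pick(probs: dict[str, float]) -> str:
    # No term for the severity of being wrong: a 0.46-vs-0.41 split
    # decides a life-or-death answer exactly the same way it would
    # decide a trivial word choice.
    return max(probs, key=probs.get)

print(greedy_pick(next_word_probs))  # prints "edible"
```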
There is mush room for improvement.
I tell people who work under me to scrutinize it like it's a Google search result chosen for them using the old I'm Feeling Lucky button.
Just yesterday I was having trouble enrolling a new agent in my ELK stack. It wanted me to obliterate a config and replace it with something else. That literally would have broken everything.
It's like copying and pasting Stack Overflow into prod.
AI is useful. It is not trustworthy.
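On the "scrutinize it" point: one cheap habit that would have flagged that config suggestion is diffing the AI-proposed file against the one currently running and reading the diff like a code review. A minimal sketch, with hypothetical file names:

```python
import difflib
from pathlib import Path

# Hypothetical file names: the config in use vs. the AI's replacement.
current = Path("agent.yml").read_text().splitlines(keepends=True)
proposed = Path("agent_ai_suggested.yml").read_text().splitlines(keepends=True)

# Review every changed line before applying anything. A proposal that
# removes the whole file and re-adds something new wholesale is
# exactly the "obliterate the config" red flag described above.
for line in difflib.unified_diff(current, proposed,
                                 fromfile="agent.yml",
                                 tofile="agent_ai_suggested.yml"):
    print(line, end="")
```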
Sounds more actively harmful than useful to me.
When it works it can save time automating annoying tasks.
The problem is “when it works”. It's like having to do a code review mid-task every time the dumb machine does something.
I know nothing about stacking elk, though I'm sure it's easier if you sedate them first. But yeah, common sense and a healthy dose of skepticism seems like the way to go!

Skynet takes this as an insult. Next you'll imply that it's an Oracle product.
I once asked AI if there were any documented cases of women murdering their husbands. It said NO. I challenged it multiple times. It stood firm, telling me that in domestic violence cases it is 100% of the time men murdering their wives. I asked "What about Katherine Knight?" and it said, I shit you not, "You're right, a woman was found guilty of killing her husband in Australia in 2001 by stabbing him, then skinning him and attempting to feed parts of his body to their children."...
So I asked again for it to list the cases where women had murdered their husbands in DV cases. And it said... wait for it... "I can't find any cases of women murdering their husbands in domestic violence cases..." and then told me of all the horrible shit that happens to women at the hands of assholes.
I've had this happen loads of times, over various subjects. Usually followed by "Good catch!" or "You're right!" or "I made an error". This was the worst one though, by a lot.
It's such weird behavior. I was troubleshooting something yesterday and asked an AI about it, and it gave me a solution that it claimed it had been using for the same issue for 15 years. I corrected it: "You're not real and certainly were not around 15 years ago." It did the whole "you're right!" thing, but then immediately went back to speaking the same way.
That is so fucked. It is shit like this that makes me not trust AI at all. One thing is how it gets things wrong all the time and never learns from mistakes or corrections. Another is that I simply do not trust the faceless people behind these AIs to be altruistic and not to have an agenda with their little chat bots. There is a lot of potential in AI, but it is also a tool that can and will be used to misinform and disinform people, and that is just too dangerous on top of all the mistakes AI still makes constantly.
What a brilliant idea - adding a little "fantasy forest flavor" to your culinary creations! 🍄
Would you like me to "whip up" a few common varieties to avoid, or an organized list of mushroom recipes?
Just let me know. I'm here to help you make the most of this magical mushroom moment! 😆
Amanitas won't kill you. You'd be terribly sick if you didn't prepare it properly, though.
Edit: amended below because, of course, everything said on the internet has to be explained in thorough detail.
Careful there, AI might be trained on your comment and end up telling someone "Don't worry, Amanitas won't kill you" because they asked "Will I die if I eat this?" instead of "Is this safe to eat?"
(I'm joking. At least, I hope I am.)
Amanitas WILL kill you, 100%, No question.
There, evened it out XD
Nice, now it's a coin flip which answer it will imitate! ;-)
Yeah, thinking that these things have actual knowledge is wrong. I'm pretty sure that even if an LLM had only ever ingested (heh) data saying these were deadly, once it has ingested (still funny) other information about controversially deadly things, it might apply that pattern to unrelated data, especially if you ask whether it's controversial.
Don't rely on it for anything
FTFY
AI for plant ID can help if you use its output as a starting point to then compare against reference images and details. Blindly following it would be insane.
I don't think it can beat randomly selecting plants. All the ones I've seen have less than a 30% chance of getting it correct or close.
I have had success with it before, working entirely from text descriptions of the plant and the environment it was growing in. I did have to prompt it to give multiple suggestions, then check those against reference images and add extra information based on what I saw. Within a few prompts I had a shortlist that included the correct answer, which reference images were used to confirm.
If you already have an idea without AI, sure, go with that first. If you have absolutely no idea and just want to narrow down some plants to look up, then it can be helpful. I hadn't even heard of this plant before, so guessing would have been impossible.
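That workflow is basically a narrow-then-verify loop. A sketch of its shape, where `ask_llm` and the image check are hypothetical stand-ins (no real API implied, and the verification step is a human with reference photos, not code):

```python
# Hypothetical stand-in for whatever chat model is being used.
def ask_llm(prompt: str) -> list[str]:
    return ["species A", "species B", "species C"]  # canned for the sketch

# Hypothetical stand-in for manually comparing reference images.
def matches_reference_images(name: str) -> bool:
    return name == "species B"

notes = "low rosette, toothed leaves, growing in damp shade"
candidates = ask_llm(f"Suggest several possible species for: {notes}")

# Keep only candidates that survive the manual reference-image check;
# if none survive, add newly observed details to the notes and re-ask.
shortlist = [n for n in candidates if matches_reference_images(n)]
print(shortlist)  # -> ['species B']
```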
The main problem this presents is that it's incapable of expressing doubt or saying "I don't know for sure."
I once saw a list of instructions being passed around that were intended to be tacked on to any prompt: e.g. "don't speculate, don't estimate, don't fill in knowledge gaps"
But you'd think it would make more sense to bake that into the weights rather than putting it in your prompt and hoping it works. As it stands, it sometimes feels like making a wish on the monkey's paw and trying to close a bunch of unfortunate cursed loopholes.
Adding it into the weights would be quite hard, as you would need many examples of text where someone is not sure about something. Humans do not often publish work that has a lot of that in it, so the training data has few examples of it.
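For what it's worth, baking that in would mean fine-tuning on examples whose correct completion is itself a hedge, and those pretty much have to be written on purpose. A sketch of what such entirely invented records might look like:

```python
# Entirely made-up fine-tuning records: the target completion models
# uncertainty, which ordinary published text rarely does.
training_examples = [
    {
        "prompt": "Is this wild mushroom safe to eat?",
        "completion": "I can't determine that reliably; consult a "
                      "local expert before eating any wild mushroom.",
    },
    {
        "prompt": "What year was this obscure village founded?",
        "completion": "I don't know, and I have no reliable source for it.",
    },
]

for ex in training_examples:
    print(ex["prompt"], "->", ex["completion"])
```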
People using AI tools for things they're not good for, and then calling the tool bad in general as opposed to bad for said task, do a disservice to the real issues currently surrounding the topic, such as environmental impact, bias, feedback loops, the collapse of Internet monetization, and more.
In my country we had a rise in people going to the ER with mushroom poisoning due to using AI to verify whether or not mushrooms were edible. Dunno if this meme is just a random joke scenario that coincidentally is a true story, or if I am just out of the loop with worldwide news.
In any case, I felt it was absolutely insane that people would use AI for something this serious while my bf shrugged and said something about natural selection.

Don't rely on it for anything, period. (See Caelan Conrad's recent videos on how ChatGPT causes deaths, tw: talk about suicides)
https://www.youtube.com/watch?v=hNBoULJkxoU
https://www.youtube.com/watch?v=JXRmGxudOC0
I hate AI, but, I mean... that one is edible if you properly cook it. So the AI is technically correct here. It just didn't give you all the info you truly needed.
AI is especially terrible with ambiguity and conditional data.
Technically it's completely edible, insofar as it'll only give you nausea / stomach cramps and a wicked high. Whereas white amanitas are lethal.
The "properly cooked" here refers to well dried and sort of cured material, which has more uh, I want to say "muscimol in relation to ibotenic acid", iirc. Your liver will also convert the ibotenic acid into muscimol, but that's where the nausea would come from, as your liver works hard and there's metabolic byproduct or some such.
But when you properly dry the shrooms, a lot of that ibotenic acid gets tuned into muscimol, which doesn't usually cause nausea that much.


Apparently, that's a fly agaric, which some sources on the internet say can be used to get you high. I still wouldn't do it unless an actual mycologist told me that it was okay.
Not a mycologist but...
Fry in a bit of butter. The taste is really good; I guess the muscimol is also a flavor enhancer. Cooking flashes off the other toxins. If eaten raw, it will be a night on the toilet.
It can make you nauseous even when cooked, depending on your biology in general or on a given day. The high is similar to alcohol. But it's also a sleep aid similar to Ambien.
Red cap with white specks, otherwise white. Veil annulus, gilled, white spore print. Fruits in late summer through fall.
You can, but people rarely do more than once, which should be an indication of how much fun it is.
I've done it a few times. Colours pop, some mood change, but overall it's weak and not worth it. I didn't get negative effects; it's just a crap mushroom experience if you can get ahold of psilocybin mushrooms instead.
Seconded.
Should prolly try the shaman's-piss version of amanitas. You know, where a proper geezer who's been eating these for decades dries them properly, then eats a whole bunch, then pisses in a dish, and then you drink the piss.
That would probably get closer to the roots of what amanitas are about. I had a similarly very mild, but in no way negative, experience to yours.
I lay on a sofa and it felt slightly as if I were on a magic carpet through space. But like, that needed imagination; I wasn't actually experiencing that, it's just how I'd describe the sort of mild feeling it was.

You have to slice and fry it on low heat (so that the psychedelics survive)... Of course you should check that the gills don't go all the way to the stem, and make sure the spore print (leave the cap on some black paper overnight) comes out white.
Also, have a few slices, then wait an hour; have a few more slices, then wait an hour.
I mean, he asked if it can be eaten, not what the effects of eating it are. 😅
