this post was submitted on 24 Apr 2025
402 points (98.6% liked)
Technology
Ooh. As a hobbyist "mostly for funzies" prepper I was mildly interested. But then I clicked around their site a bit and I found preorders for a version of the prepper disk with an LLM chatbot "companion". Assuming the LLM is using RAG on the library of source documents and isn't just relying on its training, that's really neat. I know people will exclaim "hallucination!", but in a situation where you literally have no idea what to do, no way to get help, and the alternative is lying down and dying, I could see this being really handy. Often the hardest part of having a giant archive of information is finding what you need in it and interpreting what it's telling you.
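To make the RAG point concrete: the retrieval half is just ranking a local, offline document library against the question so the model (or a human) only reads the top few hits instead of all 250GB. Here's a minimal sketch using TF-IDF-weighted keyword overlap with only the Python standard library; the documents and function names are hypothetical stand-ins, not anything from Prepper Disk's actual product.

```python
# Minimal sketch of RAG-style retrieval over an offline archive:
# rank documents by TF-IDF-weighted overlap with the query, so only
# the top hits get fed to an LLM (or read by a human). The docs here
# are made-up examples, not real archive content.
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def rank_documents(query, docs):
    """Return docs sorted by TF-IDF-weighted overlap with the query."""
    n = len(docs)
    doc_tokens = [Counter(tokenize(d)) for d in docs]
    # Document frequency: how many docs contain each term.
    df = Counter()
    for tokens in doc_tokens:
        df.update(tokens.keys())
    scored = []
    for i, tokens in enumerate(doc_tokens):
        score = 0.0
        for term in tokenize(query):
            if term in tokens:
                # Rare terms (low document frequency) weigh more.
                score += tokens[term] * math.log((n + 1) / (df[term] + 1))
        scored.append((score, i))
    return [docs[i] for score, i in sorted(scored, reverse=True) if score > 0]

docs = [
    "How to purify water by boiling or with chlorine tablets.",
    "Identifying edible plants and avoiding poisonous mushrooms.",
    "Basic first aid: treating cuts, burns, and fractures.",
]
print(rank_documents("are these mushrooms poisonous", docs)[0])
# prints the mushroom-identification document
```

A real offline assistant would use embeddings instead of keyword overlap, but the shape is the same: retrieve first, then have the model answer from the retrieved text rather than from its training data alone.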
I'd rather use an "open" version of this, though. Prepper Disk's website sounds like they're trying to keep their data at least partially locked down, and while I can understand that they want to recoup the cost of the effort they put into setting this up, it kind of goes against the grain of prepping to rely on something you can't repair or modify yourself.
AI opponents will spend their last hours manually slogging through 250GB of content rather than let a hallucination potentially misguide them.
I'm reminded of that AI-written book that misidentified poisonous mushrooms.
I'm reminded of the people who misidentify poisonous mushrooms each year and die from it.
I'm reminded of Huga Shrooma, the first man to misidentify poisonous mushrooms.
The man saved generations to come.
I'm reminded that AI is helping me restore an old motorbike I got for practically free, and the only fight we had was over looking for the oil filter on the wrong side of the bike.
Yeah, you have to take it for what it's worth, and it's worth a lot. Most of what it says is pretty close, and when close is good enough, go for it. When AI is telling you how to secure your brake hydraulic connectors and it doesn't seem quite right - time for a 2nd opinion.
For sure, AIs/LLMs can be dangerous if you don't also apply critical thinking, but that's been true of the internet forever, and even before. The Anarchist Cookbook has recipes that will, at best, waste a bunch of soap and gasoline or have you scraping banana peels with a razor blade, or at worst, have you making chlorine gas in your basement. 4chan had a popular recipe for "peanut butter cookies" that would result in an oven fire, and instructions to drill a hole in your iPhone to use the headphone jack.
It's much more important to protect and promote critical thinking skills than it is to try to shield everybody from misinformation and hallucinations.
You'll never win against AI haters. Nothing is perfectly accurate and even if LLMs are less accurate than average it does not diminish the use case potential.
If someone eats a deadly mushroom based on one research source, then really that's just natural selection at play lol