this post was submitted on 10 Feb 2026
150 points (98.7% liked)

Technology

[–] 4grams@awful.systems 18 points 21 hours ago* (last edited 21 hours ago) (3 children)

I am playing with it, sandboxed in an isolated environment, only interacting with a local LLM and only connected to one public service with a burner account. I haven’t even given it any personal info, not even my name.

It’s super fascinating and fun, but holy shit the danger is outrageous. On multiple occasions it’s misunderstood what I asked and fucked around with its own config files and such. Once I asked it to do something and the result was essentially suicide: it ate its own settings. I’ve only been running it for about a week but have had to wipe and rebuild twice already (I probably could have fixed it, but that’s what a sandbox is for). I can’t imagine setting it loose on anything important right now.

But it is undeniably cool, and watching the system communicate with the LLM model has been a huge learning opportunity.

[–] Imgonnatrythis@sh.itjust.works 3 points 14 hours ago (2 children)

Curious, are you having it do anything useful? If it could be trusted, a local AI assistant would benefit from access to many facets of personal data. Once upon a time I had a trusted admin - I gave her my cc info, key fob, calendar, and email access, and it was amazing. She could schedule things for me, have my car taken to the shop, maintain my calendar, etc. Trust of course is the key here, but it would be great to have even a small taste of that kind of help again.

[–] XLE@piefed.social 2 points 13 hours ago (1 children)

There's a story about a guy who asked his LLM to remind him to do something in the morning, and it ended up burning quite a lot of money by making an unnecessary API call every 30 minutes to check whether daylight had broken. So much for the supposedly helpful assistant.
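For scale, here's a quick back-of-envelope sketch of why polling on a timer adds up. All the numbers (token count, price) are hypothetical placeholders, not real API rates:

```python
# Sketch: cost of an agent re-asking a cloud LLM on a fixed interval,
# versus a plain alarm that needs zero model calls. Numbers are made up.

def polling_calls(hours: float, interval_min: float) -> int:
    """How many times the agent queries the model over the wait."""
    return int(hours * 60 // interval_min)

def polling_cost(calls: int, tokens_per_call: int,
                 price_per_1k_tokens: float) -> float:
    """Total spend for that many calls, in dollars."""
    return calls * tokens_per_call / 1000 * price_per_1k_tokens

# Waiting 8 hours for "morning", checking every 30 minutes:
calls = polling_calls(hours=8, interval_min=30)
print(calls)  # 16 model calls, where an alarm clock would make 0
print(polling_cost(calls, tokens_per_call=2000, price_per_1k_tokens=0.01))
```

Sixteen pointless round trips overnight, every night, is exactly the kind of waste a scheduler primitive avoids.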

[–] wonderingwanderer@sopuli.xyz 3 points 11 hours ago

That story was about a guy paying to use an API for a flagship model (like 200 billion parameters).

I think these people are talking about self-hosting a local model (probably like 12-32 billion parameters depending on your hardware), which means no API, no payments, and more personal control over settings and configuration.
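As a rough sizing sketch for "depending on your hardware" (the 20% runtime overhead factor here is my own guess, not a published figure), weight memory scales with parameter count and quantization:

```python
# Back-of-envelope VRAM estimate for self-hosting a local model.
# The 1.2x overhead for KV cache and runtime buffers is an assumption.

def vram_gb(params_billion: float, bits_per_weight: int,
            overhead: float = 1.2) -> float:
    """Approximate memory needed to serve the weights, in gigabytes."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 12B model at 4-bit quantization fits in a ~8 GB consumer GPU:
print(round(vram_gb(12, 4), 1))   # ~7.2
# The same model at full 16-bit precision needs roughly four times that:
print(round(vram_gb(12, 16), 1))  # ~28.8
```

Which is why the 12-32B range is the practical sweet spot for home hardware, while 200B-class models stay behind paid APIs.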

Thousands of open-source models are freely available on Hugging Face, and you can even make your own fine-tuned version of an existing one using any datasets you choose.

Still no point in using an AI agent to do what a basic alarm/reminder could do, but it lets people invent their own ways to integrate models into specific workflows. You can even configure them to play Minecraft, just as an example.

[–] 4grams@awful.systems 1 point 13 hours ago

Nope, nothing useful. Right now I’m playing with making some skills to do rudimentary network testing. I figure it’s always nice to have a remote system to ping, run nslookup, or check a website from another location. I have it hooked to a Telegram bot (burner account, restricted to just me), and I can ask it to ping something, grab a screenshot, run a speedtest, etc., against anything it can reach on the internet.

Only purpose right now is to have something to show off :).

[–] baltakatei@sopuli.xyz 8 points 18 hours ago

Reminds me of a quote from Small Gods (1992) about an eagle that drops vulnerable tortoises to break their shell open:

But of course, what the eagle does not realize is that it is participating in a very crude form of natural selection. One day a tortoise will learn how to fly.

[–] Ulrich@feddit.org 4 points 18 hours ago (1 children)

the LLM model

the Local Language Model model?

[–] 4grams@awful.systems 8 points 16 hours ago

lol, straight from the redundant department of redundancies.

I do words good.