Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related news or articles.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below may be used; this includes AI responses and summaries. To ask if your bot can be added, please contact a mod.
- Check for duplicates before posting; duplicates may be removed.
- Accounts 7 days and younger will have their posts automatically removed.
Approved Bots
I am playing with it, sandboxed in an isolated environment, only interacting with a local LLM and only connected to one public service with a burner account. I haven’t even given it any personal info, not even my name.
It’s super fascinating and fun, but holy shit the danger is outrageous. Multiple occasions, it’s misunderstood what I’ve asked and it will fuck around with its own config files and such. I’ve asked it to do something and the result was essentially suicide as it ate its own settings. I’ve only been running it for like a week but have had to wipe and rebuild twice already (probably could have fixed it, but that’s what a sandbox is for). I can’t imagine setting it loose on anything important right now.
But it is undeniably cool, and watching the system communicate with the LLM model has been a huge learning opportunity.
Curious, are you having it do anything useful? If it could be trusted, a local AI assistant would benefit from access to many facets of personal data. Once upon a time I had a trusted admin - I gave her my cc info, key fob, calendar and email access, and it was amazing. She could schedule things for me, have my car taken to the shop, maintain my calendar, etc. Trust of course is the key here, but it would be great to have even a small taste of that kind of help again.
Nope, nothing useful. Right now I am playing with making some skills to do some rudimentary network testing. I figure it’s always nice to have a remote system to ping or nslookup or check a website from a remote location. I have it hooked to a telegram bot (burner account and restricted to just me) and I can ask it to ping or get me a screenshot or speedtest, etc. from anything it can reach on the internet.
Only purpose right now is to have something to show off :).
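For anyone curious what a restricted "skill" like that might look like, here's a minimal sketch. This is my own illustration, not the actual bot's code: the allowlist, function names, and argument handling are all hypothetical. The key idea is that the assistant can only trigger a fixed set of diagnostics, and the target is passed as a single argv element so it can never smuggle in a shell command.

```python
import subprocess

# Hypothetical allowlist: the only diagnostics the skill may run,
# each with its fixed base arguments.
ALLOWED = {
    "ping": ["ping", "-c", "4"],
    "nslookup": ["nslookup"],
}

def build_command(skill: str, target: str) -> list:
    """Validate a request and return the argv list to execute."""
    if skill not in ALLOWED:
        raise ValueError("skill not allowed: %s" % skill)
    # The target becomes one argv element (no shell), but reject
    # whitespace anyway so it can't carry extra arguments.
    if not target or any(ch.isspace() for ch in target):
        raise ValueError("bad target: %r" % target)
    return ALLOWED[skill] + [target]

def run_skill(skill: str, target: str) -> str:
    """Run an allowed diagnostic and return its stdout."""
    argv = build_command(skill, target)
    result = subprocess.run(argv, capture_output=True, text=True, timeout=30)
    return result.stdout
```

So a telegram message like "ping example.com" would map to `run_skill("ping", "example.com")`, and anything outside the allowlist just raises instead of reaching a shell.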
There's a story about a guy who asked his LLM to remind him to do something in the morning, and it ended up burning quite a lot of money by making an unnecessary API call every 30 minutes to check whether daylight had broken. Such is the supposedly helpful assistant.
Reminds me of a quote from Small Gods (1992) about an eagle that drops vulnerable tortoises to break their shell open:
> But of course, what the eagle does not realize is that it is participating in a very crude form of natural selection. One day a tortoise will learn how to fly.
> the LLM model
the Local Language Model model?
lol, straight from the redundant department of redundancies.
I do words good.
It's nice to see articles that push back against the myth of AI superintelligence. A lot of people who brand themselves as "AI safety experts" preach this ideology as if it is a guaranteed fact. I've never seen any of them talk about real life present issues with AI, though.
(The superintelligence myth is a promotion strategy; OpenAI and Anthropic both lean into it because they know it inflates their valuations.)
In the case of Moldbook or FaceClaw or whatever they're calling it, a lot of the AGI talk is sillier than ever, frankly. Many people who register their bots have become entirely lost in their own sauce, convinced that because their bots are speaking in the first person, that they've somehow come alive:

It's embarrassing, really. People promoting the industry have every incentive to exaggerate their claims on Twitter for the revenue, but some of them are starting to buy into it.
Thankfully, Steinberger is the first to deny that this is AGI.
There's something uniquely dystopian about people rushing out to buy a new computer that costs hundreds of dollars just to run an AI chatbot that could go out of style next week.
Granted, they're doing it so it doesn't mess up their local hardware, but why would you even have that risk on the same Wi-Fi network?