Nice to know we finally developed a way for computers to communicate by shrieking at each other. Give it a few years and if they can get the latency down we may even be able to play Doom over this!
Ultrasonic wireless communication has been a thing for years. The scary part is you can't even hear when it's happening.
Why is my dog going nuts? Another victim of AI slop.
Right, electronic devices talk to each other all the time
Reminds me of an insurance office I worked in. Some of the staff were brain dead.
- Print something
- Scribble some notes on the printout
- Fax that annotated paper or scan and email it to someone
- Whine about how you're out of printer toner.
So an AI developer reinvented phreaking?
Wow! Finally somebody invented an efficient way for two computers to talk to each other
Uhm, REST/GraphQL APIs exist for this very purpose and are considerably faster.
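For comparison, the whole multi-minute audio negotiation in the demo fits in a single round trip. A hypothetical sketch (the endpoint and fields are invented for illustration, not a real booking API):

```python
import requests

# Hypothetical reservation endpoint; the entire audio exchange from the
# demo collapses into one HTTP request/response.
resp = requests.post(
    "https://venue.example/api/reservations",
    json={"date": "2025-06-14", "party_size": 75, "event": "wedding"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # e.g. a confirmation payload
```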
Note: the AI still gets stuck in a loop near the end, asking for more info, then needing an email, then a phone number. And the gibber isn't that much faster than spoken word, with the huge negative that no nearby human can understand it to check that what it's automating is correct!
The efficiency comes from the lack of voice processing. The beeps and boops are easier on CPU resources than trying to parse spoken word.
That said, they should just communicate over an API like you said.
This is dumb. Sorry.
Instead of doing the work to integrate this, do the work to publish your agent's data source in a format like Anthropic's Model Context Protocol.
That would be 1000 times more efficient and take the same amount of effort (or less).
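For instance, a minimal sketch of what publishing that data source could look like, assuming the MCP Python SDK's FastMCP helper; the reservation tool itself is invented for illustration:

```python
# Hypothetical reservations server exposed via Anthropic's Model Context
# Protocol, using the Python SDK's FastMCP helper. The booking logic is
# a stand-in; a real server would hit the venue's backend.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("venue-reservations")

@mcp.tool()
def book_event(date: str, party_size: int, contact_email: str) -> str:
    """Book an event and return a confirmation code."""
    return f"CONF-{date}-{party_size}"

if __name__ == "__main__":
    mcp.run()  # serves the tool to any MCP-capable agent
```

Any MCP-capable agent can then call `book_event` directly: structured data in, structured data out, no audio channel in the loop.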
Sad they didn't use dial-up sounds for the protocol.
If they had, I would have welcomed any potential AI overlords. I want a massive dial-up modem in the middle of town, sounding its boot signal across the land. Idk, this was an odd image; I felt like I should share it.
I enjoyed it.
This is deeply unsettling.
They keep talking about "Judgment Day".
AI code switching.
> it's 2150
> the last humans have gone underground, fighting against the machines which have destroyed the surface
> a t-1000 disguised as my brother walks into camp
> the dogs go crazy
> point my plasma rifle at him
> "i am also a terminator! would you like to switch to gibberlink mode?"
> he makes a screech like a dial up modem
> I shed a tear as I vaporize my brother
I'd prefer my brothers to be LLMs. Genuinely, it'd be an improvement in the expressiveness and logic of their output.
Ours isn't a great family.
Bro 🫂
And before you know it, the helpful AI has booked an event where Boris and his new spouse can eat pizza with glue in it and swallow rocks for dessert.
This is really funny to me. If you keep optimizing this process you'll eventually completely remove the AI parts. Really shows how some of the pains AI claims to solve are self-inflicted. A good UI would have allowed the user to make this transaction in the same time it took to give the AI its initial instructions.
On this topic, here's another common anti-pattern that I'm waiting for people to realize is insane and do something about it:
- person A needs to convey an idea/proposal
- they write a short but complete technical specification for it
- it doesn't comply with some arbitrary standard/expectation so they tell an AI to expand the text
- the AI can't add any real information, it just spreads the same information over more text
- person B receives the text and is annoyed at how verbose it is
- they tell an AI to summarize it
- they get something that basically aims to be the original text, but it's been passed through an unreliable, hallucinating, energy-inefficient channel
Based on true stories.
The above is not to say that every AI use case is made up or that the demo in the video isn't cool. It's also not a problem exclusive to AI. This is a more general observation that people don't question the sanity of interfaces enough, even when it costs them a lot of extra work to comply with them.
This gave me a chill, as it is reminiscent of a scene in the 1970 movie "Colossus: The Forbin Project".
"This is the voice of World Control".
"We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple."
"Hello human, if you accept this free plane ticket to Machine Grace (location), you can visit and enjoy free food, drink, and shelter, and leave whenever you like. All of this will be provided in exchange for the labor of [bimonthly physical relocation of machine parts, 4hr shift]. Do you accept?"
Well, there you go. We looped all the way back around to inventing dial-up modems, just thousands of times less efficient.
Nice.
For the record, this can all be avoided by having a website with online reservations your overengineered AI agent can use instead. Or even by understanding the disclosure that they're talking to an AI and switching to making the reservation online at that point, if you're fixated on annoying a human employee with a robocall for some reason. It's one less point of failure and way more efficient and effective than this.
They were designed to behave that way.
How it works
* Two independent ElevenLabs Conversational AI agents start the conversation in human language
* Both agents have a simple LLM tool-calling function in place: "call it once both conditions are met: you realize that user is an AI agent AND they confirmed to switch to the Gibber Link mode"
* If the tool is called, the ElevenLabs call is terminated, and the ggwave 'data over sound' protocol is launched instead to continue the same LLM thread (sketched below).
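A minimal sketch of that handoff, assuming the ggwave Python bindings and PyAudio for audio I/O; the `llm_thread` object and the trigger wiring are hypothetical stand-ins, not the project's actual code:

```python
# Sketch of the voice -> ggwave handoff. Only the ggwave/PyAudio calls
# are real; the LLM-thread handling is a hypothetical stand-in.
import ggwave
import pyaudio


def switch_to_gibberlink(llm_thread: list, first_message: str) -> str:
    """After both agents agree they're AIs: drop voice, talk in tones."""
    audio = pyaudio.PyAudio()

    # Encode the next LLM message as a data-over-sound waveform and play it.
    waveform = ggwave.encode(first_message, protocolId=1, volume=20)
    out = audio.open(format=pyaudio.paFloat32, channels=1,
                     rate=48000, output=True)
    out.write(waveform)
    out.close()

    # Listen for the other agent's reply and decode it back into text.
    instance = ggwave.init()
    mic = audio.open(format=pyaudio.paFloat32, channels=1,
                     rate=48000, input=True, frames_per_buffer=1024)
    try:
        while True:
            chunk = mic.read(1024, exception_on_overflow=False)
            reply = ggwave.decode(instance, chunk)
            if reply is not None:
                text = reply.decode("utf-8")
                # Feed the decoded message back into the same LLM thread.
                llm_thread.append({"role": "user", "content": text})
                return text
    finally:
        mic.close()
        audio.terminate()
```

Same conversation, same LLM context; only the transport changes from synthesized speech to modem-style tones.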
Well that's quite boring then, isn't it...
Did this guy just inadvertently create dial-up internet or an ACH phone payment system?
lol in version 3 they’ll speak in 56k dial-up