If only computers had a much more efficient and reliable way to tell time
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
Like an 8GB local LLM? Surely that's what you mean. 100 watts sure beats $20 a night.
And if only you could set a reminder on one, even based on this reliable time telling.
Impossible though; better to just load $1000 into my Definitely Not a Scam AI to remember to buy milk, or whatever.
I'm extremely confused. Why is he checking the time every hour, 14 times a day? I understand he's trying to test AI out, so he's doing something trivial, but I feel like I'm having an aneurysm reading this. This is still not an optimal way to do reminders. Am I just really dumb, or is this nonsense?
He told it to remind him to get milk the next day. The artificial stupidity set up a cron job that checked every so often whether it was "tomorrow" yet before reminding him. He's a moron for paying for a completely wasteful, stupid system.
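Roughly the difference, as a hypothetical Python sketch (not the actual Openclaw code; the date, wake-up time, and hourly interval are made up for illustration):

```python
# Hypothetical sketch, not the actual Openclaw code.
import datetime
import time

TARGET = datetime.datetime(2026, 3, 2, 9, 0)  # "tomorrow at 9am" when the reminder was set

def polling_reminder():
    """What the agent reportedly did: wake up every hour and check if it's "tomorrow" yet."""
    while datetime.datetime.now() < TARGET:
        time.sleep(3600)  # an hourly check, each one burning compute (and tokens, if an LLM does the checking)
    print("Reminder: buy milk")

def one_shot_reminder():
    """What any plain scheduler does: compute the delay once and fire a single time."""
    delay = (TARGET - datetime.datetime.now()).total_seconds()
    time.sleep(max(delay, 0))
    print("Reminder: buy milk")
```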
A fool and his money are soon parted

Maslow’s hammer. “When all you have is a hammer, everything looks like a nail.” Abraham Harold Maslow in 1966.
We never learn.
This is a bunch of gibberish.
I think he used the wrong list for "The problem", because the only answer is "I'm stupid".
To be clear: this isn't an AI problem; the LLM is doing exactly what it's being told to.
This is an Openclaw problem, with the platform itself doing very, very stupid things with the LLM lol
We're hitting the point now where, tbh, LLMs on their own in a glass box feel pretty solid performance-wise. They're still prone to hallucinating, but the addition of the Model Context Protocol for tooling makes them way less prone to it, because they now have the tooling to sanity-check themselves automatically, and/or check first and then tell you what they found.
E.g., an MCP tool to search Wikipedia and report back with "I found this wiki article on your topic" or whatever.
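A toy sketch of what such a tool boils down to (plain Python against Wikipedia's public summary endpoint, not the actual MCP SDK or wire protocol): a function the model is allowed to call, so the answer comes from the fetched text instead of the model's memory.

```python
# Toy sketch of a "search Wikipedia" tool an LLM could call; not the actual MCP SDK or wire protocol.
import json
import urllib.error
import urllib.parse
import urllib.request

def wikipedia_summary(topic: str) -> str:
    """Fetch the lead summary for a topic so the model can report what it found instead of guessing."""
    url = ("https://en.wikipedia.org/api/rest_v1/page/summary/"
           + urllib.parse.quote(topic.replace(" ", "_")))
    try:
        with urllib.request.urlopen(url) as resp:
            return json.load(resp).get("extract", "No summary available.")
    except urllib.error.HTTPError:
        return "No article found for that topic."

# The host registers this as a tool; the model calls it and answers with
# "I found this wiki article on your topic: ..." plus the returned text.
if __name__ == "__main__":
    print(wikipedia_summary("Model Context Protocol"))
```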
The new problem now is platforms that "wrap" LLMs having a "garbage in, garbage out" problem: they inject their "bespoke" stuff into the LLM's context to "help", but it actually makes the LLM act stupider.
Random example: GitHub Copilot agents get a "tokens used" note quietly/secretly injected into their context periodically, looks like every ~25k tokens or so.
I dunno what wording they use, but it makes the LLM start hallucinating a concept of a "deadline" or "time constraint", start taking shortcuts, and justify it with stuff like "given time constraints I won't do this job right".
It's kinda weird how such random stuff that seems innocuous and tries to help can actually make the LLM worse instead of better.
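To illustrate what "quietly injected into the context" means, here's a made-up sketch (the actual wording and threshold Copilot uses, if any, aren't public):

```python
# Made-up illustration of a wrapper injecting a usage note into the model's context.
# The real wording/threshold GitHub Copilot uses (if any) isn't public.
def maybe_inject_usage_note(messages, tokens_used, last_noted, threshold=25_000):
    """Roughly every `threshold` tokens, append a note the user never sees."""
    if tokens_used - last_noted >= threshold:
        messages.append({"role": "system",
                         "content": f"Note: {tokens_used} tokens used so far."})
        last_noted = tokens_used
    # The model can read this as time/budget pressure and start cutting corners.
    return messages, last_noted

# Example: after ~25k tokens of agent turns, the note shows up alongside the user's request.
messages = [{"role": "user", "content": "Refactor the parser and add tests."}]
messages, noted = maybe_inject_usage_note(messages, tokens_used=26_000, last_noted=0)
print(messages[-1]["content"])  # "Note: 26000 tokens used so far."
```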
You had me up until your first sentence.
I don't think we've overcome the half-glass-of-wine issue; rather, we've papier-mâchéd over some fundamental flaws in precisely what is happening when an LLM creates the appearance of reasoning. In doing so we're baking a certain amount of sawdust into the cake, and given that no substantive advance has really been made since maybe the 4/4.5 days, with most of the "improvements" coming from basically better engineering, it's clear we've hit an asymptote in what these models are, and will be, capable of, and they will never manifest into a full reasoning system that can self-correct.
There is no amount of engineering sandblasting that can overcome issues which are fundamental to the model's structure. If the rot is in the bones, it's in the bones.

