BatmanAoD

joined 2 years ago
[–] BatmanAoD@programming.dev 1 points 1 month ago (5 children)

making the same mistakes

This is key, and I feel like a lot of people arguing about "hallucinations" don't recognize it. Human memory is extremely fallible; we "hallucinate" wrong information all the time. If you've ever forgotten the name of a method, or whether that method even exists in the API you're using, and started typing it out to see if your autocompleter recognizes it, you've just "hallucinated" in the same way an LLM would. The solution isn't to require programmers to have perfect memory, but to have easily-searchable reference information (e.g. the ability to actually read or search through a class's method signatures) and tight feedback loops (e.g. the autocompleter and other LSP/IDE features).
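To make that concrete, here's a toy sketch (in Python, with a made-up helper name) of the kind of check an autocompleter or LSP effectively does for you: instead of trusting a remembered, possibly "hallucinated" method name, verify it against what actually exists and get a correction immediately.

```python
# Toy sketch of a "tight feedback loop": check a guessed (possibly
# hallucinated) method name against the real object, the way an
# autocompleter or LSP would, and suggest the nearest actual method.
import difflib

def check_method(obj, guessed_name: str) -> str:
    """Return the guessed name if it's a real method, else the closest real one."""
    real_methods = [m for m in dir(obj) if not m.startswith("_")]
    if guessed_name in real_methods:
        return guessed_name
    # The guess was a "hallucination"; fall back to the nearest actual method.
    suggestions = difflib.get_close_matches(guessed_name, real_methods, n=1)
    return suggestions[0] if suggestions else ""

# "I could swear strings have a .toUpper() method..."
print(check_method("hello", "toUpper"))  # -> "upper"
```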

[–] BatmanAoD@programming.dev 3 points 1 month ago (1 children)

This seems like it doesn't really answer OP's question, which is specifically about the practical uses or misuses of LLMs, not about whether the "I" in "AI" is really "intelligent" or not.

[–] BatmanAoD@programming.dev 2 points 1 month ago

One list, two list, red list, blue list

(I genuinely thought that was where you were going with that for a line or two)

[–] BatmanAoD@programming.dev 11 points 2 months ago

Agile Meridian / Post Manager

[–] BatmanAoD@programming.dev 9 points 2 months ago* (last edited 2 months ago) (1 children)

Thanks for sharing this! I really think that when people see LLM failures and say that such failures demonstrate how fundamentally different LLMs are from human cognition, they tend to overlook how humans actually do exhibit remarkably similar failure modes. Obviously dementia isn't really analogous to generating text while lacking the ability to "see" a rendering based on that text. But it's still pretty interesting that whatever feedback loops did get corrupted in these patients led to such a variety of failure modes.

As an example of what I'm talking about, I appreciated and generally agreed with this recent Octomind post, but I disagree with the list of problems that "wouldn’t trip up a human dev"; these are all things I've seen real humans do, or could imagine a human doing.

[–] BatmanAoD@programming.dev 3 points 2 months ago

That is a pretty lame "poisoning".

[–] BatmanAoD@programming.dev 1 points 2 months ago

This also makes me realize that I sometimes enunciate "the" unvoiced.

[–] BatmanAoD@programming.dev 49 points 2 months ago (3 children)

Well now you've seen it elsewhere, too.

[–] BatmanAoD@programming.dev 5 points 2 months ago

distributing relay knowledge among chatters (TBD)

This is the core reason that centralization is currently necessary. So admitting that it's an unsolved problem for a federated alternative basically reinforces Signal's point.

[–] BatmanAoD@programming.dev 4 points 3 months ago

That's because you haven't unlearned it yet

[–] BatmanAoD@programming.dev 14 points 3 months ago

Two, arguably: one with Apple and one with upstream Linux.

[–] BatmanAoD@programming.dev 9 points 3 months ago (2 children)

String escaping sucks in bash and other POSIX-style shells too, though.
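As a minimal illustration (Python rather than shell, and the filename is contrived), here's the kind of contortion POSIX quoting forces on you, and why helpers like shlex.quote exist:

```python
# Minimal sketch of why POSIX-shell escaping is painful: the "obvious"
# quoting is both wrong and dangerous, and the safe form is ugly.
import shlex

filename = "it's a \"test\" $(rm -rf ~).txt"

# Naive double-quoting breaks on embedded quotes and still lets $(...) expand:
naive = f'cat "{filename}"'

# Correct POSIX quoting wraps in single quotes and rewrites each embedded
# single quote as '"'"' (end quote, double-quoted quote, reopen):
safe = f"cat {shlex.quote(filename)}"

print(naive)  # cat "it's a "test" $(rm -rf ~).txt"  <- broken AND dangerous
print(safe)   # cat 'it'"'"'s a "test" $(rm -rf ~).txt'
```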
