BatmanAoD

joined 2 years ago
[–] BatmanAoD@programming.dev 12 points 2 days ago (4 children)

That's just not terribly meaningful, though. Was JavaScript the "best tool" for client-side logic from the death of Flash until the advent of TypeScript? No, it was the only tool.

[–] BatmanAoD@programming.dev 4 points 4 days ago

Even in the original comic, that would have been appropriate, I think.

[–] BatmanAoD@programming.dev 5 points 4 days ago

At one point the user linked to a rust-lang forum thread from 2016-2019 as evidence that Jai has "some of the tools to make the code language agnostic," or something like that. The thread started with a discussion of array-of-structs vs. struct-of-arrays data layouts, which of course has nothing to do with making code "language agnostic." The user also mentioned the coding influencer Lunduke multiple times. So I think they are simply misinformed on a lot of points, and I doubt they're in the closed beta for Jai.
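(For anyone unfamiliar with the data-layout distinction that thread was actually about, here's a minimal Rust sketch; the type and field names are mine, not from the thread:)

```rust
// Array-of-structs (AoS): each particle's fields are stored together,
// so iterating over one field drags the others through the cache too.
struct ParticleAoS {
    x: f32,
    y: f32,
    mass: f32,
}

// Struct-of-arrays (SoA): each field gets its own contiguous array,
// which can improve cache behavior when a loop touches only one field.
struct ParticlesSoA {
    xs: Vec<f32>,
    ys: Vec<f32>,
    masses: Vec<f32>,
}

fn total_mass_aos(particles: &[ParticleAoS]) -> f32 {
    particles.iter().map(|p| p.mass).sum()
}

fn total_mass_soa(particles: &ParticlesSoA) -> f32 {
    // Only the `masses` array is read; xs and ys never enter the cache.
    particles.masses.iter().sum()
}
```

Both functions compute the same thing; the whole debate is about memory layout and performance, not about anything cross-language.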

(I read some of the comments simply because I had the same question you did. And, as it happens, the last post in the forum thread I mentioned was written by me, which was a funny surprise.)

[–] BatmanAoD@programming.dev 1 points 4 days ago

Exactly: that's tight feedback loops. Agents are also capable of reading docs and source code prior to generating new function calls, so they benefit from both of the solutions that I said people benefit from.

[–] BatmanAoD@programming.dev 1 points 5 days ago

As an even more obvious example: students who put wrong answers on tests are "hallucinating" by the definition we apply to LLMs.

[–] BatmanAoD@programming.dev 1 points 5 days ago (5 children)

making the same mistakes

This is key, and I feel like a lot of people arguing about "hallucinations" don't recognize it. Human memory is extremely fallible; we "hallucinate" wrong information all the time. If you've ever forgotten the name of a method, or whether that method even exists in the API you're using, and started typing it out to see if your autocompleter recognizes it, you've just "hallucinated" in the same way an LLM would. The solution isn't to require programmers to have perfect memory, but to have easily searchable reference information (e.g. the ability to actually read or search through a class's method signatures) and tight feedback loops (e.g. the autocompleter and other LSP/IDE features).

[–] BatmanAoD@programming.dev 3 points 5 days ago (1 children)

This seems like it doesn't really answer OP's question, which is specifically about the practical uses or misuses of LLMs, not about whether the "I" in "AI" is really "intelligent" or not.

[–] BatmanAoD@programming.dev 2 points 1 week ago

One list, two list, red list, blue list

(I genuinely thought that was where you were going with that for a line or two)

[–] BatmanAoD@programming.dev 11 points 4 weeks ago

Agile Meridian / Post Manager

[–] BatmanAoD@programming.dev 9 points 1 month ago* (last edited 1 month ago) (1 children)

Thanks for sharing this! I really think that when people see LLM failures and say that such failures demonstrate how fundamentally different LLMs are from human cognition, they tend to overlook how humans actually do exhibit remarkably similar failure modes. Obviously dementia isn't really analogous to generating text while lacking the ability to "see" a rendering based on that text. But it's still pretty interesting that whatever feedback loops got corrupted in these patients led to such a variety of failure modes.

As an example of what I'm talking about, I appreciated and generally agreed with this recent Octomind post, but I disagree with the list of problems that "wouldn’t trip up a human dev"; these are all things I've seen real humans do, or could imagine a human doing.

[–] BatmanAoD@programming.dev 3 points 1 month ago

That is a pretty lame "poisoning".

[–] BatmanAoD@programming.dev 1 points 1 month ago

This also makes me realize that I sometimes enunciate "the" unvoiced.

66
submitted 11 months ago* (last edited 11 months ago) by BatmanAoD@programming.dev to c/programmer_humor@programming.dev
33
submitted 2 years ago* (last edited 2 years ago) by BatmanAoD@programming.dev to c/rust@programming.dev
 

Almost five years ago, Saoirse "boats" wrote "Notes on a smaller Rust", and a year after that, revisited the idea.

The basic idea is a language that is highly inspired by Rust but doesn't have the strict constraint of being a "systems" language in the vein of C and C++; in particular, it can have a nontrivial (or "thick") runtime and doesn't need to limit itself to "zero-cost" abstractions.
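For a concrete sense of what the "zero-cost" constraint costs in ergonomics today, here's a minimal sketch (names are mine) of the ceremony current Rust requires for shared mutable state; a "smaller Rust" with a thick runtime, say a tracing GC, could make the sharing implicit while keeping moves and ownership everywhere else:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A shared, mutable counter in today's Rust: sharing requires explicit
// reference counting (Rc) plus interior mutability (RefCell), both of
// which exist precisely so the abstraction stays runtime-free.
struct Counter {
    value: u32,
}

fn make_shared_counter() -> Rc<RefCell<Counter>> {
    Rc::new(RefCell::new(Counter { value: 0 }))
}

fn increment(counter: &Rc<RefCell<Counter>>) {
    // borrow_mut() enforces at runtime what the borrow checker
    // can't prove at compile time for aliased data.
    counter.borrow_mut().value += 1;
}
```

With two handles to the same counter (`let b = Rc::clone(&a);`), incrementing through either handle is visible through both; it's exactly this kind of boilerplate that a GC'd Rust-like language could absorb into the runtime.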

What languages are being designed that fit this description? I've seen a few scripting languages written in Rust on GitHub, but none of them have been very active. I also recently learned about Hylo, which does have some ideas that I think are promising, but it seems too syntactically alien to really be a "smaller Rust."

Edit to add: I think Graydon Hoare's post about language design choices he would have preferred for Rust also sheds some light on the kind of things a hypothetical "Rust-like but not Rust" language could do differently: https://graydon2.dreamwidth.org/307291.html
