lagrangeinterpolator

joined 6 months ago

Although I never use LLMs for any serious purpose, I do sometimes give them test questions to get firsthand experience of what their responses are like. This guide tracks quite well with what I see. The language is flowery and full of unnecessary metaphors, and the formatting has excessive bullet points, boldface, and emoji. (Seeing emoji in what is supposed to be a serious text really pisses me off for some reason.) When I read the text carefully, I can almost always find mistakes or severe omissions, even when the mistake could easily have been remedied by searching the internet.

This is perfectly in line with the fact that LLMs have no deep understanding; any understanding exists only in the mind of the user, as with rubber duck debugging. I agree with the "Barnum effect" comment (see this essay for an explanation of the term).

[–] lagrangeinterpolator@awful.systems 4 points 1 day ago (2 children)

The most obvious indication of AI I can see is the countless paragraphs that start with a boldfaced "header" followed by a colon. I consider this terrible writing practice, even for technical and explanatory writing. When a writer does this, it feels as if they don't even respect their own writing. Maybe their paragraphs are so incomprehensible that they need to spoonfeed the reader. Or perhaps they have so little to say that the bullet points already get it across, and the prose is little more than extraneous fluff. Yes, much larger units like sections or chapters should have titles, but putting a header on every single paragraph frankly insults the reader's intelligence.

I see AI output using this format very frequently, though. Honestly, this goes to show how AI appeals to people who only care about shortcuts and bullshitting instead of thinking things through. Putting a bold header on every single paragraph really does appeal to that type.

Chicken penne is now rivaling spaghetti Bolognese as a core layer of pasta infrastructure.

[–] lagrangeinterpolator@awful.systems 11 points 6 days ago (1 children)

If rationalists could benefit from just one piece of advice, it would be: actions speak louder than words. Right now, I don't think they understand that, given their penchant for 10k word blog posts.

One non-AI example of this is the most expensive fireworks show in history, I mean, the SpaceX Starship program. So far, they have had 11 or 12 test flights (I don't care to count the exact number at this point), and not a single one has delivered anything into orbit. Fans tend to cling to a few parlor tricks like the "chopstick" catches. They seem to have forgotten that the goal was to land people on the moon, a goal accomplished over 50 years ago by the 11th flight of the Apollo program.

I saw this coming from their very first Starship test flight. They destroyed the launchpad as soon as the rocket lifted off, with massive chunks of concrete flying hundreds of feet into the air. The rocket itself lost control and exploded 4 minutes later. But by far the most damning part was when the camera cut to the SpaceX employees wildly cheering. Later on there were countless spin articles about how this test flight was successful because they collected so much data.

I chose to believe the evidence in front of my eyes over the talking points about how SpaceX was decades ahead of everyone else, SpaceX is a leader in cheap reusable spacecraft, iterative development is great, etc. Now, I choose to look at the actions of the AI companies, and I can easily see that they do not have any ethics. Meanwhile, the rationalists are hypnotized by the Anthropic critihype blog posts about how their AI is dangerous.

[–] lagrangeinterpolator@awful.systems 5 points 6 days ago (3 children)

You cannot honestly call it "trust" if you still have to go through the output with a magnifying glass and make sure it didn't tell anyone to put glue on their pizza.

When any other technology fails to achieve its stated purpose, we call it flawed and unreliable. But AI is so magical! It receives credit for everything it happens to get right, and it's my fault when it gets something wrong.

[–] lagrangeinterpolator@awful.systems 6 points 1 week ago (3 children)

Unfortunately, I don't think anyone is ever going to go through all 19,797 submissions and 75,800 reviews (for one conference, in one year) and manually review them all. Then again, using the ultra-advanced, cutting-edge, innovative statistical technique of randomly sampling a few papers and reviews, one can still draw useful conclusions.
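For anyone who wants to check the arithmetic, here is a minimal sketch of that sampling idea in Python; the population and the flagging function below are hypothetical placeholders, not anyone's actual pipeline:

```python
import math
import random

def estimate_rate(population, is_flagged, n=200, z=1.96):
    """Estimate the fraction of flagged items from a random sample.

    Returns (point estimate, 95% margin of error) using the normal
    approximation to the binomial.
    """
    sample = random.sample(population, n)
    p_hat = sum(is_flagged(item) for item in sample) / n
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat, margin

# Hypothetical stand-ins: 75,800 "reviews" and a toy classifier.
reviews = list(range(75_800))
flag = lambda r: r % 7 == 0
p, m = estimate_rate(reviews, flag)
print(f"estimated rate: {p:.1%} +/- {m:.1%}")
```

Reading a couple hundred reviews instead of 75,800 already pins the rate down to within about five percentage points, which is all a conclusion like "a substantial fraction of these look LLM-written" needs.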

[–] lagrangeinterpolator@awful.systems 7 points 1 week ago (1 children)

After the bubble collapses, I believe there is going to be a rule of thumb for whatever tiny niche use cases LLMs might have: "Never let an LLM have any decision-making power." At most, LLMs will serve as a heuristic function for an algorithm that actually works.
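To make that concrete, here is a minimal sketch of the "heuristic, not decider" arrangement; llm_propose and lean_check below are hypothetical stand-ins, not real APIs:

```python
from typing import Callable, Optional

def solve(propose: Callable[[str], str],
          verify: Callable[[str], bool],
          problem: str,
          max_tries: int = 10) -> Optional[str]:
    """Use an unreliable generator as a heuristic inside a loop
    where only a deterministic verifier can accept an answer."""
    for _ in range(max_tries):
        candidate = propose(problem)  # unreliable suggestion step
        if verify(candidate):         # trusted decision step
            return candidate
    return None                       # fail closed rather than guess

# Hypothetical usage, e.g. with a proof checker as the verifier:
# answer = solve(llm_propose, lean_check, problem_statement)
```

The point is that the model never gets to declare success; the algorithm that actually works does.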

Unlike the railroads of the First Gilded Age, I don't think GenAI will have many long-term viable use cases. The problem is that it combines two characteristics that do not go well together: unreliability and expense. Generally, it's not worth spending lots of money on a task where you don't need reliability.

The sheer expense of GenAI has been subsidized by the massive amounts of money thrown at it by tech CEOs and venture capital. People do not realize how much money hundreds of billions of dollars is. On a more concrete scale, people see only the fun little chat box when they open ChatGPT; they do not see the millions of dollars' worth of hardware needed to run even a single instance of it. The unreliability of GenAI is much harder to hide completely, but it has been masked by some of the most aggressive marketing in history, aimed at an audience that has already drunk the tech hype Kool-Aid. Who else would watch a tool delete their entire hard drive and still consider using it again?

The unreliability is not really solvable (after hundreds of billions of dollars of trying), but the expense can be reduced at the cost of making the model even less reliable. I expect the true "use cases" to be mainly spam, and perhaps students cheating on homework.

Now I'm even more skeptical of the programmers (and managers) who endorse LLMs.

[–] lagrangeinterpolator@awful.systems 7 points 1 week ago* (last edited 1 week ago)

The basilisk now eats its own tail.

[–] lagrangeinterpolator@awful.systems 8 points 1 week ago* (last edited 1 week ago) (1 children)

Promptfans still can't get over the Erdős problems. Thankfully, even r/singularity has somehow become resistant to the most overhyped claims. I don't think I need to comment on this one.

Link: https://www.reddit.com/r/singularity/comments/1pag5mp/aristotle_from_harmonicmath_just_proved_erdos/

alt text (original claim)

We are on the cusp of a profound change in the field of mathematics. Vibe proving is here.

Aristotle from @HarmonicMath just proved Erdos Problem #124 in @leanprover, all by itself. This problem has been open for nearly 30 years since conjectured in the paper “Complete sequences of sets of integer powers” in the journal Acta Arithmetica.

Boris Alexeev ran this problem using a beta version of Aristotle, recently updated to have stronger reasoning ability and a natural language interface.

Mathematical superintelligence is getting closer by the minute, and I’m confident it will change and dramatically accelerate progress in mathematics and all dependent fields.


alt text (comments)

Gcd conditions removed, still great, but really hate the way people shill their stuff without any rigor to explaining the process. A lot of things become very easy when you remove a simple condition. Heck, the Riemann hypothesis is technically solved for function fields over finite fields. But nowadays, in the age of hype, a tweet would probably say "Riemann hypothesis oneshotted by AI" even though that's not true.

> Gcd conditions removed

So they didn't solve the actual problem?


[–] lagrangeinterpolator@awful.systems 6 points 1 week ago* (last edited 1 week ago) (1 children)

True, it is possible to achieve 100,000x speedups if you dispose of the silly restriction of being correct.

[–] lagrangeinterpolator@awful.systems 15 points 1 week ago* (last edited 1 week ago) (5 children)

We will secure energy dominance by dumping even more money and resources into a technology that is already straining our power grid. But don't worry. The LLM will figure it all out by reciting the Wikipedia page for Fusion Power.

> AI is expected to make cutting-edge simulations run “10,000 to 100,000 times faster.”

Turns out it's not good to assume that literally every word that comes out of a tech billionaire's mouth is true. Now everyone else thinks they can get away with just rattling off numbers whose source is "I made it the fuck up." I still remember Elon Musk saying a decade ago that he could make rockets 1,000 times cheaper, and so many people just assumed it was going to happen.

We need scientists and engineers. We do not need Silicon Valley billionaire visionary innovator genius whizzes with big ideas who are pushing the frontiers of physics with ChatGPT.
