[-] imadabouzu@awful.systems 7 points 2 weeks ago

When it comes to cloning or copying, I always have to remind people: at least half of what you are today, is the environment of today. And your clone X time in the future won't and can't have that.

The same thing is likely true for these models. Inflate them again 100 years in the future and maybe they're interesting to inspect as a historical artifact, but they most certainly wouldn't be used the same way they are here and now. It'd just be something different.

Which raises the question: why?

I feel like a subset of sci-fi and philosophical meandering really is just increasingly convoluted paths of trying to avoid or come to terms with death as a possibly necessary component of life.

[-] imadabouzu@awful.systems 9 points 3 weeks ago

Oh man, anyone who runs on such existential maximalism has such infinite power to state things as if their conclusion has only one possible meaning.

How about invoking the Monkey's Paw -- what if every statement is true, just not in the way they think?

  1. A perfect memory which is infinitely copyable and scaleable is possible. And it's called, all the things in nature in sum.
  2. In fact, we're already there today, because it is, quite literally the sum of nature. The question for tomorrow is, "so like, what else is possible?"
  3. And it might not even have to try or do anything at all, especially if we don't bother to save ourselves from ecological disaster.
  4. What we don't know can literally be anything. That's why it's important not to project fantasy, but to conserve the fragile beauty of what you have, regardless of whether things will "one day fall apart". Death and Taxes, mate.

And Yud can be both technically right one day and someone whose interpretations today are dumb and worthy of mockery.

[-] imadabouzu@awful.systems 8 points 3 weeks ago* (last edited 3 weeks ago)

The issue isn't even that AI is doing grading, really. There are worlds where using technology to assist in grading isn't a loss for a student.

The issue is that all of this is an excuse not to invest in students at all, and the turn here is purely a symptom of that. Because in a world where we invest in technology to assist in education, the first thing that happens is we recognize the completely unsexy and obvious things that also need to happen: funding for maintenance of school buildings, basic supplies, balancing class sizes by hiring and redistricting, you know. The obvious shit.

But those things don't attract the attention of the debt metabolism; they're too obvious and don't offer more leverage for short-term futures. To believe there is a future for the next generation is inherently risky and ambiguous. You can only invest in it if you actually care.

[-] imadabouzu@awful.systems 8 points 1 month ago

Procreate is an example of what good AI deployment looks like. They do use technology, and even machine learning, but they do it in obviously constructive scopes between where the artist's attention is focused. And they're committed to that because... there's no value for them in just being a thin wrapper on an already completely commoditized technology, one that's on its way to the courtroom to be challenged by landmark rulings with no more ceiling to grow into whooooooops.

[-] imadabouzu@awful.systems 9 points 1 month ago

Maybe I'm old fashioned but,

I still start by asking someone who knows about the thing what books they might recommend. And I know mushrooms are especially problematic, so I go look for um, active communities of people who aren't dead from eating the wrong mushrooms.

Is it possible that we're looking too far away from accountable sources when we route our knowledge searches through noisy corporate slop?

[-] imadabouzu@awful.systems 9 points 1 month ago

LLM, tell me the most obviously persuasive sort of science devoid of context. Historically, that's been super helpful so let's do more of that.

[-] imadabouzu@awful.systems 8 points 1 month ago

Short story: it's smoke and mirrors.

Longer story: this is how software releases work now, I guess. A lot is riding on OpenAI's anticipated release of GPT-5. They have to keep promising enormous leaps in capability because everyone else has caught up and there's no more training data. So the next trick is to claim that their next batch of models has "solved" various problems people say you can't solve with LLMs, and that they'll be massively better without needing more data.

But, as someone with insider info, it's all smoke and mirrors.

The model that "solved" structured data is empirically worse at other tasks as a result, and I imagine the solution basically just looks like polling multiple responses until the parser validates on the other end (so basically it's a price optimization, afaik).
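A minimal sketch of what that speculated retry loop might look like. Everything here is illustrative, not anything OpenAI has published: `drafts` stands in for repeated samples from a model, and the "solution" is just re-sampling until `json.loads` stops throwing.

```python
import json

def structured_output(drafts, max_tries=5):
    """Return the first draft that parses as JSON, re-sampling on failure.

    `drafts` is a hypothetical stand-in for successive model responses;
    the real trick being speculated about is just this retry-until-valid loop.
    """
    for raw in drafts[:max_tries]:
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: pay for another sample and try again
    raise ValueError("no draft parsed within budget")

# First "sample" is malformed JSON, second one validates.
drafts = ['{"count": 3,,}', '{"count": 3}']
print(structured_output(drafts))  # {'count': 3}
```

The cost angle is visible right in the loop: each failed parse is another paid generation, so capping `max_tries` is where the "price optimization" would live.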

The next large model launching with the new Q* change tomorrow is "approaching AGI because it can now reliably count letters," but actually it's still just agents (Q* looks to be just a cost optimization of agents on the backend, that's basically it), because the only way it can count letters is by invoking agents and tool use to write a Python program and feed the text into that. Basically, it's all the things that already exist independently, but wrapped up together. Interestingly, they're so confident in this model that they don't run the resulting Python themselves. It's still up to you, or one of those LLM wrapper companies, to execute the occasionally broken code to, um... checks notes, count the number of letters in a sentence.
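For scale, the tool-use path being described is nothing fancier than the model emitting a few lines of ordinary Python for you to run yourself. The snippet below is my own illustration of that kind of emitted program, not actual model output:

```python
# The sort of trivial program an agent might write and hand back
# when asked to count occurrences of a letter in a word.
sentence = "strawberry"
target = "r"
count = sum(1 for ch in sentence if ch == target)
print(count)  # 3
```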

But, by rearranging what already exists and claiming it solved the fundamental issues, OpenAI can claim exponential progress, terrify investors into blowing more money into the ecosystem, and make true believers lose their mind.

Expect more of this around GPT-5 which they promise "Is so scary they can't release it until after the elections". My guess? It's nothing different, but they have to create a story so that true believers will see it as something different.

[-] imadabouzu@awful.systems 8 points 1 month ago

The weird thing, from my perspective, is that nearly every weird, cringy, niche internet addiction I've ever seen or partaken in myself has produced two kinds of people: people who live through it and their perspective widens, and people who don't.

Like, I look back at my days of spending 2 days at a time binge playing World of Warcraft with a deep sense of cringe but also a smirk, because I survived, I self-regulated, and honestly, made a couple of lifetime friends. Whatever response we have to anime waifus, I hope we still recognize the humanity in being a thing that wants to be entertained or satisfied.

[-] imadabouzu@awful.systems 9 points 1 month ago

Watching this election has been amazing! LIKE WOAH what a fucking obviously self-destructive end to delusion. Can I be optimistic and hope that, with EA leaning explicitly heavier into the hard-right Trump position, when it collapses and Harris takes it, maybe some of them will self-reflect on what the hell they think "Effective" means anyway.

[-] imadabouzu@awful.systems 9 points 1 month ago* (last edited 1 month ago)

Audacious and Absurd Defender of Humanity

Your honor, I'd rather plea guilty than abide by my audacious counsel.

[-] imadabouzu@awful.systems 9 points 1 month ago

It can't stop the usage, but it can raise the cost of doing so by bringing legal risk to operations operating in a public way. It can create precedent that can be built upon by other parties.

Politics and law move slower than, and behind, the things they attempt to regulate, by design. Which is good; the alternative is a surveillance state! But they can definitely arrange themselves to punish, or raise the risk profile of, doing something in a certain patterned way.

[-] imadabouzu@awful.systems 8 points 1 month ago

A certain class of idealists definitely feel this way, and it's why many decentralized efforts are fragile and fall apart. Because they can't meaningfully construct something without centralization or owners, they end up just hiding these things under a blanket rather than acknowledging them as design elements that require an intentional specification.

