corbin

joined 2 years ago
[–] corbin@awful.systems 3 points 1 day ago (1 children)

I went into this with negative expectations; I recall being offended in high school that The Flashbulb's recordings were artificially sped up, unlike those of my heroes of neoclassical guitar and progressive-rock keyboards, and I've felt that his recent thoughts on newer music-making technology have been hypocritical. That said, this was a great video and I'm glad you shared it.

Ears and eyes are different. We deconvolve visual data in the brain, but our ears actually perform a Fourier decomposition with physical hardware. As a result, psychoacoustics is a real and non-trivial science, used e.g. in MP3, which limits what an adversary can do to frustrate classification or learning, because the result still has to sound like music in order to get any playtime among humans. Meanwhile I'm always worried that these adversarial groups are going to accidentally propagate something like the McCollough effect, a genuine cognitohazard that causes edges to appear color-tinted in the visual cortex for up to months after a few minutes of exposure; it's a kind of possible harm that by definition defies automatic classification.
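
For anyone who wants to see what that decomposition looks like in practice, here's a quick sketch of my own (assuming numpy; the two frequencies are arbitrary and not from the video or from HarmonyCloak):

```python
# Minimal sketch of the kind of frequency decomposition the cochlea
# performs mechanically: a 440 Hz tone plus a much quieter 18 kHz
# component, pulled apart into spectral bands.
import numpy as np

sample_rate = 44100                          # samples per second
t = np.arange(sample_rate) / sample_rate     # one second of audio
signal = np.sin(2 * np.pi * 440 * t) + 0.01 * np.sin(2 * np.pi * 18000 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)

# Both components show up as distinct peaks; a perceptual codec (or an
# adversarial perturbation) has to hide its changes in bands that the
# ear can't resolve or that louder content will mask.
for peak in np.argsort(spectrum)[-2:]:
    print(f"{freqs[peak]:.0f} Hz, magnitude {spectrum[peak]:.1f}")
```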

HarmonyCloak seems like a fairly boring adversarial tool for protecting the music industry from the music industry. Their code is incomplete and likely never going to be properly published; again we're seeing an industry-captured research group taking from the Free Software community and not giving back. I think all of the demos shown here are genuine, but he fully admits that this is a compute-intensive process, which I estimate is going to slide back out of affordability by the end of 2026. This is going to stop being effective as soon as we get back into AI winter, but I'm not going to cry for Nashville.

I really like the two attacks shown near the end, starting around 22:00. The first attack, if genuinely not audible to humans, is likely a Mosquito-style frequency that is above hearing range and physically vibrates the components of the microphone. Hofstadter and the Tortoise would be proud, although I'm concerned about the potential long-term effects on humans. The second attack is again adversarial but specific to models on home-assistant devices which are trained to ignore some loud sounds; I can't tell spectrographically whether that's also done above hearing range or not. I'm reluctant to call for attacks on home assistants, but they're great targets.

Fundamentally this is a video that doesn't want to talk about how musicians actually rip each other off. The "tones and rhythms" that he keeps showing with nice visualizations have been machine-learnable for decades, extracted by everything from beat-finders to frequency-analyzers to chord-spellers to track-isolators built into our music editors. He doubles down on copyright despite building businesses that profit from Free Software. And, most gratingly, he talks about the Pareto principle while ignoring that the typical musician is never able to make a career out of their art.

[–] corbin@awful.systems 5 points 2 days ago

It's well-known folklore that reinforcement learning with human feedback (RLHF), the standard post-training paradigm, imposes an "alignment tax": it erodes the degree to which a pre-trained model has learned features of reality as it actually exists. Quoting from the abstract of the 2024 paper, Mitigating the Alignment Tax of RLHF (alternate link):

LLMs acquire a wide range of abilities during pre-training, but aligning LLMs under Reinforcement Learning with Human Feedback (RLHF) can lead to forgetting pretrained abilities, which is also known as the alignment tax.

[–] corbin@awful.systems 5 points 3 days ago (3 children)

In practice, the behaviors that the chatbots learn in post-training are FUD and weasel-wording; they appear to not unlearn facts, but to learn so much additional nuance as to bury the facts. The bots perform worse on various standardized tests about the natural world after post-training; there are quantitative downsides to forcing them to adopt any particular etiquette, including speaking like a chud.

The problem is mostly that the uninformed public will think that the chatbot is knowledgeable and well-spoken because it rattles off the same weak-worded hedges as right-wing pundits, and it's addressed by the same improvements in education required to counter those pundits.

Answering your question directly: no, slop machines can't be countered with more slop machines without drowning us all in slop. A more direct approach will be required.

[–] corbin@awful.systems 5 points 5 days ago

Yes, but the article's not actually about that. It's about Microsoft returning to the same datacenter-building schedule it had a decade ago. Datacenters have a lag of about 3-5 years depending on what's inside them and where they're located, so what we're actually seeing is Microsoft projecting a relative reduction in overall usage. Note that among all the cancellations of notes and prospective claims, Microsoft isn't walking back their two-decade nuclear-power deal with Constellation; they're not destroying or reducing any existing capacity, just planning to build less. At risk of quoting Bloomberg:

After a frantic expansion to support OpenAI and other artificial intelligence projects, [Microsoft] expects spending to shift from new construction to fitting out data centers with servers and other equipment.

To the extent that the bubble is popping, Microsoft and other datacenter owners have to guess half a decade in advance when the bubble will pop, and if you take them at their word — that is, if we assume that they canceled these contracts with perfect foresight — then the bubble must have already popped in 2023-2024, and the market is experiencing coyote time because…? More likely, this is fallout from their ongoing breakup with OpenAI, who almost certainly begged Microsoft for so much compute (and definitely begged for too many nVidia GPUs!) that Microsoft had to adjust their datacenter plans. The bubble's not done until OpenAI has exhausted all possible funding, say in late 2025 or early 2026 when Softbank and the Saudis realize that they've made a hilarious mistake.

We've discussed this previously on awful.systems, both the value of nuclear-energy contracts and Microsoft's retraction of intents.

[–] corbin@awful.systems 2 points 1 week ago

Like any reality-show writing room, they only plan one episode in advance and only have a week's worth of photography in mind.

[–] corbin@awful.systems 3 points 1 week ago (3 children)

As the classic film Network points out, the Saudi money is the end of the road; there aren't any richer or more gullible large wealth funds who will provide further cash. So OpenAI could be genuinely out of greater ~~fools~~ financing after another year of wasting Somebody Else's Money. This crash has removed "large" from the front of any other wealth fund that might have considered bailing them out. The Stargate gamble could still work out, but so far I think it's only transferred bag-holding responsibilities from Microsoft to Oracle.

Another path is to deflate nVidia's cap. At first blush, this seems impossible to me; nVidia's business behavior is so much worse than that of competitors Intel or Imagination, yet they have never really lost the faith of their core gaming laity, and as long as nVidia holds 20-30% of the gaming GPU market they will always have a boutique niche with a market cap at least comparable to e.g. their competitor AMD. But GPUs have been treated as currency among datacenter owners, and a market crash could devalue the piles of nVidia GPUs which some datacenter owners have been using as collateral for purchasing land, warehouses, machines, more GPUs, etc. nVidia isn't the only bag-holder here, though, and since they don't really want to play loan-shark and repossess a datacenter for dereliction, odds are good that they'll survive even if they're no longer king of the hill. The gold rush didn't work out? Too bad, no returns allowed on shovels or snow gear.

Side note: If folks just wanted to know whether tech in general is hurt by this, then yes, look at Tesla's valuation. Tesla is such a cross-cutting big-ticket component of so many ETFs that basically every retirement scheme took a hit from Tesla taking a hit. The same thing will happen with nVidia and frankly retirement-fund managers should feel bad for purchasing so much of what any long-term investor would consider to be meme stocks. (I don't hold either TSLA or NVDA stocks.)

I hope this makes sense. I don't post with this candor when I'm well-rested and sober.

[–] corbin@awful.systems 7 points 1 week ago

Australian chemist and videographer Explosions & Fire argues convincingly that the ongoing radioactive-boy-scout scandal should not result in prosecution. For context, a 24-year-old man ordered small samples of radioactive isotopes from the USA, Australia failed to intercept them at the border, and the authorities are now prosecuting him in order to avoid embarrassment over their own incompetence. I don't have a choice sneer; E&F is unwaveringly energized over the topic of radioactive isotopes and injustice, and the whole thing is worth watching.

[–] corbin@awful.systems 14 points 1 week ago (19 children)

Today on the orange site, an AI bro is trying to reason through why people think he's weird for not disclosing his politics to people he's trying to be friendly with. Previously, he published a short guide on how to talk about politics, which — again, very weird, no possible explanation for this — nobody has adopted. Don't worry, he's well-read:

So far I've only read Harry Potter and The Methods of Rationality, but can say it is an excellent place to start.

The thread is mostly centered around one or two pearl-clutching conservatives who don't want their beliefs examined:

I find it astonishing that anyone would ask, ["who did you vote for?"] … In my social circle, anyway, the taboo on this question is very strong.

To which the top reply is my choice sneer:

In my friend group it's clear as day: either you voted to kill and deport other people in the friend group or you didn't. Pretty obvious the group would like to know if you're secretly interested in their demise.

[–] corbin@awful.systems 14 points 2 weeks ago

Angela Collier has a wonderfully grumpy video up, why functioning governments fund scientific research. Choice sneer at around 32:30:

But what do I know? I'm not a medical doctor but neither is this chucklefuck, and people are listening to him. I don't know. I feel like this is [sighs, laughs] I always get comments that tell me, "you're being a little condescending," and [scoffs] yeah. I mean, we can check the dictionary definition of "condescending," and I think I would fit into that category. [Vaccine deniers] have failed their children. They are bad parents. One in four unvaccinated kids who get measles will die. They are playing Russian roulette with their child's life. But sure, the problem is I'm being, like, a little condescending.

[–] corbin@awful.systems 12 points 2 weeks ago (7 children)

Strange is a trouper and her sneer is worth transcribing. From about 22:00:

So let's go! Upon saturating my brain with as much background information as I could, there was really nothing left to do but fucking read this thing, all six hundred thousand words of HPMOR, really the road of enlightenment that they promised it to be. After reading a few chapters, a realization that I found funny was, "Oh. Oh, this is definitely fanfiction. Everyone said [laughing and stuttering] everybody that said that this is basically a real novel is lying." People lie on the Internet? No fucking way. It is telling that even the most charitable reviews, the most glowing worshipping reviews of this fanfiction call it "unfinished," call it "a first draft."

A shorter sneer for the back of the hardcover edition of HPMOR at 26:30 or so:

It's extremely tiring. I was surprised by how soul-sucking it was. It was unpleasant to force myself beyond the first fifty thousand words. It was physically painful to force myself to read beyond the first hundred thousand words of this – let me remind you – six-hundred-thousand-word epic, and I will admit that at that point I did succumb to skimming.

Her analysis is familiar. She recognized that Harry is a self-insert, that the out-loud game theory reads like a Death Note parody, that chapters are only really related to each other in the sense that they were written sequentially, that HPMOR is more concerned with sounding smart than being smart, that HPMOR is yet another entry in a long line of monarchist apologia explaining why this new Napoleon won't fool us again, and finally that it's a bad read. 31:30 or so:

It's absolutely no fucking fun. It's just absolutely dry and joyless. It tastes like sand! I mean, maybe it's Yudkowsky's idea of fun; he spent five years writing the thing after all. But it just [struggles for words] reading this thing, it feels like chewing sand.

[–] corbin@awful.systems 12 points 2 weeks ago (5 children)

Anecdote: I gave up on COBOL as a career after beginning to learn it. The breaking point was learning that not only does most legacy COBOL code use GO TO statements, but that there is a dedicated verb which rewrites the targets of GO TO statements at runtime and is still supported on e.g. IBM Enterprise COBOL for z/OS, the platform that SSA is likely using: ALTER.

When I last looked into this a decade ago, there was a small personal website last updated in the 1990s that had advice about how to rewrite COBOL to remove GOTO and ALTER verbs; if anybody has a link, I'd appreciate it, as I can no longer find it. It turns out that the best ways of removing these spaghetti constructions involve multiple rounds of incremental changes which are each unlikely to alter the code's behavior. Translations to a new language are doomed to failure; even Java is far too structured to directly encode COBOL control flow, and the time would be better spent on abstract specification of the system so that it can be rebuilt from that specification instead. This is also why IBM makes bank selling COBOL emulators.
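
To give a sense of why that's so hard to translate, here's a rough Python sketch of my own (not real COBOL, and certainly not SSA's code) of what ALTER amounts to: the target of a GO TO is mutable state that another paragraph can silently rewrite at runtime.

```python
# Hypothetical emulation of COBOL's ALTER verb: each "paragraph" is a
# function, GO TO targets live in a mutable dispatch table, and ALTER
# rewrites an entry while the program is running.

goto_targets = {"SWITCH-PARA": "FIRST-TIME"}   # paragraph -> current GO TO target
work_count = 0

def alter(paragraph, new_target):
    # COBOL: ALTER SWITCH-PARA TO PROCEED TO SECOND-TIME
    goto_targets[paragraph] = new_target

def switch_para():
    # COBOL: SWITCH-PARA. GO TO FIRST-TIME.  (the target is mutable)
    return goto_targets["SWITCH-PARA"]

def first_time():
    print("initialization path")
    alter("SWITCH-PARA", "SECOND-TIME")        # later jumps will go elsewhere
    return "MAIN-LOOP"

def second_time():
    print("steady-state path")
    return "MAIN-LOOP"

def main_loop():
    global work_count
    work_count += 1
    print("doing work, pass", work_count)
    return "SWITCH-PARA" if work_count < 2 else None

paragraphs = {
    "SWITCH-PARA": switch_para,
    "FIRST-TIME": first_time,
    "SECOND-TIME": second_time,
    "MAIN-LOOP": main_loop,
}

# The jump targets aren't fixed in the source; which code runs depends on
# state that an earlier paragraph may have rewritten, which is what makes
# mechanical translation to structured if/else and loops so painful.
label = "SWITCH-PARA"
while label is not None:
    label = paragraphs[label]()
```

Each round of de-spaghettification replaces one of those mutable targets with explicit, structured state, which is why the incremental approach works where wholesale translation doesn't.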

[–] corbin@awful.systems 16 points 3 weeks ago (9 children)

In lesser corruption news, California Governor Gavin Newsom has been caught distributing burner phones to California-based CEOs. These are people who likely already have Newsom's personal and business numbers, so it's not hard to imagine these phones facilitating extralegal conversations beyond the existing ~~bribery~~ legitimate business lobbying before the Legislature. With this play, Newsom's putting a lot of faith into his sexting game.

 

Sorry, no sneer today. I'm tired of this to the point where I'm dreaming up new software licenses.

A trans person no longer felt safe in our community and is no longer developing. In response, commenters on at least four different forums full of Linux users and developers (Lemmy #1, Lemmy #2, HN, Phoronix (screenshot)) posted the developer's PII and anti-trans hate.

I don't have any solutions. I'm just so fucking disappointed in my peers and I feel a deep inadequacy at my inability to get these fuckwads to be less callous.

 

After a decade of cryptofascism and failed political activism, our dear friend jart is realizing that they don't really have much of a positive legacy. If only there was something they could have done about that.

 

In this big thread, over and over, people praise the Zuck-man for releasing Llama 3's weights. How magnanimous! How courteous! How devious!

Of course, Meta is doing this so that they don't have to worry about another 4chan leak of weights via BitTorrent.

 

In today's episode, Yud tries to predict the future of computer science.

 

Eminent domain? Never heard of it! Sounds like a fantasy from the "economical illiterate."

Edit: This entire thread is a trash fire, by the way. I'm only highlighting the silliest bit from one of the more aggressive landlords.

 

Saw this last night but decided to give them a few hours to backtrack. Surprisingly, they've decided to leave their comments intact!

This sort of attitude, not directly harassing trans folks but just asking questions about their moral fiber indirectly, seems to be coming from some playbook; it looks like a structured disinformation source, and I wonder what motivates them.

 

"The sad thing is that if the officer had not made a few key missteps … he might have covered his bases well enough to avoid consequences." Yeah, so sad.

For bonus sneer, check out their profile.
