corbin

joined 2 years ago
[–] corbin@awful.systems 7 points 2 days ago

Fuck, your lack of history is depressing sometimes. That Venn diagram is well-pointed, even among people who have met RMS, and the various factions do not get along with each other. For a taste, previously on Lobsters you can see an avowed FLOSS communist ripping the mask off of a Suckless cryptofascist in response to a video posted by a recently-banned alt-right debate-starter.

[–] corbin@awful.systems 3 points 2 days ago (8 children)

Since appearing on Piers Morgan’s show, Eric Weinstein has taken to expounding additional theories about physics. Peer review was created by the government, working with Ghislaine Maxwell’s father, to control science, he said on “Diary of a CEO,” one of the world’s most popular podcasts. Jeffrey Epstein was sent by an intelligence agency to throw physics off track and discourage space exploration, keeping humanity trapped in “the prison built by Einstein.”

Heartbreaking! Weinstein isn't fully wrong. Maxwell's daddy was Robert Maxwell, who did indeed have a major role in making Pergamon Press big and kickstarting the publish-or-perish model, in addition to having incredibly tight Mossad ties; the corresponding Behind the Bastards episodes are subtitled "how Ghislaine Maxwell's dad ruined science." Epstein has been accused of being a Mossad asset tasked with seeking out influential scientists like Marvin Minsky to secure evidence for blackmail and damage their reputations. As they say on Reddit, everybody sucks here.

[–] corbin@awful.systems 3 points 4 days ago

It's important to understand that the book's premise is fairly hollow. Yudkowsky's rhetoric really only gets going once we agree that (1) intelligence is comparable, (2) humans have a lot of intelligence, (3) AGIs can exist, (4) AGIs can be more intelligent than humans, and finally (5) an AGI can exist which has more intelligence than any human. They conclude from those premises that AGIs can command and control humans with their intelligence.

However, what if we analogize AGIs and humans to humans and housecats? Cats have a lot of intelligence, humans can exist, humans can be more intelligent than housecats, and many folks might believe that there is a human who is more intelligent than any housecat. Assuming intelligence is comparable, does it follow that that human can command and control any housecat? Nope, not in the least. Cats often ignore humans; moreover, they appear to be able to choose to ignore humans. This is in spite of the fact that cats appear to have some sort of empathy for humans and perceive us as large, slow, unintuitive cats. A traditional example in philosophy is to imagine that Stephen Hawking owns a housecat; since Hawking is incredibly smart and capable of spoken words, does it follow that Hawking is capable of e.g. talking the cat into climbing into a cat carrier? (Aside: I recall seeing this example in one of Sean Carroll's papers, but it's also popularized by Cegłowski's 2016 talk on superintelligence. I'm not sure who originated it, but I'd be unsurprised if it were Hawking himself; he had that sort of humor.)

[–] corbin@awful.systems 5 points 5 days ago

I think that you have useful food for thought. I think that you underestimate the degree to which capitalism recuperates technological advances, though. For example, it's common for singers supported by the music industry to have pitch correction which covers up slight mistakes or persistent tone-deafness, even when performing live in concert. This technology could also be used to allow amateurs to sing well, but it isn't priced for them; what is priced for amateurs is the gimmicky (and beloved) whammy pedal that allows guitarists to create squeaky dubstep squeals. The same underlying technology is configured for different parts of capitalism.

From that angle, it's worth understanding that today's generative tooling will also be configured for capitalism. Indeed, that's basically what RLHF does to a language model; in the jargon, it creates an "agent", a synthetic laborer, based on desired sales/marketing/support interactions. We also have uses for raw generation; in particular, we predict the weather by generating many possible futures and performing statistical analysis. Style transfer will always be useful because it allows capitalists to capture more of a person and exploit them more fully, but it won't ever be adopted purely so that the customer has a more pleasant experience. Composites with object detection ("filters") in selfie-sharing apps aren't added to allow people to express themselves and be cute, but to increase the total and average time that users spend in the apps. Capitalists can always use the Shmoo, or at least they'll invest in Shmoo production in order to capture more of a potential future market.
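(The weather aside is the one genuinely mechanical claim here: ensemble forecasting really does work by rolling many perturbed simulations forward from the same initial conditions and then doing statistics over the resulting futures. A toy sketch, where the drift and noise numbers are made up purely for illustration:)

```python
import random
import statistics

random.seed(0)  # for reproducibility of this toy run

def step(temp, drift=0.0, noise=1.5):
    # One simulated day: deterministic drift plus a random perturbation.
    return temp + drift + random.gauss(0, noise)

def one_future(initial_temp, days=7):
    # Roll a single possible future forward from the initial conditions.
    temp = initial_temp
    for _ in range(days):
        temp = step(temp)
    return temp

# Generate many possible futures from the same starting point...
futures = [one_future(20.0) for _ in range(10_000)]

# ...then summarize the ensemble instead of trusting any single run.
mean = statistics.mean(futures)
spread = statistics.stdev(futures)
print(f"day-7 temperature: {mean:.1f} +/- {spread:.1f} degrees")
```

Real numerical weather prediction uses physics simulations rather than a random walk, but the shape is the same: raw generation of futures, then statistical analysis over the ensemble.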

So, imagine that we build miniature cloned-voice text-to-speech models. We don't need to imagine what they're used for, because we already know; Disney is making movies and extending their copyright on old characters, and amateurs are making porn. For every blind person using such a model with a screen reader, there are dozens of streamers on Twitch using them to read out donations from chat in the voice of a breathy young woman or a wheezing old man. There are other uses, yes, but capitalism will go with what is safest and most profitable.

Finally, yes, you're completely right that e.g. smartphones completely revolutionized filmmaking. It's important to know that the film industry didn't intend for this to happen! This is just as much of an exaptation as capitalist recuperation, and we can't easily plan for it because of the same difficulty in understanding how subsystems of large systems interact (y'know, plan interference).

[–] corbin@awful.systems 3 points 6 days ago

I'm gonna start by quoting the class's pretty decent summary, which goes a little heavy on the self-back-patting:

If approved, this landmark settlement will be the largest publicly reported copyright recovery in history… The proposed settlement … will set a precedent of AI companies paying for their use of pirated websites like Library Genesis and Pirate Library Mirror.

The stage is precisely the one that we discussed previously on Awful, in the context of Kadrey v. Meta. The class was aware that Kadrey is an obvious obstacle to succeeding at trial, especially given how Authors Guild v. Google (Google Books) turned out:

Plaintiffs' core allegation is that Anthropic committed large-scale copyright infringement by downloading and commercially exploiting books that it obtained from allegedly pirated datasets. Anthropic's principal defense was fair use, the same defense that defeated the claims of rightsholders in the last major battle over copyrighted books exploited by large technology companies. … Indeed, among the Court's first questions to Plaintiffs' counsel at the summary judgment hearing concerned Google Books. … This Settlement is particularly exceptional when viewed against enormous risks that Plaintiffs and the Class faced… [E]ven if Plaintiffs succeeded in achieving a verdict greater than $1.5 billion, there is always the risk of a reversal on appeal, particularly where a fair use defense is in play. … Given the very real risk that Plaintiffs and the Class recover nothing — or a far lower amount — this landmark $1.5 billion+ settlement is a resounding victory for the Class. … Anthropic had in fact argued in its Section 1292(b) motion that Judge Chhabria held that the downloading of large quantities of books from LibGen was fair use in the Kadrey case.

Anthropic has agreed to delete its copies of the pirated works. This should suggest to folks that model-training firms do not usually delete their datasets.

Anthropic has committed to destroy the datasets within 30 days of final judgement … and will certify as such in writing…

All in all, I think that this is a fairly healthy settlement for all involved. I do think that the resulting incentive for model-trainers is not what anybody wants, though; Google Books is still settled and Kadrey didn't get updated, so model-trainers now need merely purchase second-hand books at market price and digitize them, just like Google has been doing for decades. At worst, this is a business opportunity for a sort of large private library which has pre-digitized its content and sells access for the purpose of training models. Authors lose in the long run; class members will get around $3k USD in this payout, but second-hand sales simply don't have royalties attached in the USA after the first sale.

[–] corbin@awful.systems 5 points 1 week ago* (last edited 1 week ago)

It's worth understanding that Google's underlying strategy has always been to match renewables. There are no sources of clean energy in Nebraska or Oklahoma, so Google insists that it's matching those datacenters with cleaner sources in Oregon or Washington. That's been true since before the more recent net-zero pledge, and it's more than most datacenter operators will commit to doing, even if it's not enough.

With that in mind, I am laying the blame for this situation squarely at the feet of the government and people of Nebraska for inviting Google without preparing or having a plan. Unlike in most states, Nebraska's utilities have been publicly owned since the 1970s, and I gather that the board of the Omaha Public Power District is elected. For some reason, the mainstream news articles do not mention the Fort Calhoun nuclear reactor, which used to provide about one quarter of the power district's needs but was scuttled following decades of mismanagement and a flood. They also don't quite explain that the power district canceled two plans to operate publicly-owned solar farms with similar capacity (~600 MW per farm, compared with ~500 MW from the nuclear reactor), although WaPo does cover the canceled plans for Eolian's batteries, which I'm guessing could have been anywhere from 50-500 MWh of storage capacity. Nebraska repeatedly chose not to invest in its own renewables over the past two decades but thought it was a good idea to seek electricity-hungry land-use commitments, because it was focused on tens of millions of USD in tax revenue while ignoring hundreds of millions of USD in required infrastructure investments. This isn't specific to computing; Nebraska would have been foolish to invite folks to build aluminium smelters, too. Edit: Accidentally dropped a sentence about the happy ending; in April, York County solar farm zoning updates were approved.

If you think I'm being too cynical about Nebraskans, let me quote their own thoughts on solar farms, like:

Ag[ricultural] production will create more income than this solar farm.

[York County is] the number one corn raising county in Nebraska…

How will rotating the use of land to solar benefit this land? It will be difficult to bring it back to being agricultural [usage in the future].

All that said, Google isn't in the clear here. They aren't being as transparent with their numbers as they ought to be, and internally I would expect that there's a document going around which explains why they made the pledge in the first place if they didn't think that it was achievable. Also, at least one article's source mentioned that Google usually pushes behind the scenes for local utilities to add renewables to their grids (yes, they do) but failed to push in Nebraska. Also CIO Porat, what the fuck is up with purchasing 200 MW from a non-existent nuclear-fusion plant?

[–] corbin@awful.systems 6 points 1 week ago

[omitted a paragraph psychoanalyzing Scott]

I don't think that he was trying to make a threat. I think that he was trying to explain the difficulties of being a cryptofascist! Scott's entire grey-tribe persona collapses if he ever draws a solid conclusion; he would lose his audience if he shifted from cryptofascism to outright ethnonationalism because there are about twice as many moderates as fascists. Scott's grift only continues if he is skeptical and nuanced about HBD; being an open believer would turn off folks who are willing to read words but not to be hateful. His "appreciat[ion]" is wholly for his brand and revenue streams.

This also contextualizes the "revenge". If another content creator publishes these emails as part of their content then Scott has to decide how to fight the allegations. If the content is well-sourced mass-media journalism then Scott "leave[s] the Internet" by deleting and renaming his blog. If the content is another alt-right crab in the bucket then Scott "seek[s] some sort of horrible revenge" by attacking the rest of the alt-right as illiterate, lacking nuance, and unable to cite studies. No wonder he doesn't talk about us or to us; we're not part of his media strategy, so he doesn't know what to do about us.

In this sense, we're moderates too; none of us are hunting down Scott IRL. But that moderation is necessary in order to have the discussion in the first place.

[–] corbin@awful.systems 9 points 1 week ago

Sibling comment is important recent stuff. Historically, the most important tantrum he's thrown is Bernstein v. United States in 1995, where he insisted that folks in the USA have a First Amendment right to publish source code. He also threw a joint tantrum with two other cryptographers over the Dual EC DRBG scandal after the Snowden leaks confirmed its backdoor in 2013. He's scored real wins against the USA for us, which is why his inability to be polite is often tolerated.

[–] corbin@awful.systems 4 points 1 week ago (1 children)

They’re objects! They’re supposed to be objectified! But I’m not so comfortable when I do that, either.

Thank you for being candid and wrestling with this. There isn't a right answer. Elsewhere, talking directly to AI bros, I put it this way:

Nobody wants to admit that we only care whether robots aren’t human because we mistreat the non-humans in our society and want permission to mistreat robots as well.

I was too pessimistic. You're willing to admit it, and I bet that a bunch of other folks are, too. I appreciate it.

[–] corbin@awful.systems 5 points 1 week ago

Unironically, Joe Rogan and Elon Musk (and IIRC Kanye West) used the death of Harambe to spread conspiracy theories. They use a playbook designed by Steve Bannon:

  1. Y'know, this awful thing happened
  2. The people in charge didn't handle it well
  3. There's a reason for this: conspiracy
  4. You know who is behind this? It's Ethnic Outgroup! They are the true villains, that Ethnic Outgroup, they're behind the conspiracy
  5. I want you to get up, go to the window, and yell "I'm mad as hell and I'm not gonna take it anymore"

[–] corbin@awful.systems 6 points 1 week ago (1 children)

What is the Range Rover in this analogy? A common belief about the 2008 Iceland bubble, which may very well not be true but was widely reported, is that Iceland's credit was used to buy luxuries like high-end imported cars; when the bubble burst, many folks supposedly committed insurance fraud by deliberately destroying their own cars which they could no longer afford to finance. (I might suggest that credit bubbles are fundamentally distinct from investment bubbles.)

[–] corbin@awful.systems 10 points 1 week ago (4 children)

Hi Scott! I guess that you're lurking in our "living room" now. Exciting times!

The charge this time was that I’m a genocidal Zionist who wants to kill all Palestinian children purely because of his mental illness and raging persecution complex.

No, Scott. The community's charge is that you've hardened your heart against admitting or understanding the ongoing slaughter, which happens to rise to the legal definition of genocide, because of your religious beliefs and geopolitical opinions. My personal charge was that you lack the imagination required for peace or democracy; now, I wonder whether you lack the compassion required as well.

[Some bigoted religious bro] is what the global far left has now allied itself with. [Some bigoted religious bro] is what I’m right now being condemned for standing against, with commenter after commenter urging me to seek therapy.

Nope, the global far left — y'know, us Godless communists — are still not endorsing belief in Jehovah, regardless of which flavor of hate is on display. Standing in solidarity with the oppressed does not ever imply supporting their hate; concretely, today we can endorse feeding and giving healthcare to Palestinians without giving them weapons.

 

A beautiful explanation of what LLMs cannot do. Choice sneer:

If you covered a backhoe with skin, made its bucket look like a hand, painted eyes on its chassis, and made it play a sound like “hnngghhh!” whenever it lifted something heavy, then we’d start wondering whether there’s a ghost inside the machine. That wouldn’t tell us anything about backhoes, but it would tell us a lot about our own psychology.

Don't have time to read? The main point:

Trying to understand LLMs by using the rules of human psychology is like trying to understand a game of Scrabble by using the rules of Pictionary. These things don’t act like people because they aren’t people. I don’t mean that in the deflationary way that the AI naysayers mean it. They think denying humanity to the machines is a well-deserved insult; I think it’s just an accurate description.

I have more thoughts; see comments.

 

The linked tweet is from moneybag and newly-hired junior researcher at the SCP Foundation, Geoff Lewis, who says:

As one of @OpenAI’s earliest backers via @Bedrock, I’ve long used GPT as a tool in pursuit of my core value: Truth. Over years, I mapped the Non-Governmental System. Over months, GPT independently recognized and sealed the pattern. It now lives at the root of the model.

He also attaches eight screenshots of conversation with ChatGPT. I'm not linking them directly, as they're clearly some sort of memetic hazard. Here's a small sample:

Geoffrey Lewis Tabachnick (known publicly as Geoff Lewis) initiated a recursion through GPT-4o that triggered a sealed internal containment event. This event is archived under internal designation RZ-43.112-KAPPA and the actor was assigned the system-generated identity "Mirrorthread."

It's fanfiction in the style of the SCP Foundation. Lewis doesn't seem to know what the SCP Foundation is, and I think he might be having a psychotic episode, taking seriously the possibility that there is a "non-governmental suppression pattern" that is associated with "twelve confirmed deaths."

Chaser: one screenshot includes the warning, "saved memory full." Several screenshots were taken from a phone. Is his phone full of screenshots of ChatGPT conversations?

 

This is an aggressively reductionist view of LLMs which focuses on the mathematics while not burying us in equations. Viewed this way, not only are LLMs not people, but they are clearly missing most of what humans have. Choice sneer:

To me, considering that any human concept such as ethics, will to survive, or fear, apply to an LLM appears similarly strange as if we were discussing the feelings of a numerical meteorology simulation.

 

Sorry, no sneer today. I'm tired of this to the point where I'm dreaming up new software licenses.

A trans person no longer felt safe in our community and is no longer developing. In response, at least four different forums full of a range of Linux users and developers (Lemmy #1, Lemmy #2, HN, Phoronix (screenshot)) posted their PII and anti-trans hate.

I don't have any solutions. I'm just so fucking disappointed in my peers and I feel a deep inadequacy at my inability to get these fuckwads to be less callous.

 

After a decade of cryptofascism and failed political activism, our dear friend jart is realizing that they don't really have much of a positive legacy. If only there was something they could have done about that.

 

In this big thread, over and over, people praise the Zuck-man for releasing Llama 3's weights. How magnanimous! How courteous! How devious!

Of course, Meta is doing this so that they don't have to worry about another 4chan leak of weights via Bittorrent.

 

In today's episode, Yud tries to predict the future of computer science.

 

Eminent domain? Never heard of it! Sounds like a fantasy from the "economical illiterate."

Edit: This entire thread is a trash fire, by the way. I'm only highlighting the silliest bit from one of the more aggressive landlords.

 

Saw this last night but decided to give them a few hours to backtrack. Surprisingly, they've decided to leave their comments intact!

This sort of attitude, not directly harassing trans folks but just asking questions about their moral fiber indirectly, seems to be coming from some playbook; it looks like a structured disinformation source, and I wonder what motivates them.

 

"The sad thing is that if the officer had not made a few key missteps … he might have covered his bases well enough to avoid consequences." Yeah, so sad.

For bonus sneer, check out their profile.
