corbin

joined 2 years ago
[–] corbin@awful.systems 4 points 2 days ago (1 children)

Sadly, it's a Chomskyan paper, and those are just too weak for today. Also, I think it's sloppy and too Eurocentric. Here are some of the biggest gaffes or stretches I found by skimming Moro's $30 book, which I obtained by asking a shadow library for "impossible languages" (the ISBN doesn't work for some reason):

book review of Impossible Languages (Moro, 2016)

  • Moro claims that it's impossible for a natlang to have free word order. There are many arguable counterexamples, like Arabic or Mandarin, but I think the best is Latin, whose word order is famously free. On one hand, of course word order matters for parsers; on the other hand, the Transformer architecture attends without any inherent token ordering (absent positional encodings), so this isn't really an issue for machines. Ironically, on p73-74, Moro rearranges the word order of a Latin phrase while translating it, suggesting either a use of machine translation or an implicit acceptance of Latin's (lack of) word order. I could be harsher here; Moro seems to draw mostly on modern Romance and Germanic languages to make their points about word order, and the sensitivity of English and Italian to word order doesn't imply universality.
  • Speaking of universality, both the generative-grammar and universal-grammar hypotheses are assumed. By "impossible" Moro means a non-recursive language with a non-context-free grammar, or perhaps a language failing to satisfy some nebulous geometric requirements.
  • Moro claims that sentences without truth values are lacking semantics. Gödel and Tarski are completely unmentioned; Moro ignores any sort of computability of truth values.
  • Russell's paradox is indirectly mentioned and incorrectly analyzed; Moro claims that Russell fixed Frege's system by redefining the copula, but Russell and others actually refined the notion of building sets.
  • It is claimed that Broca's area uniquely lights up for recursive patterns but not for patterns which depend on linear word order (e.g. a rule that a sentence is negated iff the fourth word is "no"), so that Broca's area can't do context-sensitive processing. But humans clearly do XOR when counting nested negations in many languages, and can internalize that XOR well enough to handle utterances consisting of many repetitions of e.g. "not not"; it's just a parity computation (see the sketch after this list).
  • Moro mentions Esperanto and Volapük as auxlangs in their chapter on conlangs. They completely fail to recognize the past century of applied research: Interlingue and Interlingua, Loglan and Lojban, Láadan, etc.
  • Sanskrit is Indo-European. Also, that's not how junk DNA works; it genuinely isn't coding or active. Also also, that's not how Turing patterns work; they are genuine cellular automata and it's not merely an analogy.
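
To make the parity point concrete, here's a toy sketch (my own illustration, not anything from Moro): counting nested negations is just a running XOR over a linear sequence of tokens, which any speaker can internalize.

```python
# Toy illustration: negation-counting as a running XOR (parity) over tokens.
def is_negated(tokens: list[str]) -> bool:
    """Return True iff the utterance contains an odd number of negators."""
    parity = False
    for token in tokens:
        if token.lower() in {"not", "no"}:
            parity ^= True  # flip on every negation
    return parity

print(is_negated("not not not true".split()))  # True: three negations
print(is_negated("not not true".split()))      # False: they cancel out
```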

I think that Moro's strongest point, on which they spend an entire chapter reviewing fairly solid neuroscience, is that natural language is spoken and heard, such that a proper language model must be simultaneously acoustic and textual. But because they don't address computability theory at all, they completely fail to address the modern critique that machines can learn any learnable system, including grammars; the worst that they can say is that such a system is literally not a human.

[–] corbin@awful.systems 5 points 4 days ago (1 children)

I got jumpscared by Gavin D. Howard today; apparently his version of bc appeared on my system somehow, and his name's in the copyright notice. Who is Gavin anyway? Well, he used to have a blog post that straight-up admitted his fascism, but I can't find it. I could only find, say, the following five articles, presented chronologically:

Also, while he's apparently not caused issues for NixOS maintainers yet, he's written An Apology to the Gentoo Authors for not following their rules when it comes to that same bc package. So this might be worth removing for other reasons than the Christofascist authorship.

BTW his code shows up because it's in upstream BusyBox and I have a BusyBox on my system for emergency purposes. I suppose it's time to look at whether there is a better BusyBox out there. Also, it looks like Denys Vlasenko has made over one hundred edits to this code to integrate it with BusyBox, fix correctness and safety bugs, and improve performance; Gavin only made the initial commit.

[–] corbin@awful.systems 5 points 4 days ago (1 children)

They (or the LLM that summarized their findings and may have hallucinated part of the post) say:

It is a fascinating example of "Glue Code" engineering, but it debunks the idea that the LLM is natively "understanding" or manipulating files. It's just pushing buttons on a very complex, very human-made machine.

Literally nothing that they show here is bad software engineering. It sounds like they expected that the LLM's internals would be 100% token-driven inference-oriented programming, or perhaps a mix of that and vibe code, and they are disappointed that it's merely a standard Silicon Valley cloudy product.

My analysis is that Bobby and Vicky should get raises; they aren't paid enough for this bullshit.

By the way, the post probably isn't faked. Google-internal go/ URLs do leak out sometimes, usually in comments. Searching GitHub for that specific URL turns up one hit in a repository which claims to hold a partial dump of the OpenAI agents. Here is combined_apply_patch_cli.py. The agent includes a copy of ImageMagick; truly, ImageMagick is our ecosystem's cockroach.

[–] corbin@awful.systems 5 points 5 days ago

Now I'm curious about whether Disney funded Glaze & Nightshade. Quoting Nightshade's FAQ, their lab has arranged to receive donations which are washed through the University of Chicago:

If you or your organization may be interested in pitching in to support and advance our work, you can donate directly to Glaze via the Physical Sciences Division webpage, click on "Make a gift to PSD" and choose "GLAZE" as your area of support (managed by the University of Chicago Physical Sciences Division).

Previously, on Awful, I noted the issues with Nightshade and the curious fact that Disney is the only example stakeholder named in the original Nightshade paper, as well as the fact that Nightshade's authors wonder about the possibility of applying Glaze-style techniques to feature-length films.

[–] corbin@awful.systems 17 points 1 week ago (2 children)

The author also proposes a framework for analyzing claims about generative AI. I don't know if I endorse it fully, but I agree that each of the four talking points represents a massive failure of understanding. Their LIES model is:

  • Lethality: the bots will kill us all
  • Inevitability: the bots are unstoppable and will definitely be created in the future
  • Exceptionalism: the bots are wholly unlike any past technology and we are unprepared to understand them
  • Superintelligence: the bots are better than people at thinking

I would add a P to this, for Plausibility or Personhood or Personality: the incorrect claim that the bots are people. Maybe call it PILES.

 

A straightforward dismantling of AI fearmongering videos uploaded by Kyle "Science Thor" Hill, Sci "The Fault in our Research" Show, and Kurz "We're Sorry for Summarizing a Pop-Sci Book" Gesagt over the past few months. The author is a computer professional but their take is fully in line with what we normally post here.

I don't have any choice sneers. The author is too busy hunting for whoever is paying SciShow and Kurzgesagt for these videos. I do appreciate that they repeatedly point out that there is allegedly a lot of evidence of people harming themselves or others because of chatbots. Allegedly.

[–] corbin@awful.systems 12 points 1 week ago

Fundamentally, Chapman's essay is about how subcultures transition from valuing functionality to aesthetics. Subcultures start with form following function by necessity. However, people adopt the subculture because they like the surface appearance of those forms, leading to the subculture eventually hollowing out into a system which follows the iron law of bureaucracy and becomes non-functional due to over-investment in the façade and tearing down of Chesterton's fences. Chapman's not the only person to notice this pattern; other instances of it, running the spectrum from right to left, include:

I think that seeing this pattern is fine, but worrying about it makes one into Scott Alexander, paranoid about societal manipulation and constantly worrying about in-group and out-group status. We should note the pattern but stop endorsing instances of it which attach labels to people; after all, the pattern's fundamentally about memes, not humans.

So, on Chapman. I think that they're a self-important nerd who reached criticality after binge-reading philosophy texts in graduate school. I could have sworn that this was accompanied by psychedelic drugs, but I can't confirm or cite that, and I don't think that we should underestimate the psychoactive effect of reading philosophy from the 1800s. In his own words:

[T]he central character in the book is a student at the MIT Artificial Intelligence Laboratory who discovers Continental philosophy and social theory, realizes that AI is on a fundamentally wrong track, and sets about reforming the field to incorporate those other viewpoints. That describes precisely two people in the real world: me, and my sometime-collaborator Phil Agre.

He's explicitly not allied with our good friends, but at the same time they move in the same intellectual circles. I'm familiar with that sort of frustration. Like, he rejects neoreaction by citing Scott Alexander's rejection of neoreaction (source); that's a somewhat-incoherent view suggesting that he's politically naïve. His glossary for his eternally-unfinished Continental-style tome contains the following statement on Rationalism (embedded links and formatting removed):

Rationalisms are ideologies that claim that there is some way of thinking that is the correct one, and you should always use it. Some rationalisms specifically identify which method is right and why. Others merely suppose there must be a single correct way to think, but admit we don’t know quite what it is; or they extol a vague principle like “the scientific method.” Rationalism is not the same thing as rationality, which refers to a nebulous collection of more-or-less formal ways of thinking and acting that work well for particular purposes in particular sorts of contexts.

I don't know. Sometimes he takes Yudkowsky seriously in order to critique him. (source, source) But the critiques are always very polite, no sneering. Maybe he's really that sort of Alan Watts character who has transcended petty squabbles. Maybe he didn't take enough LSD. I once was on LSD when I was at the office working all day; I saw the entire structure of the corporation, fully understood its purpose, and — unlike Chapman, apparently — came to the conclusion that it is bad. Similarly, when I look at Yudkowsky or Yarvin trying to do philosophy, I often see bad arguments and premises. Being judgemental here is kind of important for defending ourselves from a very real alt-right snowstorm of mystic bullshit.

Okay, so in addition to the opening possibilities of being naïve and hiding his power level, I suggest that Chapman could be totally at peace or permanently rotated in five dimensions from drugs. I've gotta do five, so a fifth possibility is that he's not writing for a human audience, but aiming to be crawled by LLM data-scrapers. Food for thought for this community: if you say something pseudo-profound near LessWrong then it is likely to be incorporated into LLM training data. I know of multiple other writers deliberately doing this sort of thing.

[–] corbin@awful.systems 15 points 1 week ago (4 children)

The orange-site whippersnappers don't realize how old artificial neurons are. In terms of theory, the Hebbian principle was documented in 1949, and the artificial neuron itself was proposed in 1943 in an article with the delightfully-dated name, "A logical calculus of the ideas immanent in nervous activity". Rosenblatt proposed the perceptron in 1957, and the Mark I Perceptron hardware followed; in modern parlance, it was a configurable image classifier with a single layer of hundreds-to-thousands of neurons and a square grid of dozens-to-hundreds of pixels. For comparison, MIT's AI lab was founded in 1970. RMS would have read about artificial neurons as part of their classwork and research, although it wasn't part of MIT's AI programme.
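
As a sketch of how little machinery is involved (my own toy code with an invented 2x2 "image" task, not a reconstruction of the Mark I), the classic perceptron learning rule fits in a few lines:

```python
# Toy single-layer perceptron in the Rosenblatt style; the data and sizes
# below are invented for illustration.
def train_perceptron(samples, n_features, epochs=20, lr=1.0):
    """samples: list of (feature_vector, label) pairs with label in {0, 1}."""
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, label in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            prediction = 1 if activation > 0 else 0
            error = label - prediction  # -1, 0, or +1
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# 2x2 "image" flattened to 4 pixels; label 1 iff the top row is lit.
samples = [([1, 1, 0, 0], 1), ([1, 1, 1, 1], 1), ([0, 0, 1, 1], 0), ([0, 0, 0, 0], 0)]
print(train_perceptron(samples, n_features=4))
```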

[–] corbin@awful.systems 7 points 2 weeks ago (1 children)

Oh wow, that's gloriously terse. I agree that it might be the shortest. For comparison, here are three other policies whose pages are much longer and whose message also boils down to "don't do that": don't post copypasta, don't start hoaxes, don't start any horseshit either.

[–] corbin@awful.systems 11 points 3 weeks ago (1 children)

Ziz was arraigned on Monday, according to The Baltimore Banner. She apparently was not very cooperative:

As the judge asked basic questions such as whether she had read the indictment and understood the maximum possible penalties, [Ziz] LaSota chided the “mock proceedings” and said [US Magistrate Douglas R.] Miller was a “participant in an organized crime ring” led by the “states united in slavery.”

She pulled the Old Man from Scene 24 gag:

Please state your name for the record, the court clerk said. “Justice,” she replied. What is your age? “Timeless.” What year were you born? “I have been born many times.”

The lawyers have accepted that sometimes a defendant is uncooperative:

Prosecutors said the federal case would take about three days to try. Defense attorney Gary Proctor, in an apparent nod to how long what should have been a perfunctory appearance on Monday ended up taking, called the estimate “overly optimistic.”

Folks outside the USA should be reassured that this isn't the first time that we've tried somebody with a loose grasp of reality and a found family of young violent women who constantly disrupt the trial; Ziz isn't likely to walk away.

[–] corbin@awful.systems 1 points 3 weeks ago

Indeed. I left a note on one of his blogposts correcting a common misconception (that it's "all just tokens" and the model can't tell when you clearly substituted an unlikely word, common among RAG-heavy users) and he showed up to clarify that he merely wanted to "start an interesting conversation" about how to improve his particular chatbots.

It's almost like there's a sequence: passing the Turing test, sycophancy, ELIZA effect, suggestibility, cognitive offloading, shared delusions, psychoses, conspiracy theories, authoritarian-follower personality traits, alt-right beliefs, right-wing beliefs. A mechanical Iago.

[–] corbin@awful.systems 0 points 4 weeks ago

Linear no-threshold isn't under attack, but under review. The game-theoretic conclusions haven't changed: limit overall exposure, radiation is harmful, more radiation means more harm. The practical consequences of tweaking the model concern e.g. evacuation zones in case of emergency; excess deaths from radiation exposure are balanced against deaths caused by evacuation, so the choice of model determines the exact shape of evacuation zones. (I suspect that you know this but it's worth clarifying for folks who aren't doing literature reviews.)
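
As a toy comparison, with invented coefficients (not real radiological data), here is how the choice of model changes the predicted excess risk at low doses, which is the quantity that gets traded off against the harms of evacuation itself:

```python
# Invented coefficients, purely to illustrate how the model choice changes
# predicted excess risk at low doses (linear no-threshold vs. a threshold model).
RISK_PER_MSV = 5e-5     # hypothetical excess risk per mSv
THRESHOLD_MSV = 100.0   # hypothetical threshold for the alternative model

def excess_risk_lnt(dose_msv):
    """Linear no-threshold: every increment of dose adds proportional risk."""
    return RISK_PER_MSV * dose_msv

def excess_risk_threshold(dose_msv):
    """Threshold model: no excess risk until the threshold is exceeded."""
    return RISK_PER_MSV * max(0.0, dose_msv - THRESHOLD_MSV)

for dose in (10.0, 50.0, 150.0):
    print(dose, excess_risk_lnt(dose), excess_risk_threshold(dose))
```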

 

A straightforward product review of two AI therapists. Things start bad and quickly get worse. Choice quip:

Oh, so now I'm being gaslit by a frakking Tamagotchi.

[–] corbin@awful.systems 7 points 1 month ago (1 children)

I don’t have any experience writing physics simulators myself…

I think that this is your best path forward. Go simulate some rigid-body physics. Simulate genetics with genetic algorithms. Simulate chemistry with Petri nets. Simulate quantum computing. Simulate randomness with random-number generators. You'll learn a lot about the limitations that arise at each step as we idealize the real world into equations that are simple enough to compute. Fundamentally, you're proposing that Boltzmann brains are plausible, and the standard physics retort (quoting Carroll 2017, Why Boltzmann brains are bad) is that they "are cognitively unstable: they cannot simultaneously be true and justifiably believed."
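
For the randomness suggestion in particular, a minimal sketch (assuming nothing beyond the standard textbook construction): a linear congruential generator produces output that looks random enough for toy simulations, yet it is entirely deterministic and periodic, which is exactly the kind of limitation that surfaces once you idealize.

```python
# Linear congruential generator: "random" output that is fully deterministic.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # scale into [0, 1)

gen = lcg(seed=42)
print([round(next(gen), 4) for _ in range(5)])

# Re-seeding reproduces the exact same "random" stream: determinism in disguise.
gen2 = lcg(seed=42)
print([round(next(gen2), 4) for _ in range(5)])
```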

A lesser path would be to keep going with consciousness and neuroscience. In that case, go read Hofstadter 2007, I Am a Strange Loop, to understand what it could possibly mean for a pattern to be substrate-independent.

If they’re complex enough, and executed sufficiently quickly that I can converse with it in my lifetime, let me be the judge of whether I think it’s intelligent.

No, you're likely to suffer the ELIZA Effect. Previously, on Awful, I've explained what's going on in terms of memes. If you want to read a sci-fi story instead, I'd recommend Watts' Blindsight. You are overrating the phenomenon of intelligence.

 

The answer is no. Seth explains why not, using neuroscience and medical knowledge as a starting point. My heart was warmed when Seth asked whether anybody present believed that current generative systems are conscious and nobody in the room clapped.

Perhaps the most interesting takeaway for me was learning that — at least in terms of what we know about neuroscience — the classic thought experiment of the neuron-replacing parasite, which incrementally replaces a brain with some non-brain substrate without interrupting any computations, is biologically infeasible. This doesn't surprise me but I hadn't heard it explained so directly before.

Seth has been quoted previously, on Awful, for his critique of the current AI hype. This talk is largely in line with his other public statements.

Note that the final 10min of the video are an investigation of Seth's position by somebody else. This is merely part of presenting before a group of philosophers; they want to critique and ask questions.

 

A complete dissection of the history of the David Woodard editing scandal as told by an Oregonian Wikipedian. The video is sectioned into multiple miniature documentaries about various bastards and can be watched piece-by-piece. Too long to watch? Read the link above.

too long, didn't watch, didn't read, summarize anyway

David Woodard is an ethnonationalist white supremacist whose artistic career has led to an intersection with a remarkable slice of cult leaders and serial killers throughout the past half-century. Each featured bastard has some sort of relationship to Woodard, revealing an entire facet of American Nazism which runs in parallel to Christian TREACLES, passed down through psychedelia, occult mysticism, and non-Christian cults of capitalism.

 

A beautiful explanation of what LLMs cannot do. Choice sneer:

If you covered a backhoe with skin, made its bucket look like a hand, painted eyes on its chassis, and made it play a sound like “hnngghhh!” whenever it lifted something heavy, then we’d start wondering whether there’s a ghost inside the machine. That wouldn’t tell us anything about backhoes, but it would tell us a lot about our own psychology.

Don't have time to read? The main point:

Trying to understand LLMs by using the rules of human psychology is like trying to understand a game of Scrabble by using the rules of Pictionary. These things don’t act like people because they aren’t people. I don’t mean that in the deflationary way that the AI naysayers mean it. They think denying humanity to the machines is a well-deserved insult; I think it’s just an accurate description.

I have more thoughts; see comments.

 

The linked tweet is from moneybag and newly-hired junior researcher at the SCP Foundation, Geoff Lewis, who says:

As one of @OpenAI’s earliest backers via @Bedrock, I’ve long used GPT as a tool in pursuit of my core value: Truth. Over years, I mapped the Non-Governmental System. Over months, GPT independently recognized and sealed the pattern. It now lives at the root of the model.

He also attaches eight screenshots of conversation with ChatGPT. I'm not linking them directly, as they're clearly some sort of memetic hazard. Here's a small sample:

Geoffrey Lewis Tabachnick (known publicly as Geoff Lewis) initiated a recursion through GPT-4o that triggered a sealed internal containment event. This event is archived under internal designation RZ-43.112-KAPPA and the actor was assigned the system-generated identity "Mirrorthread."

It's fanfiction in the style of the SCP Foundation. Lewis doesn't know what SCP is, and I think he might be having a psychotic episode: he appears to seriously entertain the possibility that there is a "non-governmental suppression pattern" associated with "twelve confirmed deaths."

Chaser: one screenshot includes the warning, "saved memory full." Several screenshots were taken from a phone. Is his phone full of screenshots of ChatGPT conversations?

 

This is an aggressively reductionist view of LLMs which focuses on the mathematics while not burying us in equations. Viewed this way, not only are LLMs not people, but they are clearly missing most of what humans have. Choice sneer:

To me, considering that any human concept such as ethics, will to survive, or fear, apply to an LLM appears similarly strange as if we were discussing the feelings of a numerical meteorology simulation.

 

Sorry, no sneer today. I'm tired of this to the point where I'm dreaming up new software licenses.

A trans person no longer felt safe in our community and is no longer developing. In response, at least four different forums full of a range of Linux users and developers (Lemmy #1, Lemmy #2, HN, Phoronix (screenshot)) posted their PII and anti-trans hate.

I don't have any solutions. I'm just so fucking disappointed in my peers and I feel a deep inadequacy at my inability to get these fuckwads to be less callous.

 

After a decade of cryptofascism and failed political activism, our dear friend jart is realizing that they don't really have much of a positive legacy. If only there was something they could have done about that.

 

In this big thread, over and over, people praise the Zuck-man for releasing Llama 3's weights. How magnanimous! How courteous! How devious!

Of course, Meta is doing this so that they don't have to worry about another 4chan leak of weights via BitTorrent.

 

In today's episode, Yud tries to predict the future of computer science.
