The orange-site whippersnappers don't realize how old artificial neurons are. In terms of theory, the Hebbian principle was documented in 1949 and the artificial neuron itself was proposed in 1943 in an article with the delightfully dated name, "A logical calculus of the ideas immanent in nervous activity". In 1957 the perceptron was proposed, and the Mark I Perceptron hardware followed shortly after; in modern parlance, it was a configurable image classifier with a single layer of hundreds to thousands of neurons and a square grid of dozens to hundreds of pixels. For comparison, MIT's AI lab was founded in 1970. RMS would have read about artificial neurons as part of his classwork and research, although they weren't part of MIT's AI programme.
Oh wow, that's gloriously terse. I agree that it might be the shortest. For comparison, here are three other policies whose pages are much longer and whose message also boils down to "don't do that": don't post copypasta, don't start hoaxes, don't start any horseshit either.
Ziz was arraigned on Monday, according to The Baltimore Banner. She apparently was not very cooperative:
As the judge asked basic questions such as whether she had read the indictment and understood the maximum possible penalties, [Ziz] LaSota chided the “mock proceedings” and said [US Magistrate Douglas R.] Miller was a “participant in an organized crime ring” led by the “states united in slavery.”
She pulled the Old Man from Scene 24 gag:
Please state your name for the record, the court clerk said. “Justice,” she replied. What is your age? “Timeless.” What year were you born? “I have been born many times.”
The lawyers have accepted that sometimes a defendant is uncooperative:
Prosecutors said the federal case would take about three days to try. Defense attorney Gary Proctor, in an apparent nod to how long what should have been a perfunctory appearance on Monday ended up taking, called the estimate “overly optimistic.”
Folks outside the USA should be reassured that this isn't the first time that we've tried somebody with a loose grasp of reality and a found family of young violent women who constantly disrupt the trial; Ziz isn't likely to walk away.
Indeed. I left a note on one of his blogposts correcting a common misconception (that it's "all just tokens" and the model can't tell when you clearly substituted an unlikely word, common among RAG-heavy users) and he showed up to clarify that he merely wanted to "start an interesting conversation" about how to improve his particular chatbots.
It's almost like there's a sequence: passing the Turing test, sycophancy, ELIZA effect, suggestibility, cognitive offloading, shared delusions, psychoses, conspiracy theories, authoritarian-follower personality traits, alt-right beliefs, right-wing beliefs. A mechanical Iago.
Linear no-threshold isn't under attack; it's under review. The game-theoretic conclusions haven't changed: limit overall exposure, radiation is harmful, more radiation means more harm. The practical consequences of tweaking the model concern e.g. evacuation zones in an emergency; excess deaths from radiation exposure are weighed against deaths caused by the evacuation itself, so the choice of model determines the exact shape of the zones. (I suspect that you know this, but it's worth clarifying for folks who aren't doing literature reviews.)
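To make the trade-off concrete, here's a toy back-of-the-envelope in Python. Every number in it (the risk coefficient, the threshold, the evacuation toll, the projected dose) is invented for illustration, not pulled from any real dosimetry:

```python
# Toy comparison of dose-response models for evacuation planning.
# All numbers are made up for illustration; this is not real dosimetry.

def excess_deaths_lnt(dose_mSv, population, risk_per_mSv=5e-5):
    """Linear no-threshold: every increment of dose adds risk."""
    return dose_mSv * risk_per_mSv * population

def excess_deaths_threshold(dose_mSv, population, threshold_mSv=100,
                            risk_per_mSv=5e-5):
    """Threshold model: doses below the threshold contribute nothing."""
    return max(dose_mSv - threshold_mSv, 0) * risk_per_mSv * population

population = 10_000
evacuation_deaths = 3   # invented: deaths caused by the evacuation itself
projected_dose = 50     # invented: projected dose in mSv if nobody leaves

for name, model in [("LNT", excess_deaths_lnt),
                    ("threshold", excess_deaths_threshold)]:
    radiation_deaths = model(projected_dose, population)
    decision = "evacuate" if radiation_deaths > evacuation_deaths else "shelter"
    print(f"{name}: {radiation_deaths:.1f} projected excess deaths -> {decision}")
```

Same projected exposure, two models, two opposite recommendations; stretch that over a map and you get two differently-shaped evacuation zones.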
I don’t have any experience writing physics simulators myself…
I think that this is your best path forward. Go simulate some rigid-body physics. Simulate genetics with genetic algorithms. Simulate chemistry with Petri nets. Simulate quantum computing. Simulate randomness with random-number generators. You'll learn a lot about the limitations that arise at each step as we idealize the real world into equations that are simple enough to compute. Fundamentally, you're proposing that Boltzmann brains are plausible, and the standard physics retort (quoting Carroll 2017, Why Boltzmann brains are bad) is that they "are cognitively unstable: they cannot simultaneously be true and justifiably believed."
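Here's the sort of toy I mean for the rigid-body suggestion: a point mass bouncing under gravity, with every idealization (no drag, no spin, instantaneous impacts, a fixed timestep) called out up front. It's a sketch of my own, not anybody's engine:

```python
# Toy rigid-body simulation: a point mass bouncing under gravity.
# Idealizations: no air drag, no spin, instantaneous impacts,
# constant coefficient of restitution, fixed timestep (semi-implicit Euler).

G = 9.81           # m/s^2, gravitational acceleration
RESTITUTION = 0.8  # fraction of speed retained after each bounce
DT = 0.001         # s, fixed timestep

def simulate(height=2.0, velocity=0.0, duration=5.0):
    t, y, v = 0.0, height, velocity
    bounces = 0
    while t < duration:
        v -= G * DT                # update velocity first (semi-implicit Euler)
        y += v * DT
        if y <= 0.0 and v < 0.0:   # crude collision with the floor
            y = 0.0
            v = -v * RESTITUTION
            bounces += 1
        t += DT
    return y, v, bounces

if __name__ == "__main__":
    y, v, bounces = simulate()
    print(f"after 5 s: height={y:.3f} m, velocity={v:.3f} m/s, bounces={bounces}")
```

Run it long enough and you hit the classic artifact: once the interval between bounces drops below the timestep, the collision handling degenerates into per-step jitter, the bounce counter climbs forever, and the ball never comes to rest. That's exactly the kind of limitation the exercise is meant to surface.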
A lesser path would be to keep going with consciousness and neuroscience. In that case, go read Hofstadter 2007, I Am a Strange Loop, to understand what it could possibly mean for a pattern to be substrate-independent.
If they’re complex enough, and executed sufficiently quickly that I can converse with it in my lifetime, let me be the judge of whether I think it’s intelligent.
No, you're likely to suffer the ELIZA Effect. Previously, on Awful, I've explained what's going on in terms of memes. If you want to read a sci-fi story instead, I'd recommend Watts' Blindsight. You are overrating the phenomenon of intelligence.
Unlike a bunker, a datacenter's ventilation consists of [DATA EXPUNGED] which are out of reach. The [DATA EXPUNGED] are heavily [DATA EXPUNGED], so [DATA EXPUNGED] unlikely to work either. However, this ventilation must be [DATA EXPUNGED] in order to effectively [DATA EXPUNGED], and that's done by [DATA EXPUNGED] into the [DATA EXPUNGED] and [DATA EXPUNGED] to prevent [DATA EXPUNGED].
Edit: making the joke funnier.
In my personal and professional opinion, most datacenter outages are caused by animals disturbing fiber or power lines. Consider campaigning for rewilding instead; it's legal and statistically might be more effective.
I'm going to be a little indirect and poetic here.
In Turing’s view, if a computer were to pass the Turing Test, the calculations it carried out in doing so would still constitute thought even if carried out by a clerk on a sheet of paper with no knowledge of how a teletype machine would translate them into text, or even by a distributed mass of clerks working in isolation from each other so that nothing resembling a thinking entity even exists.
Yes. In Smullyan's view, the acoustic patterns in the air would still constitute birdsong even if whistled by a human with no beak, or even by a vibrating electromagnetically-driven membrane which is located far from the data that it is playing back, so that nothing resembling a bird even exists. Or, in Aristoteles' view, the syntactic relationship between sentences would still constitute syllogism even if attributed to a long-dead philosopher, or even verified by a distributed mass of mechanical provers so that no single prover ever localizes the entirety of the modus ponens. In all cases, the pattern is the representation; the arrangement which generates the pattern is merely a substrate.
Consider the notion that thought is a biological process. It’s true that, if all of the atoms and cells comprising the organism can be mathematically modeled, a Turing Machine would then be able to simulate them. But it doesn’t follow from this that the Turing Machine would then generate thought. Consider the analogy of digestion. Sure, a Turing Machine could model every single molecule of a steak and calculate the precise ways in which it would move through and be broken down by a human digestive system. But all this could ever accomplish would be running a simulation of eating the steak. If you put an actual ribeye in front of a computer there is no amount of computational power that would allow the computer to actually eat and digest it.
Putting an actual ribeye in front of a human, there is no amount of computational power that would allow the human to actually eat and digest it, either. The act of eating can't be provoked merely by thought; there must be some sort of mechanical linkage between thoughts and the relevant parts of the body. Turing & Champernowne invented a program that plays chess and were also known (apocryphally, apparently) to play "run-around-the-house chess" or "Turing chess", which involved standing up and jogging a lap between chess moves. The ability to play Turing chess is cognitively embodied, but the ability to play chess is merely the ability to represent and manipulate certain patterns.
At the end of the day what defines art is the existence of intention behind it — the fact that some consciousness experienced thoughts that it subsequently tried to communicate. Without that there’s simply lines on paper, splotches of color, and noise. At the risk of tautology, meaning exists because people mean things.
Art is about the expression of memes within a medium; it is cultural propagation. Memes are not thoughts, though; the fact that some consciousness experienced and communicated memes is not a product of thought but a product of memetic evolution. The only other thing that art can carry is what carries it: the patterns which emerge from the encoding of the memes upon the medium.
He very much wants you to know that he knows that the Zizians are trans-coded and that he's okay with that, he's cool, he welcomes trans folks into Rationalism, he's totally an ally, etc. How does he phrase that, exactly?
That cult began among, and recruited from, a vulnerable subclass of a class of people who had earlier found tolerance and shelter in what calls itself the 'rationalist' community. I am not explicitly naming that class of people because the vast supermajority of them have not joined murder cults, and what other people do should not be their problem.
I mean, yes in the abstract, but would it really be so hard to say that MIRI supports trans rights? What other people do, when those other people form a majority of a hateful society, is very much a problem for the trans community! So much for status signaling.
This is a list of apostates. The idea is not to actually detail the folks who do the most damage to the cult's reputation, but to attack the few folks who were once members and left because they were no longer interested in being part of a cult. These attacks are usually motivated as much by emotion as by a desire to maintain control over the rest of the cult; in all cases, the sentiment is that the apostate dared to defy leadership. Usually, attacks on apostates are backed up by some sort of enforcement mechanism, from calls for stochastic terrorism to accusations of criminality; here, there's not actually a call to do anything external, possibly because Habryka realizes that the optics are bad, but more likely because he doesn't really have much power beyond the places where he's already an administrator. (That said, I would encourage everybody to become aware of, say, CoS's Fair Game policy or Noisy Investigation policy to get an idea of what kinds of attacks could occur.)
There are several prominent names that aren't here. I'd guess that Habryka hasn't been meditating over this list for a long time; it's just the first few people that came to mind when he wrote this note. This is somewhat reassuring, as it suggests that he doesn't fully understand how cultural critiques of LW affect the perception of LW more broadly; he doesn't realize how many people e.g. Breadtube reaches. Also, he doesn't understand that folks like SBF and Yarvin do immense reputational damage to rationalist-adjacent projects, although he seems to understand that the main issue with Zizians is not that they are Cringe but that they have been accused of multiple violent felonies.
Not many sneers to choose from, but I think one commenter gets it right:
In other groups with I’m familiar, you would kick out people you think are actually a danger or you think they might do something that brings your group into disrepute. But otherwise, I think it’s a sign of being a cult If you kick people for not going along with the group dogma.
Fundamentally, Chapman's essay is about how subcultures transition from valuing functionality to valuing aesthetics. Subcultures start with form following function by necessity. However, people adopt the subculture because they like the surface appearance of those forms, and the subculture eventually hollows out into a system which follows the iron law of bureaucracy and becomes non-functional: the façade gets over-invested in while Chesterton's fences get torn down. Chapman's not the only person to notice this pattern; other instances of it, running the spectrum from right to left, include:
I think that seeing this pattern is fine, but worrying about it makes one into Scott Alexander, paranoid about societal manipulation and constantly worrying about in-group and out-group status. We should note the pattern but stop endorsing instances of it which attach labels to people; after all, the pattern's fundamentally about memes, not humans.
So, on Chapman. I think that he's a self-important nerd who reached criticality after binge-reading philosophy texts in graduate school. I could have sworn that this was accompanied by psychedelic drugs, but I can't confirm or cite that, and I don't think that we should underestimate the psychoactive effect of reading philosophy from the 1800s. In his own words:
He's explicitly not allied with our good friends, but at the same time they move in the same intellectual circles. I'm familiar with that sort of frustration. Like, he rejects neoreaction by citing Scott Alexander's rejection of neoreaction (source); that's a somewhat-incoherent view suggesting that he's politically naïve. His glossary for his eternally-unfinished Continental-style tome contains the following statement on Rationalism (embedded links and formatting removed):
I don't know. Sometimes he takes Yudkowsky seriously in order to critique him. (source, source) But the critiques are always very polite, no sneering. Maybe he's really that sort of Alan Watts character who has transcended petty squabbles. Maybe he didn't take enough LSD. I once was on LSD when I was at the office working all day; I saw the entire structure of the corporation, fully understood its purpose, and — unlike Chapman, apparently — came to the conclusion that it is bad. Similarly, when I look at Yudkowsky or Yarvin trying to do philosophy, I often see bad arguments and premises. Being judgemental here is kind of important for defending ourselves from a very real alt-right snowstorm of mystic bullshit.
Okay, so in addition to the opening possibilities of being naïve and hiding his power level, I suggest that Chapman could be totally at peace or permanently rotated in five dimensions from drugs. I've gotta do five, so a fifth possibility is that he's not writing for a human audience, but aiming to be crawled by LLM data-scrapers. Food for thought for this community: if you say something pseudo-profound near LessWrong then it is likely to be incorporated into LLM training data. I know of multiple other writers deliberately doing this sort of thing.