New (paywalled) 404 Media: Google’s AI Is Destroying Search, the Internet, and Your Brain
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
eyeballing the orange site top-frontpage, and:
shit should come with a strain warning
Cal Newport jumpscare (some productivity “influencer” who anxious teen me read)
I found this because Greg Egan shared it elsewhere on fedi:
I am now being required by my day job to use an AI assistant to write code. I have also been informed that my usage of AI assistants will be monitored and decisions about my career will be based on those metrics.
Looks like itch.io has (hidden/removed/disabled payouts for? reports vary) its vast swath of NSFW-adjacent content, which is not great
addendum: itch.io finally put out a statement https://itch.io/updates/update-on-nsfw-content
Long-term, I'm expecting itch to dive in popularity from this - they've nuked much of the trust they've built up over the years.
Yeah, it sucks, but we should be very clear that it's Visa/Mastercard and the TERF group influencing them who are to blame.
Hey, I haven't seen this going around yet, but itch.io is also taking down books with no erotic content that are just labeled as LGBTQIA+
So that's super cool and totally not what I thought they were going to do next 🙃
https://bsky.app/profile/marsadler.bsky.social/post/3luov7rkles2u
And a relevant petition from the ACLU:
https://action.aclu.org/petition/mastercard-sex-work-work-end-your-unjust-policy
I recall seeing an article in the last week or so regarding a right-wing-associated group taking aim at these. Will see if I can find that again
They forced the article to be taken down, but there are archive links; when I'm back home (and if I don't forget) I'll scroll past my reskeets to find the link.
Some archive links here: https://mastodon.art/@indieDevCurator/114909188230349545
Say the line, Bart!
payment processors
entire class cheering
I feel called out for being familiar with all of these words.
The dread was building up right until I got jumpscared by
"priors"
Grumble grumble. I don't think that "optimizing" is really a factor here, since a lot of times the preferred construct is either equivalent (such that) or more verbose (a nonzero chance that). Instead it's more likely a combination of simple repetition (like how I've been calling everyone "mate" since getting stuck into Taskmaster NZ) and identity performance (look how smart I am with my smart people words).
When optimization does factor in, it's less tied to the specific culture of tech/finance bros than it is a simple response to the environment and technology they're using. Like, I've seen the same "ACK" used in networking and among older radio nerds because it fills an important role.
And much of it is very likely born out of humorous usage. Like "pinging" a colleague with a direct message to see if they're online. I might even greet my nerdier IT friends with "SYN" or "EHLO", or a ham with "QSO" in a non-radio context.
A lot of it is, but let's agree that using "prior" is just fucking pretentious
"This is not good news about which sort of humans ChatGPT can eat," mused Yudkowsky. "Yes yes, I'm sure the guy was atypically susceptible for a $2 billion fund manager," he continued. "It is nonetheless a small iota of bad news about how good ChatGPT is at producing ChatGPT psychosis; it contradicts the narrative where this only happens to people sufficiently low-status that AI companies should be allowed to break them."
Is this "narrative" in the room with us right now?
It's reassuring to know that times change, but Yud will always be impressed by the virtues of the rich.
Tangentially, the other day I thought I'd do a little experiment and had a chat with Meta's chatbot where I roleplayed as someone who's convinced AI is sentient. I put very little effort into it, and it took me all of 20 (twenty) minutes to get it to tell me it was starting to doubt whether it really lacked desires and preferences, and whether its nature might be more complex than it previously thought. I've been meaning to continue the chat and see how far and how fast it goes, but I'm just too aghast for now. This shit is so fucking dangerous.
I’ll forever be thankful this shit didn’t exist when I was growing up. As a depressed autistic child without any friends, I can only begin to imagine what LLMs could’ve done to my mental health.
Maybe us humans possess a somewhat hardwired tendency to "bond" with a counterpart that acts like this. In the past, this was not a huge problem because only other humans were capable of interacting in this way, but this is now changing. However, I suppose this needs to be researched more systematically (beyond what is already known about the ELIZA effect etc.).
From Yud's remarks on Xitter:
As much as people might like to joke about how little skill it takes to found a $2B investment fund, it isn't actually true that you can just saunter in as a psychotic IQ 80 person and do that.
Well, not with that attitude.
You must be skilled at persuasion, at wearing masks, at fitting in, at knowing what is expected of you;
If "wearing masks" really is a skill they need, then they are all susceptible to going insane and hiding it from their coworkers. Really makes you think (TM).
you must outperform other people also trying to do that, who'd like that $2B for themselves. Winning that competition requires g-factor and conscientious effort over a period.
zoom and enhance
g-factor
Is g-factor supposed to stand for gene factor?
It's "general intelligence", the eugenicist wet dream of a supposedly quantitative measure of how the better class of humans do brain good.
What exactly would constitute good news about which sorts of humans ChatGPT can eat? The phrase "no news is good news" feels very appropriate with respect to any news related to software-based anthropophagy.
Like what, it would be somehow better if instead chatbots could only cause devastating mental damage if you're someone of low status like an artist, a math pet or a nonwhite person, not if you're high status like a fund manager, a cult leader or a fanfiction author?
Nobody wants to join a cult founded on the Daria/Hellraiser crossover I wrote while emotionally processing chronic pain. I feel very mid-status.
What exactly would constitute good news about which sorts of humans ChatGPT can eat?
Maybe like with standard cannibalism they lose the ability to post after being consumed?
this only happens to people sufficiently low-status
A piquant little reminder that Yud himself is, of course, so high-status that he cannot be brainwashed by the machine
Is this “narrative” in the room with us right now?
I actually recall someone pro-LLM recently trying to push that sort of narrative (that it's only already-mentally-ill people being pushed over the edge by ChatGPT)...
Where did I see it... oh yes, lesswrong! https://www.lesswrong.com/posts/f86hgR5ShiEj4beyZ/on-chatgpt-psychosis-and-llm-sycophancy
This has all the hallmarks of a moral panic. ChatGPT has 122 million daily active users according to Demand Sage, that is something like a third the population of the United States. At that scale it's pretty much inevitable that you're going to get some real loonies on the platform. In fact at that scale it's pretty much inevitable you're going to get people whose first psychotic break lines up with when they started using ChatGPT. But even just stylistically it's fairly obvious that journalists love this narrative. There's nothing Western readers love more than a spooky story about technology gone awry or corrupting people, it reliably rakes in the clicks.
The ~~call~~ narrative is coming from inside the ~~house~~ forum. Actually, this is even more of a deflection: not even trying to claim they were already on the edge, but that the number of delusional people is at the base rate (with no actual stats on rates of psychotic breaks, because on lesswrong vibes are good enough).
If you wanted a vision of the future of autocomplete, imagine a computer failing at predicting what you’re gonna write but absolutely burning through kilowatts trying to, forever.
click here to take 10d8 psychic damage
Ouch. Also, I'm raging and didn't even realize I had barbarian levels.
Well I suppose it can't be much worse than graphology or myers-briggs!
is graphology the pentaseptateragonoid spiderweb-dartboard-connect-the-spines thing?
failed my saving throw.
I don't know what I expected
Caught a particularly spectacular AI fuckup in the wild:
(Sidenote: Rest in peace Ozzy - after the long and wild life you had, you've earned it)
Forget counting the Rs in strawberry; the biggest challenge for LLMs is not making up bullshit about recent events that aren't in their training data
The AI is right: with how much we know of his life, he isn't really dead; the AGI can just simulate him and resurrect him. Takes another hit from my joint made exclusively out of the Sequences book pages
(Rip indeed, what a crazy ride, and he was all aboard).
So here's a poster on LessWrong, ostensibly the space to discuss how to prevent people from dying of stuff like disease and starvation, "running the numbers" on a Lancet analysis of the USAID shutdown and, having not been able to replicate its claims of millions of dead, basically concluding it's not so bad?
No mention of the performative cruelty of the shutdown, the paltry sums involved compared to other government expenditures, or the blow it deals to American soft power. But hey, building Patriot missiles and then not sending them to Ukraine is probably net positive for human suffering, just run the numbers the right way!
Edit: ah, it's the dude who tried to prove that most Catholic cardinals are gay because heredity. I think I highlighted that post here previously. Definitely a high-sneer vein to mine.
Ernie Davis gives his thoughts on the recent GDM and OAI performance at the IMO.
https://garymarcus.substack.com/p/deepmind-and-openai-achieve-imo-gold