Chariots of the Gods was released in 1968. I think that ship may have sailed decades ago.
ShakingMyHead
Out of all of the things he did, that is one of those things.
Do you mean the retweets or actual replies? Because I'm not seeing any replies to his comment, even in xcancel.
Btw, "Don't Die" is a slogan of the Bryan Johnson-adjacent longevity community, which the writer is very likely to have seen often around Twitter.
Or it could be a reference to what often is said before the start of a match of a video game (though they probably left out "kick ass" for marketing purposes).
Edit: actually, considering that, maybe there's a reveal at the end that they're in the Basilisk torture sim, so... there might be something there?
The response to his paper in the video seems to imply that, at least in the paper, he is more explicit about AI lacking the requirements of consciousness.
"An investigator from the San Francisco Public Defender's Office lawfully served a subpoena on Mr. Altman because he is a potential witness in a pending criminal case," spokesperson Valerie Ibarra said in a statement to SFGATE.
In a post on X, the group wrote that one of their public defenders had managed to serve Sam Altman with a subpoena, requiring him to testify at their upcoming trial. They explained that the case involves their previous non-violent demonstrations, including blocking the entrance and the road in front of OpenAI's offices on multiple occasions.
"All of our non-violent actions against OpenAI were an attempt to slow OpenAI down in their attempted murder of everyone and every living thing on earth."
So it's not because he's being prosecuted.
I could also see the response to the bubble bursting being something like "At least the economy crashing delayed the murderous superintelligence."
So, I'm not an expert study-reader or anything, but it looks like they took some questions from the MMLU, modified them in some unspecified way, and sorted them into 3 categories (AI, human, AI-human), and after accounting for skill, determined that people with higher theory of mind had a slightly better outcome than people with lower theory of mind. They determined this based on what the people being tested wrote to the AI, but what they wrote isn't in the study.
What they didn't do is state that people with higher theory of mind are more likely to use AI or anything like that. The study also doesn't mention empathy at all, though I guess it could be inferred.
Not that any of that actually matters, because the way they determined how much "theory of mind" each person had was to ask Gemini 2.5 and GPT-4o.