this post was submitted on 14 Dec 2025
32 points (100.0% liked)

Technology

41014 readers
796 users here now

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 3 years ago

In a small room in San Diego last week, a man in a black leather jacket explained to me how to save the world from destruction by AI. Max Tegmark, a notable figure in the AI-safety movement, believes that “artificial general intelligence,” or AGI, could precipitate the end of human life. I was in town for NeurIPS, one of the largest AI-research conferences, and Tegmark had invited me, along with five other journalists, to a briefing on an AI-safety index that he would release the next day. No company scored better than a C+.

The threat of technological superintelligence is the stuff of science fiction, yet it has become a topic of serious discussion in the past few years. Despite the lack of a clear definition—even OpenAI CEO Sam Altman has called AGI a “weakly defined term”—the idea that powerful AI poses an inherent threat to humanity has gained acceptance among respected cultural critics.

Granted, generative AI is a powerful technology that has already had a massive impact on our work and culture. But superintelligence has become one of several questionable narratives promoted by the AI industry, along with the ideas that AI learns like a human, that it has “emergent” capabilities, that “reasoning models” are actually reasoning, and that the technology will eventually improve itself.

I traveled to NeurIPS, held at the waterfront fortress that is the San Diego Convention Center, partly to understand how seriously these narratives are taken within the AI industry. Do AGI aspirations guide research and product development? When I asked Tegmark about this, he told me that the major AI companies were sincerely trying to build AGI, but his reasoning was unconvincing. “I know their founders,” he said. “And they’ve said so publicly.”

top 6 comments
[–] kbal@fedia.io 13 points 5 days ago (1 children)

The AI bubble is so ridiculously huge at this point that I think we're all living inside the AI bubble.

[–] I_am_10_squirrels@beehaw.org 5 points 5 days ago

We're all bubble boy

[–] spit_evil_olive_tips@beehaw.org 4 points 4 days ago (1 children)

In a small room in San Diego last week

...

I was in town for NeurIPS, one of the largest AI-research conferences, and Tegmark had invited me, along with five other journalists

congrats to this author on getting a business trip to San Diego during December. I bet it was nice and warm.

it seems like this is a pretty typical piece of access journalism:

The place to be, if you could get in, was the party hosted by Cohere...

...

With the help of a researcher friend, I secured an invite to a mixer hosted by the Mohamed bin Zayed University of Artificial Intelligence, the world’s first AI-focused university, named for the current UAE president.

...

On the roof of the Hard Rock Hotel...

leading to a "conclusion" pretty typical of access journalism:

It struck me that both might be correct: that many AI developers are thinking about the technology’s most tangible problems while public conversations about AI—including those among the most prominent developers themselves—are dominated by imagined ones.

what if the critics and the people they're criticizing are both correct? I am a very smart person who gets paid to write for The Atlantic.

[–] Powderhorn@beehaw.org 1 points 4 days ago

I sort of glossed over the access. One expects that from longform, so it felt like it came with the territory.

[–] Megaman_EXE@beehaw.org 8 points 5 days ago* (last edited 5 days ago) (1 children)

I recently saw this video that was an interesting slap back to reality. I hope this AI bubble pops quickly. Sooner the better

https://youtu.be/4lKyNdZz3Vw

[–] todotoro@midwest.social 5 points 5 days ago

Great link and good point. Subbed, ty for sharing. (I'll believe what a NetNavi says about AI any day.)