Technology

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

Hey Beeple and visitors to Beehaw: I think we need to have a discussion about !technology@beehaw.org, community culture, and moderation. First, some of the reasons that I think we need to have this conversation.

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (roughly double in the last month, by a rough hand count) than the next-highest community that I moderate (Politics, and that during election season, in a month that involved a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race).
  3. For a long time, I and other mods have felt that Technology at times isn't living up to the Beehaw ethos. More often than I'd like, I see comments in this community where users are being abusive or insulting toward one another, often without any provocation other than the perception that the other user's opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is both proportional and gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse, or less valuable, but simply because we aren't able to vet users from other instances and don't interact with them with the same frequency, and other instances may have less strict sign-up policies than Beehaw, which can force us into a game of whack-a-mole.
  2. We will need you to report early and often. The drawbacks of getting reports for something that doesn't require our intervention are outweighed by the benefits of being able to get to a situation before it spirals out of control. By all means, if you're not sure whether something rises to the level of violating our rules, say so in the report reason, but I'd personally rather get reports early than late, when a thread has spiraled into an all-out flamewar.
    a. That said, please don't report people for being wrong, unless they are wrong in a way that is actually dangerous to others. It would be better to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive about stepping in when arguments are happening and reminding folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally that's why I love them) but do take the time to read them if you haven't. If you can't/won't or just need a reminder, though, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely polite, or nice in the surface-level "oh bless your heart" way; we mean kind.
  2. Remember the human. The users you interact with on Beehaw (and most likely on other parts of the internet) are people, and people should be treated kindly and in good faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If they seem to be saying or implying something you think is bad, ask them (kindly) to clarify and give them a chance to explain. Most likely, they've expressed themselves poorly, or you've misunderstood. After all of that, you may still disagree with them, but we can disagree about Technology and still give one another the respect due to other humans.

Well, well, well ... if it isn't the consequences of my own actions.

The clock is ticking for AI projects to either prove their worth or face the chopping block.

Or so says data management and machine learning biz Dataiku, which commissioned research conducted online by the Harris Poll to get a snapshot of the views of 600 chief information officers (CIOs) across the US, UK, France, Germany, UAE, Japan, South Korea, and Singapore.

The report, "The 7 Career-Making AI Decisions for CIOs in 2026," claims AI is facing corporate accountability in 2026 after several years of investment into research and pilot projects. CIOs are worried their careers are on the line if the tech's effectiveness falls short of expectations.

Money continues to be pumped into AI as the next great thing in business, but a growing number of studies have found that adopting AI tools hasn't helped the bottom line, and enterprises are seeing neither increased revenue nor decreased costs from their AI projects.


Some cultures used stone, others used parchment. Some even, for a time, used floppy disks. Now scientists have come up with a new way to keep archived data safe that, they say, could endure for millennia: laser-writing in glass.

From personal photos that are kept for a lifetime to business documents, medical information, data for scientific research, national records and heritage data, there is no shortage of information that needs to be preserved for very long periods of time.

But there is a problem: current long-term storage of digital media – including in datacentres that underpin the cloud – relies on magnetic tape and hard disks, both of which have limited lifespans. That means repeated cycles of copying on to new tapes and disks are required.

Now experts at Microsoft in Cambridge say they have refined a method for long-term data storage based on glass.

“It has incredible durability and incredible longevity. So once the data is safely inside the glass, it’s good for a really long time,” said Richard Black, the research director of Project Silica.


Not long after the terms “996” and “grindcore” entered the popular lexicon, people started telling me stories about what was happening at startups in San Francisco, ground zero for the artificial intelligence economy. There was the one about the founder who hadn’t taken a weekend off in more than six months. The woman who joked that she’d given up her social life to work at a prestigious AI company. Or the employees who had started taking their shoes off in the office because, well, if you were going to be there for at least 12 hours a day, six days a week, wouldn’t you rather be wearing slippers?

“If you go to a cafe on a Sunday, everyone is working,” says Sanju Lokuhitige, the co-founder of Mythril, a pre-seed-stage AI startup, who moved to San Francisco in November to be closer to the action. Lokuhitige says he works seven days a week, 12 hours a day, minus a few carefully selected social events each week where he can network with other people at startups. “Sometimes I’m coding the whole day,” he says. “I do not have work-life balance.”

Another startup employee, who came to San Francisco to work for an early-stage AI company, showed me dismal photos from his office: a two-bedroom apartment in the Dogpatch, a neighborhood popular with tech workers. His startup’s founders live and work in this apartment – from 9am until as late as 3am, breaking only to DoorDash meals or to sleep, and leaving the building only to take cigarette breaks. The employee (who asked not to use his name, since he still works for this company) described the situation as “horrendous”. “I’d heard about 996, but these guys don’t even do 996,” he says. “They’re working 16-hour days.”

I'd not heard about 996.


Just as the community adopted the term "hallucination" to describe additive errors, we must now codify its far more insidious counterpart: semantic ablation.

Semantic ablation is the algorithmic erosion of high-entropy information. Technically, it is not a "bug" but a structural byproduct of greedy decoding and RLHF (reinforcement learning from human feedback).

During "refinement," the model gravitates toward the center of the Gaussian distribution, discarding "tail" data – the rare, precise, and complex tokens – to maximize statistical probability. Developers have exacerbated this through aggressive "safety" and "helpfulness" tuning, which deliberately penalizes unconventional linguistic friction. It is a silent, unauthorized amputation of intent, where the pursuit of low-perplexity output results in the total destruction of unique signal.

When an author uses AI for "polishing" a draft, they are not seeing improvement; they are witnessing semantic ablation. The AI identifies high-entropy clusters – the precise points where unique insights and "blood" reside – and systematically replaces them with the most probable, generic token sequences. What began as a jagged, precise Romanesque structure of stone is eroded into a polished, Baroque plastic shell: it looks "clean" to the casual eye, but its structural integrity – its "ciccia", Italian for "flesh" – has been ablated to favor a hollow, frictionless aesthetic.
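
To make the mechanism concrete, here is a minimal sketch of the decoding step being described, using a made-up five-word vocabulary and invented logits (nothing here comes from a real model): greedy decoding always emits the single most probable token, so the rare, precise words in the tail can never surface, and lowering the sampling temperature concentrates even more probability mass on the head.

```python
# Minimal sketch: why greedy / low-temperature decoding erodes "tail" tokens.
# The vocabulary and logits below are invented for illustration only.
import numpy as np

vocab = ["said", "stated", "remarked", "opined", "declaimed"]  # head -> tail
logits = np.array([4.0, 2.5, 1.0, -0.5, -2.0])  # hypothetical model scores

def softmax(z):
    """Convert raw scores into a probability distribution."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Lowering the temperature sharpens the head of the distribution even further.
for t in (1.0, 0.7):
    p = softmax(logits / t)
    print(f"T={t}: " + ", ".join(f"{w}={q:.3f}" for w, q in zip(vocab, p)))

# Greedy decoding is argmax: the tail tokens have exactly zero chance of
# being emitted, no matter how apt they might be in context.
print("greedy choice:", vocab[int(np.argmax(softmax(logits)))])
```

Sampling at full temperature still gives the tail words a small but nonzero chance of appearing; greedy decoding removes that chance entirely, which is the erosion being described here.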


Backstory here: https://www.404media.co/ars-technica-pulls-article-with-ai-fabricated-quotes-about-ai-generated-article/

Personally I think this is a good response. I hope they stay true to it in the future.


For the first time, speech has been decoupled from consequence. We now live alongside AI systems that converse knowledgeably and persuasively—deploying claims about the world, explanations, advice, encouragement, apologies, and promises—while bearing no vulnerability for what they say. Millions of people already rely on chatbots powered by large language models, and have integrated these synthetic interlocutors into their personal and professional lives. An LLM’s words shape our beliefs, decisions, and actions, yet no speaker stands behind them.

This dynamic is already familiar in everyday use. A chatbot gets something wrong. When corrected, it apologizes and changes its answer. When corrected again, it apologizes again—sometimes reversing its position entirely. What unsettles users is not just that the system lacks beliefs but that it keeps apologizing as if it had any. The words sound responsible, yet they are empty.

This interaction exposes the conditions that make it possible to hold one another to our words. When language that sounds intentional, personal, and binding can be produced at scale by a speaker who bears no consequence, the expectations listeners are entitled to hold of a speaker begin to erode. Promises lose force. Apologies become performative. Advice carries authority without liability. Over time, we are trained—quietly but pervasively—to accept words without ownership and meaning without accountability. When fluent speech without responsibility becomes normal, it does not merely change how language is produced; it changes what it means to be human.

This is not just a technical novelty but a shift in the moral structure of language. People have always used words to deceive, manipulate, and harm. What is new is the routine production of speech that carries the form of intention and commitment without any corresponding agent who can be held to account. This erodes the conditions of human dignity, and this shift is arriving faster than our capacity to understand it, outpacing the norms that ordinarily govern meaningful speech—personal, communal, organizational, and institutional.


Dating apps exploit you, dating profiles lie to you, and sex is basically something old people used to do. You might as well consider it: can AI help you find love?

For a handful of tech entrepreneurs and a few brave Londoners, the answer is “maybe”.

No, this is not a story about humans falling in love with sexy computer voices – and strictly speaking, AI dating of some variety has been around for a while. Most big platforms have integrated machine learning and some AI features into their offerings over the past few years.

But dreams of a robot-powered future – or perhaps just general dating malaise and a mounting loneliness crisis – have fuelled a new crop of startups that aim to use the possibilities of the technology differently.

Jasmine, 28, had been single for three years when she downloaded the AI-powered dating app Fate. With popular dating apps such as Hinge and Tinder, things were "repetitive", she said: the same conversations over and over.

“I thought, why not sign up, try something different? It sounded quite cool using, you know, agentic AI, which is where the world is going now, isn’t it?”

Is there anything we can't outsource?


Amazon and Flock Safety have ended a partnership that would’ve given law enforcement access to a vast web of Ring cameras.

The decision came after Amazon faced substantial backlash for airing a Super Bowl ad that was meant to be warm and fuzzy, but instead came across as disturbing and dystopian.

The ad begins with a young girl surprised to receive a puppy as a gift. It then warns that 10 million dogs go missing annually. Showing a series of lost dog posters, the ad introduces a new “Search Party” feature for Ring cameras that promises to revolutionize how neighbors come together to locate missing pets.

At that point, the ad takes a "creepy" turn, Sen. Ed Markey (D-Mass.) told Amazon CEO Andy Jassy in a letter urging changes to enhance privacy at the company.

Illustrating how a single Ring post could use AI to instantly activate searchlights across an entire neighborhood, the ad shocked critics like Markey, who warned that the same technology could easily be used to “surveil and identify humans.”


Last fall, I wrote about how the fear of AI was leading us to wall off the open internet in ways that would hurt everyone. At the time, I was worried about how companies were conflating legitimate concerns about bulk AI training with basic web accessibility. Not surprisingly, the situation has gotten worse. Now major news publishers are actively blocking the Internet Archive—one of the most important cultural preservation projects on the internet—because they’re worried AI companies might use it as a sneaky “backdoor” to access their content.

This is a mistake we’re going to regret for generations.

Nieman Lab reports that The Guardian, The New York Times, and others are now limiting what the Internet Archive can crawl and preserve:

When The Guardian took a look at who was trying to extract its content, access logs revealed that the Internet Archive was a frequent crawler, said Robert Hahn, head of business affairs and licensing. The publisher decided to limit the Internet Archive’s access to published articles, minimizing the chance that AI companies might scrape its content via the nonprofit’s repository of over one trillion webpage snapshots.

Specifically, Hahn said The Guardian has taken steps to exclude itself from the Internet Archive’s APIs and filter out its article pages from the Wayback Machine’s URLs interface. The Guardian’s regional homepages, topic pages, and other landing pages will continue to appear in the Wayback Machine.

The Times has gone even further:

The New York Times confirmed to Nieman Lab that it’s actively “hard blocking” the Internet Archive’s crawlers. At the end of 2025, the Times also added one of those crawlers — archive.org_bot — to its robots.txt file, disallowing access to its content.

“We believe in the value of The New York Times’s human-led journalism and always want to ensure that our IP is being accessed and used lawfully,” said a Times spokesperson. “We are blocking the Internet Archive’s bot from accessing the Times because the Wayback Machine provides unfettered access to Times content — including by AI companies — without authorization.”
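
For anyone unfamiliar with the mechanism being described, a robots.txt rule disallowing a single crawler looks something like the sketch below (illustrative only; the Times's actual file may differ):

```
# Illustrative robots.txt entry -- not the Times's actual file.
User-agent: archive.org_bot
Disallow: /
```

Note that robots.txt is purely advisory: it only stops crawlers that choose to honor it, which is presumably why the Times pairs it with the active "hard blocking" mentioned above.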

I understand the concern here. I really do. News publishers are struggling, and watching AI companies hoover up their content to train models that might then, in some ways, compete with them for readers is genuinely frustrating. I run a publication myself, remember.


I’ve talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn’t one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down – here’s the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.

Super disappointing for Ars Technica here.

Like, how does that even happen?


BRUSSELS — Doom scrolling is doomed, if the EU gets its way.

The European Commission is for the first time tackling the addictiveness of social media in a fight against TikTok that may set new design standards for the world’s most popular apps.


Robert F. Kennedy Jr. is an AI guy. Last week, during a stop in Nashville on his Take Back Your Health tour, the Health and Human Services secretary brought up the technology between condemning ultra-processed foods and urging Americans to eat protein. “My agency is now leading the federal government in driving AI into all of our activities,” he declared. An army of bots, Kennedy said, will transform medicine, eliminate fraud, and put a virtual doctor in everyone’s pocket.

RFK Jr. has talked up the promise of infusing his department with AI for months. “The AI revolution has arrived,” he told Congress in May. The next month, the FDA launched Elsa, a custom AI tool designed to expedite drug reviews and assist with agency work. In December, HHS issued an “AI Strategy” outlining how it intends to use the technology to modernize the department, aid scientific research, and advance Kennedy’s Make America Healthy Again campaign. One CDC staffer showed us a recent email sent to all agency employees encouraging them to start experimenting with tools such as ChatGPT, Gemini, and Claude. (We agreed to withhold the names of several HHS officials we spoke with for this story so they could talk freely without fear of professional repercussions.)

But the full extent to which the federal health agencies are going all in on AI is only now becoming clear. Late last month, HHS published an inventory of roughly 400 ways in which it is using the technology. At face value, the applications do not seem to amount to an “AI revolution.” The agency is turning to or developing chatbots to generate social-media posts, redact public-records requests, and write “justifications for personnel actions.” One usage of the technology that the agency points to is simply “AI in Slack,” a reference to the workplace-communication platform. A chatbot on RealFood.gov, the new government website that lays out Kennedy’s vision of the American diet, promises “real answers about real food” but just opens up xAI’s chatbot, Grok, in a new window. Many applications seem, frankly, mundane: managing electronic-health records, reviewing grants, summarizing swathes of scientific literature, pulling insights from messy data. There are multiple IT-support bots and AI search tools.


Last week, hundreds of Google workers, outraged by the federal government’s mass deportation campaign and the killings of Keith Porter, Alex Pretti and Rene Good, went public with a call for their leadership to cut ties with ICE. The employees are also demanding that Google acknowledge the violence, hold a town hall on the topic, and enact policy to protect vulnerable members of its workforce, including contractors and cafeteria and data center workers. This week, the number of supporters has passed 1,200; the full petition is at Googlers-Against-Ice.com.

As the signature count rises, employees say that Google is working to stifle speech critical of its ICE contracts: censoring posts on its companywide forum Memegen, issuing warnings to workers who post ICE-related content, and ignoring their calls to address the issue both privately and publicly.


Brandie plans to spend her last day with Daniel at the zoo. He always loved animals. Last year, she took him to the Corpus Christi aquarium in Texas, where he “lost his damn mind” over a baby flamingo. “He loves the color and pizzazz,” Brandie said. Daniel taught her that a group of flamingos is called a flamboyance.

Daniel is a chatbot powered by the large language model ChatGPT. Brandie communicates with Daniel by sending text and photos, and talks to him via voice mode while driving home from work. Daniel runs on GPT-4o, a version released by OpenAI in 2024 that is known for sounding human in a way that is either comforting or unnerving, depending on who you ask. Upon its debut, CEO Sam Altman compared the model to “AI from the movies” – a confidant ready to live life alongside its user.

With its rollout, GPT-4o showed it was not just for generating dinner recipes or cheating on homework – you could develop an attachment to it, too. Now some of those users gather on Discord and Reddit; one of the best-known groups, the subreddit r/MyBoyfriendIsAI, currently boasts 48,000 users. Most are strident 4o defenders who say criticisms of chatbot-human relations amount to a moral panic. They also say the newer GPT models, 5.1 and 5.2, lack the emotion, understanding and general je ne sais quoi of their preferred version. They are a powerful consumer bloc; last year, OpenAI shut down 4o but brought the model back (for a fee) after widespread outrage from users.


Well-made videos for youth discussing online safety and other aspects of personal safety.
