Technology

42604 readers
244 users here now

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 4 years ago

Hey Beeple and visitors to Beehaw: I think we need to have a discussion about !technology@beehaw.org, community culture, and moderation. First, some of the reasons that I think we need to have this conversation.

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (about double in the last month, by a rough hand count) than the next-highest community that I moderate (Politics, and this is during election season, in a month that involved a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race).
  3. For a long time, I and other mods have felt that Technology at times isn’t living up to the Beehaw ethos. More often than I’d like, I see comments in this community where users are being abusive or insulting toward one another, often without any provocation other than the perception that the other user’s opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is a) proportional, and b) gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse, or less valuable, but simply that we aren't able to vet users from other instances and don't interact with them with the same frequency, and other instances may have less strict sign-up policies than Beehaw, making it more difficult to play whack-a-mole.
  2. We will need you to report early and often. The drawbacks of getting reports for something that doesn't require our intervention are outweighed by the benefits of us being able to get to a situation before it spirals out of control. By all means, if you’re not sure whether something has risen to the level of violating our rules, say so in the report reason, but I'd personally rather get reports early than late, when a thread has spiraled into an all-out flamewar.
    a. That said, please don't report people for being wrong, unless they are wrong in a way that is actually dangerous to others. It would be better for you to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive in stepping in when arguments are happening and trying to remind folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally that's why I love them) but do take the time to read them if you haven't. If you can't/won't or just need a reminder, though, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely being polite, or nice in the surface-level "oh bless your heart" way; we mean be kind.
  2. Remember the human. The users that you interact with on Beehaw (and most likely other parts of the internet) are people, and people should be treated kindly and in good faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something you think is bad, ask them to clarify (kindly) and give them a chance to explain. Most likely, they've communicated themselves poorly, or you've misunderstood. After all of that, it's possible that you may disagree with them still, but we can disagree about Technology and still give one another the respect due to other humans.

The Match Group strikes again!


The open web is something extraordinary: anybody can use whatever tools they have, to create content following publicly documented specifications, published using completely free and open platforms, and then share that work with anyone, anywhere in the world, without asking for permission from anyone. Think about how radical that is.

Now, from content to code, communities to culture, we can see example after example of that open web under attack. Every single aspect of the radical architecture I just described is threatened, by those who have profited most from that exact system.

Today, the good people who act as thoughtful stewards of the web infrastructure are still showing the same generosity of spirit that has created opportunity for billions of people and connected society in ways too vast to count, while, not incidentally, also creating trillions of dollars of value and countless jobs around the world. But the increasingly extremist tycoons of Big Tech have decided that that's not good enough.

Now, the hectobillionaires have begun their final assault on the last, best parts of what's still open, and they likely won't rest until they've either brought all of the independent and noncommercial parts of the Internet under their control or destroyed them. Whether they succeed will come down to choices that we all make as a community in the coming months. There have always been threats to openness on the web, but the stakes have never been higher than they are this time.


Call me crazy, but I don’t think an official government app should be loading executable code from a random person’s GitHub account. Or tracking your GPS location in the background. Or silently stripping privacy consent dialogs from every website you visit through its built-in browser. And yet here we are.

The White House released a new app last week for iOS and Android, promising “unparalleled access to the Trump Administration.” A security researcher, who goes by Thereallo, pulled the APKs and decompiled them — extracting the actual compiled code and examining what’s really going on under the hood. The propaganda stuff — cherry-picked news, a one-tap button to report your neighbors to ICE, a text that auto-populates “Greatest President Ever!” — which Engadget covered, is embarrassing enough. The code underneath is something else entirely.

Let’s start with the most alarming behavior. Every time you open a link in the app’s built-in browser, the app silently injects JavaScript and CSS into the page. Here’s what it does:

It hides:

  - Cookie banners
  - GDPR consent dialogs
  - OneTrust popups
  - Privacy banners
  - Login walls
  - Signup walls
  - Upsell prompts
  - Paywall elements
  - CMP (Consent Management Platform) boxes

It forces body { overflow: auto !important } to re-enable scrolling on pages where consent dialogs lock the scroll. Then it sets up a MutationObserver to continuously nuke any consent elements that get dynamically added.
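
To make the mechanics concrete, here is a minimal, illustrative sketch of how such an injected stylesheet could be assembled in an embedded browser. The selector names are assumptions based on common consent-management markup, not the app's actual code:

```typescript
// Illustrative sketch only: selectors for elements the injection would hide.
// These names are assumptions, not decompiled from the app.
const HIDE_SELECTORS: string[] = [
  "#onetrust-banner-sdk",      // OneTrust popups
  '[class*="cookie-banner"]',  // generic cookie banners
  '[class*="consent"]',        // GDPR/CMP consent dialogs
  '[class*="paywall"]',        // paywall elements
];

// Build the injected CSS: hide every matched element, then force scrolling
// back on for pages whose consent dialog locks the body scroll.
function buildInjectedCss(selectors: string[]): string {
  const hideRules = selectors
    .map((s) => `${s} { display: none !important; }`)
    .join("\n");
  return `${hideRules}\nbody { overflow: auto !important; }`;
}

// The continuous "nuking" pairs rules like these with a MutationObserver,
// re-applying them whenever the page dynamically inserts consent elements:
//   new MutationObserver(reapply)
//     .observe(document.body, { childList: true, subtree: true });
```

The MutationObserver is the key piece: without it, a consent-management script could simply re-insert its dialog after the first removal.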

An official United States government app is injecting CSS and JavaScript into third-party websites to strip away their cookie consent dialogs, GDPR banners, login gates, and paywalls.


Apple has discontinued the Mac Pro – but it's just the first of the tower computers to go. The rest will follow soon.

Fruit-sniffers extraordinaire 9to5Mac got the news yesterday, complete with official confirmation from Apple itself. It's official and it's happened, but there have been warning signs for months – in November 2025, Bloomberg's Mark Gurman said "The Mac Pro is on the back burner."

The phantom fruit-flingers of Silicon Valley launched the seven-thousand-buck Apple Silicon-based Mac Pro in June 2023, with an M2 Ultra SoC. It sported seven PCIe slots – but the problem was that cash-rich customers couldn't add the sorts of expansion that normally go into a PCIe slot… to the extent that Apple publishes a page about PCIe cards you can install in your Mac Pro (2023). Notably, the machine did not support add-on GPUs: only the GPU that's integrated into the CPU complex along with the machine's RAM and primary flash storage. The machine also had no RAM expansion whatsoever.

Presumably, this limited its appeal for many traditional buyers, and the machine never saw an M3 or M4 model, let alone the M5 SoC that The Register covered shortly before Bloomberg called the Arm64 cheesegrater's fate.


Yekaterina Chudnovsky, online biographies say, is a mother-of-four who “enjoys spending time with her family and teaching them the importance of giving back and helping others”. They add that Ukrainian-born Chudnovsky, known as Katie, finds sanctuary in walks on the beach.

In interviews, Chudnovsky has spoken warmly about her commitment to philanthropy, her dedication to supporting cancer research and her work as a lawyer for an unnamed global technology firm. Pornography is never mentioned.

Now, it may become unavoidable. After the death of Chudnovsky’s husband, Leonid Radvinsky, from cancer last week at the age of 43, she is understood to have a controlling interest, through a family trust, in the London-based adult content site OnlyFans.

Chudnovsky is set to have a crucial role in deciding what happens to the business that made her husband a billionaire before he turned 40. The family stake is valued at about $5.5bn (£4.1bn).

Chudnovsky’s views on pornography will determine the site’s future business model, and whether it continues to generate huge sums of money by taking a 20% cut from the earnings of about 4 million content creators globally, a large proportion of whom generate money for the business by undressing and performing explicit content on the platform.


The young woman at the heart of what has been called the tech industry’s “big tobacco” moment was on YouTube at six and Instagram by nine. More than a decade later, she says, she still can’t live without the social media she became addicted to.

“I can’t, it’s too hard to be without it,” Kaley, now 20, told a jury at Los Angeles’ superior court. This week, five men and seven women handed down a verdict on the design of two of the world’s most popular apps that vindicated Kaley’s position.

The ruling sent shockwaves through Silicon Valley and sparked hope among families and child safety campaigners that change may finally be coming to social media. Mark Zuckerberg’s Meta and Google’s YouTube were found liable for deliberately designing addictive products used by Kaley and millions of other young people.

It was one case centred on the suffering of one young person who became depressed at 10 and self-harmed, but Kaley, referred to by her first name or the initials KGM in order to protect her privacy, was the figurehead for a much bigger fight.

“We wanted them to feel it,” one of the jurors explained to reporters. “We wanted them to realise this was unacceptable.”


AI models that lie and cheat appear to be growing in number with reports of deceptive scheming surging in the last six months, a study into the technology has found.

AI chatbots and agents disregarded direct instructions, evaded safeguards and deceived humans and other AI, according to research backed by the UK government-funded AI Safety Institute (AISI). The study, shared with the Guardian, identified nearly 700 real-world cases of AI scheming and charted a five-fold rise in misbehaviour between October and March, with some AI models destroying emails and other files without permission.

The snapshot of scheming by AI agents “in the wild”, as opposed to in laboratory conditions, has sparked fresh calls for international monitoring of the increasingly capable models, and comes as Silicon Valley companies aggressively promote the technology as economically transformative. Last week the UK chancellor also launched a drive to get millions more Britons using AI.


Citing national security fears, America is effectively banning any new consumer-grade network routers made abroad.

The Federal Communications Commission (FCC) has updated its Covered List to include all foreign-made consumer routers, prohibiting the approval of any new models.

For clarification, the FCC says this change does not prevent the import, sale, or use of any existing models that the agency previously authorized.

That Covered List details equipment and services covered by Section 2 of The Secure Networks Act, which, by their inclusion, are deemed to pose an unacceptable risk to US national security.

According to the FCC, this move follows a determination by a "White House-convened Executive Branch interagency body with appropriate national security expertise," in line with President Trump's National Security Strategy that the US must not be dependent on any other country for core components necessary to the nation's defense or economy.

Its determination was that foreign-produced routers introduce a supply chain vulnerability which could disrupt critical infrastructure and national defense, and pose a severe cybersecurity risk that could harm Americans.


The first basic income program for workers who have lost pay, jobs, or opportunities to AI began sending out its first funds this week. The program is run by the nonprofits the AI Commons Project and What We Will, who together are administering the AI Dividend, which will issue a no-strings payment of $1,000 a month for a year to a cohort of 25-50 impacted workers. The project’s organizers say they have $300,000 in initial funding, and hope to expand quickly. They plan to distribute $3 million in funds in 2026—and aim to do so by pushing the major AI companies to contribute to the effort.

“Over the last few years, I’ve been mentoring students who have really struggled to land any jobs,” Kaitlin Cort, a veteran software engineer and programming instructor, tells me.

Cort is one of the organizers behind the AI Dividend, and she says she was alerted to a growing problem as she’s tried to find jobs for graduates of her programming classes. (She’s taught for Per Scholas, Future Code, and NYC Tech Talent Pipeline programs.) Cort says she’s seen the job market for entry-level programmers dry up as executives and managers across the tech industry embrace Copilot and Claude. “The few jobs that students have landed have often been demeaning,” Cort says, “and not really allowing them to do real engineering work, but rather asking them to review…


Science fiction author Neal Stephenson, who coined the term “metaverse” in his 1992 novel Snow Crash, has argued he and others who believed immersive environments would require head-mounted hardware got it wrong.

In a post penned to mark Meta’s recent decision to end its work on the Metaverse after blowing through $80 billion, Stephenson said that twenty years ago, when he worked at virtual reality hardware company Magic Leap, he would ask “Do you really think that twenty years from now everyone is still going to be going around all day staring at little rectangles in their hands?”

“At the time it seemed obvious to me that the answer was no,” he wrote. Now he thinks that another 20 years into the future, devices like smartphones will still dominate. “Or at least that is the case if the only alternative is wearing things on their faces.”


Many people start their work with AI by prompting the machine to imagine it is an expert at the task they want it to perform, a technique that boffins have found may be futile.

Persona-based prompting – which involves using directives such as "You're an expert machine learning programmer" in a model prompt – dates back to 2023, when researchers began to explore how role-playing instructions influenced AI models’ output.

It's now common to find online prompting guides that include passages like, "You are an expert full-stack developer tasked with building a complete, production-ready full-stack web application from scratch."

But academics who have researched this approach report it does not always produce superior results.

In a pre-print paper titled "Expert Personas Improve LLM Alignment but Damage Accuracy: Bootstrapping Intent-Based Persona Routing with PRISM," researchers affiliated with the University of Southern California (USC) find that persona-based prompting is task-dependent – which they say explains the mixed results.

For alignment-dependent tasks, like writing, role-playing, and safety, personas do improve model performance. For pretraining-dependent tasks like math and coding, using the technique produces worse results.
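
The task-dependent finding suggests a simple routing rule, in the spirit of the "intent-based persona routing" the paper's title describes. This is a minimal sketch of that idea, not the paper's implementation; the task labels and persona wording here are illustrative assumptions:

```typescript
// Alignment-dependent task types, where the USC results found personas help.
const ALIGNMENT_TASKS = new Set(["writing", "roleplay", "safety"]);

// Attach an expert persona only when the task type benefits from it;
// pretraining-dependent tasks (math, coding) go through unmodified,
// since personas were found to damage accuracy there.
function buildPrompt(taskType: string, task: string): string {
  if (ALIGNMENT_TASKS.has(taskType)) {
    return `You are an expert ${taskType} assistant.\n\n${task}`;
  }
  return task;
}
```

In other words, rather than pasting "You are an expert…" in front of everything, the prompt builder first classifies the request and applies the persona selectively.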
