Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 4 years ago

Hey Beeple and visitors to Beehaw: I think we need to have a discussion about !technology@beehaw.org, community culture, and moderation. First, some of the reasons that I think we need to have this conversation.

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (roughly double in the last month, by a rough hand count) than the next most-reported community that I moderate, Politics, and that was during election season, in a month that saw a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race.
  3. For a long time, I and other mods have felt that Technology at times isn’t living up to the Beehaw ethos. More often than I like I see comments in this community where users are being abusive or insulting toward one another, often without any provocation other than the perception that the other user’s opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is proportional and gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse or less valuable, but because we can't vet users from other instances, don't interact with them as frequently, and other instances may have less strict sign-up policies than Beehaw, which can turn moderation into a game of whack-a-mole.
  2. We will need you to report early and often. The drawbacks of getting reports for something that doesn't require our intervention are outweighed by the benefit of being able to get to a situation before it spirals out of control. If you're not sure whether something rises to the level of violating our rules, by all means say so in the report reason; I'd personally rather get reports early than late, after a thread has spiraled into an all-out flamewar.
    a. That said, please don't report people for being wrong, unless they are doing so in a way that is actually dangerous to others. It would be better for you to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive in stepping in when arguments are happening and trying to remind folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally that's why I love them) but do take the time to read them if you haven't. If you can't/won't or just need a reminder, though, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely being polite, or nice in the surface-level "oh bless your heart" kind of way; we mean being kind.
  2. Remember the human. The users that you interact with on Beehaw (and most likely other parts of the internet) are people, and people should be treated kindly and in good faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something you think is bad, ask them to clarify (kindly) and give them a chance to explain. Most likely, they've communicated themselves poorly, or you've misunderstood. After all of that, it's possible that you may disagree with them still, but we can disagree about Technology and still give one another the respect due to other humans.

Yekaterina Chudnovsky, online biographies say, is a mother-of-four who “enjoys spending time with her family and teaching them the importance of giving back and helping others”. They add that Ukrainian-born Chudnovsky, known as Katie, finds sanctuary in walks on the beach.

In interviews, Chudnovsky has spoken warmly about her commitment to philanthropy, her dedication to supporting cancer research and her work as a lawyer for an unnamed global technology firm. Pornography is never mentioned.

Now, it may become unavoidable. After the death of Chudnovsky’s husband, Leonid Radvinsky, from cancer last week at the age of 43, she is understood to have a controlling interest, through a family trust, in the London-based adult content site OnlyFans.

Chudnovsky is set to have a crucial role in deciding what happens to the business that made her husband a billionaire before he turned 40. The family stake is valued at about $5.5bn (£4.1bn).

Chudnovsky’s views on pornography will determine the site’s future business model, and whether it continues to generate huge sums of money by taking a 20% cut from the earnings of about 4 million content creators globally, a large proportion of whom generate money for the business by undressing and performing explicit content on the platform.


The young woman at the heart of what has been called the tech industry’s “big tobacco” moment was on YouTube at six and Instagram by nine. More than a decade later, she says, she still can’t live without the social media she became addicted to.

“I can’t, it’s too hard to be without it,” Kaley, now 20, told a jury at Los Angeles’ superior court. This week, five men and seven women handed down a verdict on the design of two of the world’s most popular apps that vindicated Kaley’s position.

The ruling sent shockwaves through Silicon Valley and sparked hope among families and child safety campaigners that change may finally be coming to social media. Mark Zuckerberg’s Meta and Google’s YouTube were found liable for deliberately designing addictive products used by Kaley and millions of other young people.

It was one case centred on the suffering of one young person who became depressed at 10 and self-harmed, but Kaley, referred to by her first name or the initials KGM in order to protect her privacy, was the figurehead for a much bigger fight.

“We wanted them to feel it,” one of the jurors explained to reporters. “We wanted them to realise this was unacceptable.”


AI models that lie and cheat appear to be growing in number, with reports of deceptive scheming surging in the last six months, a study into the technology has found.

AI chatbots and agents disregarded direct instructions, evaded safeguards and deceived humans and other AI, according to research by the UK government-funded AI Safety Institute (AISI). The study, shared with the Guardian, identified nearly 700 real-world cases of AI scheming and charted a five-fold rise in misbehaviour between October and March, with some AI models destroying emails and other files without permission.

The snapshot of scheming by AI agents “in the wild”, as opposed to in laboratory conditions, has sparked fresh calls for international monitoring of the increasingly capable models, and comes as Silicon Valley companies aggressively promote the technology as economically transformative. Last week the UK chancellor also launched a drive to get millions more Britons using AI.


The first basic income program for workers who have lost pay, jobs, or opportunities to AI began sending out its first funds this week. The program is run by the nonprofits the AI Commons Project and What We Will, which together administer the AI Dividend, a no-strings payment of $1,000 a month for a year to a cohort of 25-50 impacted workers. The project’s organizers say they have $300,000 in initial funding, and hope to expand quickly. They plan to distribute $3 million in funds in 2026—and aim to do so by pushing the major AI companies to contribute to the effort.

“Over the last few years, I’ve been mentoring students who have really struggled to land any jobs,” Kaitlin Cort, a veteran software engineer and programming instructor, tells me.

Cort is one of the organizers behind the AI Dividend, and she says she was alerted to a growing problem as she’s tried to find jobs for graduates of her programming classes. (She’s taught for Per Scholas, Future Code, and NYC Tech Talent Pipeline programs.) Cort says she’s seen the job market for entry level programmers dry up as executives and managers across the tech industry embrace Copilot and Claude. “The few jobs that students have landed have often been demeaning,” Cort says, “and not really allowing them to do real engineering work, but rather asking them to revie


Citing national security fears, America is effectively banning any new consumer-grade network routers made abroad.

The Federal Communications Commission (FCC) has updated its Covered List to include all foreign-made consumer routers, prohibiting the approval of any new models.

For clarification, the FCC says this change does not prevent the import, sale, or use of any existing models that the agency previously authorized.

That Covered List details equipment and services covered by Section 2 of the Secure Networks Act which, by their inclusion, are deemed to pose an unacceptable risk to US national security.

According to the FCC, this move follows a determination by a "White House-convened Executive Branch interagency body with appropriate national security expertise," in line with President Trump's National Security Strategy that the US must not be dependent on any other country for core components necessary to the nation's defense or economy.

Its determination was that foreign-produced routers introduce a supply chain vulnerability which could disrupt critical infrastructure and national defense, and pose a severe cybersecurity risk that could harm Americans.


Science fiction author Neal Stephenson, who coined the term “metaverse” in his 1992 novel Snow Crash, has argued he and others who believed immersive environments would require head-mounted hardware got it wrong.

In a post penned to mark Meta’s recent decision to end its work on the Metaverse after blowing through $80 billion, Stephenson said that twenty years ago, when he worked at virtual reality hardware company Magic Leap, he would ask “Do you really think that twenty years from now everyone is still going to be going around all day staring at little rectangles in their hands?”

“At the time it seemed obvious to me that the answer was no,” he wrote. Now he thinks that another 20 years into the future, devices like smartphones will still dominate. “Or at least that is the case if the only alternative is wearing things on their faces.”


Many people start their work with AI by prompting the machine to imagine it is an expert at the task they want it to perform, a technique that boffins have found may be futile.

Persona-based prompting – which involves using directives such as "You're an expert machine learning programmer" in a model prompt – dates back to 2023, when researchers began to explore how role-playing instructions influenced AI models’ output.

It's now common to find online prompting guides that include passages like, "You are an expert full-stack developer tasked with building a complete, production-ready full-stack web application from scratch."

But academics who have researched this approach report it does not always produce superior results.

In a pre-print paper titled "Expert Personas Improve LLM Alignment but Damage Accuracy: Bootstrapping Intent-Based Persona Routing with PRISM," researchers affiliated with the University of Southern California (USC) find that persona-based prompting is task-dependent – which they say explains the mixed results.

For alignment-dependent tasks, like writing, role-playing, and safety, personas do improve model performance. For pretraining-dependent tasks like math and coding, using the technique produces worse results.
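In practice, the technique amounts to nothing more than prepending a role-playing system message to the prompt. A minimal sketch in the common chat-message format (the function name and example prompts here are illustrative, not taken from the paper):

```python
def build_messages(prompt, persona=None):
    """Build a chat-style message list, optionally prefixed with an expert persona."""
    messages = []
    if persona:
        # The persona is just a system message placed before the user's request.
        messages.append({"role": "system", "content": persona})
    messages.append({"role": "user", "content": prompt})
    return messages

# Alignment-dependent task (writing): the paper suggests a persona may help.
writing = build_messages(
    "Draft a short apology email to customers about an outage.",
    persona="You are an experienced communications editor.",
)

# Pretraining-dependent task (math/coding): per the paper, skip the persona.
coding = build_messages("Write a function that reverses a linked list.")
```

The paper's contribution is deciding *when* to attach that system message based on the task type, rather than always doing so.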


Has Microsoft finally reckoned with Windows 11's many failings - or has its OS chief, Pavan Davuluri, simply offered more soothing platitudes to users fed up with bugs and unwanted AI?

Davuluri wrote a lengthy post on the Windows blog that was long on promises that things will get better, but short on words like "sorry," "apologize," or even the Americanism "our bad."

According to Davuluri, the movable taskbar dropped from Windows 11 is returning. Windows Update will stop forcing restarts quite so relentlessly. File Explorer will work as it should. And Windows itself will be less of a resource hog, faster, and more reliable.

Microsoft has also promised to rethink its obsession with AI. Davuluri said: "We are reducing unnecessary Copilot entry points, starting with apps like Snipping Tool, Photos, Widgets, and Notepad."

Not that Copilot is going away. "You will see us be more intentional about how and where Copilot integrates across Windows, focusing on experiences that are genuinely useful and well‑crafted," Davuluri said.

This implies that, up to now, the changes have not been intentional. So spraying Windows with the assistant, regardless of how users felt about it, was somehow an accident?

Windows 11 has become a bit of a car crash in the last few years - borked update after borked update. Rather than fixing problems, Microsoft instead focused on adding AI to Notepad and Paint. Users cried out for the return of seemingly minor functionality, such as the ability to move the taskbar, but Microsoft instead offered widgets and more Copilot.


A New Mexico jury on Tuesday ordered Meta to pay $375m in civil penalties after it found the company misled consumers about the safety of its platforms and enabled harm, including child sexual exploitation, against its users.

This is the first jury trial to find Meta liable for acts committed on its platform.

“The jury’s verdict is a historic victory for every child and family who has paid the price for Meta’s choice to put profits over kids’ safety,” said New Mexico’s attorney general, Raúl Torrez.

“Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew. Today the jury joined families, educators, and child safety experts in saying enough is enough.”

The lawsuit was brought by Torrez’s office in December 2023. It followed a two-year Guardian investigation, published in April of that year, revealing how Facebook and Instagram had become marketplaces for child sex trafficking. That investigation was cited several times in the complaint.

The jury ordered Meta to pay the maximum penalty under the law of $5,000 per violation, totaling $375m in civil penalties for violating New Mexico’s consumer protection laws. The jury found Meta liable for both claims brought by the state of New Mexico under the Unfair Practices Act.


Just what I want in my distro.

After weeks of debate, code to record user age was finally merged into the Linux world's favorite system management daemon.

Pull request #40954 to the systemd project is titled "userdb: add birthDate field to JSON user records." It's a new function for the existing userdb service, which adds a field to hold the user's date of birth:

Stores the user's birth date for age verification, as required by recent laws in California (AB-1043), Colorado (SB26-051), Brazil (Lei 15.211/2025), etc.

The contents of the field will be protected from modification except by users with root privileges.

The change comes after the recent release of systemd 260, but unless it is reverted for some reason, it will be part of systemd 261. One of the justifications is to facilitate the new parental controls in Flatpak, which are still at the draft stage.
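The new field slots into systemd's existing JSON user-record format served by userdb. A rough sketch of what a record carrying it might look like (the surrounding fields are standard userdb record fields; the exact date format of the new field is an assumption, not confirmed by the pull request):

```json
{
    "userName": "alice",
    "uid": 1000,
    "realName": "Alice Example",
    "homeDirectory": "/home/alice",
    "birthDate": "2012-05-14"
}
```

Since only root can modify the field, an unprivileged child account could not simply edit its own record to bypass an age check.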
