Technology

42572 readers
233 users here now

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 4 years ago
1

Hey Beeple and visitors to Beehaw: I think we need to have a discussion about !technology@beehaw.org, community culture, and moderation. First, some of the reasons that I think we need to have this conversation.

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (about double in the last month, by a rough hand count) than the next-highest community that I moderate (Politics, and this during an election season, in a month that involved a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race).
  3. For a long time, I and the other mods have felt that Technology at times isn't living up to the Beehaw ethos. More often than I'd like, I see comments in this community where users are being abusive or insulting toward one another, often with no provocation beyond the perception that the other user's opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is both proportional and gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse or less valuable, but simply because we aren't able to vet users from other instances and don't interact with them with the same frequency, and because other instances may have less strict sign-up policies than Beehaw, which makes it more difficult to play whack-a-mole.
  2. We will need you to report early and often. The drawbacks of getting reports for something that doesn't require our intervention are outweighed by the benefits of being able to get to a situation before it spirals out of control. By all means, if you're not sure whether something rises to the level of violating our rules, say so in the report reason, but I'd personally rather get reports early than late, when a thread has already spiraled into an all-out flamewar.
    a. That said, please don't report people just for being wrong, unless they are wrong in a way that is actually dangerous to others. It would be better to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive in stepping in when arguments are happening and trying to remind folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally, that's why I love them), but do take the time to read them if you haven't. If you can't or won't, or just need a reminder, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely being polite, or nice in a surface-level "oh, bless your heart" kind of way; we mean being kind.
  2. Remember the human. The users you interact with on Beehaw (and most likely other parts of the internet) are people, and people should be treated kindly and in good faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something bad, ask them (kindly) to clarify and give them a chance to explain. Most likely, they've expressed themselves poorly, or you've misunderstood. After all of that, you may still disagree with them, but we can disagree about Technology and still give one another the respect due to fellow humans.
2

The first basic income program for workers who have lost pay, jobs, or opportunities to AI began sending out funds this week. The program is run by two nonprofits, the AI Commons Project and What We Will, which together administer the AI Dividend: a no-strings payment of $1,000 a month for a year to a cohort of 25-50 impacted workers. The project's organizers say they have $300,000 in initial funding and hope to expand quickly; they plan to distribute $3 million in 2026, and aim to do so by pushing the major AI companies to contribute to the effort.

“Over the last few years, I’ve been mentoring students who have really struggled to land any jobs,” Kaitlin Cort, a veteran software engineer and programming instructor, tells me.

Cort is one of the organizers behind the AI Dividend, and she says she was alerted to the growing problem as she tried to find jobs for graduates of her programming classes. (She's taught for Per Scholas, Future Code, and NYC Tech Talent Pipeline programs.) Cort says she's seen the job market for entry-level programmers dry up as executives and managers across the tech industry embrace Copilot and Claude. "The few jobs that students have landed have often been demeaning," Cort says, "and not really allowing them to do real engineering work, but rather asking them to revie…

3

Citing national security fears, America is effectively banning any new consumer-grade network routers made abroad.

The Federal Communications Commission (FCC) has updated its Covered List to include all foreign-made consumer routers, prohibiting the approval of any new models.

For clarification, the FCC says this change does not prevent the import, sale, or use of any existing models that the agency previously authorized.

That Covered List details equipment and services covered by Section 2 of the Secure Networks Act, which, by inclusion on the list, are deemed to pose an unacceptable risk to US national security.

According to the FCC, this move follows a determination by a "White House-convened Executive Branch interagency body with appropriate national security expertise," in line with President Trump's National Security Strategy that the US must not be dependent on any other country for core components necessary to the nation's defense or economy.

Its determination was that foreign-produced routers introduce a supply chain vulnerability that could disrupt critical infrastructure and national defense, and pose a severe cybersecurity risk that could harm Americans.

4

Science fiction author Neal Stephenson, who coined the term “metaverse” in his 1992 novel Snow Crash, has argued he and others who believed immersive environments would require head-mounted hardware got it wrong.

In a post penned to mark Meta’s recent decision to end its work on the Metaverse after blowing through $80 billion, Stephenson said that twenty years ago, when he worked at virtual reality hardware company Magic Leap, he would ask “Do you really think that twenty years from now everyone is still going to be going around all day staring at little rectangles in their hands?”

“At the time it seemed obvious to me that the answer was no,” he wrote. Now he thinks that another 20 years into the future, devices like smartphones will still dominate. “Or at least that is the case if the only alternative is wearing things on their faces.”

5

Has Microsoft finally reckoned with Windows 11's many failings - or has its OS chief, Pavan Davuluri, simply offered more soothing platitudes to users fed up with bugs and unwanted AI?

Davuluri wrote a lengthy post on the Windows blog that was long on promises that things will get better, but short on words like "sorry," "apologize," or even the Americanism "our bad."

According to Davuluri, the movable taskbar dropped from Windows 11 is returning. Windows Update will stop forcing restarts quite so relentlessly. File Explorer will work as it should. And Windows itself will be less of a resource hog, faster, and more reliable.

Microsoft has also promised to rethink its obsession with AI. Davuluri said: "We are reducing unnecessary Copilot entry points, starting with apps like Snipping Tool, Photos, Widgets, and Notepad."

Not that Copilot is going away. "You will see us be more intentional about how and where Copilot integrates across Windows, focusing on experiences that are genuinely useful and well‑crafted," Davuluri said.

This implies that, up to now, the changes have not been intentional. So spraying Windows with the assistant, regardless of how users felt about it, was somehow an accident?

Windows 11 has become a bit of a car crash in the last few years - borked update after borked update. Rather than fixing problems, Microsoft instead focused on adding AI to Notepad and Paint. Users cried out for the return of seemingly minor functionality, such as the ability to move the taskbar, but Microsoft instead offered widgets and more Copilot.

6

Many people start their work with AI by prompting the machine to imagine it is an expert at the task they want it to perform, a technique that boffins have found may be futile.

Persona-based prompting – which involves using directives such as "You're an expert machine learning programmer" in a model prompt – dates back to 2023, when researchers began to explore how role-playing instructions influenced AI models’ output.

It's now common to find online prompting guides that include passages like, "You are an expert full-stack developer tasked with building a complete, production-ready full-stack web application from scratch."

But academics who have researched this approach report it does not always produce superior results.

In a pre-print paper titled "Expert Personas Improve LLM Alignment but Damage Accuracy: Bootstrapping Intent-Based Persona Routing with PRISM," researchers affiliated with the University of Southern California (USC) find that persona-based prompting is task-dependent – which they say explains the mixed results.

For alignment-dependent tasks, like writing, role-playing, and safety, personas do improve model performance. For pretraining-dependent tasks like math and coding, using the technique produces worse results.
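To make the distinction concrete, here is a minimal Python sketch of routing a prompt through an expert persona only for alignment-dependent tasks. The split follows the paper's finding, but the category names, persona text, and message format (OpenAI-style chat messages) are illustrative assumptions, not code from the paper:

```python
# Illustrative sketch: attach an expert persona only for the task types
# where the USC researchers report it helps, and use a plain system
# message for math and coding, where they report it hurts accuracy.

ALIGNMENT_TASKS = {"writing", "roleplay", "safety"}   # personas help here
PRETRAINING_TASKS = {"math", "coding"}                # personas hurt here

def build_messages(task_type: str, user_prompt: str, persona: str) -> list[dict]:
    """Prepend the persona only when the task is alignment-dependent."""
    if task_type in ALIGNMENT_TASKS:
        system = persona                          # e.g. "You are an expert editor."
    else:
        system = "You are a helpful assistant."   # plain prompt for math/coding
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("coding", "Write a binary search.", "You are an expert programmer.")
print(msgs[0]["content"])  # -> You are a helpful assistant.
```

The routing table is the whole trick: the paper's "PRISM" approach does this classification automatically, while the sketch above hard-codes it.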

11

A New Mexico jury on Tuesday ordered Meta to pay $375m in civil penalties after it found the company misled consumers about the safety of its platforms and enabled harm, including child sexual exploitation, against its users.

This is the first jury trial to find Meta liable for acts committed on its platform.

“The jury’s verdict is a historic victory for every child and family who has paid the price for Meta’s choice to put profits over kids’ safety,” said New Mexico’s attorney general, Raúl Torrez.

“Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew. Today the jury joined families, educators, and child safety experts in saying enough is enough.”

The lawsuit was brought by Torrez's office in December 2023, following a two-year Guardian investigation published in April of that year that revealed how Facebook and Instagram had become marketplaces for child sex trafficking. That investigation was cited several times in the complaint.

The jury ordered Meta to pay the maximum penalty under the law, $5,000 per violation, totaling $375m in civil penalties for violating New Mexico's consumer protection laws. The jury found Meta liable on both claims brought by the state of New Mexico under the Unfair Practices Act.

15

Just what I want in my distro.

After weeks of debate, code to record a user's birth date was finally merged into the Linux world's favorite system management daemon.

Pull request #40954 to the systemd project is titled "userdb: add birthDate field to JSON user records." It's a new function for the existing userdb service, which adds a field to hold the user's date of birth:

Stores the user's birth date for age verification, as required by recent laws in California (AB-1043), Colorado (SB26-051), Brazil (Lei 15.211/2025), etc.

The contents of the field will be protected from modification except by users with root privileges.

The change comes after the recent release of systemd 260, so unless it is reverted for some reason, it will be part of systemd 261. One of the justifications offered is to support the new parental controls in Flatpak, which are still at the draft stage.
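The PR only adds the field; acting on it is left to whatever consumes the user record. As a purely hypothetical sketch, assuming the birth date surfaces as an ISO 8601 date string in the JSON user record (the exact encoding and any helper APIs are defined by systemd, not shown here), a service doing age verification might compute an age like this:

```python
import json
from datetime import date

# Hypothetical user record: only the birthDate field comes from the PR;
# the rest, and the string encoding, are illustrative assumptions.
record_json = '{"userName": "alice", "birthDate": "2012-03-14"}'

def age_on(record: dict, today: date) -> int:
    """Age in whole years on a given day, from the birthDate field."""
    born = date.fromisoformat(record["birthDate"])
    # Subtract one year if this year's birthday hasn't happened yet.
    return today.year - born.year - ((today.month, today.day) < (born.month, born.day))

record = json.loads(record_json)
print(age_on(record, date(2026, 1, 1)))  # -> 13
```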

16

I'm old enough that digital cameras only came out about the time I was legal to drink. Photos were a way to capture vacation moments, not food styling with a conspicuous bite missing. I rarely even take photos anymore, despite having been heavily into photography in college, to the extent that I developed my own Tri-X in the newsroom darkroom and went E-6 Velvia 50 when I wanted to do landscapes in colour. I bought film in bulk and had one of those machines that spooled film into canisters, which worked out in my favour, as Ivy Seright charged the same for processing a 36-shot roll even if I'd squeezed 40 into the canister.

All of which is to say: What the fuck problem is MS solving?

Microsoft is rolling out technology to transform OneDrive photos into AI-infused masterpieces. Or top up the bucket of slop, depending on your perspective.

The feature, called AI Restyle, allows users to apply a range of styles to photos in OneDrive. Where users might once have been happy with contrast tweaks or lighting adjustments, Microsoft has gone further by adding the ability to create a new version of a photo using either a preset or a prompt.

Reckon your family photos would look better in anime style? Microsoft's OneDrive can now make your dreams come true.

The ability to "Ghibli-fy" images with AI is not new. However, the functionality turning up in OneDrive - which means users can skip third-party services - is.

Microsoft is rolling the feature out to iOS and Android versions of the OneDrive application, and the web for users with a Microsoft 365 subscription. We asked the company whether processing is on-device or in Microsoft's cloud, but it has yet to respond.

If processing takes place in the cloud, users will need to consider where their data is going. Then again, if you're using OneDrive, that ship has already sailed.

17
submitted 2 days ago* (last edited 2 days ago) by XLE@piefed.social to c/technology@beehaw.org

Original WSJ exclusive: OpenAI Scraps Sora App in Continued Push to Focus on Coding and ‘Agent’ Tools

Paywall removal: https://archive.is/cKWkf

20

A patent granted to Google on January 27, 2026 titled “AI-generated content page tailored to a specific user” describes a system that evaluates your company’s landing page in real time and, if it decides the page won’t perform well enough for a specific user, replaces it with an AI-generated version assembled on the fly. The user never sees what your team built, they see what Google's machine learning model thinks they should see instead.

This isn’t a feature announcement; it’s a patent, meaning Google has legally protected the ability to do this. Whether and when they deploy it is a separate question, but the direction is unmistakable – your website may soon be optional.

The system described in the patent is more sophisticated than a simple redirect. When a user submits a query, Google generates a standard search result page. But simultaneously, the system scores the most relevant landing page using signals like conversion rate, bounce rate, click-through rate, and design quality. If that score falls below a threshold – or if the page simply lacks the desired content – search results may be updated to include a navigation link to an AI-generated alternative.

That alternative page isn’t a cached copy of your site. It’s a dynamically assembled page built from the user’s current query, their search history, their account context, and whatever Google can extract from your original page. The patent describes possible elements including personalized headlines, suggested product filters, a product feed, sitelinks to product detail pages, and even an embedded AI chatbot. In other words, a complete brand experience built by Google. Not you.

On the plus side, this kills the SEO market.

21

There was a post earlier about the NEMA 1-15 plug that was, unfortunately, just spam. However, it's kind of an interesting topic, and better yet, it reminded me of this delightfully old-school website: The Digital Museum of Plugs and Sockets. The history and overview sections for plug standards in different parts of the world are genuinely interesting, and the site as a whole is impressively comprehensive and well-constructed HTML. (I don't know how it looks on mobile, but on desktop it's a very clean-looking site.)

24

The tech giant says it's listening to user feedback.

25

When Rep. Leigh Finke spoke last month before the Minnesota House Commerce Finance and Policy Committee to testify against HF1434, a broad-sweeping proposal to age-gate the internet, she began with something disarming: agreement.

“I want to support the basic part of this,” she said, the shared goal of protecting young people online. That goal is not controversial: everyone wants kids to be safe. But HF1434, Minnesota’s proposed age-verification bill, simply won’t “protect children.” It mandates that websites hosting speech that is protected by the First Amendment for both adults and young people verify users’ identities, often through government IDs or biometric data. As we’ve discussed before, the bill’s definition of speech that lawmakers deem “harmful to minors” is notoriously broad—broad enough to sweep in lawful, non-pornographic speech about sexual orientation, sexual health, and gender identity.

Rep. Finke, an openly transgender lawmaker, next raised a point that her critics have since tried to distort: age-verification laws like the Minnesota bill are already being used to block young LGBTQ+ people from exercising their First Amendment rights to access information that may be educational, affirming, or life-saving. Referencing the Supreme Court case Free Speech Coalition v. Paxton, she noted that state attorneys general have been “almost jubilant” about the ability to use these laws to restrict queer youth from accessing content. “We know that ‘prurient interest’ could be for many people, the very existence of transgender kids,” she added, referring to the malleable legal standard that would govern what content must be age-gated under the law.

But despite years’ worth of evidence to back her up, Finke has faced a wave of attacks from countless media outlets and religious advocacy groups for her statements. Rep. Finke’s testimony was repeatedly mischaracterized as not having young people’s best interests in mind, when really she was accurately describing the lived reality of LGBTQ+ youth and advocating in support of their access to vital resources and community.
