Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 4 years ago
1

Hey Beeple and visitors to Beehaw: I think we need to have a discussion about !technology@beehaw.org, community culture, and moderation. First, some of the reasons that I think we need to have this conversation.

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (about double in the last month, by a rough hand count) than the next-highest community I moderate (Politics, and that during an election season, in a month that involved a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race).
  3. For a long time, I and other mods have felt that Technology at times isn’t living up to the Beehaw ethos. More often than I’d like, I see comments in this community where users are being abusive or insulting toward one another, often without any provocation beyond the perception that the other user’s opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is a) proportional, and b) gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse, or less valuable, but simply that we aren't able to vet users from other instances and don't interact with them with the same frequency, and other instances may have less strict sign-up policies than Beehaw, which can force us into a game of whack-a-mole.
  2. We will need you to report early and often. The drawbacks of getting reports for something that doesn't require our intervention are outweighed by the benefits of being able to get to a situation before it spirals out of control. By all means, if you’re not sure whether something rises to the level of violating our rules, say so in the report reason; I'd personally rather get reports early than late, when a thread has already descended into an all-out flamewar.
    a. That said, please don't report people simply for being wrong, unless they are wrong in a way that is actually dangerous to others. It would be better to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive in stepping in when arguments are happening and trying to remind folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally that's why I love them) but do take the time to read them if you haven't. If you can't/won't or just need a reminder, though, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely being polite, or nice in the surface-level "oh bless your heart" kind of way; we mean be kind.
  2. Remember the human. The users you interact with on Beehaw (and most likely other parts of the internet) are people, and people should be treated kindly and in good faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something you think is bad, ask them to clarify (kindly) and give them a chance to explain. Most likely, they've communicated themselves poorly, or you've misunderstood. After all of that, it's possible that you may disagree with them still, but we can disagree about Technology and still give one another the respect due to other humans.
2

IT consultant and services provider Accenture has agreed to buy Speedtest and Downdetector owner Ookla from Ziff Davis for $1.2 billion in cash.

Accenture plans to integrate Ookla’s data products into its own offerings that are targeted at helping communications service providers, hyperscalers, government entities, and other types of customers “optimize … mission-critical Wi-Fi and 5G networks,” Accenture’s announcement today said.

Ookla’s platform also includes Ekahau, which offers tools for troubleshooting and designing wireless networks, and RootMetrics, which monitors mobile network performance.

Accenture plans to use data gathered from Ookla’s services for applications such as helping hyperscalers and cloud providers “ensure the resilience of AI infrastructure and edge datacenters, which deliver most of the inference workload,” improving fraud prevention in banks, conducting smart home analytics in utilities, and retail traffic optimization.

3

News Corp’s global chief executive has described news organisations as a valuable “input” for artificial intelligence, as the media empire signs an AI content licensing deal with Meta worth up to US$50m (A$71m) a year.

In an upbeat presentation, the chief executive of Rupert Murdoch’s company, Robert Thomson, said the “reliable” breaking news and information in publications like the Australian, the Times of London and Dow Jones was “hard to beat” as an “input” for AI.

The Meta deal, which was revealed by the Murdoch-owned Wall Street Journal earlier this week and is expected to last at least three years, will allow Facebook and Instagram’s parent company to scrape News Corp’s US and UK content to train its artificial-intelligence products.

The outlets include the Journal and the New York Post, but the Australian mastheads, which include the Daily Telegraph and the Herald Sun, are not part of the deal.

“We’re essentially an input company,” Thomson told a Morgan Stanley tech conference in San Francisco on Monday ahead of the landmark Meta deal.

4

Study finds chat bots can sway opinions on historical events
nationaltoday.com/us/wa/seattl…

5

Originally came across the skit on YouTube, but glad to find it on a different platform for sharing. Kinda funny and too real.

6

Burner accounts on social media sites can increasingly be analyzed with AI to identify the pseudonymous users who post to them, in research that has far-reaching consequences for privacy on the Internet, researchers said.

The finding, from a recently published research paper, is based on results of experiments correlating specific individuals with accounts or posts across more than one social media platform. The success rate was far greater than existing classical deanonymization work that relied on humans assembling structured data sets suitable for algorithmic matching or manual work by skilled investigators. Recall—that is, how many users were successfully deanonymized—was as high as 68 percent. Precision—meaning the rate of guesses that correctly identify the user—was up to 90 percent.
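For readers less familiar with those two metrics, here's a minimal sketch of how precision and recall are computed for a deanonymization attempt. The function and toy data are purely illustrative, not taken from the paper:

```python
def precision_recall(guesses, truth):
    """guesses: pseudonym -> identity the model proposed (only where it proposed one)
    truth:   pseudonym -> actual identity behind the account"""
    correct = sum(1 for pseud, who in guesses.items() if truth.get(pseud) == who)
    precision = correct / len(guesses) if guesses else 0.0  # share of guesses that were right
    recall = correct / len(truth) if truth else 0.0         # share of all users unmasked
    return precision, recall

# Toy data, purely illustrative:
truth = {"GamerCat2025": "alice", "SecretCoderX": "bob", "quietfox": "carol"}
guesses = {"GamerCat2025": "alice", "SecretCoderX": "dave"}

p, r = precision_recall(guesses, truth)
# p = 0.5 (1 of 2 guesses correct), r = 1/3 (1 of 3 users identified)
```

By those definitions, the paper's reported numbers mean the models identified up to roughly two-thirds of users, and when they ventured a guess it was right up to nine times out of ten.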

The findings have the potential to upend pseudonymity, an imperfect but often sufficient privacy measure used by many people to post queries and participate in sometimes sensitive public discussions while making it hard for others to positively identify the speakers. The ability to cheaply and quickly identify the people behind such obscured accounts opens them up to doxxing, stalking, and the assembly of detailed marketing profiles that track where speakers live, what they do for a living, and other personal information. In practice, that layer of pseudonymity may no longer hold.

7
submitted 1 day ago* (last edited 1 day ago) by harfang@slrpnk.net to c/technology@beehaw.org

Hello,

Is there any ethical (privacy-focused) alternative to Canva?

Edit: Especially for slide creation :)

As Canva shares your data and is going public on the stock exchange this year, it feels urgent to me to move away from this service.

Best, H

8

The hed here is a bit misleading ... 40% of total staff is not 40% of an individual job.

Jack Dorsey cited AI as the driving force behind cutting 40% of his company’s employees, but other factors such as a weak crypto market, overstaffing and a declining stock price may also have motivated the move.

Last week, the financial technology company Block announced that it would lay off 4,000 of its 10,000 workers. Dorsey, Block’s CEO, said in a letter to shareholders that advances in AI “have changed what it means to build and run a company”.

“We’re already seeing it internally. A significantly smaller team, using the tools we’re building, can do more and do it better. And intelligence tool capabilities are compounding faster every week,” he wrote. He also said that Block’s business remained strong and that these cuts weren’t an austerity measure.

Can AI operate 40% of a business? Perhaps, but other specters haunt Dorsey’s company.

I'd be surprised if LLMs can handle 40% of anyone's job. You know what often can? Good, old-fashioned automation. It handles tedious tasks no one wanted to do in the first place and produces improved, predictable and testable results.
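To make that contrast concrete, here's a trivial sketch of the kind of deterministic automation being described (the filename scheme and function are hypothetical): same input in, same output out, every time, so it can be pinned down with ordinary unit tests in a way an LLM's output can't.

```python
import re

def normalize_invoice_name(filename: str) -> str:
    """Rename 'Invoice 03-2025 final (2).pdf' style names to a fixed scheme.

    Deterministic: the same input always yields the same output, so the
    behavior can be verified once with plain tests and then trusted.
    """
    m = re.search(r"(\d{2})-(\d{4})", filename)
    if not m:
        raise ValueError(f"no MM-YYYY date found in {filename!r}")
    month, year = m.groups()
    return f"invoice_{year}-{month}.pdf"

assert normalize_invoice_name("Invoice 03-2025 final (2).pdf") == "invoice_2025-03.pdf"
```

Nobody gets a press release for writing a script like this, but it reliably eats the tedious work that LLMs are currently being credited with.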

9

Within hours on Friday, the Pentagon blacklisted one AI company for refusing to drop its safety commitments on surveillance and autonomous weapons, then turned around and praised a competitor for signing a deal that supposedly preserved those exact same commitments.

This confused some people. Why would the Pentagon seek to destroy one company over the same terms it agreed to with its largest competitor just hours later?

There’s an answer though: the words in OpenAI’s contract likely don’t mean what most people think they mean.

This isn’t speculation about future abuse. It’s the documented operating procedure of the NSA for decades—a practice exposed repeatedly by whistleblowers, litigated in courts, and eventually confirmed in declassified documents.

12

As you’ve probably heard, on Friday that political caprice came home to roost for many in Silicon Valley when Defense Secretary Pete Hegseth announced he was declaring Anthropic a “supply chain risk” and that no one with US military contracts could have a commercial relationship with the company any more (a gross exaggeration of what being declared a supply chain risk actually means, but that’s beside the point).

We’ve criticized these “supply chain risk” designations going back years, but mainly for how they tend to be used to prop up American companies against foreign (usually Chinese) competitors with little evidence regarding the actual risk. Of course, you can easily understand the stated intent of an “SCR” designation: if there’s a foreign company with ties to a government that is averse to the US, there is always a risk that the company could agree to sneak backdoors or spyware into the network and do something bad. Hell, it’s what the US does.

But here, it makes no sense at all. The only “risk” was Anthropic saying its technology shouldn’t be used for domestic mass surveillance or to power autonomous killing machines. There is no underlying risk.

13

cross-posted from: https://lemmy.ml/post/43923687

cross-posted from: https://lemmy.ml/post/43923170

We're happy to announce a long-term partnership with Motorola. We're collaborating on future devices meeting our privacy and security standards with official GrapheneOS support.

https://motorolanews.com/motorola-three-new-b2b-solutions-at-mwc-2026/

14

Amazon says “objects” struck a data center in the UAE, throwing sparks and starting a fire. Almost certainly due to the ongoing war.

Separately, another data center is offline due to a "localized power issue."

15

cross-posted from: https://lemmy.ca/post/61101616

Absolutely brilliant campaign (in English) by the Norwegian Consumer Council.

17

Defense Secretary Pete Hegseth deemed artificial intelligence firm Anthropic a supply chain risk on Friday, following days of increasingly heated public conflict with the AI company.

18

cross-posted from: https://piefed.social/c/technology/p/1826628/openai-strikes-a-deal-with-the-defense-department-to-deploy-its-ai-models

Sam Altman says "the DoW displayed a deep respect for safety."

Not 24 hours ago, he seemed to back Anthropic "supporting our warfighters" as long as two "red lines" weren't crossed, though his tepid support was laden with five instances of "I think" and one "mostly."

The two "red lines" in question:

  • Domestic mass surveillance
    (presumably, foreign mass surveillance is ok)
  • Autonomous weapons
    (likely because they would be held legally liable for misfires)
19

Twitter co-founder Jack Dorsey’s financial services company Block has announced it will fire 40 percent of staff – around 4,000 people – because new "intelligence tools" the company is implementing “can do more and do it better.”

The company announced the sackings in the shareholder letter [PDF] accompanying its Q4 earnings announcement on Thursday. The payments and crypto company reported quarterly revenue of about $6.25 billion – up 3.6 percent year-over-year – and gross profit of around $2.9 billion. The company made $1 billion of gross profit in December 2025 alone. Full-year revenue came in at about $24.2 billion, and gross profit was around $10.36 billion.

“2025 was a strong year for us,” Dorsey wrote in the shareholder letter, before posing the question, “Why are we changing how we operate going forward?”

His answer, spread across the letter and a Xeet, is that AI has already changed the way Block works, so it needs to change its structure.

“We're already seeing that the intelligence tools we’re creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company. and that's accelerating rapidly,” he wrote on X.

21

We’ve been saying this for years now, and we’re going to keep saying it until the message finally sinks in: mandatory age verification creates massive, centralized honeypots of sensitive biometric data that will inevitably be breached. Every single time. And every single time it happens, the politicians who mandated these systems and the companies that built them act shocked—shocked!—that collecting enormous databases of government IDs, facial scans, and biometric data from millions of people turns out to be a security nightmare.

Well, here we go again.

A couple weeks ago, Discord announced it would launch “teen-by-default” settings for its global audience, meaning all users would be shunted into a restricted experience unless they verified their age through biometric scanning. The internet, predictably, was not thrilled. But while many users were busy venting their frustration, a group of security researchers decided to do something more useful: they took a look under the hood at Persona, one of the companies Discord was using for verification (specifically for users in the UK).

What they found, according to The Rage, was exactly what we would predict:

Together with two other researchers, they set out to look into Persona, the San Francisco-based startup that’s used by Discord for biometric identity verification – and found a Persona frontend exposed to the open internet on a US government authorized server.

In 2,456 publicly accessible files, the code revealed the extensive surveillance Persona software performs on its users, bundled in an interface that pairs facial recognition with financial reporting – and a parallel implementation that appears designed to serve federal agencies.

Let me say that again: 2,456 publicly accessible files sitting on a government-authorized server, exposed to the open internet.

22

Taalas HC1: 17,000 tokens/sec on Llama 3.1 8B vs Nvidia H200's 233 tokens/sec. 73x faster at one-tenth the power. Each chip runs ONE model, hardwired into the transistors.
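A quick back-of-the-envelope check of those figures, using only the numbers from the post and assuming "one-tenth the power" refers to chip power draw:

```python
hc1_tps = 17_000   # Taalas HC1 throughput on Llama 3.1 8B, tokens/sec (from the post)
h200_tps = 233     # Nvidia H200 baseline, tokens/sec (from the post)
power_ratio = 0.1  # HC1 claimed to draw ~one-tenth the power

speedup = hc1_tps / h200_tps               # ~73x faster, matching the headline claim
energy_per_token = power_ratio / speedup   # relative energy cost per token
# speedup is about 72.96; each token costs roughly 1/730 of the H200's energy
```

In other words, the headline "73x at one-tenth the power" compounds to an energy-per-token advantage of roughly 730x, which is the real payoff of hardwiring a single model into the transistors.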

23

Ballooning memory prices are forecast to kill off entry-level PCs, leading to a decline in global shipments this year - and a similar effect is going to hit smartphones.

Analyst biz Gartner is projecting a drop in PC shipments of more than 10 percent during 2026, and a decline of around 8 percent for smartphones, all due to the AI-driven memory shortage.

Some types of memory have doubled or quadrupled in price since last year, and Gartner believes the DRAM and NAND flash used in PCs and phones are set for a further 130 percent rise by the end of 2026.

The upshot of this is that the budget PC will disappear, simply because vendors won't be able to build them at a price that will satisfy cost-conscious buyers, according to Gartner research director Ranjit Atwal.

"Because the price of memory is increasing so much, vendors lose the ability to provide entry-level PCs – those below about $500," he told The Register.

25

Imagine this: You're on Reddit, Hacker News, or some forum, posting with a silly username like GamerCat2025 or SecretCoderX. You think you're anonymous, that no one knows who you are, and that you can freely express your thoughts.

Well, a brand-new research paper just blew that idea apart. It's called "Large-scale online deanonymization with LLMs" which is a fancy way of saying "figuring out the real person behind a secret online name".

The researchers include people from ETH Zurich, Anthropic (the company behind Claude), and a research group called MATS, and they showed that today's super-powerful AI chatbots can play detective and unmask people far better than ever before.
