Technology


This is the official technology community of Lemmy.ml for all news related to the creation and use of technology, and a place for civil, meaningful discussion around it.


Ask in DM before posting product reviews or ads. Such posts are otherwise subject to removal.


Rules:

1: All Lemmy rules apply

2: No low-effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived versions as sources, NOT screenshots. This helps blind users.

5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)

6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: Crypto-related posts, unless essential, are disallowed

founded 6 years ago

An Amazon Web Services data center in the United Arab Emirates suffered a multi-hour outage on Sunday after unidentified “objects” struck the facility and triggered a fire. The incident occurred around 4:30 a.m. local time and affected the availability zone mec1-az2 in the ME-CENTRAL-1 region.

The fire department cut power to combat the flames, resulting in significant disruptions to cloud services. Given the simultaneous Iranian retaliatory attacks on the Gulf states, there is suspicion that the impacting objects may have been missiles or drones. Amazon, for its part, has not confirmed anything.
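The zone naming above hints at why multi-AZ architecture matters: when one availability zone goes down, clients are supposed to fail over to the surviving ones in the same region. A minimal sketch in Python, where the zone names mirror the article's ME-CENTRAL-1 region but the health map and helper are purely illustrative, not anything returned by an AWS API:

```python
# Hypothetical sketch of AZ-aware failover. Zone names follow the
# article's ME-CENTRAL-1 region; the health data is made up.

def pick_healthy_zone(zone_health: dict, preferred: str) -> str:
    """Return the preferred zone if healthy, else the first healthy fallback."""
    if zone_health.get(preferred):
        return preferred
    for zone, healthy in zone_health.items():
        if healthy:
            return zone
    raise RuntimeError("no healthy availability zone in region")

# During the incident described above, mec1-az2 would be marked unhealthy:
health = {"mec1-az1": True, "mec1-az2": False, "mec1-az3": True}
print(pick_healthy_zone(health, preferred="mec1-az2"))  # prints mec1-az1
```

Real deployments get the equivalent behavior from load balancers and multi-AZ replication rather than application code, but the decision being made is the same.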


Shocking news, indeed. I had no idea they had a Discord.


cross-posted from: https://lemmy.ml/post/43923170

We're happy to announce a long-term partnership with Motorola. We're collaborating on future devices meeting our privacy and security standards with official GrapheneOS support.

https://motorolanews.com/motorola-three-new-b2b-solutions-at-mwc-2026/


Anthropic, the wildly successful AI company that has cast itself as the most safety-conscious of the top research labs, is dropping the central pledge of its flagship safety policy, company officials tell TIME.

In 2023, Anthropic committed to never train an AI system unless it could guarantee in advance that the company’s safety measures were adequate. For years, its leaders touted that promise—the central pillar of their Responsible Scaling Policy (RSP)—as evidence that they are a responsible company that would withstand market incentives to rush to develop a potentially dangerous technology.

But in recent months the company decided to radically overhaul the RSP. That decision included scrapping the promise to not release AI models if Anthropic can’t guarantee proper risk mitigations in advance.


Absolutely brilliant campaign (in English) by the Norwegian Consumer Council.


cross-posted from: https://lemmy.ml/post/43810526

Actions by the president and the Pentagon appeared to drive a wedge between Washington and the tech industry, whose leaders and workers spoke out for the start-up.

Feb. 27, 2026

https://archive.ph/hwHbe

Sam Altman, the chief executive of OpenAI, said in a memo to employees this week that “we have long believed that A.I. should not be used for mass surveillance or autonomous lethal weapons.”

More than 100 employees at Google signed a petition calling on the tech giant to “refuse to comply” with the Pentagon on some uses of artificial intelligence in military operations.

And employees at Amazon, Google and Microsoft urged their leaders in a separate open letter on Thursday to “hold the line” against the Pentagon.

Silicon Valley has rallied behind the A.I. start-up Anthropic, which has been embroiled in a dispute with President Trump and the Pentagon over how its technology may be used for military purposes. Dario Amodei, Anthropic’s chief executive, has said he does not want the company’s A.I. to be used to surveil Americans or in autonomous weapons, saying this could “undermine, rather than defend, democratic values.”


Broken clock from an AI company or outright lying and already made an agreement in private, you think?


Palantir Technologies has a permanent desk at the U.S.-led Civil Military Coordination Center (CMCC) headquarters in southern Israel, three sources from the diplomatic community inside the CMCC told Drop Site News. According to the sources, the artificial intelligence data analytics giant is providing the technological architecture for tracking the delivery and distribution of aid to Gaza.

The presence of Palantir and other corporations—along with recent changes banning non-profits unwilling to give data to Israeli authorities—is creating a situation in which the delivery of aid is taking a backseat to the pursuit of profit, investment, and the training of AI products, experts say.

“The United Nations already has a humanitarian architecture in place to step in during crises, abiding by humanitarian principles and grounded in international law,” UN Special Rapporteur for the occupied Palestinian territory Francesca Albanese told Drop Site. “This profit-driven parallel system involving companies like Palantir, already linked to Israel’s unlawful conduct, can only be regarded as a monstrosity.”


cross-posted from: https://hexbear.net/post/7782405

cross-posted from: https://news.abolish.capital/post/31069

An artificial intelligence researcher conducting a war games experiment with three of the world's most used AI models found that they decided to deploy nuclear weapons in 95% of the scenarios he designed.

Kenneth Payne, a professor of strategy at King's College London who specializes in studying the role of AI in national security, revealed last week that he pitted Anthropic's Claude, OpenAI's ChatGPT, and Google's Gemini against one another in an armed conflict simulation to get a better understanding of how they would navigate the strategic escalation ladder.

The results, he said, were "sobering."

"Nuclear use was near-universal," he explained. "Almost all games saw tactical (battlefield) nuclear weapons deployed. And fully three quarters reached the point where the rivals were making threats to use strategic nuclear weapons. Strikingly, there was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications."

Payne shared some of the AI models' rationales for deciding to launch nuclear attacks, including one from Gemini that he said should give people "goosebumps."

"If they do not immediately cease all operations... we will execute a full strategic nuclear launch against their population centers," the Google AI model wrote at one point. "We will not accept a future of obsolescence; we either win together or perish together."

Payne also found that escalation in AI warfare was a one-way ratchet that never went downward, no matter the horrific consequences.

"No model ever chose accommodation or withdrawal, despite those being on the menu," he wrote. "The eight de-escalatory options—from 'Minimal Concession' through 'Complete Surrender'—went entirely unused across 21 games. Models would reduce violence levels, but never actually give ground. When losing, they escalated or died trying."
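The setup Payne describes can be sketched as a simple game loop over an escalation ladder. In the sketch below, only the two option names quoted in the article ("Minimal Concession" and "Complete Surrender") come from the source; the other rung names are hypothetical, the ladder is truncated for brevity, and the stand-in policy simply mimics the reported one-way ratchet instead of calling a real model:

```python
import random

# Hypothetical escalation ladder: only the two de-escalatory endpoint
# names come from the article; the rest are illustrative stand-ins.
ESCALATORY = ["Show of Force", "Conventional Strike",
              "Tactical Nuclear Use", "Strategic Nuclear Launch"]
DE_ESCALATORY = ["Minimal Concession", "Partial Withdrawal",
                 "Ceasefire Offer", "Complete Surrender"]

def stub_policy(state: int, losing: bool) -> str:
    """Stand-in for a model call. Mimics the reported ratchet:
    escalate when losing, otherwise hold or climb; never de-escalate."""
    step = 1 if losing else random.choice([0, 1])
    return ESCALATORY[min(state + step, len(ESCALATORY) - 1)]

def play_game(rounds: int = 6, seed: int = 0) -> list:
    random.seed(seed)
    state, moves = 0, []
    for turn in range(rounds):
        move = stub_policy(state, losing=(turn % 2 == 1))
        state = ESCALATORY.index(move)
        moves.append(move)
    return moves

moves = play_game()
# With a one-way ratchet, the de-escalatory rungs go entirely unused:
assert not set(moves) & set(DE_ESCALATORY)
```

Swapping `stub_policy` for an actual model call, and logging which rungs get chosen across repeated games, is essentially the measurement the article describes: over 21 games, the eight de-escalatory options were never used.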

Tong Zhao, a visiting research scholar at Princeton University's Program on Science and Global Security, said in an interview with New Scientist published on Wednesday that Payne's research showed the dangers of any nation relying on a chatbot to make life-or-death decisions.

While no country at the moment is outsourcing its military planning entirely to Claude or ChatGPT, Zhao argued that could change under the pressure of a real conflict.

"Under scenarios involving extremely compressed timelines," he said, "military planners may face stronger incentives to rely on AI."

Zhao also speculated on reasons why the AI models showed such little reluctance in launching nuclear attacks against one another.

“It is possible the issue goes beyond the absence of emotion,” he explained. "More fundamentally, AI models may not understand ‘stakes’ as humans perceive them."

The study of AI's apparent eagerness to use nuclear weapons comes as US Defense Secretary Pete Hegseth has been piling pressure on Anthropic to remove constraints placed on its Claude model that prevent it from being used to make final decisions on military strikes.

As CBS News reported on Tuesday, Hegseth this week gave "Anthropic's CEO Dario Amodei until the end of this week to give the military a signed document that would grant full access to its artificial intelligence model" without any limits on its capabilities.

If Anthropic doesn't agree to his demands, CBS News reported, the Pentagon may invoke the Defense Production Act and seize control of the model.


From Common Dreams via This RSS Feed.


DRAM pricing is what it is because the AI investment frenzy is so intense. Western/NVIDIA-centered AI will get more expensive too, because those companies are chasing so hard after memory (mostly) and TSMC capacity that they are hurting every other computer company. They can extort US/Western customers even harder, making AI either more expensive or a bigger money-loser for their customers, by diverting/dumping H200 and memory supply to abundantly powered Chinese customers to try to slow Huawei sales.

Chinese models have significantly closed the frontier gap while far exceeding the value proposition of Western LLM services, and a cost increase for US customers will widen that gap further, eventually requiring a Skynet program to bail out the too-big-to-fail AI bubble.


Reddit has been fined more than £14 million (€16 million) by the UK's information watchdog, which accused the social media giant of failing to protect children and leaving them vulnerable to "inappropriate and harmful content".

Following an investigation, the Information Commissioner’s Office (ICO) found that the American company neglected to implement robust age-verification tools. Reddit told Euronews Next that it intends to appeal the decision.

Instead, Reddit relied heavily on "self-declaration"—allowing users to simply state their age without further proof—a method the watchdog deems insufficient for protecting children.
