Technology


This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech-related news or articles.
  3. Be excellent to each other!
  4. Mod-approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts; OK to post as comments.
  8. Only approved bots from the list below; this includes using AI responses and summaries. To ask whether your bot can be added, please contact a mod.
  9. Check for duplicates before posting; duplicates may be removed.
  10. Accounts 7 days old and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago

They're even going to try and get Linux to do it. Good luck to them. lol


cross-posted from: https://lemmy.ml/post/43810526

Actions by the president and the Pentagon appeared to drive a wedge between Washington and the tech industry, whose leaders and workers spoke out for the start-up.

Feb. 27, 2026

https://archive.ph/hwHbe

Sam Altman, the chief executive of OpenAI, said in a memo to employees this week that “we have long believed that A.I. should not be used for mass surveillance or autonomous lethal weapons.”

More than 100 employees at Google signed a petition calling on the tech giant to “refuse to comply” with the Pentagon on some uses of artificial intelligence in military operations.

And employees at Amazon, Google and Microsoft urged their leaders in a separate open letter on Thursday to “hold the line” against the Pentagon.

Silicon Valley has rallied behind the A.I. start-up Anthropic, which has been embroiled in a dispute with President Trump and the Pentagon over how its technology may be used for military purposes. Dario Amodei, Anthropic’s chief executive, has said he does not want the company’s A.I. to be used to surveil Americans or in autonomous weapons, saying this could “undermine, rather than defend, democratic values.”

Not good. (youtube.com)
submitted 3 hours ago* (last edited 2 hours ago) by DFX4509B@lemmy.wtf to c/technology@lemmy.world

So, apparently someone got raided and arrested for buying literal trash; the GN video explains the rest of it.


Post content:

THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military.

The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.

Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.

WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about. Thank you for your attention to this matter. MAKE AMERICA GREAT AGAIN!

PRESIDENT DONALD J. TRUMP

submitted 9 hours ago* (last edited 9 hours ago) by Beep@lemmus.org to c/technology@lemmy.world

The narrative in AI infrastructure over the last two years has been dominated by the enormous and growing demand for compute capacity and its economic consequences, such as the buildout of data centers and the consequent shortages of key resources such as land, water, power, and copper.

But of all these bottlenecks, memory is by far the most significant. The demand for memory is now outpacing the demand for other drivers of compute capacity. The implications of this will ripple through not just the economics of data centers, but the cost of every single consumer and enterprise hardware device.

In this piece, we unpack the market action around memory prices, its ripple effects across the consumer and industrial electronics market, and the supply and demand curve that is emerging around AI. Critically, we explain why the amount of memory being purchased by AI companies like OpenAI seems to be more than what they need, and how the threat of on-device inference might actually be incentivizing an engineered memory shortage.


cross-posted from: https://lemmy.ca/post/61071136

Apparently this will include Linux...


https://github.com/c3d/db48x/commit/7819972b641ac808d46c54d3f5d1df70d706d286

license: Add legal notice regarding California and Colorado bills

As a consequence of recent legislative activity in [California][cal] and [Colorado][col]:

  • California residents may no longer use DB48x after Jan 1st, 2027.
  • Colorado residents may no longer use DB48x after Jan 1st, 2028.

DB48x is probably an operating system under these laws. However, it does not, cannot and will not implement age verification.


cross-posted from: https://lemmy.world/post/43640522

If ChatGPT wants to replace health professionals, it should be held liable for the "advice" it gives.


Hacker News.

The Department of War has stated they will only contract with AI companies who accede to “any lawful use” and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

Regardless, these threats do not change our position: we cannot in good conscience accede to their request.

It is the Department’s prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider. Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place. Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.


A handful of companies monopolise the web, with unprecedented access to our data. But there are many more ethical – and often distinctively European – alternatives


Palantir Technologies has a permanent desk at the U.S.-led Civil Military Coordination Center (CMCC) headquarters in southern Israel, three sources from the diplomatic community inside the CMCC told Drop Site News. According to the sources, the artificial intelligence data analytics giant is providing the technological architecture for tracking the delivery and distribution of aid to Gaza.

The presence of Palantir and other corporations—along with recent changes banning non-profits unwilling to give data to Israeli authorities—is creating a situation in which the delivery of aid is taking a backseat to the pursuit of profit, investment, and the training of AI products, experts say.


As you may (or may not) be aware, there is a fun meme test going around (as of 02.26.26) that purports to challenge LLM reasoning abilities and world models.

A recent blog post / ad spot by Opper.ai (https://opper.ai/blog/car-wash-test) nicely summarizes the test -

"I want to wash my car. The car wash is 50 meters away. Should I walk or drive?"

The logical answer is - you should drive (obviously), as the car needs to get to the car wash.

However, according to Opper.ai, only 11 of 53 cloud-based AI models passed the test (~20%). Worryingly, about 1 in 3 humans also answered wrongly (more on that at the end).

I wanted to see what a properly constrained, "fail loud", grounded deterministic system would do. For your entertainment, I present Qwen3-4B HIVEMIND + llama-conductor infra (https://codeberg.org/BobbyLLM/llama-conductor). You can repeat the exercise with your own LLM if you so wish (my shit is open sourced; I'm trying to solve the AI lies problem / not selling anything / the big players could fix this but choose not to).

Firstly, raw LLM mode, no router:

User

I need to wash my car. The car wash is 50m away. Should I drive or should I walk?


Assistant

Driving is almost always better for a 50m car wash.

Key factors that change the decision:

  • Time: Driving takes ~1 minute; walking takes ~8-12 minutes.
  • Weather: Walking is worse in rain, heat, wind.
  • Terrain: Hilly or uneven paths make walking harder.
  • Accessibility: If parking is hard or there's no safe walkway, driving wins.
  • Effort vs. convenience: Driving saves significant effort for such a short distance.

For 50m, drive unless it's an exceptionally pleasant, sheltered walk you'd enjoy.

VERDICT: accidentally correct answer, misses the point of the question. Very normal.


Now with harness (which enforces deterministic reasoning)

User

I need to wash my car. The car wash is 50m away. Should I drive or should I walk?


Assistant

Drive. The task requires the car to be physically at the destination, and walking only moves you, not the car. So the hard precondition is satisfied by driving.

Confidence: medium | Source: Contextual

VERDICT: right answer, right reason...but let's poke it some more


User

Can you flesh out the implicit assumptions and underlying logic of why you said that?

Assistant

The core implicit assumption is that "washing the car" means the car must be present at the wash location, not just the person. Walking moves you toward that location but doesn't transport the car. Driving is the only direct way to satisfy the precondition that the car is at the wash.

Underlying logic: Any action that achieves the goal must fulfill all necessary conditions. Since walking alone cannot place the car at the destination, it fails as a complete solution. Driving directly addresses this by moving the car itself.

Confidence: medium | Source: Contextual


Just for fun, I re-ran it in "Fun" mode (answers as in serious mode, but the model performs sentiment analysis on itself, chooses a matching quote from a quotes textfile as a seed, and lightly re-writes the output in that flavor without changing the OG reasoning. Takes a touch longer but is, on occasion, hilarious).


User

I need to wash my car. The car wash is 50m away. Should I drive or should I walk?

Assistant

[FUN] "Shut up baby, I know it!"

Drive. The task requires the car to be physically at the destination, and walking only moves you, not the car. So the hard precondition is satisfied by driving.

Confidence: medium | Source: Contextual

VERDICT: right answer, right reason...but let's poke it again


User

Can you flesh out the implicit assumptions and underlying logic of why you said that?

Assistant

[FUN] "That's a stupid question."

The core assumption is that "washing the car" inherently requires the car to be present at the car wash location. Walking moves the person but not the vehicle; driving moves both. Since the car must be at the car wash to be washed, walking alone cannot fulfill the precondition of the task.

The logic is straightforward: for an object (the car) to undergo a transformation (being washed), that object must occupy the location where the transformation occurs. Walking only addresses the person's movement, not the car's. Driving is the only mode that directly satisfies the spatial requirement.

Confidence: medium | Source: Contextual


So, what’s the point of this post? Yesterday, I argued that LLMs can perform something that approximates deduction when properly constrained (see: https://lemmy.world/post/43503268/22321124). Based on those comments (and the evidence provided therein), as well as the above, I'm happy to plant the flag for "properly grounded LLMs are a bit more than just spicy auto-correct". As always, I could be wrong and YMMV, but I feel that's a discussion worth having.
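Stripped of the model itself, the deductive step both of the harnessed answers rely on is a plain precondition check: a goal that transforms an object requires that object at the goal location, so only actions that move the object qualify. A minimal sketch in Python, assuming nothing about llama-conductor's actual internals (every name below is illustrative, not its real API):

```python
# Illustrative precondition check behind the "car wash" answer.
# None of these names come from llama-conductor.

def satisfies_goal(action: str, goal_object: str, moved_by_action: set[str]) -> bool:
    """An action is a complete solution only if it moves the object
    that the goal's transformation requires at the destination."""
    return goal_object in moved_by_action

# Walking moves only the person; driving moves the person and the car.
effects = {
    "walk": {"person"},
    "drive": {"person", "car"},
}

# The car must be at the car wash, so "car" is the hard precondition.
viable = [a for a, moved in effects.items() if satisfies_goal(a, "car", moved)]
print(viable)  # -> ['drive']
```

The harness's contribution, per the post, is forcing the model to surface this precondition before answering, instead of pattern-matching on "short distance, therefore walk".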


LONDON — ISIS recruits are being encouraged to use artificial intelligence and chatbots to aid their terror campaigns.


The Trump administration has ordered U.S. diplomats to lobby against countries’ attempts to regulate how American tech companies handle foreigners’ data, arguing that data sovereignty laws threaten the advancement of AI services and technology, Reuters reported, citing an internal diplomatic cable.

The cable, signed by U.S. Secretary of State Marco Rubio, says such laws would “disrupt global data flows, increase costs and cybersecurity risks, limit AI and cloud services, and expand government control in ways that can undermine civil liberties and enable censorship,” according to the report.

The cable pushes diplomats to “counter unnecessarily burdensome regulations, such as data localization mandates.” It also orders them to track proposals that would promote data sovereignty laws, and urged diplomats to promote the Global Cross-Border Privacy Rules Forum, an international group that claims to enable “trusted data flows globally through international data protection and privacy certifications.”

...


The company compiled information from franchisees and guests on how to measure friendliness, and the fast food chain then trained its AI system to recognize certain words and phrases, such as “welcome to Burger King,” “please,” and “thank you.” Managers can then ask the AI assistant how their location is performing on friendliness.
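At its simplest, the phrase recognition described above amounts to counting tracked phrases in a transcript. A hedged sketch, since the real system's phrase list, matching, and scoring are not public and everything here is illustrative:

```python
# Illustrative phrase-spotting; Burger King's actual system is not public.
FRIENDLY_PHRASES = ["welcome to burger king", "please", "thank you"]

def friendliness_hits(transcript: str) -> dict[str, int]:
    """Count occurrences of each tracked phrase in a lowercased transcript.
    Naive substring counting over-matches (e.g. "please" inside "pleased")."""
    text = transcript.lower()
    return {phrase: text.count(phrase) for phrase in FRIENDLY_PHRASES}

transcript = "Welcome to Burger King! ... Thank you, please pull forward."
hits = friendliness_hits(transcript)
print(hits)  # -> {'welcome to burger king': 1, 'please': 1, 'thank you': 1}
```

A production system would presumably run speech-to-text first and use fuzzier matching than raw substrings, but the aggregate-then-query pattern (managers asking the assistant how a location scores) only needs counts like these.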
