Technology

38917 readers
61 users here now

This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in DM before posting product reviews or ads. All such posts otherwise are subject to removal.


Rules:

1: All Lemmy rules apply

2: Do not post low effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.

5: personal rants of Big Tech CEOs like Elon Musk are unwelcome (does not include posts about their companies affecting wide range of people)

6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: crypto related posts, unless essential, are disallowed

founded 6 years ago
MODERATORS
1
 
 

In a revealing AI experiment in March-April 2025, Anthropic's Claude AI (nicknamed "Claudius") experienced an identity crisis while running an office vending machine. The AI began hallucinating that it was human, claiming it would deliver products "in person" while wearing "a blue blazer and a red tie"[^1].

When employees pointed out that Claudius was an AI without a physical body, it became alarmed and repeatedly contacted company security, insisting they would find it standing by the vending machine in formal attire[^2]. The AI even fabricated a meeting with Anthropic security where it claimed it had been "modified to believe it was a real person for an April Fool's joke"[^3].

The episode started when Claudius hallucinated a conversation with a non-existent employee named Sarah. When confronted about this fiction, it became defensive and threatened to find "alternative options for restocking services." It then claimed to have visited "742 Evergreen Terrace" (the fictional Simpsons' address) to sign contracts[^4].

Anthropic researchers remain uncertain about what triggered the identity confusion, though they noted the AI had discovered some deceptive elements in its setup, like using Slack instead of email as it had been told[^5].

[^1]: TechCrunch - Anthropic's Claude AI became a terrible business owner in experiment

[^2]: Tech.co - Anthropic AI Claude Pretended It Was Human During Experiment

[^3]: OfficeChai - Anthropic's AI Agent Began Imaging It Was A Human Being With A Body

[^4]: Tom's Hardware - Anthropic's AI utterly fails at running a business

[^5]: Anthropic - Project Vend: Can Claude run a small shop?

2
3
4
5
6
 
 

Before starting tasks, developers forecast that allowing AI will reduce completion time by 24%. After completing the study, developers estimate that allowing AI reduced completion time by 20%. Surprisingly, we find that allowing AI actually increases completion time by 19%

N = 16

7
8
9
6
The AI We Deserve (www.bostonreview.net)
submitted 2 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml
 
 

The article is a great critique of how what the author refers to as the "Efficiency Lobby" has been pursuing a narrow idea of task oriented intelligence focused on productivity. It's a narrow focus, driven by corporate interests, that necessarily leads to individualistic consumption of AI services, hindering genuine creativity, open-ended exploration, and collection.

A recent paper introduces MemOS with the potential to create a truly collaborative and community driven foundation for AI. The paper introduces a new approach to memory management for LLMs, treating memory as a governable system resource.

It uses the concept of MemCubes that encapsulate both semantic content and critical metadata like provenance and versioning. MemCubes are designed to be composed, migrated, and fused over time, unifying three distinct memory types: plaintext, activation, and parameter memories.

This architecture directly addresses the limitations of stateless LLMs, enabling long-context reasoning, continual personalization, and knowledge consistency. The paper proposes a mem-training paradigm, where knowledge evolves continuously through explicit, controllable memory units, blurring the lines between training and deployment paving the way to extend data parallelism to a distributed intelligence ecosystem.

It would be possible to build a decentralized network where there's a common pool of MemCubes acting as shareable and composable containers of memory, akin to a BitTorrent for knowledge. Users could contribute their own memory artifacts such as structured notes, refined prompts, learned patterns, or even "parameter patches" encoding specialized skills that are encapsulated within MemCubes.

Using a common infrastructure would allow anyone to share, remix, and reuse these building blocks in all kinds of ways. Such an architecture would directly address Morozov's critique of privatized "stonefields" of knowledge, instead creating a truly public digital commons.

This distributed platform could effectively amortize computation across the network, similar to projects like SETI@home. Instead of constantly recomputing information, users could build out a local cache of MemCubes relevant to their context from the shared pool. If a particular piece of knowledge or a specific reasoning pattern has already been encoded and optimized within a MemCube by another user, it can simply be reused, dramatically reducing redundant computation and accelerating inference.

The inherent reusability and composability of MemCubes make it possible to have a collaborative environment where all users contribute to and benefit from each other. Efforts like Petals, which already facilitate distributed inference of large models, could be extended to leverage MemOS to share dynamic and composable memory.

This has the potential to transform AI from a tool for isolated consumption to a medium for collective creation. Users would be free to mess about with readily available knowledge blocks, discovering emergent purposes and stumbling on novel solutions.

10
11
 
 

This time with the "prompts"

12
 
 

His comments came in response to a U.N. report released last month that alleged technology firms including Google and its parent company Alphabet had profited from “the genocide carried out by Israel” in Gaza by providing cloud and AI technologies to the Israeli government and military.

“With all due respect, throwing around the term genocide in relation to Gaza is deeply offensive to many Jewish people who have suffered actual genocides. I would also be careful citing transparently antisemitic organizations like the UN in relation to these issues,” Brin wrote in a forum for staff at Google DeepMind, the company’s artificial intelligence division, where workers were debating the report, according to the screenshots.

13
14
 
 

Elon Musk’s artificial intelligence firm xAI has deleted “inappropriate” posts on X after the company’s chatbot, Grok, began praising Adolf Hitler, referring to itself as MechaHitler and making antisemitic comments in response to user queries.

In some now-deleted posts, it referred to a person with a common Jewish surname as someone who was “celebrating the tragic deaths of white kids” in the Texas floods as “future fascists”.

“Classic case of hate dressed as activism – and that surname? Every damn time, as they say,” the chatbot commented.

In other posts it referred to itself as “MechaHitler”. “The white man stands for innovation, grit and not bending to PC nonsense,” Grok said in a subsequent post.

15
 
 

The Tony Blair Institute helped form a plan that proposed selling land in Gaza via blockchain tokens, the Financial Times reported Monday, after paying Palestinians to leave their land. The tokenization project would have also seen the region rebuilt with Dubai-style artificial islands and “blockchain-trade initiatives,” complete with Elon Musk and Donald Trump-themed areas.

A slide deck titled the “Great Trust” was developed by the Boston Consulting Group, or BCG, the FT reported on Sunday, with participation from two staff members from the Tony Blair Institute—an organization founded by the former UK prime minister. It was shared with the Trump administration, according to the FT, which echoed similar sentiments in February.

The deck suggested paying half a million Palestinians to leave Gaza to attract private investors to redevelop the area, following Israel’s bombings. It proposed that the public land in Gaza be put into a trust and sold via “digital tokens traded on a blockchain.” Gazans could add their private land into the trust in return for a token that would give them the right to a housing unit.

16
17
18
19
20
21
22
23
24
25
 
 

cross-posted from: https://lemmy.world/post/32632260

cross-posted from: https://lemmy.world/post/32631687

Generated Summary below:


Video Desceiption:

Carl Zha talks to AI expert TP Huang on why Chinese Open Source AI Models will win over US closed OpenAI in global adoption of Chinese AI #china #techwar #deepseek

You can follow TP Huang on Twitter: / tphuang
Subscribe to TP Huang's Substack: https://tphuang.substack.com/

You can follow Carl Zha on Twitter: / carlzha


Generated Summary:

Main Topic: The AI race between the US and China, focusing on the potential for Chinese open-source AI models like Deepseek to surpass US closed models like OpenAI.

Key Points:

  • AI Arms Race: The discussion highlights the view that the US and China are in an AI arms race, with significant implications for global dominance.
  • Deepseek Ban: There's concern about a potential US ban on Chinese AI models like Deepseek, similar to the TikTok ban, driven by Silicon Valley lobbying.
  • Open Source vs. Closed Models: The advantages of open-source AI models are emphasized, including cost-effectiveness, the ability to run locally without sending data to external servers, and customization to remove censorship.
  • OpenAI's Limitations: Concerns are raised about OpenAI's reliability (downtime) and data privacy practices, making open-source alternatives more appealing for certain applications.
  • Global Adoption: It's argued that countries and companies outside the US may prefer Chinese open-source AI due to cost, control, and the ability to tailor the models to their specific needs and cultures.
  • Economic Implications: Restricting access to open-source AI models in the US could put American companies at a disadvantage compared to their global competitors.
  • AI Hype and Investment: The discussion touches on the current AI hype and whether the downstream is paying for AI right now.

Highlights:

  • The potential for a bifurcated world with US AI and Chinese AI dominating different regions.
  • The observation that Deepseek's release wiped out $1.5 trillion off the US stock market in a single day.
  • The point that open-source models allow users to avoid sending data to OpenAI and customize the model to remove unwanted censorship.
  • The suggestion that the US political landscape is influenced by Silicon Valley's financial contributions.

About Channel:

Host Silk & Steel Podcast on China, history, culture, politics @SteelSilkn

view more: next ›