Technology


This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod-approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts; OK to post as comments.
  8. Only approved bots from the list below; this includes using AI responses and summaries. To ask if your bot can be added, please contact a mod.
  9. Check for duplicates before posting; duplicates may be removed.
  10. Accounts 7 days and younger will have their posts automatically removed.

Elon Musk’s xAI has lost its bid for a preliminary injunction that would have temporarily blocked California from enforcing a law that requires AI firms to publicly share information about their training data.

xAI had tried to argue that California’s Assembly Bill 2013 (AB 2013) forced AI firms to disclose carefully guarded trade secrets.

The law requires AI developers whose models are accessible in the state to clearly explain which dataset sources were used to train their models, when the data was collected, whether collection is ongoing, and whether the datasets include any data protected by copyrights, trademarks, or patents. Disclosures would also clarify whether companies licensed or purchased training data and whether the training data included any personal information. The disclosures would also help consumers assess how much synthetic data was used to train the model, which could serve as a measure of quality.
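
For illustration only, one way to picture the disclosures the law asks for is as a structured record per model. The Python sketch below is an assumption made for readability; the field names, types, and example values are not language from the statute.

```python
# Illustrative sketch: a structured record covering the AB 2013 disclosure items
# summarized above. Field names, types, and example values are assumptions,
# not the statute's wording.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrainingDataDisclosure:
    dataset_sources: list             # which dataset sources were used to train the model
    collection_start: str             # when the data was collected
    collection_end: Optional[str]     # None if collection is ongoing
    collection_ongoing: bool
    includes_ip_protected_data: bool  # data protected by copyrights, trademarks, or patents
    data_licensed_or_purchased: bool
    includes_personal_info: bool
    synthetic_data_share: float       # rough fraction of synthetic training data

# Hypothetical example entry for a single model.
example = TrainingDataDisclosure(
    dataset_sources=["web crawl", "licensed news archive"],
    collection_start="2020-01",
    collection_end=None,
    collection_ongoing=True,
    includes_ip_protected_data=True,
    data_licensed_or_purchased=True,
    includes_personal_info=False,
    synthetic_data_share=0.15,
)
```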

  • We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily (see the sketch after this list)
  • AI is far from reaching its theoretical capability: actual coverage remains a fraction of what's feasible
  • Occupations with higher observed exposure are projected by the BLS to grow less through 2034
  • Workers in the most exposed professions are more likely to be older, female, more educated, and higher-paid
  • We find no systematic increase in unemployment for highly exposed workers since late 2022, though we find suggestive evidence that hiring of younger workers has slowed in exposed occupations
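
For intuition only, here is a minimal Python sketch of how a score along these lines could be composed from theoretical capability and observed usage, with automated and work-related uses weighted more heavily. The weights, field names, and normalization are assumptions for this sketch, not the authors' actual definition of observed exposure.

```python
# Minimal sketch (not the paper's definition): combine an occupation's theoretical
# LLM capability with observed AI uses, weighting automated and work-related uses
# more heavily than augmentative or personal ones.
from dataclasses import dataclass

@dataclass
class Use:
    automated: bool     # the model completed the task rather than assisting a human
    work_related: bool  # the use occurred in a work context

def observed_exposure(theoretical_capability: float, uses: list,
                      w_automated: float = 2.0, w_work: float = 1.5) -> float:
    """Weight each observed use, normalize so the maximum is 1, and scale by capability."""
    if not uses:
        return 0.0
    weights = [
        (w_automated if u.automated else 1.0) * (w_work if u.work_related else 1.0)
        for u in uses
    ]
    max_weight = w_automated * w_work
    usage_intensity = sum(weights) / (max_weight * len(uses))
    return theoretical_capability * usage_intensity

# Example: tasks that are largely feasible for LLMs, but where most observed use is
# augmentative and off the job, score well below the theoretical ceiling.
print(observed_exposure(0.8, [Use(False, True), Use(True, False), Use(False, False)]))
```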
 
  • Stanford researchers launched an open-source platform for collecting screenome data to study health behaviors.
  • The system simplifies data collection for researchers without technical expertise, removing previous barriers to screenome research.
  • By addressing privacy concerns with strict protocols, the initiative seeks to make meaningful interventions possible, paving the way for personalized health insights.

Earlier this week, PCWorld published a roundup of Windows 12 rumors translated from PCWelt that does not meet our editorial standards. We’re deeply embarrassed by it, and I personally apologize that the article was published. It should not have been, but we’re keeping the article live (with an editor’s note at the top) so it remains in the public record.

Windows Central published a response detailing its errors. Thanks for keeping us accountable, guys — genuinely. In the same spirit of accountability, I want to explain how this happened, and what we’re doing to ensure a mistake like this never occurs again.

Let’s start by discussing how PCWorld handles translated articles, and then I’ll dive into the issues with the article itself.


Liberty has costs, but it's worth it.


A new organization launched to fight public corruption is suing President Trump and his attorney general, accusing them of flouting the law when they blessed the sale of TikTok's U.S. assets to White House allies.

The case, filed in a federal court in Washington, D.C., accuses the Trump administration of ignoring legislation designed to stop the spread of Chinese propaganda — and instead helping to broker a partial sale to businessmen close to Trump.


AI-translated articles swapped sources or added unsourced sentences with no explanation, while others added paragraphs sourced from completely unrelated material.

The issue in this case starts with the Open Knowledge Association (OKA), a non-profit organization dedicated to improving Wikipedia and other open platforms.

Wikipedia editors investigated how OKA was operating and found that it was mostly relying on cheap labor from contractors in the Global South, and that these contractors were instructed to copy and paste articles into popular LLMs to produce translations.

For example, a public spreadsheet used by OKA translators to keep track of what articles they’re translating instructs them to “pick an article, copy the lead section into Gemini or chatGPT, then review if some of the suggestions are an improvement to readability. Make edits to the Wiki articles only if the suggestions are an improvement and don't change the meaning of the lead. Do not change the content unless you have checked that what Gemini says is correct!”

Lebleu told me, and other editors have noted in their public on-site discussion of the issue, that these same instructions previously told OKA translators to use Grok, Elon Musk’s LLM, for the same purpose. Grok, which also produces an entirely automated alternative to Wikipedia called Grokipedia, is prone to errors precisely because it does not use humans to vet its output.

“Following the recent discussion, we have strengthened our safeguards,” [OKA's] Zimmerman told me. “We are now rolling out a second, independent LLM review step. Translators must run the completed draft through a separate model using a dedicated comparison prompt designed to identify potential discrepancies, omissions, or inaccuracies relative to the source text. Initial findings suggest this is highly effective at detecting potential issues.”
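
For illustration, here is a minimal sketch of what a comparison step like that could look like, assuming a generic chat-model call; the prompt wording and the call_llm() placeholder are assumptions for this sketch, not OKA's actual prompt or pipeline.

```python
# Sketch only: a second, independent model reviews a translated draft against the
# source text using a dedicated comparison prompt. call_llm() is a placeholder for
# whatever chat API the reviewer model is behind.
COMPARISON_PROMPT = """You are reviewing a translation of a Wikipedia lead section.
Compare the SOURCE and the DRAFT below. List every discrepancy, omission, or added
claim in the DRAFT that is not supported by the SOURCE. If the DRAFT is faithful,
reply with "OK".

SOURCE:
{source}

DRAFT:
{draft}
"""

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to a separate model and return its reply."""
    raise NotImplementedError

def review_translation(source_text: str, draft_text: str) -> str:
    """Run the comparison prompt on a second model and return its findings."""
    return call_llm(COMPARISON_PROMPT.format(source=source_text, draft=draft_text))
```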

Zimmerman added that if this method proves insufficient, OKA is considering introducing formal peer review mechanisms.

Using AI to check the output of other AI is itself a method with a poor track record. For example, we recently reported on an AI-powered private school that used AI to check AI-generated questions for students; internal testing found it had at least a 10 percent failure rate.
