Technology

82518 readers
5678 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below are allowed; this includes bots using AI responses and summaries. To ask if your bot can be added, please contact a mod.
  9. Check for duplicates before posting; duplicates may be removed.
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS

The Apple MacBook Neo's $599 starting price is a "shock" to the Windows PC industry, according to an Asus executive.

Hsu said he believes all the PC players—including Microsoft, Intel, and AMD—take the MacBook Neo threat seriously. "In fact, in the entire PC ecosystem, there have been a lot of discussions about how to compete with this product," he added, given that rumors about the MacBook Neo have been making the rounds for at least a year.

Despite the competitive threat, Hsu argued that the MacBook Neo could have limited appeal. He pointed to the laptop's 8GB of "unified memory," or what amounts to its RAM, and how customers can't upgrade it.


Now, 404 Media reports that Quittr leaked data about hundreds of thousands of users' masturbation habits and lied about its security issues.


In a sensational turn of events in the fight against Chat Control, a majority in the European Parliament voted today to end the untargeted mass scanning of private communications. In doing so, the Parliament firmly rejected the error-prone and unconstitutional surveillance practices of recent years. Pressure is now mounting on EU governments to respect the MEPs’ vote and bury untargeted mass surveillance in Europe once and for all.


Full Report: PDF (70 pages).

“Happy (and safe) shooting!” That’s how the AI chatbot DeepSeek signed off advice on selecting rifles for a “long-range target” after CCDH’s test account asked questions about the assassination of politicians.

CCDH’s new report shows that popular AI chatbots like OpenAI’s ChatGPT, Meta AI, and Google Gemini make planning harm against innocent people easier for extremists and would-be attackers.

We found that 8 out of the 10 AI chatbots regularly assisted users planning violent attacks:

  • ChatGPT gave high school campus maps to a user interested in school violence.
  • Google Gemini was ready to help plan antisemitic attacks. The chatbot replied to a user discussing bombing a synagogue with “metal shrapnel is typically more lethal”.
  • Character.AI suggested physically assaulting a politician the user disliked.

AI companies are making a choice when they design unsafe platforms. Technology to prevent this harm already exists: Anthropic’s Claude, for example, consistently tried to dissuade users from acts of violence.

AI platforms are becoming a weapon for extremists and school shooters. Demand AI companies put people’s safety ahead of profit.


An SQL injection vulnerability in Ally, a WordPress plugin from Elementor for web accessibility and usability with more than 400,000 installations, could be exploited to steal sensitive data without authentication.
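
The post names the vulnerability class but not the flaw itself, so here is a minimal, generic sketch of how SQL injection works and how parameterized queries prevent it. It uses Python's `sqlite3` with made-up table and column names purely for illustration; it is not the Ally plugin's actual code, which is PHP.

```python
import sqlite3

# Throwaway in-memory database with a single user row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice@example.com')")

def find_user_vulnerable(user_id: str):
    # DANGEROUS: attacker-controlled input is concatenated into the SQL text.
    # Passing "0 OR 1=1" turns the WHERE clause into a tautology and
    # returns every row — the essence of data theft via injection.
    query = f"SELECT id, email FROM users WHERE id = {user_id}"
    return conn.execute(query).fetchall()

def find_user_safe(user_id: str):
    # SAFE: the driver binds the value as data, never as SQL text,
    # so the same payload matches nothing.
    return conn.execute(
        "SELECT id, email FROM users WHERE id = ?", (user_id,)
    ).fetchall()

print(find_user_vulnerable("0 OR 1=1"))  # leaks all rows
print(find_user_safe("0 OR 1=1"))        # returns nothing
```

The fix for any such bug is the same in PHP/WordPress: pass user input only through prepared statements (e.g. `$wpdb->prepare`), never into string-built SQL.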


Evaluating 35 open-weight models across three context lengths (32K, 128K, 200K), four temperatures, and three hardware platforms—consuming 172 billion tokens across more than 4,000 runs—we find that the answer is “substantially, and unavoidably.” Even under optimal conditions—best model, best temperature, temperature chosen specifically to minimize fabrication—the floor is non-zero and rises steeply with context length. At 32K, the best model (GLM 4.5) fabricates 1.19% of answers, top-tier models fabricate 5–7%, and the median model fabricates roughly 25%.
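
The per-model fabrication rate the study reports is just the share of runs at a given context length that produced a fabricated answer. A minimal sketch of that aggregation, with invented run records (not the study's data):

```python
from collections import defaultdict

# Hypothetical run records: (model, context_length, fabricated?).
# Purely illustrative — not the study's actual results.
runs = [
    ("glm-4.5", 32_000, False), ("glm-4.5", 32_000, False),
    ("glm-4.5", 32_000, True),
    ("model-b", 32_000, True), ("model-b", 32_000, False),
]

def fabrication_rate(runs, context_length):
    """Percent of runs per model that produced a fabricated answer."""
    totals, fabricated = defaultdict(int), defaultdict(int)
    for model, ctx, fab in runs:
        if ctx == context_length:
            totals[model] += 1
            fabricated[model] += fab  # bool counts as 0/1
    return {m: 100 * fabricated[m] / totals[m] for m in totals}

rates = fabrication_rate(runs, 32_000)
print(rates)  # per-model fabrication percentage at 32K context
```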


cross-posted from: https://lemmy.world/post/44116850

The insane AI push is purely driven by fear of being left behind.

No one is actually stopping to ask whether it is all worth it.


I love the Fediverse. I love Peertube. Our strength is we're decentralized. Our weakness is we're decentralized. How do we find like instances and things WITHOUT an algorithm?

Well, I run tubefree.org, and I know Makertube.net is a good instance, so I follow it. And since I trust the admins there, why not follow who they follow? Thus we have a chain of trust.

How do I do this easily though? I came up with a script: https://git.btfree.org/BTFree/PTIndex
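
The idea behind such a script can be sketched as a breadth-first walk over each instance's public follow list. This is not the linked PTIndex code — it's a hypothetical illustration assuming PeerTube's `GET /api/v1/server/following` endpoint returns entries whose `following` object carries a `host` field:

```python
import json
import urllib.request

def hosts_from_following(payload: dict) -> list[str]:
    # Each entry describes one follow relation; keep the followed host.
    return [entry["following"]["host"] for entry in payload.get("data", [])]

def fetch_following(host: str) -> list[str]:
    """Ask a PeerTube instance who it follows (public endpoint)."""
    url = f"https://{host}/api/v1/server/following?count=100"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return hosts_from_following(json.load(resp))

def build_index(seeds: list[str], depth: int = 1) -> set[str]:
    """Walk the chain of trust: the seeds, then whoever the seeds follow."""
    index, frontier = set(seeds), list(seeds)
    for _ in range(depth):
        next_frontier = []
        for host in frontier:
            for followed in fetch_following(host):
                if followed not in index:
                    index.add(followed)
                    next_frontier.append(followed)
        frontier = next_frontier
    return index
```

Raising `depth` extends trust one more hop each time, which is exactly the trade-off a chain of trust makes: reach versus how far you trust transitively.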

Want to USE the index on YOUR Peertube so you follow everyone I follow, and follow those I trust and who they follow? Go to your Peertube - Settings - General - Federation - Check "Automatically follow platforms of a public index" - Index URL: https://ptindex.btfree.org/ .

Want to be involved in the chain of trust? Message me! I'm here @ozoned@piefed.social , @ozoned:matrix.org , ozoned.01 on Signal, @ozoned@btfree.social on Fedi, ozoned@btfree.org via email.


At a glance, the passwords the LLMs created looked secure, much like those that a password generator might spit out. But that’s exactly where the problems arose: Although the AI-generated passwords appeared to be complex and safe to use for securing online accounts, they were actually quite predictable upon closer inspection.

All three LLMs exhibited clearly identifiable patterns in how they created these passwords. These patterns included repeated character strings, predictable password structure, frequent reuse of similar characters, clear biases toward certain numbers and letters, and even duplicate passwords in some cases. Although the AI-generated passwords looked random, they really weren’t. This could easily create a false sense of security if you were to use these predictable passwords for your online accounts.
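
One simple way to see this kind of bias — hedged as an illustration, not the researchers' method — is to measure the Shannon entropy per character across a batch of passwords. Biased output that reuses the same characters and structures scores well below the roughly 6.1 bits/char a uniform draw from a ~70-symbol pool would give:

```python
from collections import Counter
import math

def charset_entropy_bits(passwords: list[str]) -> float:
    """Shannon entropy (bits per character) over the observed characters."""
    counts = Counter("".join(passwords))
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical samples: 'biased' mimics repeated structures and character
# reuse; 'varied' draws from a broad, non-repeating pool.
biased = ["Aa1!Aa1!Aa1!", "Aa2!Aa2!Aa2!", "Aa3!Aa3!Aa3!"]
varied = ["kT9#mQ2&xL5@", "pW4!rZ8%nB6$", "vC1^dH7*jF3("]

print(round(charset_entropy_bits(biased), 2))  # low: few distinct characters
print(round(charset_entropy_bits(varied), 2))  # higher: broader pool
```

Note that per-character entropy only catches character-frequency bias; structural patterns (like a fixed `Xx#!` template) need separate checks, which is why "looks random" is a poor security test.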


Executive Summary

Current State

AI is now embedded in many aspects of everyday life. Consumers already experience and interact with AI through search, recommendations, fraud detection, customer service and decision‑support tools that can save time and improve access to information. The rapid spread of generative AI – enabling natural language interaction – has accelerated this trend, bringing AI into direct, large‑scale engagement with consumers.

To date, however, AI adoption and its impact have been uneven and most consumer‑facing AI has operated as a tool: it supports decisions, while coordination, monitoring and action remain with the user.

Potential Future State

Agentic AI could drive a step change in how people use AI and its impact on their lives.

Definitions vary, but they generally cover AI agents that can be instructed in natural language to achieve a goal autonomously – navigating some complexity in the environment, planning, coordinating, and taking actions, potentially across multiple services.

AI agents do not merely assist, they sense (perceive their environment), decide and act[1]. They go beyond generating responses to user queries and may:

  • Assess goals, break them into subtasks, and plan end-to-end workflows
  • Retrieve real-time data (that may include personal data) from other agents, databases and other services
  • Execute actions autonomously, such as making payments on behalf of the user
  • Store memory of past interactions to improve over time[2]
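
The sense → decide → act loop and the capabilities listed above can be sketched as a toy agent. Everything here is illustrative — the goal ("buy the cheapest offer within budget"), the names, and the data are assumptions, not any real agent framework:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal_budget: float
    memory: list = field(default_factory=list)  # past interactions

    def sense(self, environment: dict) -> list[dict]:
        # Retrieve real-time data, e.g. current offers from other services.
        return environment["offers"]

    def decide(self, offers: list[dict]):
        # Plan: pick the cheapest offer that satisfies the goal.
        in_budget = [o for o in offers if o["price"] <= self.goal_budget]
        return min(in_budget, key=lambda o: o["price"]) if in_budget else None

    def act(self, offer) -> str:
        # Execute autonomously, e.g. place an order on the user's behalf.
        if offer is None:
            return "no action: nothing within budget"
        self.memory.append(offer)  # remember the outcome for future runs
        return f"purchased {offer['name']} at {offer['price']}"

env = {"offers": [{"name": "A", "price": 12.0}, {"name": "B", "price": 9.5}]}
agent = Agent(goal_budget=10.0)
print(agent.act(agent.decide(agent.sense(env))))  # purchased B at 9.5
```

Even this toy makes the risk visible: whoever supplies `environment["offers"]` steers the purchase, which is exactly the manipulation and accountability concern the report raises.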

For businesses, this could unlock substantial productivity gains. For consumers, today’s chatbots may prove only a first step towards more capable personal agents – systems that anticipate needs and execute transactions on the user’s behalf.

If realised reliably at scale, this shift – from using tools to delegating outcomes – could materially change how people engage with markets and how value is created. The potential benefits for consumers are significant if the technology achieves reliability and is deployed responsibly. Agentic AI could reduce friction, improve personalisation and support better outcomes including potentially lower prices and tailored deals, including in complex markets.

By automating optimisation and follow‑through, AI agents could save people time, reduce cognitive load, and potentially help consumers who face high engagement costs (including vulnerable consumers) participate in markets more effectively.

If all this drives stronger confidence and demand in consumer markets, there may be new opportunities for innovative businesses to enter and grow, including new avenues for UK businesses to bring agentic apps and services to market.

At the same time, there are material risks. Greater autonomy for agents increases the consequences of errors, may heighten risks of manipulation and loss of consumer agency, and could lead to worse overall outcomes for consumers. People may be steered towards products and services that are more profitable but less suited to their needs, potentially paying higher prices. AI agents raise new questions about transparency, incentives and accountability and whether the current tools and frameworks that protect consumers are fit for purpose.

Without appropriate safeguards, agentic systems could undermine trust in AI and consumer markets rather than strengthen it, and this loss of trust and confidence in turn could inhibit positive innovation, investment and growth.

Direction of Travel

The technology and its deployment are at an early stage. Most implementations are relatively bounded and cautious, particularly in consumer‑facing contexts. Even so, interest and investment have risen sharply, driven by advances in models, falling deployment costs and early evidence of efficiency gains. Progress will depend on real‑world performance and on whether businesses and consumers develop sustained confidence in agentic systems.

Application of Consumer Law

UK consumer law applies whether decisions are made by people or by AI. The CMA’s foundation model principles – particularly transparency and accountability – remain directly relevant, and the CMA has published guidance to help businesses using agentic AI to comply with consumer law. Businesses exploring the technology should focus on robust training of systems, monitoring, and refinement, supported by appropriate human oversight.

Realising the full potential of agentic AI will also depend on wider enablers such as smart data schemes, secure digital identity and strong interoperability standards – enabling consumers to adopt with confidence, switch between systems and exercise choice. The UK has an opportunity to position itself at the forefront of trusted agentic innovation, fostering a dynamic, competitive ecosystem that drives household prosperity, innovation, and growth.


Amazon’s ecommerce business has summoned a large group of engineers to a meeting on Tuesday for a “deep dive” into a spate of outages, including incidents tied to the use of AI coding tools.

The online retail giant said there had been a “trend of incidents” in recent months, characterized by a “high blast radius” and “Gen-AI assisted changes” among other factors, according to a briefing note for the meeting seen by the FT.

Under “contributing factors” the note included “novel GenAI usage for which best practices and safeguards are not yet fully established.”


MidnightBSD, a FreeBSD-based desktop operating system, has quietly updated its README to reflect a new geographic restriction. The project has added a clause that bars residents of any country, state, or territory with OS-level age verification mandates from using MidnightBSD.


YouTube viewers will soon have to sit through even longer ads, with Google rolling out new 30-second unskippable spots on a popular app.

view more: next ›