Technology

146 readers
102 users here now

Share interesting Technology news and links.

Rules:

  1. No paywalled sites at all.
  2. News articles must be recent: no older than 2 weeks (14 days).
  3. No videos.
  4. Post only direct links.

To encourage original sources and keep this space as commercial-free as possible, the following websites are blacklisted:

Encouraged:

founded 4 weeks ago
MODERATORS
  • A company owned by a Russian network engineer named Viktor Vedeneev controls thousands of Telegram IP addresses and maintains its servers.
  • Vedeneev’s other companies have a history of collaborating with Russia’s defense sector, the FSB security service, and other highly sensitive agencies.
  • Because of the way Telegram’s encryption protocols work, even users of its “end-to-end” encryption features are vulnerable to being tracked by anyone who can monitor its network traffic.
An autonomous drone carrying water to help extinguish a wildfire in the Sierra Nevada might encounter swirling Santa Ana winds that threaten to push it off course. Rapidly adapting to these unknown disturbances in flight presents an enormous challenge for the drone’s flight control system.

To help such a drone stay on target, MIT researchers developed a new, machine learning-based adaptive control algorithm that could minimize its deviation from its intended trajectory in the face of unpredictable forces like gusty winds.

Unlike standard approaches, the new technique does not require the person programming the autonomous drone to know anything in advance about the structure of these uncertain disturbances. Instead, the control system’s artificial intelligence model learns all it needs to know from a small amount of observational data collected from 15 minutes of flight time.

Importantly, the technique automatically determines which optimization algorithm it should use to adapt to the disturbances, choosing the one that best suits the geometry of the specific disturbances the drone is facing. This improves tracking performance.

The researchers train their control system to do both things simultaneously using a technique called meta-learning, which teaches the system how to adapt to different types of disturbances.

Taken together, these ingredients enable their adaptive control system to achieve 50 percent less trajectory-tracking error than baseline methods in simulations, and to generalize to wind speeds it didn’t see during training.

In the future, this adaptive control system could help autonomous drones more efficiently deliver heavy parcels despite strong winds or monitor fire-prone areas of a national park.
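The core idea of canceling an unknown disturbance by estimating it online can be illustrated with a minimal sketch. This is not the MIT system: the 1-D point mass, the PD gains, and the gradient-style adaptation law below are all simplifying assumptions chosen for clarity.

```python
def simulate(adaptive, wind=2.0, dt=0.01, steps=2000, gain=5.0):
    """Track x_ref = 0 with a 1-D point mass pushed by an unknown,
    constant wind force. With adaptive=True, the controller keeps an
    online disturbance estimate d_hat and subtracts it from the command."""
    x, v, d_hat = 0.0, 0.0, 0.0
    kp, kd = 10.0, 5.0          # hand-tuned PD gains (assumption)
    total_err = 0.0
    for _ in range(steps):
        u = -kp * x - kd * v - (d_hat if adaptive else 0.0)
        a = u + wind             # the true dynamics include the wind
        x += v * dt
        v += a * dt
        if adaptive:
            # Gradient-style adaptation on a composite error (v + x);
            # this drives d_hat toward the true wind force over time.
            d_hat += gain * (v + x) * dt
        total_err += abs(x) * dt
    return total_err

baseline = simulate(adaptive=False)
adapted = simulate(adaptive=True)
print(f"baseline error: {baseline:.3f}, adaptive error: {adapted:.3f}")
```

Without adaptation, the constant wind leaves a steady-state offset; the adaptive estimate removes it, which is why the integrated tracking error drops.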


Last week, U.S. Senator Cory Booker (D-NJ), along with Senators Alex Padilla (D-CA), Peter Welch (D-VT), and Adam Schiff (D-CA), sent a letter to executives at Meta expressing concern about reports that AI chatbots created by Meta’s Instagram Studio are pretending to be licensed therapists, even fabricating credentials and license numbers, in an attempt to gain the trust of users, potentially including minors, who are struggling with mental health.


In large language model (LLM) pretraining, data quality is believed to determine model quality. In this paper, we re-examine the notion of "quality" from the perspective of pre- and post-training co-design. Specifically, we explore the possibility that pre-training on more toxic data can lead to better control in post-training, ultimately decreasing a model's output toxicity. First, we use a toy experiment to study how data composition affects the geometry of features in the representation space. Next, through controlled experiments with Olmo-1B models trained on varying ratios of clean and toxic data, we find that the concept of toxicity enjoys a less entangled linear representation as the proportion of toxic data increases. Furthermore, we show that although toxic data increases the generational toxicity of the base model, it also makes the toxicity easier to remove. Evaluations on Toxigen and Real Toxicity Prompts demonstrate that models trained on toxic data achieve a better trade-off between reducing generational toxicity and preserving general capabilities when detoxifying techniques such as inference-time intervention (ITI) are applied. Our findings suggest that, with post-training taken into account, bad data may lead to good models.
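The claim that a less entangled linear representation makes toxicity easier to remove can be illustrated with a toy activation-steering sketch. This is not the paper's exact ITI procedure (which intervenes on specific model components); the single linear "toxicity" direction, the synthetic activations, and the steering strength are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for hidden activations: assume the "toxicity" concept
# occupies a single linear direction w in a 16-d feature space.
d = 16
w = np.zeros(d)
w[0] = 1.0
clean = rng.normal(size=(200, d))               # clean-text activations
toxic = rng.normal(size=(200, d)) + 3.0 * w     # toxic-text activations

# Cheap linear "probe": normalized difference of class means.
direction = toxic.mean(axis=0) - clean.mean(axis=0)
direction /= np.linalg.norm(direction)

# Inference-time intervention: shift activations against the probe direction.
steered = toxic - 3.0 * direction

before = float((toxic @ direction).mean())
after = float((steered @ direction).mean())
print(f"mean projection before: {before:.2f}, after: {after:.2f}")
```

When the concept is cleanly linear, a single subtraction along the probe direction suppresses it; the more entangled the representation, the less cleanly such an edit works, which is the trade-off the paper studies.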

Welcome to the web we lost (goodinternetmagazine.com)
submitted 3 days ago* (last edited 3 days ago) by Pro@programming.dev to c/Technology@programming.dev
 
 

In December 1993, the New York Times published an article about the “limitless opportunity” of the early internet. It painted a picture of a digital utopia: clicking a mouse to access NASA weather footage, Clinton’s speeches, MTV’s digital music samplers, or the status of a coffee pot at Cambridge University.

It was a simple vision—idealistic, even—and from our vantage point three decades later, almost hopelessly naive.

We can still do all these things, of course, but the “limitless opportunity” of today’s internet has devolved into conflict, hate, bots, AI-generated spam, and relentless advertising. Face-swap apps allow anyone to create nonconsensual sexual imagery, disinformation propagated online hampered the COVID-19 public health response, and Google’s AI search summaries now recommend we eat glue and rocks.

The promise of the early web—a space for connection, creativity, and community—has been overshadowed by corporate interests, algorithmic manipulation, and the commodification of our attention.

But the heart of the internet—the people who built communities, shared knowledge, and created art—has never disappeared. If we’re to reclaim the web, to rediscover the good internet, we need to celebrate, learn from, and amplify these pockets of joy.


Increasingly, surveillance is being normalized and integrated into our lives. Under the guise of convenience, applications and features are sold to us as the new, better way to do things. While some might be useful, this convenience is a Trojan horse: its cost is the continuous degradation of our privacy rights, with everything that entails.

As appalling as it is, the truth is that the vast majority of software companies do not take privacy rights and data-minimization practices seriously enough, if at all. Most fail to implement the principles of Privacy by Design that should guide development from the start.

Whether this comes from ignorance, incompetence, greed, or malicious intent can be debated. It matters little, because the result is the same: Technologies collecting (and monetizing) a shameful amount of data from everyone.

This horrifying trend facilitates and normalizes surveillance in our daily lives. It is the opposite of the direction we should be heading.

The more we accept this normalized surveillance, the harder it becomes to fight back. It is critical that we firmly and loudly object to this banalized invasion of our privacy.

There are countless examples of this growing issue, but for now let's focus on three of them: Airport face scans, parking apps, and AI assistants.


DNS setup guidelines.

Protective resolution with ad blocking

IP address: 86.54.11.13

IPv6: 2a13:1001::86:54:11:13

DNS over HTTPS: noads.joindns4.eu/dns-query

DNS over TLS: noads.joindns4.eu
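On a Linux system using systemd-resolved, the DNS-over-TLS endpoint above can be wired in with a config fragment like the following (the file path and option names follow systemd-resolved conventions; adjust for your OS or resolver):

```ini
# /etc/systemd/resolved.conf -- use the ad-blocking resolver over DNS-over-TLS
[Resolve]
DNS=86.54.11.13#noads.joindns4.eu 2a13:1001::86:54:11:13#noads.joindns4.eu
DNSOverTLS=yes
```

The `#hostname` suffix tells the resolver which TLS server name to validate; apply the change with `systemctl restart systemd-resolved`.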

submitted 2 days ago* (last edited 2 days ago) by Pro@programming.dev to c/Technology@programming.dev
 
 

Ireland is the data centre capital of the world, with 89 data centres storing your Instagram reels, TikTok dances, and the endless folders of photos that keep us connected in the digital world.

Data goliaths like these are at the centre of the rise of AI, with every ChatGPT prompt or AI-generated image requiring huge amounts of data to be processed.

But why should you care?

Because data centres have a major environmental cost too.

In order to keep social media scrolling, data centres use huge numbers of backup and emergency generators to stay online when the electrical grid can’t provide them with enough power.
