Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 4 years ago

Hey Beeple and visitors to Beehaw: I think we need to have a discussion about !technology@beehaw.org, community culture, and moderation. First, some of the reasons that I think we need to have this conversation.

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (roughly double in the last month, by a rough hand count) than the next-highest community that I moderate (Politics, and that during an election season in a month that involved a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race).
  3. For a long time, I and other mods have felt that Technology at times isn’t living up to the Beehaw ethos. More often than I like I see comments in this community where users are being abusive or insulting toward one another, often without any provocation other than the perception that the other user’s opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is a) proportional, and b) gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse, or less valuable, but simply that we aren't able to vet users from other instances and don't interact with them with the same frequency, and other instances may have less strict sign-up policies than Beehaw, making it more difficult to play whack-a-mole.
  2. We will need you to report early and often. The drawbacks of getting reports for something that doesn't require our intervention are outweighed by the benefits of us being able to get to a situation before it spirals out of control. By all means, if you’re not sure whether something has risen to the level of violating our rules, say so in the report reason, but I'd personally rather get reports early than late, when a thread has spiraled into an all-out flamewar.
    a. That said, please don't report people simply for being wrong, unless they're wrong in a way that is actually dangerous to others. It would be better for you to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive in stepping in when arguments are happening, to remind folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally that's why I love them) but do take the time to read them if you haven't. If you can't/won't or just need a reminder, though, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely being polite, or nice in the surface-level "oh bless your heart" kind of way; we mean being kind.
  2. Remember the human. The users that you interact with on Beehaw (and most likely other parts of the internet) are people, and people should be treated kindly and in good faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something you think is bad, ask them to clarify (kindly) and give them a chance to explain. Most likely, they've communicated themselves poorly, or you've misunderstood. After all of that, it's possible that you may disagree with them still, but we can disagree about Technology and still give one another the respect due to other humans.

Just the Browser removes a bunch of AI cruft and telemetry garbage, and it's incredibly easy to use. It supports Firefox and Edge, too!


I've been an AI realist from the start. What can the models actually do? What are the limitations? Ultimately, I think there is a place in the market for them - for people who understand the limitations of them and know when they're spewing BS.

I am not shocked at all that OpenAI (with Microsoft as one of its largest shareholders) is burning cash and suddenly realizing it may not be able to make good on its promises. AI reality vs. AI hype. We actual tech people have known since the beginning this was all hype; now finance people are starting to notice (about time).


European consumers are fighting back against the U.S. following Trump’s threats to take control of Greenland, a Danish territory. As a result, two mobile apps that offer a way to determine if products are made in America, then suggest local alternatives, have surged to the top of the Danish App Store in recent days.

The boost in downloads comes as Danish consumers have been organizing a grassroots boycott of American-made products, which has also included canceling their U.S. vacations and ditching their subscriptions to U.S.-based streaming services, like Netflix.

Across both iOS and Android, two apps, NonUSA and Made O’Meter, have entered the top 10 this month, according to new data from market intelligence provider Appfigures.


From: https://techhub.social/@sawaba@infosec.exchange/115924627853343043 (Mastodon)

The enshittification of computer repair is happening.

AI has amazingly managed to make repairable computers practically worthless.

The increase in memory and storage pricing is destroying the second-hand market for computing hardware, and this makes me sad. I watched a video from someone who runs a repair shop, and this is what's happening:

The memory/storage alone is worth more than the rest of the computer, so people are stripping them out to sell separately.

The second hand market is now flooded with computers that have no memory or storage. Buying new memory or storage to put in these used computers is now more expensive than buying a new computer.

So we now suddenly have a giant e-waste problem PLUS a giant problem for repair shops that want to stay in business.

In the video, he was basically saying that they have to pivot to the only computers that folks aren't stripping RAM and storage out of - computers that have those things soldered on. The irony here is that repair shops now have to ignore the most repairable computers and focus on the least repairable computers instead.


European organizations are about to launch their own social media platform, W, amid rising tensions with the United States.

The new platform, W, will require identification and photo validation to ensure that its users are both humans and who they claim to be, Danish news media outlet Politiken.dk reports.


Because that's what the world needs. Spicier ChatGPT.

OpenAI says it has begun deploying an age prediction model to determine whether ChatGPT users are old enough to view "sensitive or potentially harmful content."

Chatbots from OpenAI and its rivals are linked to a series of suicides, sparking litigation and a congressional hearing. AI outfits therefore have excellent reasons to make the safety of their services more than a talking point, both for minors and the adult public.

Hence we have OpenAI's Teen Safety Blueprint, introduced in November 2025, and its Under-18 Principles for Model Behavior, which debuted the following month.

OpenAI is under pressure to turn a profit, knows its plan to serve ads needs to observe rules about marketing to minors, and has erotica in the ChatGPT pipeline. That all adds up to a need to partition its audience and prevent exposing them to damaging material.


OpenAI has announced that it is starting the roll out of its age prediction tool for ChatGPT consumer accounts.


This conclusion comes from a three-continent investigation—current and former employees across R&D, Business, and Marketing at headquarters in China and regional offices in the US, India, and Europe. It’s confirmed by four independent analyst firms whose market data verifies what OnePlus won’t say. And it’s informed by 15 years covering OnePlus and the smartphone industry’s business dynamics—watching Samsung and Apple rise while Nokia, BlackBerry, HTC, and LG followed this exact pattern into irrelevance.

The evidence is damning. Shipments in freefall. A premium stronghold that collapsed almost overnight. Headquarters shuttered without announcement. Partnerships ended. Western teams gutted to skeleton crews. Product cancellations—the Open 2 foldable and 15s compact flagship have both been scrapped; neither will launch as planned. And every major decision now flows from China—regional offices don’t strategize anymore, they take orders.


ALEXANDRIA, VA — Dr. Gladys West, the pioneering mathematician whose work laid the foundation for modern GPS technology, has died. She passed away


Crossposted from https://fedia.io/m/fuck/_ai@lemmy.world/t/3317969

Court records show that NVIDIA executives allegedly authorized the use of millions of pirated books from Anna's Archive to fuel its AI training.


Seriously, what the fuck is going on with fabs right now?

Micron has found a way to add new DRAM manufacturing capacity in a hurry by acquiring a chipmaking campus from Taiwanese outfit Powerchip Semiconductor Manufacturing Corporation (PSMC).

The two companies announced the deal last weekend. Micron’s version of events says it’s signed a letter of intent to acquire Powerchip’s entire P5 site in Tongluo, Taiwan, for total cash consideration of US$1.8 billion.


The promise of Just the Browser sounds good. Rather than fork one of the big-name browsers, just run a tiny script that turns off all the bits and functions you don't want.

Just the Browser is a new project by developer Corbin Davenport. It aims to fight the rising tide of undesirable browser features such as telemetry, LLM bot features billed as AI, and sponsored content by a clever lateral move. It uses the enterprise management features built into the leading browsers to turn these things off.

The concept is simple and appealing. Enough people want de-enshittified browsers that there are multiple forks of the big names. For Firefox, there are Waterfox and Zen as well as LibreWolf and Floorp, and projects based off much older versions of the codebase such as Pale Moon. Most people, though, tend to use Chrome, and there are lots of browsers based on its Chromium upstream too, including Microsoft Edge, the Chinese-owned Opera, and Vivaldi, from some of the people behind the original Norwegian Opera browser.
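To illustrate the mechanism (the article doesn't enumerate the exact policies Just the Browser sets, so these two are just examples of real Chromium enterprise policies), a managed-policy file on Linux might look like:

```json
{
  "MetricsReportingEnabled": false,
  "BrowserSignin": 0
}
```

Chrome reads JSON files like this from its managed-policy directory (e.g. `/etc/opt/chrome/policies/managed/` on Linux), and Firefox has an analogous `policies.json` mechanism; once a policy is set this way, the corresponding feature is switched off and usually locked in the browser's settings UI.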


The worst examples are when bots can get through the "ban" just by paying a monthly fee.

So-called "AI filters"

An increasing number of websites lately are claiming to ban AI-generated content. This is a lie deeply tied to other lies.

Building on a well-known lie: that they can tell what is and isn't generated by a chatbot, when every "detector tool" has been proven unreliable, and sometimes even we humans can only guess.

Helping slip a bigger lie past you: that today's "AI algorithms" are "more AI" than the algorithms a few years ago. The lie that machine learning has just changed at the fundamental level, that suddenly it can truly understand. The lie that this is the cusp of AGI - Artificial General Intelligence.

Supporting future lying opportunities:

  • To pretend a person is a bot, because the authorities don't like the person
  • To pretend a bot is a person, because the authorities like the bot
  • To pretend bots have become "intelligent" enough to outsmart everyone and break "AI filters" (yet another reframing of gullible people being tricked by liars with a shiny object)
  • Perhaps later - when bots are truly smart enough to reliably outsmart these filters - to pretend it's nothing new, it was the bots doing it the whole time, don't look behind the curtain at the humans who helped
  • And perhaps - with luck - to suggest you should give up on the internet, give up on organizing for a better future, give up on artistry, just give up on everything, because we have no options that work anymore

It's also worth mentioning some of the reasons why the authorities might dislike certain people and like certain bots.

For example, they might dislike a person because the person is honest about using bot tools, when the app tests whether users are willing to lie for convenience.

For another example, they might like a bot because the bot pays the monthly fee, when the app tests whether users are willing to participate in monetizing discussion spaces.

The solution: Web of Trust

You want to show up in "verified human" feeds, but you don't know anyone in real life that uses a web of trust app, so nobody in the network has verified you're a human.

You ask any verified human to meet up with you for lunch. After confirming you exist, they give your account the "verified human" tag too.

They will now see your posts in their "tagged human by me" feed.

Their followers will see your posts in the "tagged human by me and others I follow" feed.

And their followers will see your posts in the "tagged human by me, others I follow, and others they follow" feed...

And so on.

I've heard everyone is generally a maximum 6 degrees of separation from everyone else on Earth, so this could be a more robust solution than you'd think.

The tag should have a timestamp on it. You'd want to renew it, because the older it gets, the less people trust it.
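The tag propagation and timestamp decay described above can be sketched in code. This is a minimal illustration, not any existing app's implementation: the function name, the `tags` data layout, and the one-year expiry are all assumptions made up for the example.

```python
from collections import deque
import time


def trusted_humans(tags, me, max_depth=6, max_age_days=365, now=None):
    """Breadth-first walk of the web of trust outward from `me`.

    `tags` maps each verifier to a dict of {verified_account: tag_timestamp}.
    Only tags newer than `max_age_days` are followed (older tags have
    "decayed" and need renewal), and the walk stops at `max_depth`
    degrees of separation. Returns the set of accounts `me` would
    treat as verified humans.
    """
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400
    seen = {me}
    queue = deque([(me, 0)])
    while queue:
        user, depth = queue.popleft()
        if depth == max_depth:
            continue  # don't follow tags beyond the trust horizon
        for friend, stamp in tags.get(user, {}).items():
            if stamp >= cutoff and friend not in seen:
                seen.add(friend)
                queue.append((friend, depth + 1))
    seen.discard(me)
    return seen
```

So if Alice tagged Bob last week and Bob tagged Carol, Carol shows up in Alice's "tagged human by me and others I follow" feed; an account Alice tagged over a year ago drops out until the tag is renewed. A real implementation would also weight trust by distance (closer tags count more) rather than treating all six degrees equally.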

This doesn't hit the same goalposts, of course.

If your goal is to avoid thinking, and just be told lies that sound good to you, this isn't as good as a weak "AI filter."

If your goal is to scroll through a feed where none of the creators used any software "smarter" than you'd want, this isn't as good as an imaginary strong "AI filter" that doesn't exist.

But if your goal is to survive, while others are trying to drive the planet to extinction...

If your goal is to be able to tell the truth and not be drowned out by liars...

If your goal is to be able to hold the liars accountable, when they do drown out honest statements...

If your goal is to have at least some vague sense of "public opinion" in online discussion, that actually reflects what humans believe, not bots...

Then a "human tag" web of trust is a lot better than nothing.

It won't stop someone from copying and pasting what ChatGPT says, but it should make it harder for them to copy and paste 10 answers across 10 fake faces.

Speaking of fake faces - even though you could use this system for ID verification, you might never need to. People can choose to be anonymous, using stuff like anime profile pictures, only showing their real face to the person who verifies them, never revealing their name or other details. But anime pictures will naturally be treated differently from recognizable individuals in political discussions, which makes it harder for anonymous accounts to game the system.

To flood a discussion with lies, racist statements, etc., the people flooding the discussion should have to take some accountability for those lies, racist statements, etc. At least if they want to show up on people's screens and be taken seriously.

A different dark pattern design

You could say the human-tagging web of trust system is "dark pattern design" too.

This design takes advantage of human behavioral patterns, but in a completely different way.

When pathological liars encounter this system, they naturally face certain temptations. Creating cascading webs of false "human tags" to confuse people and waste time. Meanwhile, accusing others of doing it - wasting even more time.

And a more important temptation: echo chambering with others who use these lies the same way. Saying "ah, this person always accuses communists of using false human tags, because we know only bots are communists. I will trust this person."

They can cluster together in a group, filtering everyone else out, calling them bots.

And, if they can't resist these temptations, it will make them just as easy to filter out, for everyone else. Because at the end of the day, these chat bots aren't late-gen Synths from Fallout. Take away the screen, put us face to face, and it's very easy to discern a human from a machine. These liars get nothing to hide behind.

So you see, like strong is the opposite of weak [citation needed], the strong filter's "dark pattern design" is quite different from the weak filter's. Instead of preying on honesty, it preys on the predatory.

Perhaps, someday, systems like this could even change social pressures and incentives to make more people learn to be honest.
