submitted 5 months ago by L4s@lemmy.world to c/technology@lemmy.world

The New Luddites Aren’t Backing Down: Activists are organizing to combat generative AI and other technologies—and reclaiming a misunderstood label in the process.

all 38 comments
[-] veeesix@lemmy.ca 54 points 5 months ago

“Tech is not supposed to be a master tool to colonize every aspect of our being. We need to reevaluate how it serves us.”

I consider myself a Luddite not because I want to halt progress or reject technology itself, but because I believe, as the original Luddites argued in a particularly influential letter threatening the industrialists, that we must consider whether a technology is “hurtful to commonality”—whether it causes many to suffer for the benefit of a few—and oppose it when necessary.

[-] Hestia@lemmy.world 31 points 5 months ago

The author states that she's been a tech writer for 10 years and that she thinks AI is going to ruin journalism because it gives too much power to AI providers.

But have you seen the state of journalism? AI killing it would just be an act of mercy at this point. How much SEO-optimized, grammatically correct, appropriately filtered, but ultimately useless "content" do I really need to sift through to get even something as simple as a recipe?

The author can bemoan AI until she's blue in the face, but she's willfully ignoring that the information that most people get today is already controlled by a handful of people and organizations.

[-] laurelraven@lemmy.blahaj.zone 14 points 5 months ago

AI will make all of that So Much Worse.

Hell, it already has.

[-] afraid_of_zombies@lemmy.world 1 point 5 months ago

I often use chatgpt to summarize articles for me.

[-] dukk@programming.dev 2 points 5 months ago

Journalists use AI to write longer articles. People use AI to summarize those articles.

The circle of LLMs.

[-] EncryptKeeper@lemmy.world 12 points 5 months ago

Man, I hate to tell you this, but a good chunk of the content you’re describing is already written by AI. That’s a huge driver behind how shit it’s all gotten.

[-] cyberfae@lemmy.world 20 points 5 months ago

[-] CorrodedCranium@leminal.space 16 points 5 months ago

The original Luddites were hailed as folk heroes—they were cheered in the streets as they smashed machinery, and they were championed by Lord Byron. Today, at a time when a majority of Americans are in favor of stronger tech regulation, workers like the writers and actors pushing for protections against AI are popular too. In one Gallup poll, Americans sympathized with the writers over the studios by 72 to 19 percent.

I don't know if it's just where I went to school, but the Luddites weren't portrayed as folk heroes there. They were portrayed as people digging their heels in the sand against change.

That's also an extremely wide margin for a poll. I wonder how the poll was set up.

[-] laurelraven@lemmy.blahaj.zone 5 points 5 months ago

A 9% undecided share actually sounds about right, and is smaller than I would have expected, considering how poorly most people understand or care about the subject matter.

And "Luddite" today has been repainted into what you said, but yeah, they weren't seen like that at the time

[-] treadful@lemmy.zip 9 points 5 months ago* (last edited 5 months ago)

“Luddism and science fiction concern themselves with the same questions: not merely what the technology does, but who it does it for and who it does it to.”

The problem with Luddism is that it attributes unwanted behavior to objects. Instead of "hiring children to run machines is bad," the argument becomes "the machines are bad because people hire children to run them."

The machines are just machines. They have no inherent benefits or harms. It's always the people and what they do with them.

[-] FlyingSquid@lemmy.world 12 points 5 months ago

Luddism was about industrialization taking jobs away. It was not against the machines. The machines were seen as a tool of the wealthy plutocrats taking away their jobs. They sabotaged the machines as revenge. They didn't blame the machines, they blamed the wealthy. But they couldn't get revenge on the wealthy so easily.

[-] treadful@lemmy.zip 7 points 5 months ago

They still took hammers to the machines and not to the wealthy. The modern variant of Luddism talks about banning technologies outright instead of particular uses of said tech. Also, the discussion I've seen online is almost always strictly black and white, and it often ignores the people, focusing instead on the tech.

The actions and words of the Luddites don't seem to reflect what you're saying, from my PoV.

[-] FlyingSquid@lemmy.world 8 points 5 months ago

They took hammers to the machine and not the wealthy because they had access to the machines and not the wealthy.

[-] General_Effort@lemmy.world 1 point 5 months ago

I don't think that's true, at least not generally. To my knowledge, they saw themselves as enforcing the law. Indeed, old laws banned certain types of machines and limited who could possess them, and how many. The cloth workers' trade corporations had been influential in previous centuries, and so the laws protected their interests while also balancing the interests of individual members. (Today we would probably call such a corporation a cartel or trust.)

At the time of the Luddites, these laws were no longer enforced. The workers had tried the courts and petitioned the government, but their lobbying was unsuccessful. So they took it upon themselves to break the "illegal" machines and once again limit competition and productivity.

[-] AA5B@lemmy.world 8 points 5 months ago* (last edited 5 months ago)

I wonder how much support this will get - it’s not the tool that’s the problem, but how it gets used.

  • As a tech person: generative AI is already a useful tool, similar to how search engines are. However, I'm not afraid of it taking my job, because someone still needs to tell it what to do, and it's still pretty limited. I liken it to previous attempts to outsource software to the lowest bidder in the cheapest country: in general that was a failure, and companies now look for ability even in cheap labor markets, not just cheapness.
  • As someone who reads news and opinion online: I've watched enshittification overtake that industry over the last decade. Most content is clearly no longer written by journalists, nor does it adhere to any standard of informing the reader; it's written by formula and template for SEO, designed to invoke outrage or other emotions. As someone watching videos, I see more choices than ever, but most are poorly written and produced. It feels like these industries are racing for the bottom and not stopping. Generative AI can actually do a better job than most of this crap, and the most important skill of an online citizen is wading through the oceans of it to find the morsels of journalism. How do we bring back journalism as a whole, regardless of what tools the hacks use to fill our attention and sell ads?

[-] laskoune@lemmy.world 10 points 5 months ago

It was actually the same with the original Luddites: they didn't oppose the new tools, but the way they were used.

From the article:

The first Luddites were artisans and cloth workers in England who, at the onset of the Industrial Revolution, protested the way factory owners used machinery to undercut their status and wages. Contrary to popular belief, they did not dislike technology; most were skilled technicians.

[-] realharo@lemm.ee 3 points 5 months ago

However I’m not afraid of it taking my job because someone still needs to tell it what to do

Why couldn't it do that part too, based purely on a simple high-level objective that anyone can formulate? Which part exactly do you think is AI-resistant?

I'm not talking about today's models, but more like 5-10 years into the future.

[-] anlumo@lemmy.world 1 point 5 months ago

That’s what I’ve been arguing about with a fellow programmer recently. Right now you have to tell these programmer LLMs what to do on a function-by-function basis, because they don’t have enough capacity to think at the project level. However, that’s exactly what can be improved by scaling the neural network up. Right now LLMs are limited by hardware, but they’re still running on off-the-shelf GPUs that were designed for a completely different use case. Accelerators designed specifically for AI are currently in the preproduction phase, very close to being deployed in AI data centers.

[-] Drewelite@lemmynsfw.com 4 points 5 months ago

Yeah, I've seen a lot of weird takes on AI. It all seems to come down to ego-guarding: "it can't take my job", "it just regurgitates combinations of what it was taught, unlike me", "only humans can be creative", "who wants coffee made by a machine", "well, you still need a person to do things in the physical world", etc. It really highlights how difficult it is for people to think about change, especially a change that might not end with a place for them.

[-] anlumo@lemmy.world 3 points 5 months ago

The creativity argument I don't get at all. Being creative these days means taking a bunch of known ideas and mashing them up, and that's exactly what an LLM does. Very few people can really think outside the box.

I've had a few things where it was actually the other way around. I'm running a lot of TTRPGs, and my storylines are always pretty bland because I'm not that creative. I've started to use ChatGPT4 to give me a few ideas for stories, and it helps me break out of that box by suggesting completely different things than what I'd have thought of.

[-] Drewelite@lemmynsfw.com 2 points 5 months ago

I'll argue it's always been that way. It's just that the pool of data people pull from these days is more homogeneous. It used to be that people had a lot more unique and personal experiences that weren't known to the world, but today everything is shared and given a label by our culture. So if you come up with an idea, it's much more likely that someone with similar experiences to yours has thought of it already. People say there are no more new ideas. Maybe that's true in a sense, but I'd argue nothing's changed except that people now know about all the ideas.

[-] laurelraven@lemmy.blahaj.zone 2 points 5 months ago

Best explanation of the problem with AI and our jobs I've seen:

I'm not worried that AI can do my job. I'm worried that my boss will be convinced it can.

[-] WanderingVentra@lemm.ee 6 points 5 months ago* (last edited 5 months ago)

It is a little strange to me that people say it won't change things because the AI will need someone to tell it what to do. It's like saying robots won't change the automotive industry because someone will need to fix them. Well, it turns out that if you only need one person to fix all the machines or tell the AI what to do, companies will fire everyone else, especially those whose main skill set and experience were in the automated work. They can get a different job, but it will be entry level, and they might not be able to maintain the same quality of life, support a family, fund their retirement, or pay debts they accrued with the expectation of a certain salary.

There are manufacturing towns that are basically graveyards now because of that (yes, globalization and international capitalism too, but it's both; John Oliver had an episode about it, with sources). The same thing happened to call centers and operators before. Things sucked for certain people during that time, and from an abstract POV society was okay, but imagine if the person it sucked for is you. Then you can understand why lots of people are freaking out.

[-] OldWoodFrame@lemm.ee 3 points 5 months ago

Maybe the worry for the next 20 years is that we will only get jobs fixing the robots, but the economy used to be 90% farmers, so that's not actually worrying to me.

The scary part is that eventually the robots will fix themselves better than we can, and there will be literally no reason for most humans to work. We really have to start working on a plan for that. Our only plan so far, as automation has made us more productive, has been to keep working the same amount but on different things, and AGI is where that really breaks down.

[-] wahming@monyet.cc 4 points 5 months ago

[A] number of other incidents—including one in which a Cruise self-driving car hit a pedestrian and dragged them 20 feet

I see the author is perfectly fine with misrepresenting incidents to favour their narrative.

[-] elephantium@lemmy.world 4 points 5 months ago

Are you referring to this incident?

If so, how would you want someone to refer to it?

I'm out of the loop on this one -- I don't recall hearing about this "dragged 20 feet" incident until now.

[-] wahming@monyet.cc 2 points 5 months ago* (last edited 5 months ago)

The pedestrian was first hit by a human driver, who drove off without stopping. They were knocked under the self-driving car, which responded to the incident by braking as soon as possible, which unfortunately stopped it on top of the victim. Calling it the fault of the AI badly misrepresents the situation.

[-] systemglitch@lemmy.world 2 points 5 months ago

This is what futile looks like.

[-] mesamunefire@lemmy.world 2 points 5 months ago

Full source?

this post was submitted on 05 Feb 2024
110 points (89.3% liked)
