this post was submitted on 02 Sep 2024
817 points (93.0% liked)

solarpunk memes

[–] FMT99@lemmy.world 44 points 2 months ago (28 children)

Most of the hate is coming from people who don't really know anything about "AI" (LLM). Which makes sense: companies are marketing dumb gimmicks to people who don't need them and who, after the novelty wore off, aren't terribly impressed by them.

But LLMs are absolutely going to be transformational in some areas. And in a few years they may very well become useful and usable as daily drivers on your phone and elsewhere; it's hard to say for sure. But both the hype and the hate are just kneejerk reactionary nonsense for the moment.

[–] MajorHavoc@programming.dev 66 points 2 months ago (5 children)

Most of the hate is coming from people who don't really know anything about "AI" (LLM)

No.

As an actual subject matter expert, I hate all of this, because assholes are overselling it to people who don't know better.

[–] Quill7513@slrpnk.net 42 points 2 months ago (3 children)

My hatred of AI comes from seeing the double standard in how mass-market media companies treat us when we steal from them vs. when they steal from us. They want it to be a fully one-way street when it comes to law and enforcement. The House of Mouse owns all the media it creates, and any work that remixes what it creates. But when we create a new, original idea, by the nature of the training model they want to own that, too.

I also work with these tech-bro industry leaders. I know what they're like. When they tell you they want to make it easier for non-artistic people to create art, they're not describing an egalitarian and magnificent future. They're telling you how they want to stop paying the graphic designers and copy editors who work at their company. The vision they have for the future is based on a fundamental misunderstanding about whether the future presented in Blade Runner is:

a) Cool and awesome
b) Horrifying

They want to enslave sentient beings to do the hard work of mining, driving, and shopping for them. They don't want those people doing art and poetry because they want them to be too busy mining, driving, and shopping. This whole thing, this whole current wave of AI technology, doesn't benefit you except fleetingly. LLMs, ethically trained, could indeed benefit society at large, but that's not who's developing them, and that's not how they're being trained. Their models are intrinsically tainted by the double standard these corporations have, because their only goal is to benefit from our labor without benefiting us.

[–] MajorHavoc@programming.dev 18 points 2 months ago

They want to enslave sentient beings to do the hard work of mining, driving, and shopping for them. They don't want those people doing art and poetry because they want them to be too busy mining, driving, and shopping.

That's a great summary of the core issue!

I adore the folks doing cool new things with AI. I am unhappy with the folks deciding what should get funded next in AI.

[–] areyouevenreal@lemm.ee 7 points 2 months ago

The people being oversold to are the people who don't know anything about it. I guess you can hate the people doing the overselling, but don't hate the field. It's one of the most promising areas of computer research being done right now.

[–] CeruleanRuin@lemmings.world 29 points 2 months ago* (last edited 2 months ago) (6 children)

No, the "hate" is from people trying to raise alarms about the safeguards we need to put in place NOW to protect workers and creators before it's too late, to say nothing of what it will do to the information sphere. We are frustrated by tone deaf responses like this that dismiss it as a passing fad to hate on AI.

OF COURSE it will be transformational. No shit. That's exactly why many people are very justifiably up in arms about it. It's going to change a lot of things, probably everything, irreversibly, and if we don't get ahead of it with regulations and standards, we won't be able to. And the people who will use tools like this to exploit others -- because those people will ALWAYS use new tools to exploit others -- they want that inaction, and love it when they hear people like you saying it's just a kneejerk reaction.

[–] kibiz0r@midwest.social 4 points 2 months ago

I dabbled a bit in ML before GPT, and when the most recent hype-rocket launched I did a deep dive into LLMs, and I gotta say…

None of my hopes or horrors regarding “AI” have changed much along the way.

It’s pretty much the same thing we’ve been doing since the industrial revolution, which is to try to map human behavior onto mechanical processes so that we can optimize it from a quantitative, objective frame of reference.

GenAI is only unique in that it’s an especially mask-off moment for the ruling technocrats. We are destined to become wetware plugins for a capitalist machine whose goal isn’t even as interesting as turning everything into paperclips. It’s worse than a rogue superintelligence.

[–] blazeknave@lemmy.world 3 points 2 months ago

I'm completely overtaxed mentally, and I offload so much to it: from reconciling bank statements and sorting game mods, to a homebrew ongoing multiverse starring my son, and which emojis to use in Notion at work.

[–] msage@programming.dev 3 points 2 months ago

I'm just going to keep linking this: LLMentalist

[–] Sequentialsilence@lemmy.world 41 points 2 months ago (2 children)

Eh, most of the marketing around AI is complete bullshit, but I do use it on a regular basis for my work. Several years ago it would have just been called machine learning, but it saves me hours every day. Is it a magic bullet that fixes everything? No. But is it a powerful tool that helps speed up the process? Yes.

[–] bamfic@lemmy.world 8 points 2 months ago (3 children)

Who is getting the reward for speeding up your work? Do you get to slack off more? How long will that last? Or does more work get piled on, making your employer richer not you?

[–] Sequentialsilence@lemmy.world 25 points 2 months ago

I do, I’m freelance, I make more money.

[–] Sorse@discuss.tchncs.de 14 points 2 months ago

Not a problem of the AI

[–] blanketswithsmallpox@lemmy.world 4 points 2 months ago

Most people free up hours of writing emails to do their actual job.

[–] msage@programming.dev 4 points 2 months ago

What does it do to save you so much time?

[–] Matriks404@lemmy.world 22 points 2 months ago (2 children)

I've lately been testing whether AI can let me practice Russian in a natural-sounding dialogue. While it didn't sound 100% human (it was too formal and technical), it was good practice.

So I wouldn't say that it can't be used for good things.

[–] drosophila@lemmy.blahaj.zone 16 points 2 months ago

There are plenty of applications for machine learning, logic engines, etc. They've been used in many industries since the 1970s.

[–] mayo@lemmy.world 13 points 2 months ago

This post isn't contributing to a healthy environment in this community.

Well-thought-out claim -> good source -> good discussion

[–] HappyTimeHarry@lemm.ee 13 points 2 months ago* (last edited 2 months ago) (2 children)

LLMs helped me with coding and debugging A LOT. I'd much rather use AI than have to try to parse Stack Exchange and a bunch of other web forums or developer documentation directly. AI is incredible when I get random errors and paste them in to say "fix this", and it does, and tells me HOW and WHY it did what it did.

[–] Excrubulent@slrpnk.net 18 points 2 months ago* (last edited 2 months ago)

I keep seeing programmers use this as an example of what LLMs are good for, and I've seen other programmers say that the people who do that are bad programmers. The latter makes sense because trusting an LLM to do this is to fundamentally misunderstand what your job is and how the LLM works.

The LLM can't tell you HOW or WHY, because it doesn't know those things. It can only give you an approximation of words that sound like someone explaining HOW and WHY. LLMs have no fidelity.

It could be completely wrong, and you wouldn't know because you've admitted you're using the LLM instead of reading the documentation and understanding yourself.

That is so irresponsible. Just RTFM like good programmers have done forever. It's not that much work if you get into the habit of it. Slow down, take the time to understand HOW and WHY to do things yourself, and make quality code rather than cranking out bigger volumes of crap that you don't understand. I'm sure it feels very productive in the moment but you're probably just creating more work for whoever has to clean up your large quantities of poorly thought out code.

[–] pedz@lemmy.ca 5 points 2 months ago

And it only consumes the equivalent in electricity of what an American house uses for a few years.

[–] pedz@lemmy.ca 11 points 2 months ago

ITT: LLMs help me with mundane tasks, so fuck the enormous energy requirements and their impact on the environment!

https://www.forbes.com/sites/bethkindig/2024/06/20/ai-power-consumption-rapidly-becoming-mission-critical/

[–] solsangraal@lemmy.zip 11 points 2 months ago (1 children)

you're leaving out the main question: do they increase profit? YES.

so nothing anyone says matters. prepare your anus

[–] msage@programming.dev 4 points 2 months ago

Does it though?

How long before anyone actually looks up and says the emperor has no clothes?

[–] unexposedhazard@discuss.tchncs.de 11 points 2 months ago (1 children)

I mean, the students around me who would have failed by now without ChatGPT probably DO want it. But they don't actually want the consequences that come with it. The academic world will adapt and adjust, kind of like with inflation: you can just print more money, but that won't actually make everyone richer long term.

[–] ArchRecord@lemm.ee 8 points 2 months ago

I've used LLMs to save hours of time reformatting text and old notes and restructuring explanations so I can better understand and share them, used AI speech-to-text models to transcribe my voice notes, and used diffusion models to generate better-quality mockups for designs that were later commissioned at higher quality, with no need for any changes.

I can understand not liking AI, or not needing it yourself, but acting as if it has no use is frankly ridiculous. You might not use it, but other people do.

I think this says more about corporations' attempts to integrate "AI" into everything, instead of it being a user choice, than it does about the technology itself.

[–] boredsquirrel@slrpnk.net 8 points 2 months ago (2 children)

I am on an internship with really nice people at a company that does sustainable stuff.

But they honestly have a list of AI tools they plan to use to make automated presentations... like wtf?

[–] mayo@lemmy.world 3 points 2 months ago (1 children)

Same at my work, and it's because upper management has tasked middle managers with finding a way to 'use AI'. But when the tool solves a business problem, it really is fantastic.

[–] ClamDrinker@lemmy.world 8 points 2 months ago* (last edited 2 months ago) (3 children)

Yeah... who doesn't love moral absolutism... The honest answer to all of these questions is: it depends.

Are these tools ethical or environmentally sustainable:

AI doesn't consist of just LLMs, which are indeed notoriously expensive to train and run. Using an image generator, for example, can be done on something as simple as a gaming-grade GPU, and other AI technologies are already so lightweight your phone can handle them. Do we assign the same negativity to gaming, even though it's just people using electricity for entertainment? Producing a game also costs a lot more than it does for an end user to play it. It's all about the balance between the two. And yes, AI technologies should rightfully be criticized for being wasteful, such as when they're implemented in places they have no business being, or when efficiency improvements are forgone.

The ethicality of AI is also a deeply nuanced topic with no clear consensus. Nor does every company that works with AI use it in the same way. Court cases are pending, and none have been conclusive thus far. Implying it is one-sided is just incredibly dishonest.

but do they enable great things that people want?

This is probably the silliest one of them all, because AI technologies are groundbreaking in medical research. They seem pivotal to healing the sick people of tomorrow. And creative AIs allow people who are creative to be more creative. But these uses are ignored, shoved to the side because they don't fit the "AI bad" narrative, even though we should be acknowledging them and seeing them as allies against the big companies trying to hoard AI technology for themselves. It is these companies that produce problematic AI, not the small artists, creatives, researchers, or anyone else using AI ethically.

but are they being made by well meaning people for good reasons?

Who, exactly? You must realize there are far more parties than Google, Meta, and Microsoft creating AI, right? Companies and groups you've most likely never heard of are creating open-source AI for everyone to benefit from, not just those hoarding it for themselves. It's just incredibly narrow-minded to assign maliciousness to such a large group of people on the basis of what technology they work with.

Maybe you're not being negative enough

Maybe you are not being open-minded enough, or have been blinded by hate. Because this shit isn't healthy. It's echo-chamber-level behaviour. I have a lot more respect for people who don't like AI but base it on rational reasons. There are plenty of genuinely bad things about AI that have to be addressed, but instead you find yourself in a divide between people who flirt with spreading borderline misinformation to get what they want, and genuine people who simply want their voices and concerns about AI to be heard.

[–] Dramaking37@lemmy.world 4 points 2 months ago

Most AI is being developed to try to sustain the need for content on social networks. The bots are there to make them feel lived-in so they can advertise to you. They are running out of people who are willing to give them free content while they make billions off your art. So then they just replace the artist.
