this post was submitted on 24 Oct 2025
18 points (71.4% liked)

Comradeship // Freechat


And I don't mean stuff like deepfakes/Sora/Palantir/anything like that. I'm talking about why the anti-GenAI crowd isn't providing an alternative where you can get instant feedback when you're journaling

top 50 comments
[–] LeninWeave@hexbear.net 38 points 2 months ago (2 children)

an alternative where you can get instant feedback when you're journaling

GenAI isn't giving you feedback. It's not a person. The entire thing is a social black hole for a society where everyone is already deeply alienated from each other.

[–] queermunist@lemmy.ml 34 points 2 months ago (1 children)

It's a toy. I'm not against toys, but the amount of energy and resources we are pouring into this toy is alarming.

[–] yogthos@lemmygrad.ml 31 points 2 months ago

My impression is that a lot of people realize this tech will be used against them under capitalism, and they feel threatened by it. The real problem isn't with the tech itself, but with capitalist relations, and that's where people should direct their energy.

[–] knfrmity@lemmygrad.ml 31 points 2 months ago (3 children)
  • It's a complete waste of resources
  • The economic fallout of the bubble bursting could be unprecedented. (Yes shareholder value ≠ quality of life, but we've seen how working people get fucked over when the stock market crashes)
  • The environmental fallout is rarely considered
  • The cost to human knowledge and even thinking ability is huge
  • The emotional relationships people form with these models are concerning
  • What's the societal cost of further isolating people?
  • What opportunity cost is there? How many actually useful things aren't being discovered because the big seven are too focused on LLMs?
  • Nobody even wants LLMs. There's no path to profitability. GenAI is a trillion dollar meme.
  • Even when they sometimes generate useful output, LLMs are probabilistic, so their outputs are not reproducible (see the toy sketch after this list)
  • Why do you need instant feedback when you're doing absolutely anything? (Sometimes it's warranted but then talk with a person)
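
On the reproducibility bullet above, here's a toy sketch of why sampled outputs differ from run to run. Pure illustration; the distribution below is invented for the example, not taken from any real model.

```python
# Toy illustration: an LLM defines a probability distribution over the
# next token, and decoding *samples* from it, so outputs vary run to run.
# The distribution here is invented for the example, not from a real model.
import random

next_token_probs = {"good": 0.5, "great": 0.3, "questionable": 0.2}

def sample_token(probs: dict) -> str:
    tokens = list(probs)
    return random.choices(tokens, weights=[probs[t] for t in tokens])[0]

print([sample_token(next_token_probs) for _ in range(5)])
# e.g. ['good', 'great', 'good', 'questionable', 'good'], and a different
# list on the next run unless you pin the RNG seed and every runtime detail.
```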
[–] LeninWeave@hexbear.net 24 points 2 months ago* (last edited 2 months ago) (8 children)

The cost to human knowledge and even thinking ability is huge

100%.

We are communists. We should understand the labor theory of value. Therefore, we should understand why GenAI does not create any new value: it's not a person and it does no labor. It recycles existing knowledge into a lower-average-quality slurry, which is dispersed into the body of human knowledge used to train the next model which is used to produce slop that is dispersed into the... and so on and so forth.

[–] Cowbee@lemmygrad.ml 9 points 2 months ago* (last edited 2 months ago) (11 children)

I don't think that's the point that Marxists who are less anti-AI are making. Liberals might make it, but they reject the LTV. If we apply the law of value to generative AI, then we know it's the same as all machinery: simply crystallized former labor that can lower the socially necessary labor time of certain commodities in certain conditions.

Take, say, a stock image for a PowerPoint slide that illustrates a concept. We can either have people dedicated to making stock images for a broad and unique enough range of situations, and have people search for and select the right image, or we can generate an image or two and be done with it. Side by side, the end products are near-identical, but the labor-time involved in each chain is different. The value isn't higher for the generated image; it lowers the socially necessary labor time for stock images.

We are communists here, and while I do think there's some merit to the argument that misunderstanding the boundaries and limitations of LLMs leads some workers and capitalists to rely on them in situations they can't handle, I also think the visceral hatred I see for AI sometimes clouds people's judgements.

TL;DR AI does have use cases. It isn't creating new value, but it can lower SNLT in certain situations, and we as communists need to properly analyze those rather than dogmatically dismiss it whole-cloth. It's over-applied under capitalism due to the AI bubble, but that doesn't mean it's never usable.

[–] CriticalResist8@lemmygrad.ml 18 points 2 months ago (2 children)

These are all historical problems of capitalism; we need to be able to cut through the veil instead of going around it, and attack the root cause, otherwise we are just reacting to new developments.

[–] 10TH_OF_SEPTEMBER_CALL@hexbear.net 11 points 2 months ago

Most of the harm comes from the hype and social panic around it. We could have treated it as the interesting gadget it is, but the crapitalists thought they'd finally found a way to get rid of human labour and crashed the job market... again

[–] HakFoo@lemmy.sdf.org 25 points 2 months ago (1 children)

What I don't like is that they're selling a toy as a tool, and arguably as the One And Only Tool.

You're given a black box and told to just keep prompting it to get lucky. That's fine for toys like "give me a fresh low-quality wallpaper every morning." or "pretend you're Monkey D. Luffy and write a song from his perspective."

But it's not appropriate for high-stakes work. Professional tools have documented rules, behaviours, and limits. They can be learned and steered reliably because they're deterministic to a fault. They treat the user with respect and prioritize correctness. Emacs didn't wrap compiler errors in breathless sycophantic language. Lotus 1-2-3 didn't decide to replace half the "7"s in your spreadsheet with random katakana because it was close enough. AutoCAD didn't add a spar in the middle of your apartment building because it was statistically probable after looking at airplane wings all day.

[–] CriticalResist8@lemmygrad.ml 12 points 2 months ago (8 children)

I mean, software glitches all the time; some widespread software has long-standing bugs that its developers or even auditors can't figure out, and people just learn to work around them. Photoshop is built on 20-year-old legacy code and also uses non-deterministic algorithms that predate AI (the spot healing brush, for example, which you often have to redo several times to get a different result). I agree that there's a big black-box aspect to LLMs and GenAI (I can't speak for all AI), but I don't think it's necessarily inherent to the tech or means it shouldn't be developed further.

Actually, image AI is surprisingly simple in its methods. Provide it with the exact same inputs (including the seed number) and it will output the exact same image every time, with only very minor variations. Should it have no variations at all? Depends; image-gen AI isn't an engineering tool and doesn't profess to have a 0.1mm margin of error like other machines might need.
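
Concretely, here's a minimal sketch of seed-pinned generation, assuming the Hugging Face diffusers library; the checkpoint name is just an example, any Stable Diffusion model works.

```python
# Minimal sketch: seed-pinned image generation with Hugging Face diffusers.
# The checkpoint name is an example; any Stable Diffusion model works.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The seed fixes the initial latent noise, so the same prompt + seed +
# settings reproduce the same image; minor variation can still come
# from non-deterministic GPU kernels.
generator = torch.Generator("cuda").manual_seed(42)
image = pipe(
    "a container ship at dawn, photorealistic",
    generator=generator,
    num_inference_steps=30,
).images[0]
image.save("ship_seed42.png")
```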

As early as 2023, China used an AI (they didn't say exactly what type) to blueprint the electrical cabling on a new ship model, and it did so with 100% accuracy. It used to take a team of engineers a year to do this; the AI did it in 24 hours. There's a lot of toy aspect to LLMs, but that's also a trap of capitalism, since the toys are what tech companies in startup mode are banking on. It's not all that neural models are capable of doing.

You might be interested to know that the Iranian government recently published guidelines on AI in academia. Unfortunately I don't have a source, as this comes from an Iranian compsci student I know, but they say that if you use LLMs at university, note the specific model used and the time of usage, and can prove you understand the topic, then it's 100% clean by Iranian academic standards.

Iran is investing a lot in tech under heavy sanctions, making everything locally (an estimated 40-50% of all uni degrees in Iran are science degrees). To them, AI is a potential way to improve their conditions in that context, and that's what they're exploring.

[–] KalergiPlanner@lemmygrad.ml 18 points 2 months ago

"And i don’t mean stuff like deepfakes/sora/palantir/anything like that" bro, we don't live in a world where LLMs are excluded from those uses

the technology itself isn't bad, but we live in a shitty capitalist world where every instance of automation, rather than liberating mankind, fucks workers over. a thing that allows one person to do the labor of many is a beautiful thing, but under capitalism increases in productivity only lead to unemployment; though, on the bright side, they consequently also cause a decrease in the rate of profit.

[–] Darkcommie@lemmygrad.ml 18 points 2 months ago (1 children)

Because we can see what it does without proper regulation, and tech companies wildly overhype how much utility it actually has

[–] fox@hexbear.net 18 points 2 months ago

isn't providing an alternative where you can get instant feedback when you're journaling

ELIZA was written in the 60s. It's a natural language processor that's able to have reflective conversations with you. It's not incredible but there's been sixty years of improvements on that front and modern ones are pretty nice.
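
For a sense of how little machinery a "reflective" chatbot needs, here's a minimal ELIZA-style sketch. The patterns are illustrative, not Weizenbaum's actual 1966 script.

```python
# Minimal ELIZA-style reflector: regex rules plus pronoun swapping.
# Illustrative patterns only; not Weizenbaum's original script.
import re

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"because (.*)", "Is that the real reason?"),
    (r"(.*)", "Tell me more."),
]

def reflect(fragment: str) -> str:
    # Swap first/second person so "my journaling" echoes back as "your journaling".
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        m = re.match(pattern, text.lower().strip(".!?"))
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I feel stuck with my journaling"))
# -> Why do you feel stuck with your journaling?
```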

Otherwise, LLMs are a probabilistic tool: the input doesn't determine the output. This makes them useless at the thing tools are good at, which is repeatable results based on consistent inputs. They generate text with an authoritative voice, but domain experts consistently find that they're wrong more often than they're right, which makes them unsuitable as automation for white-collar jobs that require any degree of precision.

Further, LLMs have been demonstrated to degrade thinking skills, memory, and self-confidence. There are published stories about LLMs causing latent psychosis to manifest in vulnerable people, and LLMs have encouraged suicide. They present a social harm which cannot be justified by their limited use cases.

Sociopolitically, LLMs are being pushed by some of the most evil people alive, and their motives must be questioned. You'll find oceans of press about all the things LLMs can do that are fascinating or scary, such as the TaskRabbit story (which was entirely fabricated). The media is complicit in the image that LLMs are more capable than they are, or that they may become more capable in the future and thus must be invested in now.

[–] CoreComrade@lemmygrad.ml 18 points 2 months ago (1 children)

For myself, it is the projected environmental impact. The power demand for data centers has already been on the rise due to the growth of the internet. With the addition of AI and the training thereof, the amount of power is rising/will rise at an unsustainable rate. The amount of electricity used creates strain on existing power grids, the amount of water that goes into cooling the hardware for the data centers creates strain on water supply, and this all plays into a larger amount of carbon emissions.

Here is a good link that speaks to the environmental impact: genAI Environmental Impact

Beyond the above, the threat of people losing jobs within an already brutal system is a bit terrifying to me, though others here have already written about this at more length.

[–] CriticalResist8@lemmygrad.ml 13 points 2 months ago* (last edited 2 months ago)

We have to be careful how we wield the environmental arguments. First, they're often used to demonize Global South countries that are developing. Many of these countries completely skipped the personal-computer step and are heavy consumers of smartphones and 4G data, because those arrived around the time they could begin to afford the infrastructure (it's why China is already developing 6G). People make a lot of arguments against smartphones (how the materials for them are produced, how you have to recharge a battery, how they get disposed of, how much electricity 5G consumes, etc.), but if they didn't have smartphones, these countries would simply not have the internet.

edit: putting it all under the spoiler dropdown because I ended up writing an essay anyway lol.

environmental arguments

Second, regarding the environmental impact of LLMs specifically: it really depends, and it can already be mitigated. I'll try not to make a huge comment because I don't want to write an essay, but the source's claims need scrutiny. Everything consumes energy; even our human bodies release GHG. Going to work requires energy, and using a computer for work requires energy too. If AI can do in 10 seconds what takes a human 2 hours, then you are certainly saving energy, if that's the only metric we're worried about.

So it has to be relativized, which most AI environmental articles don't do. A ChatGPT prompt consumes five times more electricity than a Google search, sure, but in absolute terms that's still a vanishingly small amount of energy. Watching YouTube also consumes energy; a minute of YouTube consumes much more than an LLM query does.

Some people will say that we need to stop watching Youtube, no more treats or fun for workers, which is obviously not something we take seriously (deleting your emails to make room in data centers was a huge thing on linkedin a few years ago too).

And all of this pales in comparison to the fossil fuel industry that we keep pumping money into in the west or obsolete tech that does have greener alternatives but we keep forcing on people because there's money to be made.

edit - and the meat and animal industry... Beef is very water-intensive and polluting; AI isn't even close. If that's the metric, then those who can should become vegan.

Likewise for the water usage: there was that article about Texas telling people to take fewer showers because it needed the water for data centers. I don't know if you saw it at the time; it went viral on social media. It was a satirical article against AI that people then used as a serious argument. Texas never said to take fewer showers, and these data centers don't use a lot of water at all as a share of total consumption in their respective geographical areas. In the US a bigger problem imo is the damming of the Colorado River so that almost no water reaches Mexico downstream, while the water is given out to farmers for free in arid regions so they can grow water-intensive crops like rice or dates (and US dates don't even taste good)

It also has sort of an anti-civ conclusion... Everything consumes energy and emits pollution, so the most logical conclusion is to destroy all technology and go back to living like the 13th century. And if we can keep some technology how do we choose between AI and Youtube?

Rather, I believe investments in research make things better over time, and this is the case for AI too (we would also have much better, safer nuclear power plants if we had kept investing in research instead of giving in to fearmongering and halting progress, but I digress). I changed a lot of my point of view on environmentalism back in 2020, when people were protesting against 5G because "microwaves" and "we don't need it". I was on board (4G was plenty fast) until I saw how in some places 5G is used for remote surgery, a great thing they couldn't do with 4G because there was too much latency. A doctor in China with 6G could perform remote surgery on a child in the Congo.

In China electricity is considered a solved problem; at any time the grid has 2-3x more energy than it needs. The west has decided to stop investing in public projects and instead concentrate all surplus value in the hands of a select few. We have stopped building housing, we stopped building roads and rail, but we find the money to build datacenters that could be much greener, but why would they be when that costs money and there's no laws that mandate it?

Speaking of China, they still use a lot of coal (comparatively speaking), but they see it as just an outdated means of energy production that can be replaced by newer, better alternatives. It's very different: they're doing a lot of solar and wind (in the west, btw, Chinese solar panels are tariffed to hell and back; if they weren't, every single building in Europe would be equipped with them) and even pioneering new methods of energy production and storage, like the sodium battery or gravity storage. Gravity battery storage (raising and lowering heavy blocks of concrete over the day) is not necessarily Chinese, but in Europe it's still just a prototype, while in China they're already building them as part of their energy strategy. They don't demonize coal as uniquely evil like liberals might; once they're able to, they'll ditch it because there are better alternatives now.

In regards to AI in China, there have been a few articles posted on the grad and it's promising. They are careful about efficiency because they have to be. I don't know if you saw the article from a few days ago about Alibaba Cloud cutting the number of GPUs needed to host their model farm by 82%. The test was done on NVIDIA H20 cards, which is no coincidence: it's the best China can get by US decree. The top-of-the-line model is the H100 (the H20 has only 20% of its capabilities), but the US has an order not to export anything above the H20 to China, so they find creative ways to stretch it. And now they're developing their own GPU industry, and the US shot itself in the foot again.

Speaking of model farms... it's totally possible to run models locally. I have a 16GB GPU and I can generate realistic pictures (if that's the benchmark) in 30 seconds; the model only needs 5GB of VRAM, though the architecture inside the card also matters for speed. For LLM generation I can run 12B models, rarely higher, and with new efficiency algorithms I think that will stretch to bigger and bigger models over time, all on the same card. They run model farms for the cloud service because so many people connect to it at the same time, but a farm is not a hard requirement for running LLMs. In another comment I mentioned how Iran is interested in LLMs because, like 4G and other modern tech that lags a bit in the west, they see it as a way to stretch their material conditions (being heavily sanctioned economically).
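
A minimal local-generation sketch, assuming the Hugging Face transformers library; the model name is only an example of a ~12B-class open-weight model, and quantization or CPU offload may be needed to fit in 16GB of VRAM.

```python
# Sketch: running an open-weight LLM locally on a consumer GPU with
# Hugging Face transformers. The model id is an example, not a recommendation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Nemo-Instruct-2407"  # example ~12B model
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halves memory vs float32
    device_map="auto",          # offloads layers to CPU if VRAM runs out
)

prompt = "Summarize the labor theory of value in two sentences."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=80)
print(tok.decode(out[0], skip_special_tokens=True))
```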

There's also stuff being done in the open-source community. For example, LoRAs are used in image generation to skew the output towards a certain result. This means you don't need to train a whole model: LoRAs are usually trained by people on their own machines with something like 100 images, and training one can take as little as 30 minutes. So what we see is comparatively few companies/groups making full models (either LLM or image gen, called checkpoints) and most people making finetunes for those models.
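
Structurally, a LoRA just bolts small trainable matrices onto a frozen base model, which is why it's cheap to train. A hedged sketch with the peft library; the base model and hyperparameters are illustrative.

```python
# Sketch: wrapping a base model with LoRA adapters via the peft library.
# Only the small adapter matrices are trained; names and values illustrative.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in base model

config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the update
    target_modules=["c_attn"],  # which layers get adapters (GPT-2 naming)
    lora_dropout=0.05,
)
model = get_peft_model(base, config)
model.print_trainable_parameters()
# Reports trainable params as a tiny fraction of the total, which is why
# finetuning a LoRA is feasible on a consumer GPU while training a full
# checkpoint is not.
```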

Meanwhile in the West there's a $500 billion "plan" to invest in the big tech companies that already have a ton of money; that's the best they can muster. Give them unlimited money and expect that they won't act like everything is unlimited. DeepSeek actually came out shortly after that plan (called Stargate) and I think pretty much killed it before it even took off lol. It's the destiny of capitalism to con the government into giving them money; of course they were not going to say "actually, if we put in some of our own investment we could make a model that uses 5x less energy", because then they would not get $500 billion. They also don't care about the energy grid; that's an externality for them, and the government will take care of it, from their pov.

Anyway it's not entirely a direct response to your comment because I'm sure you don't believe in all the fearmongering, but it's stuff I think is important to keep in mind and I wanted to add here. And I ended up writing an essay anyway lol.

[–] infuziSporg@hexbear.net 17 points 2 months ago (13 children)

Why would you want instant feedback when you're journaling? The whole point of journaling is to have something that's entirely your own thoughts.

[–] big_spoon@lemmygrad.ml 16 points 2 months ago (2 children)

there's the people who hate it bc they have petit-bourgeois leanings and think of the stuff as "stealing content" and "copyrighted material", like artists, "code monkeys" or writers

and there's the people that hate it because it's an obvious grift made to siphon resources and "try" to be a big replacement for proles, and a hugely wasteful technology that drains water sources and raises electricity bills with its data centers

yeah, it's kinda useful for making a drawing, filling blank space in a document, or being a dumb assistant that hallucinates to pretend it knows stuff

[–] LeninWeave@hexbear.net 11 points 2 months ago (5 children)

there's the people who hate it bc they have petit-bourgeois leanings and think of the stuff as "stealing content" and "copyrighted material", like artists

It's actually not petty bourgeois for proletarians in already precarious positions to object to the blatant theft of their labor product by massive corporations to feed into a computer program intended to replace them (by producing substandard recycled slop). Even in the cases where these people are so-called "self-employed" (usually not actually petty bourgeois, but rather precarious contract labor), they're still correct to complain about this - though the framing of "copyrighted material" is flawed (you can't use the master's tools to dismantle his house). No offense, but dismissing them like this is a bad take. I agree with the rest of your comment.

[–] ZWQbpkzl@hexbear.net 15 points 2 months ago (4 children)

GenAI really is taking people's jobs. It might not do it better. It might be less safe. It might even be less cost efficient. It's still happening.

It's not even a case of "Do you think you can be replaced by AI?" It's "Does your employer think you can be replaced with AI?" Any white-collar worker would be foolish to think that's something their employer has not considered. Corporate advertising is pleading with them to reconsider multiple times a day.

[–] CriticalResist8@lemmygrad.ml 9 points 2 months ago (4 children)

Exactly, and this process keeps happening under capitalism, which makes AI neither unique nor truly new in its social repercussions. The answer, therefore, is socialism, so that tech frees us instead of creating crises.

Although the companies that bought into the promise of replacing labor are now walking it back as they realize AI doesn't replace labor so much as enhance it. It's like Zuckerberg not allowing his kids on Facebook: AI companies aren't replacing their own employees with AI either, but they sell the package because capitalism needs to make money, not social good.

[–] ZWQbpkzl@hexbear.net 15 points 2 months ago

crowd isn't providing an alternative where you can get instant feedback when you're journaling

Sidebar: this is a very specific usage of GenAI. Are you, like, writing your diary into ChatGPT?

[–] GreatSquare@lemmygrad.ml 14 points 2 months ago (8 children)

It's not feedback. That's not what the tool is for. It doesn't have an opinion. There's no one on the other side of the screen. The "A" stands for Artificial.

[–] Twongo@lemmy.ml 11 points 2 months ago

genai turned the internet into a hellhole. nothing is genuine. information became worthless. facts don't matter anymore.

it carries itself into the world outside the internet. slopaganda, decision making and policymaking are affected by genai and will make your life actively worse.

welcome to the post-fact world where you can't even trust yourself.

[–] darkernations@lemmygrad.ml 11 points 2 months ago* (last edited 2 months ago)

This is the correct take from an ML perspective (essentially an extension of the fact that we should not lament the weaver for the loom):

https://redsails.org/artisanal-intelligence/

The problem is not the technology per se (criticisms such as energy consumption or limitations of the tools just mean there's room for improvement in the tech or in how we use it) but capitalism. If you want a flavour of opinions on this, click on my username and order comments by most controversial for the relevant threads.

Artisans who claim they are for Marxist proletarian emancipation but fear the socialisation of their own labour will need to explain why their take is not Proudhonist.

That post really is an excellent article in truly understanding the Marxist critique of reaction and bourgeoisie mindsets. Another one that people here should read along with it is Stalin’s Shoemaker; it highlights the dialectical materialist journey of a worker developing revolutionary potential:

https://redsails.org/stalins-shoemaker/

Class consciousness means understanding where one is in the cog of the machine and not being upset because one wasn’t proletariat enough. This is meant to be Marxism not vibes-based virtue signaling.

Meanwhile in a socialist country: China’s AI industry thrives with over 5,300 enterprises https://lemmygrad.ml/post/9357646

Marxism is a science. People should treat it as such and take the opportunity to study and learn, to develop their human potential beyond what our societies consider acceptable.

https://lemmygrad.ml/post/9364892/7113860

[–] Shinji_Ikari@hexbear.net 11 points 2 months ago (2 children)

It has a flattening effect. The things that come out the other end don't sound human. They sound like the collective mouth of reddit and blog spam.

I don't know why you'd use it for journaling. What feedback do you even need for journaling? Shouldn't it be your own thoughts, and not your thoughts filtered through a disembodied machine of averages?

[–] ksynwa@lemmygrad.ml 10 points 2 months ago

There is stuff like spellcheck and LanguageTool which can give you a specific variety of feedback.
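
For the journaling use case specifically, here's a sketch of deterministic feedback via the language_tool_python wrapper (it requires Java and downloads the LanguageTool server on first run; the journal text is made up).

```python
# Sketch: deterministic writing feedback without an LLM, using the
# language_tool_python wrapper around LanguageTool. Requires Java;
# the server is downloaded on first use. Example text is invented.
import language_tool_python

tool = language_tool_python.LanguageTool("en-US")
entry = "I has written in my journal every day this week."

for match in tool.check(entry):
    print(match.ruleId, "->", match.message)
    print("  suggestions:", match.replacements[:3])
# Flags the subject-verb agreement error and suggests "have";
# the same input always produces the same feedback.
```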

[–] rainpizza@lemmygrad.ml 9 points 2 months ago* (last edited 2 months ago) (4 children)

Those people are usually Westerners who take the easy route, which is to blame a tool for the issues caused by capitalism.

However, if you look beyond the small western world into countries like China, Cuba, Vietnam, and others in the Global South, AI, including GenAI, is celebrated. You can find plenty of content on Xiaohongshu with comments fascinated by people's inventions.

One example is this song, created by a person who used AI in its production:

This is another, where someone produced a video about neoliberalism to educate:

There is even a YouTube channel called Dialectical Fire that posts incredible content using AI.

All I know is that this new form of luddism will dissipate into history like the luddism of the past century.

[–] LeninWeave@hexbear.net 19 points 2 months ago* (last edited 2 months ago) (3 children)

All I know is that this new form of luddism will dissipate into history like the luddism of the past century.

You're aware that the luddites were correct, right? They weren't vulgar technology-haters; they had valid concerns about their pay and the quality of the products produced (actually an excellent comparison to many people who oppose LLMs), which turned out to be accurate. The idea of luddites as you use it here is explicitly liberal propaganda used to smear labor movements for expressing valid concerns, and they didn't dissipate into history; there were and are subsequent similar labor movements.

[–] yogthos@lemmygrad.ml 12 points 2 months ago (20 children)

The point is that even though the luddites' concerns were correct, their methods were not. That's why they failed. Now people are trying to do the same things that we know don't work.

[–] sleeplessone@lemmy.ml 9 points 2 months ago (4 children)

The luddites were dead fucking wrong. Instead of seizing the means of production, they thought smashing them would solve their woes. It doesn't matter that the luddites were skilled machine operators with a rudimentary form of class consciousness; their understanding of the issue was idealist and therefore opposed to Marxism. Luddism is liberalism.

[–] BarrelsBallot@lemmygrad.ml 9 points 2 months ago (1 children)

Why would you want to outsource one of the last vestiges of being a human we have left (thinking) to a 3rd party of any kind?

I don't care if it's an AI or an underprivileged person in another region of the world, get that shit out of here. The internet and similar tools of isolation are bad enough, now we're being handed keys to an artificial friend keen on severing our social connections and ability to think on our own.

[–] bennieandthez@lemmygrad.ml 9 points 2 months ago

People fear they're gonna lose a job that consists 99% of sending and receiving emails and doing Zoom meetings. They know their job is bullshit and replaceable.

[–] Pieplup@lemmygrad.ml 8 points 2 months ago (10 children)

The Kavernacle has videos on this. He talks about how it's eroding emotional connection in society and getting people to offload their thinking onto ChatGPT. I think this is a problem, but the issue I'm most passionate about is misinformation.

In the process of writing this post I did an experiment and asked it some questions about autism. I asked what autistic burnout is. It gave an explanation that's incorrect, and that furthers the mistaken assumption a lot of people make that it's something specific to autistic people, when it's actually a wider phenomenon of physiological neurocognitive burnout. I confronted it on this, it refined its position, and then I asked why it had said it. It constantly contradicts itself and will just go "yeah, you are correct, I was wrong" while going on to repeat the same incorrect claim. https://i.imgur.com/KINH7lV.png https://i.imgur.com/EHtDwNj.png According to ChatGPT, its own sentence contradicts itself. It also proceeded to invent a new usage of a very obscure medical term that is not widely used, then tried to gaslight me into believing it's a term commonly used among autistic people when it isn't. https://i.imgur.com/LStZdNg.png

And what frustrates me even more is that a couple of months ago I had someone swear to me up and down that the hallucinations in ChatGPT were fixed and they aren't that bad anymore. Granted, they were far worse in the past. But it literally told me the autism level system was something that no longer exists, despite it being in wide use today.

But here's the problem. I am an expert on this topic. Most people aren't asking chatgpt questions about things they are an expert in, and they also are using it as a therapist.

All in all, I wasn't expecting it to have no hallucinations, but I was at least expecting them to no longer be a massive issue in basic information retrieval on topics that aren't even super obscure and about which information is widely available.

Ultimately, here's the issue. The vast majority of pro-GenAI people don't know what GenAI actually is, and as a result don't see why it's bad to use it the way they do. GenAI is a very advanced form of predictive text. It just predicts what it thinks the words following the query are, based on the terabytes, maybe even petabytes, of information it has scraped from the internet. Which means it's not really useful for anything beyond very basic things, like asking it to generate simple ideas, summarize an article or video, or do very basic coding. I only dabble very lightly in programming, but from what I've heard experienced programmers say, trying to use ChatGPT for major coding just means having to rewrite most of the code.

[–] robot_dog_with_gun@hexbear.net 8 points 2 months ago (2 children)

lol

do not use an LLM for whatever the heck you think you're doing with it.

[–] chgxvjh@hexbear.net 8 points 2 months ago* (last edited 2 months ago) (4 children)
  • Bureaucratic nightmare

GenAI can be kind of useful when you use it deliberately. But it will be used to make anything bureaucratic an even bigger nightmare. In after-sales customer support, talking to humans in a call center at least incurs costs for the corporation; with GenAI they can keep you in the loop forever for cents.

Unemployment claims, immigration, disability pay, hiring are also all made worse by AI.

  • Devaluing human labor

They are coming for our jobs. Or at least they're making our jobs worse.

  • Waste of resources

Energy, water, computation ...

I think this is one of the weaker arguments tbh.

  • Corporations get away with blatant mass theft of intellectual property

  • Destruction of social reasoning

Science & academia already was in a bad spot with reproducibility crisis, fake/bad studies. Now this is automated.

Instead of letting humans do creative work, too much attention will be taken up reviewing slop.

This problem also exists in social and traditional media.

People also put a lot of implicit trust in AI answers when the answer might just be based on a whitewashed shitpost or wrong for other reasons. With web search it's easier to judge for yourself whether a source is to be trusted.

  • People letting AI control their lives

This will get worse in the future, either when companies learn how to manipulate datasets to get ahead (similar to search engine optimization) or when AI companies just straight up place advertisements in AI answers.

  • Destruction of human connection

People replacing their human friends with AI friends and partners isn't healthy.
