genAI tries to computerise the only thing we can truly call human: abstract thought and creativity. So it's bad because it feels cold and inhuman, and it doesn't even do that job well.
Why would you want to outsource one of the last vestiges of humanity we have left (thinking) to a third party of any kind?
I don't care if it's an AI or an underprivileged person in another region of the world, get that shit out of here. The internet and similar tools of isolation are bad enough, now we're being handed keys to an artificial friend keen on severing our social connections and ability to think on our own.
I think about it too; I only ask the robot after I've already thought about it.
genai turned the internet into a hellhole. nothing is genuine. information became worthless. facts don't matter anymore.
it carries over into the world outside the internet too: slopaganda, decision-making and policymaking are all affected by genai, and it will make your life actively worse.
welcome to the post-fact world where you can't even trust yourself.
The Kavernacle has videos on this. He talks about how it's eroding emotional connection in society and having people offload their thinking onto chatgpt. I think this is a problem, but the issue I'm most passionate about is misinformation. In the process of writing this post I did an experiment and asked it some questions about autism. I asked it what autistic burnout is. It gave an explanation that's incorrect, and furthers the incorrect assumption a lot of people make that it's something specific to autistic people, when it's actually a wider phenomenon of physiological neurocognitive burnout. I confronted it on this, it refined its position, and then I asked it why it said that. It constantly contradicts itself and will just go "yeah, you are correct, I am wrong," then keep making the same incorrect claim anyway. https://i.imgur.com/KINH7lV.png https://i.imgur.com/EHtDwNj.png According to chatgpt, its own sentence contradicts itself. It also proceeded to invent a new usage of a very obscure medical term that is not widely used, then tried to gaslight me into believing it's a commonly used term among autistic people when it isn't https://i.imgur.com/LStZdNg.png
And what frustrates me even more is that a couple months ago I had someone swear to me up and down that the hallucinations in chatgpt were fixed and they ain't that bad anymore. Granted, they were far worse in the past. It literally told me the autism level system was something that no longer exists, despite it being currently widely used.
But here's the problem. I am an expert on this topic. Most people aren't asking chatgpt questions about things they are an expert in, and they also are using it as a therapist.
All in all, I wasn't expecting it to have no hallucinations, but I was at least expecting hallucination to not still be a massive issue in basic information retrieval on topics that aren't even super obscure and that information is widely available about.
Ultimately here's the issue. The vast majority of pro-genai people don't know what genAI actually is, and as a result they don't know why it's bad to use it the way they are. GenAI is a very advanced form of predictive text. It just predicts what it thinks the words following that query should be, based on the terabytes, maybe even petabytes, of information it has scraped from the internet. Which means it's not really useful for anything beyond very basic things, like asking it to generate simple ideas, summarize an article or video, or do very basic coding. I only dabble very lightly in programming, but from what I've heard actual experienced programmers say, trying to use chatgpt for major coding just means having to rewrite most of the code.
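To make the "advanced predictive text" point concrete, here's a deliberately tiny sketch of next-word prediction. A real LLM replaces the lookup table with a neural network over billions of parameters, but the core loop of "sample the next word from observed frequencies" is the same idea; the corpus and everything else here is made up for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a corpus,
# then sample a continuation from those frequencies.
corpus = "the cat sat on the mat and the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    counts = follows[word]
    if not counts:                       # dead end: no observed continuation
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]  # probabilistic, not fixed

word = "the"
for _ in range(6):
    word = predict_next(word)
    print(word, end=" ")
print()
```

Run it twice and you get two different "plausible" continuations, which is also why the same prompt gives different answers each time.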
It's a toy. I'm not against toys, but the amount of energy and resources we are pouring into this toy is alarming.
My impression is that a lot of people realize this tech will be used against them under capitalism, and they feel threatened by it. The real problem isn't with the tech itself, but with capitalist relations, and that's where people should direct their energy.
GenAI is the highest form of commodification of culture so far. It treats all text, images, videos, songs, speech and all other forms of organic cultural expression as slop to be generated over and over without its original context. It provides little to no serious improvement in industry, and is only propped up despite no profits due to either artificial growth in internet platforms or unrealistic expectations from the AGI folks.
And it's inefficient. We could easily have more therapists rather than wasteful chatbots that cost billions. Such technology can only exist as a bandage over the ailments of neoliberalism, and is not a solution to anything. And that's not even going into the worsening impact of cultural imperialism due to the tendency of these models to reproduce Northwestern cultural hegemony.
The alternative is actually pretty simple: measures to lower unemployment. Most capitalist countries have issues with unemployment or underemployment. And most tasks of Gen"AI" can be done by paid humans quite well, possibly even at lower cost than what the informatics cartel is tanking in order to ride the bubble.
Human labour is what produces value. All else is secondary.
Why would you want instant feedback when you're journaling? The whole point of journaling is to have something that's entirely your own thoughts.
Because we can see what it does without proper regulation, and tech companies also heavily overhype how much utility it actually has.
"And i don’t mean stuff like deepfakes/sora/palantir/anything like that" bro, we don't live in a world where LLMs are excluded from those uses
the technology itself isn't bad, but we live in a shitty capitalist world where every instance of automation, rather than liberating mankind, fucks them over. a thing that can allow one person to do the labor of many is a beautiful thing, but under capitalism increases of productivity only lead to unemployment; though, on the bright side, it consequently also causes a decrease in the rate of profit.
an alternative where you can get instant feedback when you're journaling
GenAI isn't giving you feedback. It's not a person. The entire thing is a social black hole for a society where everyone is already deeply alienated from each other.
For myself, it is the projected environmental impact. The power demand for data centers has already been on the rise due to the growth of the internet. With the addition of AI and the training thereof, the amount of power is rising/will rise at an unsustainable rate. The amount of electricity used creates strain on existing power grids, the amount of water that goes into cooling the hardware for the data centers creates strain on water supply, and this all plays into a larger amount of carbon emissions.
Here is a good link that speaks to the environmental impact: genAI Environmental Impact
Beyond the above, the threat of people losing jobs within an already brutal system is a bit terrifying to me, though others have already written at more length here regarding this.
We have to be careful how we wield the environmental arguments. In the first phase, they're often used to demonize Global South countries that are developing. Many of these countries completely skipped the personal computer step and are heavy consumers of smartphones and 4G data, because that's what arrived around the time they could begin to afford the infrastructure (it's why China is developing 6G already). There are a lot of arguments people make against smartphones (how the materials for them are produced, how you have to recharge a battery, how they get disposed of, how much electricity 5G consumes, etc.), but if they didn't have smartphones, these countries would just not have the internet.
edit: putting it all under the spoiler dropdown because I ended up writing an essay anyway lol.
environmental arguments
In the second phase, in regards to LLM environmental impact, it really depends, and it can already be mitigated. I'll try not to make a huge comment because I don't want to write an essay, but the source's claims need scrutiny. Everything consumes energy; even we as human bodies release GHG. Going to work requires energy, and using a computer for work requires energy too. If AI can do in 10 seconds what takes a human 2 hours, then you are certainly saving energy, if that's the only metric we're worried about.
So it has to be relativized, which most AI environmental articles don't do. A chatGPT prompt consumes five times more electricity than a google search, sure, but in absolute terms that's a trivial amount of energy. Watching Youtube also consumes energy; a minute of youtube consumes much more energy than an LLM query does.
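A back-of-envelope sketch of the scale, just to relativize it. The per-search figure below is a commonly cited ballpark, not a measurement, and everything here is an assumption for illustration; only the "five times a search" ratio comes from the claim above.

```python
# All figures are placeholder assumptions for scale, not data.
google_search_wh = 0.3                  # assumed energy per web search, in Wh
llm_query_wh = 5 * google_search_wh     # the "five times a search" ratio above
kettle_wh = 100                         # ~0.1 kWh to boil a litre of water

print(f"one LLM query  ~ {llm_query_wh:.1f} Wh")
print(f"one kettle boil ~ {kettle_wh} Wh, "
      f"or about {kettle_wh / llm_query_wh:.0f} queries")
```

Under these assumptions, boiling one kettle of water costs on the order of dozens of prompts, which is the kind of comparison most articles never make.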
Some people will say that we need to stop watching Youtube, no more treats or fun for workers, which is obviously not something we take seriously (deleting your emails to make room in data centers was a huge thing on linkedin a few years ago too).
And all of this pales in comparison to the fossil fuel industry that we keep pumping money into in the west or obsolete tech that does have greener alternatives but we keep forcing on people because there's money to be made.
edit - and the meat and animal industry... Beef is very water-intensive and polluting; AI isn't even close. If that's the metric, then those who can should become vegan.
Likewise for the water usage: there was that article about texas telling people to take fewer showers because it needs the water for data centers. I don't know if you saw it at the time; it went viral on social media. It was a satirical article against AI that people then used as a serious argument. Texas never said to take fewer showers, and these datacenters don't use a lot of water at all as a share of total consumption in their respective geographical areas. In the US a bigger problem imo is the damming of the Colorado River so that almost no water reaches Mexico downstream, while the water is given out to farmers for free in arid regions so they can grow water-intensive crops like rice or dates (and US dates don't even taste good).
It also has sort of an anti-civ conclusion... Everything consumes energy and emits pollution, so the most logical conclusion is to destroy all technology and go back to living like the 13th century. And if we can keep some technology how do we choose between AI and Youtube?
Rather, I believe investments in research make things better over time, and this is the case for AI too (we would also have much better, safer nuclear power plants if we had kept investing in research instead of giving in to fearmongering and halting progress, but I digress). I changed a lot of my point of view on environmentalism back in 2020, when people were protesting against 5G because "microwaves" and "we don't need it". I was on board (4G was plenty fast enough) until I saw how in some places they use 5G for remote surgery, which is a great thing they couldn't do with 4G because there was too much latency. A doctor in China with 6G could perform remote surgery on a child in the Congo.
In China electricity is considered a solved problem; at any time the grid has 2-3x more energy than it needs. The west has decided to stop investing in public projects and instead concentrate all surplus value in the hands of a select few. We have stopped building housing, we stopped building roads and rail, but we find the money to build datacenters that could be much greener, but why would they be when that costs money and there's no laws that mandate it?
Speaking of China, they still use a lot of coal (comparatively speaking), but they also see it as just an outdated means of energy production that can be replaced by newer, better alternatives. It's very different: they're doing a lot of solar and wind (in the west, btw, chinese solar panels are tariffed to hell and back; if they weren't, every single building in europe would be equipped with solar panels) and even pioneering new methods of energy production and storage, like the sodium battery or gravity storage. Gravity battery storage (raising and lowering heavy blocks of concrete over the day) is not necessarily Chinese, but in Europe it's still just a prototype, while in China they're already building them as part of their energy strategy. They don't demonize coal as uniquely evil like liberals might; rather, once they're able to, they'll ditch it because there are better alternatives now.
In regards to AI in China, there have been a few articles posted on the grad and it's promising. They are careful about efficiency because they have to be. I don't know if you saw the article from a few days ago about Alibaba Cloud cutting the number of GPUs needed to host their model farm by 82%. The test was done on NVidia H20 cards, which is not a coincidence: it's the best China can get by US decree. The top of the line model is the H100 (the H20 has only 20% of its capabilities), but the US has an order not to export anything above the H20 to China, so they find creative ways to stretch it. And now they're developing their own GPU industry, and the US shot itself in the foot again.
Speaking of model farms... it's totally possible to run models locally. I have a 16GB GPU and I can generate realistic pictures (if that's the benchmark) in 30 seconds; the model only needs 5GB of VRAM, but the architecture inside the card also matters for speed. For LLM generation I can run 12B models, rarely higher, and with new efficiency algorithms I think that will stretch to bigger and bigger models over time, all on the same card. They run model farms for the cloud service because so many people connect to it at the same time, but it's not a hard requirement for running LLMs. In another comment I mentioned how Iran is interested in LLMs because, like 4G and other modern tech that lags a bit in the west, they see it as a way to stretch their material conditions further (being heavily sanctioned economically).
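For anyone curious what "running locally" looks like in practice, here's a minimal sketch using the llama-cpp-python library. The model file path is a placeholder; the point is that a 4-bit quantized model in the ~12B range fits comfortably on a 16GB consumer card.

```python
from llama_cpp import Llama

# Load a quantized GGUF model from disk (path is a placeholder).
llm = Llama(
    model_path="./models/some-12b-model.Q4_K_M.gguf",
    n_gpu_layers=-1,   # offload every layer to the GPU
    n_ctx=4096,        # context window size
)

out = llm("Summarize the labor theory of value in two sentences.",
          max_tokens=128)
print(out["choices"][0]["text"])
```

No cloud service, no datacenter; once the weights are downloaded this runs entirely offline.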
There's also stuff being done in the open source community. For example, LoRAs are used in image generation and help skew the generation towards a certain result. This means you don't need to train a whole model; LoRAs are usually trained by people on their own machines with around 100 images, and training one can take as little as 30 minutes. So what we see is comparatively few companies/groups making full models (either LLMs or image-gen models, called checkpoints) and most people making finetunes for those models.
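Mechanically, a LoRA just injects small low-rank adapter matrices into a few layers and trains only those, which is why it's so cheap. A sketch with Hugging Face's peft library; the base model and target modules here are illustrative placeholders, not a recipe.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in base model

config = LoraConfig(
    r=8,                        # rank of the adapter matrices
    lora_alpha=16,              # scaling factor for the adapter output
    target_modules=["c_attn"],  # which layers get adapters (GPT-2's attention)
    lora_dropout=0.05,
)
model = get_peft_model(base, config)

# Only the adapter weights train; typically well under 1% of the base model.
model.print_trainable_parameters()
```

The full checkpoint stays frozen, so the thing you share afterwards is just the tiny adapter file.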
Meanwhile in the West there's a $500 billion "plan" to invest in the big tech companies that already have a ton of money; that's the best they can muster. Give them unlimited money and expect that they won't act like everything is unlimited. Deepseek actually came out shortly after that plan (called Stargate) and I think pretty much killed it before it even took off lol. It's the destiny of capitalism to con the government into giving them money; of course they were not going to say "no, actually, with some of our own investment we could make a model that uses 5x less energy", because then they would not get the $500 billion. They also don't care about the energy grid; that's an externality for them, something the government will take care of, from their pov.
Anyway it's not entirely a direct response to your comment because I'm sure you don't believe in all the fearmongering, but it's stuff I think is important to keep in mind and I wanted to add here. And I ended up writing an essay anyway lol.
isn't providing an alternative where you can get instant feedback when you're journaling
ELIZA was written in the 60s. It's a natural language processor that's able to have reflective conversations with you. It's not incredible, but there have been sixty years of improvements on that front, and modern ones are pretty nice.
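For flavour, the entire ELIZA trick fits in a few lines: match a keyword pattern, swap the pronouns, echo it back as a question. A stripped-down sketch (the patterns here are my own toy examples, not Weizenbaum's original script):

```python
import re

# Swap first-person words for second-person ones before echoing back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(text):
    m = re.match(r"i feel (.*)", text, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    m = re.match(r"i am (.*)", text, re.IGNORECASE)
    if m:
        return f"How long have you been {reflect(m.group(1))}?"
    return "Tell me more."

print(respond("I feel like my work owns me"))
# -> Why do you feel like your work owns you?
```

No model, no training data, no datacenter; it already feels "reflective" because the user supplies all the substance.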
Otherwise, LLMs are a probabilistic tool: the input doesn't determine the output. This makes them useless at the thing tools are good at, which is repeatable results from consistent inputs. They generate text with an authoritative voice, but domain experts find that they're wrong more often than they're right, which makes them unsuitable as automation for white-collar jobs that require any degree of precision.
Further, LLMs have been demonstrated to degrade thinking skills, memory, and self-confidence. There are published stories about LLMs causing latent psychosis to manifest in vulnerable people, and LLMs have encouraged suicide. They present a social harm which cannot be justified by their limited use cases.
Sociopolitically, LLMs are being pushed by some of the most evil people alive, and their motives must be questioned. You'll find oceans of press about all the things LLMs can do that are fascinating or scary, such as the TaskRabbit story (which was fabricated entirely). The media is complicit in the image that LLMs are more capable than they are, or that they may become more capable in the future and thus must be invested in now.
People fear that they're gonna lose their job that consists 99% of sending and receiving emails and doing zoom meetings. They know their job is bullshit and replaceable.
This is the correct take from an ML perspective (essentially an extension of the fact that we should not lament the weaver for the loom):
https://redsails.org/artisanal-intelligence/
The problem is not the technology per se (criticisms such as energy consumption or limitations of the tools just mean there's room for improvement in the tech or in how we use it) but capitalism. If you want a flavour of opinions on this, click on my username and order comments by most controversial for the relevant threads.
Artisans that claim they are for marxist proletariat emancipation but fear the socialisation of their own labour will need to explain why their take is not Proudhonist.
That post really is an excellent article for truly understanding the Marxist critique of reactionary and bourgeois mindsets. Another one that people here should read along with it is Stalin's Shoemaker; it highlights the dialectical materialist journey of a worker developing revolutionary potential:
Class consciousness means understanding where one is in the cog of the machine and not being upset because one wasn’t proletariat enough. This is meant to be Marxism not vibes-based virtue signaling.
Meanwhile in a socialist country: China’s AI industry thrives with over 5,300 enterprises https://lemmygrad.ml/post/9357646
Marxism is a science. People should treat it as such and take the opportunity to study and learn, to develop their human potential beyond what our societies consider acceptable.
- It's a complete waste of resources
- The economic fallout of the bubble bursting could be unprecedented. (Yes shareholder value ≠ quality of life, but we've seen how working people get fucked over when the stock market crashes)
- The environmental fallout is rarely considered
- The cost to human knowledge and even thinking ability is huge
- The emotional relationships people form with these models are concerning
- What's the societal cost of further isolating people?
- What opportunity cost is there? How many actually useful things aren't being discovered because the big seven are too focused on LLMs?
- Nobody even wants LLMs. There's no path to profitability. GenAI is a trillion dollar meme.
- Even when it does generate useful output sometimes, LLMs are probabilistic and therefore outputs are not reproducible
- Why do you need instant feedback when you're doing absolutely anything? (Sometimes it's warranted but then talk with a person)
The cost to human knowledge and even thinking ability is huge
100%.
We are communists. We should understand the labor theory of value. Therefore, we should understand why GenAI does not create any new value: it's not a person and it does no labor. It recycles existing knowledge into a lower-average-quality slurry, which is dispersed into the body of human knowledge used to train the next model which is used to produce slop that is dispersed into the... and so on and so forth.
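You can watch that slurry effect in a toy simulation: train each "generation" only on the previous generation's output, with a bias toward typical (near-the-mode) samples the way real generators have, and diversity drains away. A sketch of the mechanism, not a claim about any particular model:

```python
import random
import statistics

random.seed(0)
data = [random.gauss(0, 1) for _ in range(500)]  # the human-made originals

for gen in range(1, 6):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    # Next generation: resample from the fitted distribution, but keep
    # only "typical" outputs near the mode, the way generators favour
    # high-probability samples. The tails never make it into training.
    data = [x for x in (random.gauss(mu, sigma) for _ in range(2000))
            if abs(x - mu) < sigma][:500]
    print(f"generation {gen}: stdev = {statistics.stdev(data):.2f}")
```

The printed spread shrinks every cycle: the rare, interesting stuff gets filtered out first, and what's left converges on an ever-narrower average.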
These are all historical problems of capitalism; we need to be able to cut through the veil instead of going around it, and attack the root cause, otherwise we are just reacting to new developments.
What I don't like is that they're selling a toy as a tool, and arguably as the One And Only Tool.
You're given a black box and told to just keep prompting it to get lucky. That's fine for toys like "give me a fresh low-quality wallpaper every morning." or "pretend you're Monkey D. Luffy and write a song from his perspective."
But it's not appropriate for high-stakes work. Professional tools have documented rules, behaviours, and limits. They can be learned and steered reliably because they're deterministic to a fault. They treat the user with respect and prioritize correctness. Emacs didn't wrap the error in breathless sycophantic language when the code didn't compile. Lotus 1-2-3 didn't decide to replace half the 7s in your spreadsheet with some random katakana because it was close enough. AutoCAD didn't add a spar in the middle of your apartment building because it was statistically probable after looking at airplane wings all day.
There doesn't need to be an alternative option to offer. I don't support genAI because it's flooded the internet with fake content that has no label to differentiate it. It's irreversible.