My go-to is basically that, since I have to strictly verify all the information AI gives me anyway, it's faster to just produce it myself. That's literally what they pay me for.
You are an AI vegan though. Why try not to sound like one?
Personally it's because the harder something is pushed to me by large corporations, the more skeptical I am to begin with.
It is your stance; you don't have to compulsively change other people's minds. Let them live their lives and you live how you want. For people who want to listen to you, you can tell them how you feel about AI (or perhaps specifically AI chatbots) in both subjective and objective terms. If you want to prepare research and talking points, I think the most effective thing is to have a couple of examples, such as the Google AI box putting out objectively wrong info with citation links leading to sites that don't back up any claim in it. Or how the outputs of comic-style image generation tend to look like knock-off Tintin and appear uninspiring and unsettling. Or how reading generated paragraphs and looking at images and videos of fluffy slop is simply a waste of time for you. Just mix that with all the rest of the shortcomings people have provided and you'll make for a good discussion. Remember, the point is not to change people's minds or proselytize but rather to explain why you hold your opinion.
If it's a decision you make out of conviction and values, it is 100% like veganism, so I would say embrace it.
Live your truth and people will follow. Or not and that's ok too
"it looks like shit from a butt and sounds like shit from a butt, and if I wanted to look at a shit from a butt, I would do that for free"
What is an “extremist view” in this context? Kill Sam Altman? Lmao
Welcome to the world of being an activist, buddy. Vegans are doing it for living beings with consciousness. Your cause is just too, imo, but just like the vegan who feels motivated and justified in bringing up their views because, to them, it's a matter of life and death, you will be belittled and mocked by those who either genuinely disagree or who do recognize the issues you describe but lack the courage or self-control to change.
Start with speaking when it's relevant. Note that this will not always win you fans. I recently spoke to my physician on this issue, who asked for consent for LLM transcription of audio session notes and automatic summarization. I am not morally opposed to such a thing for health care providers, but I had many questions: how are records transmitted, stored, and destroyed? Does the model use any data fed into it, or the resultant summaries, for seeding/reinforcement learning/refinement/updating internal embeddings/continual learning? (This point is key because the language I've seen about this shifts a lot, but basically: do they feed your data back into the model to refine it further, or do they have separate training and production models that allow one to stay "sanitary"?) Does the AI model come from the EMR provider (often Epic) or a third party, and if so, is there a BAA? Etc.
In my case my provider could answer exactly 0 (zero) of these so I refused consent and am actively monitoring to ensure they are continuing to not use it at subsequent appointments. They are a professional so they’ve remained professional but it’s created some tension. I get it; I work in healthcare myself and I’ve seen these tools demoed and have colleagues that use them. They save a fairly substantial amount of time and in some cases they even guarantee against insurance clawbacks, which is a tremendous security advantage for a healthcare provider. But you gotta know what you’re doing and even then you gotta accept that some people simply will be against it on principle, thems the breaks
i don't necessarily think sources are needed.
people don't really care: if an acquaintance asks you, you can just tell them it's not your thing. if an employer asks you, you lose either way. the deranged rants are reserved for close friends :) But if you need some evidence: look into the environmental consequences (fire up those coal mines for LLM prompts), the several studies suggesting only about 60% of all answers are factual, the MIT study that shows how the brain atrophies from using AI, and the phenomenon called "AI psychosis".
I tried AI a few times over the last few years, and sometimes I don't ignore the Gemini results from a search when I'm tired or I'm struggling to get good results.
Almost every time I've done either, helpful looking hallucinations wasted my time and made my attempt to find a solution to a technical problem less efficient. I will give specific examples, often unprompted.
I also point to a graph of my electric bill.
I also describe the logon script that a colleague (with no coding experience) asked for help with. He'd used AI to generate what he had to show me and was looking for help getting it to work. Variables declared and never used. Variables storing relevant information but different, similarly named variables used to retrieve the information.
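The actual script and its variable names weren't shared, so here's a purely hypothetical Python sketch of the two bug patterns described (the unused variable, and the similarly named look-alike that gets read instead of the one holding the data):

```python
# Hypothetical reconstruction of the described bug patterns; the stubs
# stand in for whatever calls the real logon script would have made.

def get_current_user():
    # Stand-in for a real directory/user lookup.
    return "alice"

def map_drive(letter, path):
    # Stand-in for the actual drive-mapping call.
    return f"{letter} -> {path}"

user_name = get_current_user()            # declared, then never used again
home_directory = r"\\server\homes\alice"  # the relevant value lives here...
homeDirectory = None                      # ...but a similarly named variable
                                          # is the one the script reads later
result = map_drive("H:", homeDirectory)   # so the mapping silently breaks
print(result)  # prints "H: -> None"
```

Each line type-checks and "runs", which is exactly what makes this kind of generated code so time-consuming to debug.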
"It's not my cup of tea."
You're overthinking this
Your question is too vague to give any practical advice. I guess my advice is don't be so vague? There are 100s of subjects within the umbrella term of AI (you're actually talking about tokenized data inferred by LLMs but I digress). A healthy distrust around centralization of all the things is an honest conversation between adults. Using these various LLMs to remove tedious blockers to one's work is perfectly acceptable.
Now if you're coming at this from an environmental angle, then have that conversation with your people just as honestly as the centralization conversation. If you're in a position where people hang on your advice, being diplomatic for self-preservation reasons is the worst thing you can do.
Is this a work requirement? If not, who cares.
No is a full sentence.
Oh you want to explain. For those that are really interested, there are websites explaining the main points.
You don't need artificial intelligence. We already have intelligence at home.
I just mentioned to a friend of mine why I don't use AI. My hatred of AI stems from people making it seem sentient, the companies' business models, and of course, privacy.
First off, to clear up any misconception: AI is not a sentient being, it does not know how to think critically, and it's incapable of creating thoughts outside the data it's trained on. Technically speaking, an LLM is a lossy compression model, which means it takes what is effectively petabytes of information and compresses it down to a mere 40 GB. When it decompresses, it doesn't reproduce the entire petabytes of information; it reconstructs a response from what it was trained on.
There are several issues I can think of that make an LLM do poorly at its job. Remember, LLMs are trained exclusively on the internet; as large as the internet is, it doesn't have everything. Your codebase with a skip-list implementation is probably not going to match one from the internet. Assuming you have a logic error in your skip-list implementation and you ask ChatGPT "what's the issue with my codebase," it will notice the code you provided isn't what it was trained on and will actively try to "fix" it, digging you into a deeper rabbit hole than when you began the implementation.
On the other hand, if you ask ChatGPT to derive a truth table from a given sum of minterms, it will never be correct unless the case is heavily documented (e.g., the truth table of an adder/subtractor). This is the simplest example I can give of how these LLMs cannot think critically, cannot recognize patterns, and only regurgitate the information they were trained on. They will try to produce a solution, but they will always fail.
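For contrast, deriving a truth table from a sum of minterms is a purely mechanical enumeration; a minimal sketch (the function name and layout here are my own, not from any textbook):

```python
def truth_table(num_vars, minterms):
    """Truth table for a sum-of-minterms expression: the output column
    is 1 exactly on the rows whose index appears in `minterms`."""
    rows = []
    for i in range(2 ** num_vars):
        # The bits of the row index i give the input assignment, MSB first.
        inputs = tuple((i >> (num_vars - 1 - b)) & 1 for b in range(num_vars))
        rows.append(inputs + ((1 if i in minterms else 0),))
    return rows

# F(A, B, C) = sum of minterms m(1, 2, 4, 7)
for row in truth_table(3, {1, 2, 4, 7}):
    print(row)
```

The point being: a tool that could actually reason would treat this as the trivial enumeration it is, rather than pattern-matching against examples it has seen.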
This leads me to my first reason for refusing to use LLMs: they unintentionally fabricate a lot of information and treat it as if it's true. When I started using ChatGPT to fix my codebases or to do these problems, it induced a lot of doubt in the knowledge and intelligence I've gathered over these past years in college.
The second reason I don't like LLMs is the business models of these companies. To reiterate, these tech billionaires build a bubble of delusion and fearmongering to keep their user base. Titles like "chatGPT-5 is terrifying" or "openAI has fired 70,000 employees over AI improvements": they can do this because people see the title and reinvest more money into the company, and because employees' heads are up these tech giants' asses, they will of course work with openAI. It is a fucking money-making loophole for these giants because of how many employees are so far up their employers' asses. If I end up getting a job offer from openAI and accept it, I want my family to put me into a goddamn psych ward; that's how much I frown on these unethical practices.
I often joke about this to people who don't believe it, but it's becoming more and more a valid point about this fucked-up mess: if AI companies say they've fired X number of employees due to "AI improvements," why has this not been adopted by defense companies/contractors or other professions in industry? It's a rhetorical question, but it nudges them toward a better conclusion than "X employees were fired because of AI improvements."
This really is a problem with expectations and hype though. And it will probably be a problem with cost as well.
I think that LLMs are really cool. They're way faster and more concise than traditional search engines at answering most questions nowadays. This is partly because search engines have degraded over the last 10 years, but LLMs blow them out of the water in my opinion.
And beyond that, I think you can generate some pretty cool things with it to use as a template. I'm not a programmer but I'm making a quite massive and relatively complicated application. That wouldn't be possible without an LLM. Sure I still have to check every line and clean up a ton of code, and of course I realize that this is all going to have to go to a substantial code review and cleanup by real programmers if I'm ever going to ship it, but the thing I'm making is genuinely already better (in terms of performance and functionality) than a lot of what's on the market. That has to count for something.
Despite all that, I think we're in the same kind of bubble now as we were in the early 2000s, except bigger. The oversell of AI comes from CEOs claiming (and, to the best of my judgement, appearing to actually believe) that LLMs will somehow magically transcend into AGI if they're given enough compute. I think part of that stems from the massive (and unexpected) improvements that happened from GPT-2 to GPT-3.
And lots of smart people (like Linus Torvalds, for example) point out that really, when you think about it, what is intelligence other than a glorified auto-correct? Our brains essentially function as lossy compression. So I think for some people it is incredibly alluring to believe that if we just throw more chips on the fire, a true consciousness will arise. And so we're investing all of our extra money and our pension funds into this thing.
And the irony is that I and millions of others can therefore use LLMs at a steep discount. So lots of people are quickly getting accustomed to LLMs thinking that they're always going to be free or cheap, whereas it's paid for by the bubble money and it's not super likely that it will get much more efficient in the near future.
at work mgmt always brings it up. "we need to use it more!".
I say nothing. I smile and nod. I ignore AI prompts. I ignore emails written by AI. I ignore requests coming in to integrate AI into the product.
nobody has asked me about any of the inaction for the last year so I don't plan on drawing any attention to it by outing myself.
edit: I suppose if anybody does I can just say the AI agent I used failed to alert me to the thing they wanted. 🤣
Depending on how hardcore you are about it, you can't.
Are you getting up in people's face to tell them not to use it, or are you answering why you choose not to use it?
Are you extremely strict in your adherence? Or are you more forgiving based on the application or user?
There are two general points I like to make:
- Big companies are using it to steal the work of the powerless, en masse. It is making copyright strictly the tool of the powerful to use against the powerless.
- If these companies aren't lying and will actually deliver what they say they're going to deliver in the timeline they stated, then it's going to cause mass unemployment, because even if (IF) this creates new jobs for every job it destroys, the market can't move fast enough to invent these new careers in the timeline described. So either they're lying or they're going to cause great suffering, and a massive increase in wealth inequality.
Energy usage honestly never seems to be a concern for people, so I don't even try to make that argument.
While I understand new data centers for AI are increasing power usage, this is just highlighting existing problems from decades of insufficient investment in infrastructure.
You can’t get enough power to run a new data center? Where were you when I complained we needed additional transmission lines to keep bringing more renewable energy online? Where were you when I wanted the huge infrastructure project to import huge amounts of Canadian hydro? I bet you wish you had that now.
Where were you when I complained we needed additional transmission lines to keep bringing more renewable energy online?
I've strongly argued for this in the past.
All these tech bros with AI datacenters are putting their spare couch change together to build HVDC lines across the continent, right?
Here’s a piece I wrote to explain my apprehensive stance on AI to friends and colleagues: https://blog.erlend.sh/non-consensual-technology
A discussion in good faith means treating the person you are speaking to with respect. It means not having ulterior motives. If you are having the discussion with the explicit purpose of changing their minds or, in your words, "alarming them to take action" then that is by default a bad faith discussion.
If you want to discuss with a pro-AI person in good faith, you HAVE to be open to changing your own mind. That is the whole point of a good-faith discussion. Instead, you already believe you are correct, and you want to enter these discussions with objective ammunition to defeat somebody.
How do you actually discuss in good faith? You ask for their opinions and are open to them, then you share your own in a respectful manner. You aren't trying to 'win' you are just trying to understand and in turn, help others to understand your own POV.
Chiming in here:
Most of the arguments against ai - the most common being plagiarism and the ecological impact - are not things the people making them give a flying fuck about in any other area.
Having issues with the material the model is trained on isn't an issue with ai - it's an issue with unethical training practices, copyright law, capitalism. These are all valid complaints, by the way, but they have nothing to do with the underlying technology. Merely with the way it's been developed.
For the ecological side of things, sure, ai uses a lot of power. Lots of data centers. So does the internet. Do you use that? So does the stock market. Do you use that? So do cars. Do you drive?
I've never heard anyone say "we need less data centers" until ai came along. What, all the other data centers are totally fine but the ones being used for ai are evil? If you have an issue with the drastically increased power consumption for ai you should be able to argue a stance that is inclusive of all data centers - assuming it's something you give a fuck about. Which you don't.
If a model, once trained, is being used entirely locally on someone's personal pc - do you have an issue with the ecological footprint of that? The power has been used. The model is trained.
It's absolutely valid to have an issue with the increased power consumption used to train ai models and everything else but these are all issues with HOW and not the ontological arguments against the tech that people think they are.
It doesn't make any of these criticisms invalid, but if you refuse to understand the nuance at work then you aren't arguing in good faith.
If you enslave children to build a house, then the issue isn't that you're building a house, and it doesn't mean houses are evil; the issue is that YOU'RE ENSLAVING CHILDREN.
Like any complicated topic there's nuance to it and anyone that refuses to engage with that and instead relies on dogmatic thinking isn't being intellectually honest.
I’ve never heard anyone say “we need less data centers” until ai came along. What, all the other data centers are totally fine but the ones being used for ai are evil? If you have an issue with the drastically increased power consumption for ai you should be able to argue a stance that is inclusive of all data centers - assuming it’s something you give a fuck about. Which you don’t.
AI data centers take up substantially more power than regular ones. Nobody was talking about spinning up nuclear reactors or buying out the next several years of turbine manufacturing for non-AI datacenters. Hell, Microsoft gave money to a fusion startup to build a reactor, they've already broken ground, but it's far from proven that they can actually make net power with fusion. They actually think they can supply power by 2028. This is delusion driven by an impossible goal of reaching AGI with current models.
Your whole post is missing out on the difference in scale involved. GPU power consumption isn't comparable to standard web servers at all.
To be fair if we accidentally stumble upon fusion while foolishly pursuing AGI, that'd be a great thing
I really don't think the issue is that they take up a lot more power so much as how quickly they're being built. At least in the US, power usage has been effectively flat, with data centers and other growing power needs balanced by increasing efficiency... but a lot of people want a lot of new data centers at once.
And the growth of power usage for ai data centers isn’t really all that high except that we’re structured for zero power growth. This really seems like the other side of the same issue we’ve been having with renewables: no infrastructure investment. We’ve been building effectively zero transmission lines for a couple decades. That equally means we have trouble bringing renewables online and we have trouble powering a data center that pops up. There’s every chance we already have plenty of power for the new datacenters but can’t get it there
You can think that, but you'd be wrong. A ChatGPT search uses 10x the power of a regular Google search.
Google has quietly removed their net-zero pledge from their website.
This stuff isn't being built with renewables.
For the ecological side of things, sure, ai uses a lot of power. Lots of data centers. So does the internet. Do you use that? So does the stock market. Do you use that? So do cars. Do you drive?
There are many, many differences between AI data centers and ones that don't have to run $500k GPU clusters. The latter require a lot less power, a lot less space, and a lot less cooling.
Also, you're implying here that your debate opponents are being intellectually dishonest while using the same weaselly arguments that people who argue in bad faith constantly employ.
Once you realize you can change your opinion about something after you learn about it, it's like a super power. So many people only have the goal of proving themselves right or safeguarding their ego.
It's okay to admit a mistake. It's normal to be wrong about things.
The problem is it's incredibly rare to find others who are willing to change their minds in return, so every discussion either involves you changing your mind or the other person getting agitated.
Don't bring it up unless prompted or posing philosophical questions to family and friends. I was once creating a video for a client who sent me some generated images for the video that he thought were hilarious. I told him sorry, no; I didn't over-explain, and just said that he would need to hire somebody else if he wants those things. It's not very hard. I always try to push for human artists and explain that it's not only better for the ecosystem but also safer for copyright reasons.
What do you normally say that you're worried sounds like an "Ai vegan"?
There isn't a way to use AI in good faith.
Either you are ignorant of the tech and its negative effects, or you aren't.
What about cancer research? Or are you specifically talking about LLMs and image-generating AI?
Generative AI isn't really useful except for slop.
It's kind of a cool idea to use it for finding unknown chemicals and stuff like that, but for media and most other uses it's been a travesty
Generative AI, sure, it seems hard to find anything remotely useful for it (plus the environmental impact etc. is stupidly high).
But neural networks are used everywhere in research: fast, cheap (a 2k€ graphics card can do it), and better than any other machine learning.
I'm not disagreeing here, just pointing out that not all AI is bad.
Neural Networks are Machine Learning
I think a lot of the things we used Machine Learning and LLMs for are good ideas, but we were doing that before we slapped them together and called it AI
Or people stopped calling machine learning AI, and tried to hype the NN with it. Then switched AI to language models and generative networks.
I mean, Deep Blue was AI back in the day, and so were pathfinding algorithms 🤷🏻♀️
Anyway, I'm not arguing with you.
You're right that AI is used now because marketing teams started calling it that.
All current AIs are based on stolen content.
-- and are being used by rich sociopaths to replace the very people that made that content.
That's on top of the large pile of shit.
If nothing is taken from anyone and no profit is made from a model trained on publicly accessible data - can you elaborate on how that constitutes theft?
Actually, if 100% copyrighted content is used to train a model, which is released for free and never monetized, is that theft?
People downloading stuff for personal use vs making money off of it are not the same at all. We don't tend to condone people selling bootleg DVDs, either.
Cool. So you're in support of developing a model that financially compensates all of the rights holders used for its training data then?
Yes, I am. But I don't expect them to do that.
Good!
I don't either. But they probably should. And that's a reasonable position to take.
Maybe part of the answer is to not be so strictly against it. AI is starting to be used in a variety of tools and not all your criticisms are valid for all of them. Being able to see where it is useful and maybe you even find it desirable helps explain that you’re not against the technology per se.
For example, Zoom has an AI tool that can generate meeting summaries. It's pretty accurate with discussions, although it sometimes gets confused about who said what. That AI likely used much less power and might not have been trained on copyrighted content.