this post was submitted on 28 Jan 2025
204 points (100.0% liked)

chat


it is fucking priceless that an innovation built on such simplicities as "don't use 32-bit weights when training on petabytes of data" and "compress your hash tables" sent the stock exchange into 'the west has fallen' mode. I don't intend to take away from that, it's so fucking funny peltier-laugh
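
For a rough sense of scale, some back-of-envelope arithmetic (illustrative numbers of my own, not from the post): cutting weight precision from 32-bit to 8-bit alone shrinks storage roughly fourfold, before you touch anything clever like compressed caches.

```python
# Illustrative arithmetic only: weight storage for a model of roughly
# 670 billion parameters at different precisions.
params = 670e9                # approximate total parameter count, not exact
fp32_bytes = params * 4       # 32-bit floats: 4 bytes per weight
fp8_bytes = params * 1        # 8-bit floats:  1 byte per weight

print(f"fp32 weights: {fp32_bytes / 1e12:.1f} TB")  # ~2.7 TB
print(f"fp8 weights:  {fp8_bytes / 1e12:.1f} TB")   # ~0.7 TB
```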

This is not the rights issue, this is not the labor issue, this is not the merits issue, this is not even the philosophical issue. This is the cognitive issue. When not exercised, parts of your brain will atrophy. You will start to outsource your thinking to the black box. You are not built different. It is the expected effect.

I am not saying this is happening on this forum, or even that there are tendencies close to this here, but I preemptively want to make sure it gets across because it fucked me up for a good bit. Through late 2023 and early 2024 I found myself leaning on both AI images for character conceptualization and AI coding for my general workflow. I do not recommend this in the slightest.

For the former, I found in retrospect that the AI image generation reified elements into the characters that I did not intend and later regretted. For the latter, it essentially kneecapped my ability to produce code for myself until I began to wean off of it. I am a college student. I was in multiple classes where I was supposed to be actively learning these things. Deferring to AI essentially nullified that while also regressing my abilities. If you don't keep yourself sharp, you will go dull.

If you don't mind that, or don't feel it is personally worth it to learn these skills beyond the very, very basics and shallows, go ahead; that's a different conversation and this one does not apply to you. I just want to warn those who did not develop their position on AI beyond "the most annoying people in the world are in charge of it and/or pushing it" (a position that, when deployed by otherwise-knowledgeable communists, is correct 95% of the time) that this is something you will have to be cognizant of. The brain responds to the unknowable cube by deferring to it. Stay vigilant.

top 50 comments
[–] freagle@lemmygrad.ml 52 points 5 months ago (2 children)

Honestly, overuse of AI in the West is likely going to compound our societal incompetence and accelerate our decline

[–] SkingradGuard@hexbear.net 19 points 5 months ago
[–] Cimbazarov@hexbear.net 11 points 5 months ago (1 children)

Yea, it gets said about every generation with a new technology going back to writing, but I really feel like this is the one that will actually sink us.

[–] freagle@lemmygrad.ml 15 points 5 months ago

Every other technology going back to writing still has a learning feedback loop. This particular technology interrupts the learning process and replaces it with what we currently call prompt engineering - a circular self-contained process that is limited by the user's existing level of learning, does not add to the user's level of learning, and explicitly prevents the user from doing the learning.

[–] Lyudmila@hexbear.net 42 points 5 months ago* (last edited 5 months ago) (4 children)

Can this AI power a smart speaker that can actually reliably manage timers and alarms, manage my lights, play music, convert units, and find my TV remote? No?

That's literally all I've ever wanted out of any of this and nobody has been able to pull it off. Every single option is awful, takes like 10 seconds to respond for no reason other than to upload my whole shit directly to the NSA, and has maybe a 1 in 3 chance of being so comically wrong that it's a nuisance.

"Alexa, set a 5 minute tea timer." "Buying 5 kilograms of Earl Grey Tea on Amazon." "NO NO NO NO"

"Hey Google, what's 1/4 cup in ml?" "Playing the Quarter Life Crisis podcast on Spotify." "NO NO NO NO"

[–] JustSo@hexbear.net 20 points 5 months ago (1 children)

"Buying 5kg of Earl Grey TNT on the darknet" "Hahaha, YES YESS."

[–] 9to5@hexbear.net 10 points 5 months ago* (last edited 5 months ago) (2 children)

Delivering 5 tons of Earl Grey in T-MINUS 5 seconds.

[–] KnilAdlez@hexbear.net 14 points 5 months ago (2 children)
[–] Lyudmila@hexbear.net 8 points 5 months ago (1 children)

I've got hardware ready to go, but trying to figure out proxmox has me like oooaaaaaaauhhh

[–] KnilAdlez@hexbear.net 8 points 5 months ago* (last edited 5 months ago) (1 children)

If it's any consolation, I have never used proxmox for it. I'm rawdogging docker on an Ubuntu server

[–] Welp_im_damned 6 points 5 months ago (1 children)

Just as god intended.

Tbf I do the same with my jellyfin server

[–] ZWQbpkzl@hexbear.net 7 points 5 months ago (1 children)

Still needs the voice recognition that responds to "computer?"

[–] KnilAdlez@hexbear.net 10 points 5 months ago (3 children)

Home Assistant has that, and with some know-how you can change it to whatever you want. My assistant is named Sorcha and has a Scottish accent.

[–] NaevaTheRat@vegantheoryclub.org 5 points 5 months ago (1 children)

I've been wondering about dicking with this. Assume I'm somewhat lazy, approximately technically competent but a slow worker, and prone to dropping projects that take more than a week.

Would I be able to make something for the kitchen which can like:

  • Set multiple labelled timers and announce milestones (e.g. 'time one hour for potatoes, notify at half an hour', 'time 10 minutes for grilling')
  • Be fed a recipe and recite the steps, navigating forwards and backwards when prompted (e.g. 'next', 'last', 'next 3', etc.)
  • Automatically convert from barbarian units
  • Email me, at the end of the day, the notes I make (i.e. accumulate them all, like a mailing list)

Or is this still a pipe dream?

[–] KnilAdlez@hexbear.net 5 points 5 months ago (7 children)

The answer is that it will take work, but not as much as you may think. And some of the especially niche things may not be possible, I'll address them one by one.

Multiple labelled timers

Yes!

Announce milestones

I don't think so, but you can just set two timers. Timers are new, so that may be a feature in the future.

Be fed a recipe

I have never tried this, but it does integrate with grocy, so maybe.

Automatically convert units

Officially, probably not, but I have done this before and it has worked just by asking the LLM assistant.

Email me notes at the end of the day.

I have never done this, and it would take some scripting, but I am willing to bet that it can be done. Someone might have a script for it in the forums.

Ultimately, Home Assistant is not an all-in-one solution. It is a unified front end to connect smart home devices and control them. Everything else requires integrations and add-ons, of which there are many. There are lots of tools for automation that don't require scripting at all, and if you're willing to code a little it becomes exponentially more powerful. I love it, it helps with my disability, and I build my own devices to connect to it. Give it a shot if you have some spare hardware. To do what you want, you will need a computer, a GPU (at least an RTX 3060), and a speakerphone of some kind.
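
If you'd rather script it than click through the UI, here is a minimal sketch (mine, not from this thread) of hitting Home Assistant's REST API from Python, using the two-timers workaround mentioned above for the "milestone" notification. The URL, token, and timer entity IDs are placeholders you would swap for your own; the timer helpers themselves have to exist in Home Assistant first.

```python
# Sketch only: start labelled timers over Home Assistant's REST API.
# Assumptions: HA is reachable at HA_URL, TOKEN is a long-lived access token
# created under your user profile, and timer.potatoes / timer.potatoes_check
# are timer helpers you have already defined.
import requests

HA_URL = "http://homeassistant.local:8123"   # placeholder address
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"       # placeholder token

HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

def start_timer(entity_id: str, duration: str) -> None:
    """Call the timer.start service with an HH:MM:SS duration."""
    resp = requests.post(
        f"{HA_URL}/api/services/timer/start",
        headers=HEADERS,
        json={"entity_id": entity_id, "duration": duration},
        timeout=10,
    )
    resp.raise_for_status()

# "time one hour for potatoes, notify at half an hour" becomes two timers:
start_timer("timer.potatoes", "01:00:00")
start_timer("timer.potatoes_check", "00:30:00")
```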

[–] DinosaurThussy@hexbear.net 6 points 5 months ago

Try MyCroft AI

[–] Des@hexbear.net 5 points 5 months ago

I just want a dictation and reminder engine that I can talk to in nearly any situation, pleeeeease, that's all.

[–] SovietBeerTruckOperator@hexbear.net 39 points 5 months ago (1 children)

I'm gonna ask my AI what I should do about this.

[–] KurtVonnegut@hexbear.net 16 points 5 months ago

"A strange game. The only winning move is not to play."

[–] robot_dog_with_gun@hexbear.net 35 points 5 months ago

i've yelled at a couple cs students about learning things for real and they've at least stopped telling me about it

it's tolerable to use it to get a lead on something you don't understand well enough to formulate search terms but you have to do extra work to verify anything it spits out

[–] stigsbandit34z@hexbear.net 34 points 5 months ago

This is the cognitive issue. When not exercised, parts of your brain will atrophy. You will start to outsource your thinking to the black box. You are not built different. It is the expected effect.

gold-communist

Tools not systems

[–] Yeat@hexbear.net 32 points 5 months ago

The people in Dune were about this one

[–] JustSo@hexbear.net 27 points 5 months ago (1 children)

I've run some underwhelming local LLMs and done a bit of playing with the commercial offerings.

I agree with this post. My experiments are on hold, though I'm curious to have a poke around DeepSeek's stuff just to get an idea of how it behaves.

I am most concerned with next-generation devices that come with this stuff built in. There's a reactionary sinophobe on YouTube who produced a video with some pretty interesting talking points: since the goal is to have these "AI assistants" observe basically everything you do with your device, and they are black boxes that rely on cloud-hosted infrastructure, this effectively negates E2E encryption. I am convinced by these arguments, and in that respect the future looks particularly bleak. Between a wrongthink censor that can read all your inputs before you've even sent them and flag you for closer surveillance and logging, and the three-letter agencies really "chilling out" about e.g. Apple's refusal to assist in decrypting iPhones, it all looks quite fucked.

There are obviously some use cases where LLMs are sort of unobjectionable, but even then, as OP points out, we often ignore the way our tools shape our minds. People using them as surrogates for human interaction etc are a particularly sad case.

Even if you accept the (flawed) premise that these machines contain a spark of consciousness, what does it say about us that we would spin up one-time single use minds to exploit for a labor task and then terminate them? I don't have a solid analysis but it smells bad to me.

Also, China's efforts effectively represent a more industrial-scale iteration of what the independent hacker and open-source communities have been doing anyway: proving that the moat doesn't really exist and that continuing to use brute force (scale) to make these tools "better" is inefficient and tunnel-visioned.

Between this and the links shared with me recently about China's space efforts, I am simply left disappointed that we remain in competition and opposition to more than half of the world when cooperation could have saved us a lot of time, energy, water, etc. It's sad and a shame.

I cherish my coding ability. I don't mind playing with an LLM to generate some boilerplate to have a look at, but the idea that people who cannot even assess the function of the generated code are putting this stuff into production is really sad. We haven't exactly solved the halting problem, have we? There's no real way for these machines to accurately assess code and determine that it does the task it is intended to do without side effects or corner cases that fail. In the general case that's undecidable, not merely hard, and we continue to ignore that fact.

The hype driving this is clear startup-bro slick-talk grifting shit. Yes, it's impressive that we can build these things, but they are being misapplied and deferred to as authorities on topics by people who consider themselves to be otherwise Very Smart People. It's... in a word... pathetic.

[–] aspensmonster@lemmygrad.ml 12 points 5 months ago (2 children)

Between this and the links shared with me recently about China’s space efforts, I am simply left disappointed that we remain in competition and opposition to more than half of the world when cooperation could have saved us a lot of time, energy, water, etc. It’s sad and a shame.

The gigawatts of wasted electricity :(

[–] JustSo@hexbear.net 6 points 5 months ago

We can only laugh or we'd never stop crying.

[–] HakFoo@lemmy.sdf.org 5 points 5 months ago

I was surprised the response wasn't "okay, China made this on 1/50 the budget, so if we do what they did but throw double our budget at it, we can make something 100 times better, and we'll be so far advanced that we'll be opening Walmarts on Ganymede next spring, we just need more Quadros, bro"

[–] sewer_rat_420@hexbear.net 27 points 5 months ago

I'm excited that another country is continuing to research AI and LLMs without burning the planet to the max degree. The technology might have better uses in the future, but commercializing it now, especially to the ridiculous degree we see in the US (why does my fridge need ChatGPT?), is absolute folly if not a war crime against all future generations who will suffer from the unnecessary emissions.

[–] Feinsteins_Ghost@hexbear.net 24 points 5 months ago (2 children)

Luckily ChatGPT and DeepSeek can't plumb worth a fuck, so I'm not too worried about it coming for my job.

Until robots develop the dexterity to crawl under a home, isolate broken plumbing, and repair and test it after, my job will be human-only.

[–] JustSo@hexbear.net 17 points 5 months ago

Honestly, plumbers stay winning. I do not recall a point in my medium-length life in which plumbers have not had more work than they can handle and have been able to live comfortably. Until DPRK exports butthole removal tech / juche magic, plumbers will continue to be the best.

[–] Acute_Engles@hexbear.net 9 points 5 months ago

LLMs could never get the drug use correct in order to work construction anyway

[–] BodyBySisyphus@hexbear.net 24 points 5 months ago (1 children)

AI coding for my general workflow. I do not recommend this in the slightest.

Yes, I use stack overflow like God intended.

[–] dannoffs@hexbear.net 22 points 5 months ago (1 children)

The only ethical use of AI was when DougDoug made it try to beat Pajama Sam.

[–] TheSpectreOfGay@hexbear.net 14 points 5 months ago

i've been feeling kinda wary of dougdoug lately bc he's been saying some pretty cringe ai-bro things sicko-wistful

that video is very funny though

[–] SpiderFarmer@hexbear.net 20 points 5 months ago

It may be a petty thing, but I hate how people rely on AI programs to make pfps and thumbnails. I'd rather get a shitty crayon drawing that you put even thirty seconds into.

[–] Alaskaball@hexbear.net 18 points 5 months ago (2 children)

this is me with google maps

[–] Inui@hexbear.net 11 points 5 months ago

Someone posted some studies on an earlier AI article that showed people's ability to navigate based on landmarks and such was way worse if they relied on GPS. So what OP said tracks as far as skills regressing.

I don't miss printing MapQuest directions out on paper before leaving on a cross-state trip, though.

[–] queermunist@lemmy.ml 7 points 5 months ago

Look I can't navigate worth shit in a car and I never will. Give me a map and a compass and I can orienteer through the wilderness, but put me in a car and I'll get lost in a low density neighborhood.

[–] Speaker@hexbear.net 15 points 5 months ago (1 children)

There is "use the machine to write code for you" (foolish, a path to ruin) and there is "use the machine like a particularly incompetent coworker who nevertheless occasionally has an acceptable idea to iterate on".

If you are already an expert, it is possible to interpret the hallucinations of the machine to avoid some pointless dead-end approaches. More importantly, you've had to phrase the problem in simple enough terms that it can't go too wrong, so you've mostly just got a notebook that spits text at you. There's enough bullshit in there that you cannot trust it or use it as is, but none of the ego attached that a coworker might have when you call their idea ridiculous.

Don't use the machine to learn anything (it is trained on almost exclusively garbage), don't use anything it spits out, don't use it to "augment your abilities" (if you could identify the augmentation, you'd already have the ability). It is a rubber duck that does not need coffee.

If your code is so completely brainless that the plagiarism machine can produce it, you're better off writing a code generator to just do it right rather than making a token generator play act as a VIM macro.
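
To make that concrete, a minimal sketch of such a deterministic generator (the model names and fields are hypothetical, added purely for illustration): one template, one loop, and the output is identical every run, which is exactly what you can't get from a token generator.

```python
# A tiny deterministic boilerplate generator. The models and fields below
# are made-up examples; in practice they would come from your own spec.
MODELS = {
    "User": {"id": "int", "name": "str", "email": "str"},
    "Invoice": {"id": "int", "user_id": "int", "total": "float"},
}

def render(name: str, fields: dict[str, str]) -> str:
    """Emit a @dataclass definition for one model."""
    lines = ["@dataclass", f"class {name}:"]
    lines += [f"    {field}: {ftype}" for field, ftype in fields.items()]
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print("from dataclasses import dataclass\n")
    for name, fields in MODELS.items():
        print(render(name, fields))
```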

[–] blame@hexbear.net 8 points 5 months ago (1 children)

don't use it to "augment your abilities" (if you could identify the augmentation, you'd already have the ability)

I actually disagree with this take. I can work fine without LLMs, I've done it for a long time, but in my job I encounter tasks that are not production-facing and don't need the rigor of a robust software development lifecycle, such as making the occasional demo or doing some legacy-system benchmarking. These tasks are usually not very difficult, but they require me to write Python code or whatever (I'm more of a C++ goblin), so I just use whatever the LLM of the day is to write up some Python functions for me, paste them into the script I build up, and it works pretty well. I could sit there and search about for the right Python syntax to filter a list, or I can let the LLM do it, because it'll probably get it right, and if it's wrong it's close enough that I can repair it.

Anyway, these things are another (decadently power-hungry) tool in the toolbag. I think it's probably a low double-digit productivity boost for certain tasks I have, so nothing as revolutionary as the claims being made about it, but I'm also not about to go write a code generator to hack together some Python I'm never going to touch again.
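
For illustration, the kind of throwaway, non-production helper being described here is usually only a few lines (the field names are made up by me, not taken from the comment):

```python
# Filter some benchmark rows and print the survivors; the sort of one-off
# script you'd never ship or maintain.
import json

def keep_fast_runs(rows: list[dict], threshold_ms: float) -> list[dict]:
    """Keep only the rows whose latency is under the threshold."""
    return [row for row in rows if row["latency_ms"] < threshold_ms]

rows = [
    {"name": "legacy_parser", "latency_ms": 41.2},
    {"name": "new_parser", "latency_ms": 12.7},
]
print(json.dumps(keep_fast_runs(rows, 20.0), indent=2))
```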

[–] Speaker@hexbear.net 5 points 5 months ago

Generating Python is a special case, first because there's so god damn much of it in the training data and second because almost any stream of tokens is valid Python. 😉

The code generator remark is particularly aimed at the Copilot school of "generate this boilerplate 500 times with small variations" sludge, rather than toy projects and demo code. I do think it's worth setting a fairly high baseline even with those (throwaway code today is production code tomorrow!) to make it easy to pick up and change, but I cannot begrudge anyone not wanting to sift through Python API docs.

[–] lil_tank@hexbear.net 13 points 5 months ago (2 children)

I naturally stopped using LLMs when I understood how to structure my code. They are useless at generating anything that relies on functions and objects you built up yourself. The exception being famous algorithms that are well documented, because you can generally generate a single function in the language you're using which usually works fine. Saves you the hassle of having to understand math
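
As an example of the kind of single, well-documented function meant here (my own illustration, not LLM output from the thread), think of something like textbook binary search:

```python
# Classic binary search over a sorted list: documented everywhere, so it is
# exactly the sort of self-contained function that tends to come out right.
def binary_search(items: list[int], target: int) -> int:
    """Return the index of target in a sorted list, or -1 if it is absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 8, 13], 8))  # prints 3
```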

[–] porous_grey_matter@lemmy.ml 6 points 5 months ago

The exception being famous algorithms that are well documented, because you can generally generate a single function in the language you're using which usually works fine.

That's okay, I guess, but those already exist in an optimised form, double checked by people who do know maths, in libraries for most languages.

[–] mechwarrior2@hexbear.net 13 points 5 months ago

anyone mentions ai to me and I wince like they farted

[–] newmou@hexbear.net 11 points 5 months ago

The only thing I use AI for is trivial, specific information that I would otherwise need to hunt down on ad-filled blogs or videos. Like, I was trying to fix something with my toilet, and it was way faster to describe it to a bot and have it tell me what was likely causing it. Or I've also used it for things like figuring out which seasons of some dumb show are the best ranked, with context on each. I feel like it assembles that super low-level information really well. But it does feel like a slippery slope sometimes; I could see it bleeding into other parts of your life unintentionally.

[–] HelluvaBottomCarter@hexbear.net 11 points 5 months ago

Too late. I re-wrote my resume, published a book on amazon, generated 5 new business ideas, and created an app.

[–] KrasMazov@lemmygrad.ml 10 points 5 months ago (1 children)

Completely agree. AI should be just another tool to ease the life of the workers.

I'm not so sure I even like the idea of AI like ChatGPT or DeepSeek or the countless others, mostly because I've avoided it till now and don't really know much about it, so I need to investigate further to actually form an opinion, though I would be lying if I said I'm not intrigued by it. The only times I've used one of these AIs have been recently, to translate a few sentences with Gemini, since Google already forced it onto my phone. And honestly, with search engines going down the sewer, maybe DeepSeek with its search function could be useful.

One thing I can't really understand, though, is generative AI. I don't want to sound like a Luddite, but I really can't see the use of it. Like, it's one thing to have a very specialized AI tool for parts of the creative process, but generating whole images and voices? Just, why? It's depressing. You're removing the human part of these creative works, and stealing in the process, just to automate it for profit or for the sake of it. I already saw some AI-generated ads here in Brasil from some big companies, including Coca-Cola, and it just makes me mad knowing they did it just to cut costs by not paying actors, artists, designers, etc. It's fucked up.

And not only that, but artists literally have the ability to draw, paint, sculpt, voice act, etc., whatever they want, in their own style and process. Why would they want their whole work generated for them, removing themselves from the process? It just sounds completely dystopian to me.

[–] queermunist@lemmy.ml 10 points 5 months ago* (last edited 5 months ago)

Butlerian Jihad! Dump the thinking machines!

[–] Alisu@hexbear.net 8 points 5 months ago

I used AI for an RPG character. Using the AI made me want to draw it myself instead, because of how completely idiotic it was trying to make the AI do what I wanted. I'll just do it myself and it will look better.

[–] MelianPretext@lemmygrad.ml 6 points 5 months ago* (last edited 5 months ago)

One thing to note about these AI boondoggles is that they represent a decentralization of information and are currently the most accessible means of promoting and accessing leftist or anti-imperialist information. At this point in time, the English (global) internet has been completely fine-tuned to serve US propaganda slop if you're looking for any political information. Google search is now wired to direct people straight to natopedia, and that leads to state gov, cia gov, freedomhouse whenever you try to search for anything anti-imperialist. Information has been completely centralized so that any Western propaganda drivel is boosted to a dominant position, and every "alternative platform" is also a NATO lackey, like how DuckDuckGo search results now also filter out all Russian sources.

This is where it is actually useful for me, because I would tolerate some AI hallucination (which can be reduced to relatively marginal levels depending on your queries) over having to shovel through some shit BBC news just to learn about the Sahel state leadership, or some natopedia article on small Soviet towns where you need to comb through every second sentence because some CIA bootlick editor vomited RFE "sources" all over it, screaming about how it was a "secret KGB torture gulag" or how "Stalin once ate all the grain there," just to find out some basic geographical or biographical details. I got DeepSeek to compile a list of Marxist-Leninist states because the natopedia article had propaganda all over it, like claiming the DPRK was "not" ML because some Western ultraleftist "Marxist" scholar claimed Juche was not Marxism. I'd prefer the risk of encountering some hallucination slop like "among the notable Marxist-Leninist-Titoist states during the Cold War period was Asgard" over being made to analyze some ultraleft, Western-hegemony-bootlick "scholar" slop for potential facts.

DeepSeek in particular is currently working rather well as a substitute for places like r/genzhou, where you used to be able to ask questions about leftist history and theory before it was banned. Its ability to scrape search results means that it works fairly well for finding reading materials without hallucinating as egregiously as ChatGPT, which makes up book titles. I had it spit out book recommendations from Losurdo, Parenti and Grover Furr when I asked about non-Western-slanted sources on the USSR.

Ideally, of course, there wouldn't be a need for AI to fill these gaps, but given the complete centralization of information and the conditions of soft censorship that the Western platform monopolies enable, I'd say that there is a use case for these LLM chat engines, provided that one exercises caution.
