
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week's thread

(Semi-obligatory thanks to @dgerard for starting this)

[-] dgerard@awful.systems 21 points 6 days ago

fig. 1: how awful.systems works

[-] BlueMonday1984@awful.systems 8 points 5 days ago
[-] Soyweiser@awful.systems 6 points 5 days ago* (last edited 5 days ago)

This has sadly caused my login problems on bsky to reappear. No idea what they're doing with their service, but I'm having regular issues with the site. Also, 'downforeveryoneorjustme' seems to have enshittified (the image shows a part of the site which is now an advertisement for some AI bullshit chatbot/image-generator everything-aislop roleplay thing). (Seems to be fixed now, but wow did bsky have weird issues for me.)

[-] dgerard@awful.systems 6 points 5 days ago

looks like the name server is getting hammered

[-] BlueMonday1984@awful.systems 13 points 5 days ago
[-] maol@awful.systems 11 points 5 days ago

This is a license for stalkers & abusers! No surprise from someone like Elon, I suppose

[-] mii@awful.systems 8 points 5 days ago

I really wonder what the meeting looked like where they decided on that change, because I’m struggling to come up with a single argument for it that doesn’t boil down to giving abusive asshats more playtime.

[-] saucerwizard@awful.systems 9 points 5 days ago* (last edited 5 days ago)

I’m really really not happy about this. There is one person I’ve been trying to keep out for the last few years and now they can come crawl all my fucking posts?? And report my account!?

Edit: apparently being protected should offer me some protection still.

[-] froztbyte@awful.systems 11 points 5 days ago* (last edited 5 days ago)

saw this via a friend earlier, forgot to link. xcancel

socmed administrator for a conf rolls with liarsynth to "expand" a cropped image, and the autoplag machine shits out a more sex-coded image of the speaker

the mindset of "just make some shit to pass muster" obviously shines through in a lot of promptfans and promptfondlers, and while that's fucked up I don't want to get too stuck on that now. one of the things I've been mulling over for a while is pondering what a world (and digital landscape) with a richer capability for enthusiastic consent could look like. and by that I mean, not just more granular (a la apple photo/phonebook acl) than this current y/n bullshit where a platform makes a landgrab for a pile of shit, but something else entirely. "yeah, on my gamer profile you can make shitposts, but on academic stuff please keep it formal" expressed and traceable

even if just as a thought experiment (because of course there are lots of funky practical problems, combined with the "humans just don't really exist that way" effort-tax overhead this may require), it might point toward some useful avenues for handling this extremely overt bullshit, and for informing/shaping impending norms

(e: apologies for semi stream of thought, it's late and i'm tired)

[-] MBM@lemmings.world 7 points 5 days ago

I don't know what's worse, that or some of the weird twitter responses it's getting

[-] froztbyte@awful.systems 7 points 5 days ago

why not both

[-] froztbyte@awful.systems 12 points 6 days ago

25085 N + Oct 15 GitHub ( 19K) Your free GitHub Copilot access has expired

tinyviolin.bmp

[-] skillissuer@discuss.tchncs.de 11 points 6 days ago

it just clicked for me, but idk if it makes sense: openai's nonprofit status could be used later (inevitably in court) to make the research clause of fair use work. they had it when training their models, and that might have been a factor in why they retained it, on top of trying to attract actual skilled people and not just hypemen and money

[-] V0ldek@awful.systems 9 points 6 days ago

There's no way this works, right? It's like a 5y.o.'s idea of a gotcha.

This would be like starting a tax-exempt charity to gather up a large amount in donations and then switching to a for-profit before spending it on any charitable work and running away with the money.

[-] skillissuer@discuss.tchncs.de 8 points 5 days ago

i'm not a lawyer and i typed this up after 4h of sleep, trying to make sense of what tf they were thinking. they're not bagging up money, they're stealing all the data they can, so it's less direct, and it'd depend on how that data (unstructured, public) gets valued. then, what a coincidence, their proprietary thing made something commercially useful, or so they were thinking. sbf went to court with less

[-] froztbyte@awful.systems 8 points 6 days ago

There’s no way this works, right?

the US legal system has this remarkable "little" failure mode where it is easily repurposed to be not an engine of justice, but instead an engine of enforcing whatever story you can convince someone of

(the extremely weird interaction(s) of "everything allowed except what is denied", case precedent, and the abovementioned interaction mode, result in some really fucking bad outcomes)

[-] gerikson@awful.systems 12 points 6 days ago

this demented take on using GenAI to create documentation for open source projects

https://lobste.rs/s/rmbos5/large_language_models_reduce_public#c_j8boat

[-] blakestacey@awful.systems 18 points 6 days ago

Good sneer from "Internet_Janitor" a few comments up the page:

LLMs inherently shit where they eat.

[-] BlueMonday1984@awful.systems 15 points 6 days ago

The top comment's also pretty good, especially the final paragraph:

I guess these companies decided that strip-mining the commons was an acceptable deal because they’d soon be generating their own facts via AGI, but that hasn’t come to pass yet. Instead they’ve pissed off many of the people they were relying on to continue feeding facts and creativity into the maws of their GPUs, as well as possibly fatally crippling the concept of fair use if future court cases go against them.

[-] gerikson@awful.systems 13 points 6 days ago

oh hey that would be my comment 😁

[-] BlueMonday1984@awful.systems 11 points 5 days ago

It was a pretty good comment, and pointed out one of the possible risks this AI bubble can unleash.

I've already touched on this topic, but it seems possible (if not likely) that copyright law will be tightened in response to the large-scale theft performed by OpenAI et al. to feed their LLMs, with both of us suspecting fair use will likely take a pounding. As you pointed out, the exploitation of fair use's research exception makes it especially vulnerable to repeal.

On a different note, I suspect FOSS licenses (Creative Commons, GPL, etcetera) will suffer a major decline in popularity thanks to the large-scale code theft this AI bubble brought - after two-ish years of the AI industry (if not tech in general) treating anything publicly available as theirs to steal (whether implicitly or explicitly), I'd expect people are gonna be a lot stingier about providing source code or contributing to FOSS.

[-] gerikson@awful.systems 12 points 5 days ago

Yeah, I'm no longer worried that LLMs will take my job (nor ofc that AGI will kill us all). Instead the lasting legacy of GenAI will be an elevated background level of crud and untruth, an erosion of trust in media in general, and less free quality stuff being available. It's a bit like draining the Aral Sea: a vibrant ecosystem permanently destroyed in the short-sighted pursuit of "development".

[-] BlueMonday1984@awful.systems 10 points 5 days ago

the lasting legacy of GenAI will be an elevated background level of crud and untruth, an erosion of trust in media in general, and less free quality stuff being available.

I personally anticipate this will be the lasting legacy of AI as a whole - everything that you mentioned was caused in the alleged pursuit of AGI/Superintelligence^tm^, and gen-AI has been more-or-less the "face" of AI throughout this whole bubble.

I've also got an inkling (which I turned into a lengthy post) that the AI bubble will destroy artificial intelligence as a concept - a lasting legacy of "crud and untruth" as you put it could easily birth a widespread view of AI as inherently incapable of distinguishing truth from lies.

this post was submitted on 13 Oct 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community
