fig. 1: how awful.systems works
Not a sneer, but an unsurprising development: Bluesky's seeing a surge in users:
This sadly has caused my login problems on bsky to reappear. No idea what they are doing with their service, but I'm having regular issues with the site. Also, 'downforeveryoneorjustme' seems to have enshittified (the image showed a part of the site which is now an advertisement for some AI bullshit chatbot/image-generator everything-aislop roleplay thing). (Seems to be fixed now, but wow did bsky have weird issues for me.)
looks like the name server is getting hammered
In other news, Elon actually did it
This is a license for stalkers & abusers! No surprise from someone like Elon, I suppose.
I really wonder what the meeting looked like where they decided on that change, because I’m struggling to come up with a single argument for it that doesn’t boil down to giving abusive asshats more playtime.
I’m really really not happy about this. There is one person I’ve been trying to keep out for the last few years and now they can come crawl all my fucking posts?? And report my account!?
Edit: apparently being protected should offer me some protection still.
saw this via a friend earlier, forgot to link. xcancel
socmed administrator for a conf rolls with liarsynth to "expand" a cropped image, and the autoplag machine shits out a more sex-coded image of the speaker
the mindset of "just make some shit to pass muster" obviously shines through in a lot of promptfans and promptfondlers, and while that's fucked up I don't want to get too stuck on that now. one of the things I've been mulling over for a while is pondering what a world (and digital landscape) with a richer capability for enthusiastic consent could look like. and by that I mean, not just more granular (a la apple photo/phonebook acl) than this current y/n bullshit where a platform makes a landgrab for a pile of shit, but something else entirely. "yeah, on my gamer profile you can make shitposts, but on academic stuff please keep it formal" expressed and traceable
even if just as a thought experiment (because of course there are lots of funky practical problems, combined with the "humans just don't really exist that way" effort-tax overhead that this may require), it might point to some useful avenues on how to go about handling this extremely overt bullshit, and informing/shaping impending norms
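Taking the thought experiment literally for a moment, the "more granular than y/n" idea could be sketched as a tiny data structure: each profile context carries its own machine-readable consent policy that downstream tools could check before reuse. Everything here (the `Profile`/`ConsentPolicy` names, the contexts, the use tags) is a hypothetical illustration, not an actual spec:

```python
# Minimal sketch of per-context consent, assuming hypothetical
# contexts ("gamer", "academic") and use tags ("shitposts", "quoting").
from dataclasses import dataclass, field

@dataclass
class ConsentPolicy:
    context: str                          # which facet of the profile this covers
    allowed_uses: set = field(default_factory=set)

    def permits(self, use: str) -> bool:
        return use in self.allowed_uses

@dataclass
class Profile:
    owner: str
    policies: dict = field(default_factory=dict)  # context -> ConsentPolicy

    def grant(self, context: str, *uses: str) -> None:
        # Create the policy for this context if needed, then widen it.
        self.policies.setdefault(context, ConsentPolicy(context)).allowed_uses.update(uses)

    def check(self, context: str, use: str) -> bool:
        # Default-deny: no policy for a context means no consent at all.
        policy = self.policies.get(context)
        return policy is not None and policy.permits(use)

p = Profile(owner="alice")
p.grant("gamer", "shitposts", "remixing")
p.grant("academic", "quoting")

print(p.check("gamer", "shitposts"))     # True
print(p.check("academic", "shitposts"))  # False
```

The interesting design choice even at toy scale is the default-deny in `check`: absent an explicit grant, the answer is no — the exact inverse of the platform-landgrab default being complained about above.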
(e: apologies for semi stream of thought, it's late and i'm tired)
I don't know what's worse, that or some of the weird twitter responses it's getting
why not both
25085 N + Oct 15 GitHub ( 19K) Your free GitHub Copilot access has expired
tinyviolin.bmp
it just clicked for me but idk if it makes sense: openai's nonprofit status could be used later (inevitably in court) to make the research clause of fair use work. they had it when training their models, and that might have been a factor in why they retained it, on top of trying to attract actually skilled people and not just hypemen and money
There's no way this works, right? It's like a 5y.o.'s idea of a gotcha.
This would be like starting a tax-exempt charity to gather up a large amount in donations and then switching to a for-profit before spending it on any charitable work and running away with the money.
i'm not a lawyer and i've typed this up after 4h of sleep, trying to make sense of what tf they were thinking. they're not bagging up money, they're stealing all the data they can, so it's less direct and it'd depend on how that data (unstructured, public) gets valued. then, what a coincidence, their proprietary thing made something commercially useful, or so they were thinking. sbf went to court with less
There’s no way this works, right?
the US legal system has this remarkable "little" failure mode where it is easily repurposed to be not an engine of justice, but instead an engine of enforcing whatever story you can convince someone of
(the extremely weird interaction(s) of "everything allowed except what is denied", case precedent, and the abovementioned interaction mode, result in some really fucking bad outcomes)
this demented take on using GenAI to create documentation for open source projects
https://lobste.rs/s/rmbos5/large_language_models_reduce_public#c_j8boat
Good sneer from "Internet_Janitor" a few comments up the page:
LLMs inherently shit where they eat.
The top comment's also pretty good, especially the final paragraph:
I guess these companies decided that strip-mining the commons was an acceptable deal because they’d soon be generating their own facts via AGI, but that hasn’t come to pass yet. Instead they’ve pissed off many of the people they were relying on to continue feeding facts and creativity into the maws of their GPUs, as well as possibly fatally crippling the concept of fair use if future court cases go against them.
oh hey that would be my comment 😁
It was a pretty good comment, and pointed out one of the possible risks this AI bubble can unleash.
I've already touched on this topic, but it seems possible (if not likely) that copyright law will be tightened in response to the large-scale theft performed by OpenAI et al. to feed their LLMs, with both of us suspecting fair use will likely take a pounding. As you pointed out, the exploitation of fair use's research exception makes it especially vulnerable to repeal.
On a different note, I suspect open licenses (Creative Commons, GPL, etcetera) will suffer a major decline in popularity thanks to the large-scale code theft this AI bubble brought. After two-ish years of the AI industry (if not tech in general) treating anything publicly available as theirs to steal (whether implicitly or explicitly), I'd expect people are gonna be a lot stingier about providing source code or contributing to FOSS.
Yeah, I'm no longer worried that LLMs will take my job (nor ofc that AGI will kill us all). Instead, the lasting legacy of GenAI will be an elevated background level of crud and untruth, an erosion of trust in media in general, and less free quality stuff being available. It's a bit like draining the Aral Sea: a vibrant ecosystem permanently destroyed in the short-sighted pursuit of "development".
the lasting legacy of GenAI will be an elevated background level of crud and untruth, an erosion of trust in media in general, and less free quality stuff being available.
I personally anticipate this will be the lasting legacy of AI as a whole - everything that you mentioned was caused in the alleged pursuit of AGI/Superintelligence^tm^, and gen-AI has been more-or-less the "face" of AI throughout this whole bubble.
I've also got an inkling (which I turned into a lengthy post) that the AI bubble will destroy artificial intelligence as a concept - a lasting legacy of "crud and untruth" as you put it could easily birth a widespread view of AI as inherently incapable of distinguishing truth from lies.
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community