self

joined 2 years ago
[–] self@awful.systems 3 points 1 month ago

it’s not pseudoscience unless it’s from the “literally studying ghosts” region of crankery, otherwise it’s just sparkling… actually I don’t know what your point is with all this

[–] self@awful.systems 2 points 1 month ago

I agree, you are fucking done. good job showing up 12 days late to the thread expecting strangers to humor your weird fucking obsession with using LLMs for something existing software does better

[–] self@awful.systems 14 points 1 month ago

imagine if you read the article at all instead of posting 6 paragraphs about an impossible game you’re fantasizing about, one that LLMs do nothing to enable because they’re stochastic chatbots and don’t understand game systems (just like you!)

[–] self@awful.systems 9 points 1 month ago

you know it’s weird

I looked for established reviews of Suck Up, the perfect local LLM game that isn’t local and is barely a game, and I couldn’t find any

all of the hype for this piece of shit that came out in 2023 and made zero impact was from paid influencers and the game’s dev Gabriel spamming reddit on a regular basis

so I guess what I’m trying to say is: fuck off with this shit, we’re not buying

[–] self@awful.systems 7 points 1 month ago

Weird that you’re downvoting me already. Lol

weird that you’re complaining

The game Suck Up! is the perfect example save for the part where the developers chose to run it server-side on release

the perfect example. yeah, this is barely a game and they couldn’t even make it run locally. all of this shit is just an awful tech demo for an expensive gimmick. none of it is fun, nobody plays it. why in fuck are you even here pumping it?

[–] self@awful.systems 6 points 1 month ago

the one that nvidia’s currently pumping as AI is the frame generation one, I believe. upscaling predates the current bubble and is mostly fine — I usually don’t like it outside of very limited use on my steam deck, but that’s personal preference

[–] self@awful.systems 5 points 1 month ago

for an LLM? it’s a heavy GPU-bound workload that’ll tank performance for anything else using the GPU

[–] self@awful.systems 5 points 1 month ago

pretty much same! I’ve heard good things about some of the games published under Sony, and their umbrella as a publisher still includes excellent studios whose previous games I have very good memories of. but… I just can’t swing the price for a PS5, it really doesn’t feel worth it just for a few games, and I’m not a huge fan of the hardware design. they also seem to have fumbled PSVR2, and I was a big fan of the indie VR scene and how accessible it was on the PSVR1. on top of everything else, I feel like I’ve gotten far more mileage out of open platforms than I have from any modern console — so for me, just like you, most Sony releases are invisible unless they’re the ones that bomb

[–] self@awful.systems 7 points 1 month ago

Sony (I guess defensible, idk),

their two highest-profile failures as of now are Concord, a live service Overwatch clone that was shut down two weeks after launch, and Marathon, an upcoming (or possibly cancelled) Bungie live service Escape from Tarkov clone that doesn’t play well, isn’t anything like the original Marathon games, and infamously has already had several credible accusations of art plagiarism leveled against it. for the latter, I suspect we’ll see a second controversy surface over generative assets; the art that wasn’t plagiarized was starkly ugly and weirdly generic, and I don’t buy that it looked that way as a stylistic choice.

the fact that shit like this is a normal part of doing business points at a gaming industry that’s rotting from the head, because as unpopular as live service games are, corporations like EA proved they can be very profitable if you tweak the right dopamine receptors to hook enough whales. it’d be nice if EA and Ubisoft were irrelevant now, but unfortunately the industry is still exactly the same exploitative piece of shit they helped make it into. myself and anyone who gives a fuck about quality can keep playing indie games all we want, but these corporations don’t care — they know that a mediocre live service with gambling mechanics will make many times more profit than any indie hit, so they target mediocrity. sometimes they miss and hit rock bottom instead, but who cares? the executives responsible will decimate the studio that developed the game with layoffs or eliminate it entirely, and because capitalism is a death cult, that’ll be seen as a win.

[–] self@awful.systems 11 points 1 month ago

if only the industry could be rid of Ubisoft and EA, we could finally play our AAA live service gacha games in peace, without being exploited for money

if only we could go back to the good old days, when the most prominent people in gaming were:

  • the out and proud fascist who runs Epic
  • the out and proud fascists who ran id
  • Todd Howard
  • fucking Peter Molyneux
  • it’s ok, a developer who’s existed since the Amiga days has made a good game!
  • I regret to inform you that the above-mentioned developer has willingly sold their entire studio to EA in exchange for a sack of money and now the sequel is a live service game with gambling mechanics
  • at least we’ll always have the Wing Commander guy. I wonder what he’s up to?

[–] self@awful.systems 9 points 1 month ago

also:

Russian Spyware now with built in support for fascism. Fucking garbage

it’s fascism, except when the exact same backend is used to make the NPCs in my garbage generative games say fash shit; then there’s no harm done

what the fuck even are you

[–] self@awful.systems 11 points 1 month ago

oh wow, the one Gamer who doesn’t hate frame generation for looking like shit has joined the chat

bye bye Gamer

 

Wolfram’s post is fucking interminable and consists of about 20% semi-interesting math and 80% goofy shit like deciding that the creepy (to Wolfram) images in the AI model’s probability space must represent how aliens perceive the world. to my memory, this is about par for the course for Wolfram

the orange site decides that the output isn’t very interesting because the AI isn’t a robot:

What we see from AI is what you get when you remove the "muscle module", and directly apply the representations onto the paper. There's no considering of how to fill in a pixel; there's just a filling of the pixel directly from the latent space.

It's intriguing. Also makes me wonder if we need to add a module in between the representational output and the pixel output. Something that mimics how we actually use a brush.

this lack of muscle memory is, of course, why we have never done digital art once in the history of humanity. all claims to the contrary are paid conspirators in the pocket of Big Dick Blick

Of course, the AIs can't wake up if we use that analogy. They are not capable of anything more than this state right now.

But to me, lucid dreaming is already a step above the total unconsciousness of just dreaming, or just nothing at all. And wakefulness always follows shortly after I lucid dream.

only 10x lucid dreamers wake up after falling asleep

we can progressively increase the numerical values of the weights—eventually in some sense “blowing the mind” of the network (and going a bit “psychedelic” in the process)

I wonder if there's a more exact analog of the action of psychedelics on the brain that could be performed on generative models?

I always find it interesting how a hero dose of LSD gives similar visuals to what these image AI's do to achieve a coherent image.

[more nonsense]

I feel like the more we get AI to act like humans, and the more those engineers and others use LSD, the more convergence we are going to have with curiosity and breakthroughs about how we function.

the next time you’re in an altered state, I want you to close your eyes and just imagine how annoyed you’d be if one of these shitheads was there with you, trying to get you to “form a BCI” or whatever by typing free association words into ChatGPT

 

you know it’s a fucking banger when you try to collapse the top comment in the thread to skip all the folks litigating over the value of an ebike and more than 2/3rds of the comments in an 884-comment-long thread disappear

also featuring many takes from understanders of statistics:

I'm wary about using public roads to test these, but I think the way the data is presented is misleading. I'm not sure how it's misleading, but separating "incidents" into categories (safety, traffic, accident, etc) might be a good start.

For example, I could start coning cruise cars, and cause these numbers to skyrocket. While that's an inconvenience to other drivers, it's not a safety issue at all.

By the way, as a motorcyclist (and thus hyper annoyed at bad driving), I find Uber/Lyft/Food drivers to be both much more dangerous and inconveniencing than these self driving cars.

 

see also the github thread linked in the mastodon post, where the couple of gormless AI hypemen responsible for MDN’s AI features pick a fight with like 30 web developers

from that thread I’ve also found out that most MDN content is written by a collective that exists outside of Mozilla (probably explaining why it took them this long to fuck it up), so my hopes that somebody forks MDN are much higher

 

there’s a fun drinking game you can play where you take a shot whenever the spec devolves into flowery nonsense

§1. Purpose and Scope

The purpose of DIDComm Messaging is to provide a secure, private communication methodology built atop the decentralized design of DIDs.

It is the second half of this sentence, not the first, that makes DIDComm interesting. “Methodology” implies more than just a mechanism for individual messages, or even for a sequence of them. DIDComm Messaging defines how messages compose into the larger primitive of application-level protocols and workflows, while seamlessly retaining trust. “Built atop … DIDs” emphasizes DIDComm’s connection to the larger decentralized identity movement, with its many attendant virtues.

you shouldn’t have pregamed

 

today Mozilla published a blog post about the AI Help and AI Explain features it deployed to its famously accurate MDN web documentation reference a few days ago. here’s how it’s going according to that post:

We’re only a handful of days into the journey, but the data so far seems to indicate a sense of skepticism towards AI and LLMs in general, while those who have tried the features to find answers tend to be happy with the results.

got that? cool. now let’s check out the developer response on github soon after the AI features were deployed:

it seems like this feature was conceived, developed, and deployed without even considering that an LLM might generate convincing gibberish, even though that's precisely what they're designed to do.

oh dear

That is demonstrably wrong. There is no demo of that code showing it in action. A developer who uses this code and expects the outcome the AI said to expect would be disappointed (at best).

That was from the very first page I hit that had an accessibility note. Which means I am wary of what genuine user-harming advice this tool will offer on more complex concepts than simple stricken text.

So the "solution" is adding a disclaimer and a survey instead of removing the false information? 🙃 🙃 🙃

This response is clearly wrong in its statement that there is no closing tag, but also incorrect in its statement that all HTML must have a closing tag; while this is correct for XHTML, HTML5 allows for void elements that do not require a closing tag
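
(sidebar, in case you don’t write HTML: a quick sketch of the void element distinction that commenter is describing. the examples are standard HTML5 ones, not anything from the AI’s actual output)

```html
<!-- void elements: HTML5 defines these as having no closing tag at all -->
<br>
<hr>
<img src="example.png" alt="an example image">

<!-- ordinary elements: the closing tag is required -->
<del>stricken text</del>

<!-- an XHTML-style trailing slash is tolerated on void elements in HTML5,
     but it's optional and has no effect -->
<br />
```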

that doesn’t sound very good! but at least someone vetted the LLM’s answers, right?

MDN core reviewer/maintainer here.

Until @stevefaulkner pinged me about this (thanks, Steve), I myself wasn’t aware that this “AI Explain” thing was added. Nor, as far as I know, were any of the other core reviewers/maintainers aware it’d been added. Nor, as far as I know, did anybody get an OK for this from the MDN Steering Committee (the group of people responsible for governance of MDN) — nor even just inform the Steering Committee about it at all.

The change seems to have landed in the sources two days ago, in e342081 — without any associated issue, instead only a PR at #9188 that includes absolutely no discussion or background info of any kind.

At this point, it looks to me to be something that Mozilla decided to do on their own without giving any heads-up of any kind to any other MDN stakeholders. (I could be wrong; I've been away a bit — a lot of my time over the last month has been spent elsewhere, unfortunately, and that’s prevented me from doing the MDN work I’d otherwise normally have been doing.)

Anyway, this “AI Explain” thing is a monumentally bad idea, clearly — for obvious reasons (but also for the specific reasons that others have taken time to add comments to this issue to help make clear).

(note: the above reply was hidden in the GitHub thread by Mozilla, usually something you only do for off-topic replies)

so this thing was pushed into MDN behind the backs of Mozilla’s experts and given only 15 minutes of review (ie, none)? who could have done such a thing?

…so anyway, some kind of space alien comes in and locks the thread:

Hi there, 👋

Thank you all for taking the time to provide feedback about our AI features, AI Explain and AI Help, and to participate in this discussion, which has probably been the most active one in some time. Congratulations to be a part of it! 👏

congratulations to be a part of it indeed

 

hopefully this is alright with @dgerard@awful.systems, and I apologize for the clumsy format since we can’t pull posts directly until we’re federated (and even then lemmy doesn’t interact the best with masto posts), but absolutely everyone who hasn’t seen Scott’s emails yet (or who, like me, somehow forgot how fucking bad they were) needs to see them, including the parts where yud runs interference so the rats don’t realize what Scott is

 

there’s just so much to sneer at in this thread and I’ve got choice paralysis. fuck it, let’s go for this one

everyone thinking Prompt Engineering will go away dont understand how close Prompt Engineering is to management or executive communications. until BCI is perfect, we'll never be done trying to serialize our intent into text for others to consume, whether AI or human.

boy fuck do I hate when my boss wants to know how long a feature will take, so he jacks straight into my cerebral cortex to send me email instead of using zoom like a normal person

 

it’s a short comment thread so far, but it’s got a few posts that are just condensed orange site

The constant quest for "safety" might actually be making our future much less safe. I've seen many instances of users needing to yell at, abuse, or manipulate ChatGPT to get the desired answers. This trains users to be hateful to / frustrated with AI, and if the data is used, it teaches AI that rewards come from such patterns. Wrote an article about this -- https://hackernoon.com/ai-restrictions-reinforce-abusive-user-behavior

But you think humans (by and large) do know what "facts" are?

 

one of hn’s core demographics (windbag grifters) fights with a bunch of skeptics over whether it’s a bad thing that the medicine they’re selling is mostly cocaine and alcohol

 

linked to the orange site because there's a funny contrast in the comments between paully's fans who think they've just read the greatest thing imaginable and paully's more jaded fans who want to know why he's posting this when the industry's entering a downturn
