
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week's thread

(Semi-obligatory thanks to @dgerard for starting this)

[-] slopjockey@awful.systems 20 points 1 week ago

Musk's twitter is unleashin/g/ the worst posters that the CS world has to offer

[-] self@awful.systems 18 points 1 week ago

the raw, mediocre teenage energy of assuming you can pick up any subject in 2 weeks because you’ve never engaged with a subject more complex than playing a video game and you self-rate your skill level as far higher than it actually is (and the sad part is, the person posting this probably isn’t a teenager, they just never grew out of their own bullshit)

given how oddly specific “application auth protocol” is, bets on this person doing at best minor contributions to someone else’s OAuth library they insist on using everywhere? and when they’re asked to use a more appropriate auth implementation for the situation or to work on something deeper than the surface-level API, their knowledge immediately ends

[-] froztbyte@awful.systems 11 points 1 week ago

have implemented jwt (used the library, first in the company)

[-] self@awful.systems 12 points 1 week ago

so uh, they keep self-fellating on Twitter about how they invented their own CAD program over the objections of the haters

here it is, it’s an extremely thin wrapper around the typescript version of manifold with live reloading on changes. note that not only is manifold already a CAD library, they already have a web-based editor that reloads the model on code changes, and kache’s live reloading is just nodemon. the server part looks like it’s barely modified from a code example. the renderer is just three.js grabbed from a CDN.

it’s so weird they didn’t take the necessary 2 weeks to learn how to write the CAD parts of the CAD system they made!

[-] slopjockey@awful.systems 15 points 1 week ago
[-] blakestacey@awful.systems 10 points 1 week ago

Fun fact: The plain vanilla physics major at MIT requires three semesters of quantum mechanics. And that's not including the quantum topics included in the statistical physics course, or the experiments in the lab course that also depend upon it.

Grad school is another year or so of quantum on top of that, of course.

(MIT OpenCourseWare actually has fairly extensive coverage of all three semesters: 8.04, 8.05 and 8.06. Zwiebach was among the best lecturers in the department back in my day, too.)

[-] barsquid@lemmy.world 14 points 1 week ago

This person has certainly committed to this philosophy, even to the extent of spending less than one week of thought coming to this very conclusion.

[-] istewart@awful.systems 13 points 1 week ago

twitter gon' have nothin' left but the cranks

[-] slopjockey@awful.systems 12 points 1 week ago

Just guys like that and guys like this

[-] BlueMonday1984@awful.systems 20 points 1 week ago
[-] Soyweiser@awful.systems 17 points 1 week ago* (last edited 1 week ago)

I had heard some vague stuff about this, but had no idea it was this bad. Also, I didn't know how much of a fool RMS was: "RMS did not believe in providing raises — prior cost of living adjustments were a battle and not annual. RMS believed that if a precedent was created for increasing wages, the logical conclusion would be that employees would be paid infinity dollars and the FSF would go bankrupt." (It gets worse, btw.)

[-] bitofhope@awful.systems 17 points 1 week ago* (last edited 1 week ago)

Little of this was news to me, but damn, laid out systematically like that, it's even more damning than I expected. And the stuff that was new to me certainly didn't help.

Very serious people at HN at it again:

The only argument I find here against it is the question of whether someone's personal opinions should be a reason to be removed from a leadership position.

Yes, of course they should be! Opinions are essential to the job of a leader. If the opinions you express as a leader include things like "sexual harassment is not a real crime" or "we shouldn't give our employees raises because otherwise they'll soon demand infinite pay" or "there's no problem in adults having sex with 14 year olds and me saying that isn't going to damage the reputation of the organization I lead", you're a terrible leader and an embarrassment of a spokesman.

Edit: The link submitted by the editors is [flagged] [dead]. Of course.

[-] swlabr@awful.systems 13 points 1 week ago* (last edited 1 week ago)

Top level comment at time of posting:

“This might not look that bad, but consider the post-USSR…”

???

No need for these soviet level mental gymnastics. You can just say he needs to be removed permanently.

[-] gerikson@awful.systems 12 points 1 week ago

the lobste.rs thread is a trash fire too.

of note is that the Stallman defenders from about 3 years back (when he waded in unprompted in a mailing list meant for undergrads at MIT and was pretty damn sure that Marvin Minsky never had sex with one of Epstein's victims, and if he did, it would have been because he was sure she wasn't underage) have registered https://stallman-report.com which redirects to their lengthy apologia. Could be worth taking into account if you want to spread the original around.

[-] nightsky@awful.systems 20 points 1 week ago

Today I was looking at buying some stickers to decorate a laptop and such, so I was browsing Redbubble. Looking here and there I found some nice designs and then stumbled upon a really impressive artist portfolio there. Thousands of designs, woah, I thought, it must have been so much work to put that together!

Then it dawned on me. For a while I had completely forgotten that we live in the age of AI slop... blissful ignorance! But then I noticed the common elements in many of the designs... noticed how everything is surrounded by little dots or stars or other design trinkets. Such a typical AI slop thing, because somehow these "AI" generators can't leave any whitespace, they must fill every square millimeter with something. Of course I don't know for sure, and maybe I'm doing an actual artist an injustice with my assumption, but this sure looked like Gen-AI stuff...

Anyway, I scrapped my order for now while I reconsider how to approach this. My brain still associates sites like redbubble or etsy with "art things made by actual humans", but I guess that certainty is outdated now.

This sucks so much. I don't want to pay for AI slop based on stolen human-created art - I want to pay the actual artists. But now I can never know... How can trust be restored?

[-] BlueMonday1984@awful.systems 17 points 1 week ago

New pair of Tweets from Zitron just dropped:

I also put out a lengthy post about AI's future on MoreWrite - go and read it, its pretty cool

Boo! Hiss! Bring Saltman back out! I want unhinged conspiracy theories, damnit.

It feels like this is supposed to be the entrenchment, right? Like, the AGI narrative got these companies and products out into the world and into the public consciousness by promising revolutionary change, and now this fallback position is where we start treating the things that have changed (for the worse) as faits accomplis and stop whining. But as Ed says, I don't think the technology itself is capable of sustaining even that bar.

Like, for all that social media helped usher in surveillance capitalism and other postmodern psychoses, it did so largely by providing a valuable platform for people to connect in new ways, even if those ways are ultimately limited and come with a lot of external costs. Uber came into being because providing an app-based interface and a new coat of paint on the taxi industry hit on a legitimate market. I don't think I could have told you how to get a cab in the city I grew up in before Uber, but it's often the most convenient way to get somewhere in that particular hell of suburban sprawl unless you want to drive yourself. And of course it did so by introducing an economic model that exploits the absolute shit out of basically everyone involved.

In both cases, the thing that people didn't like was external or secondary to the thing people did like. But with LLMs, it seems like the thing people most dislike is also the main output of the system. People don't like AI art, they don't like interacting with chatbots basically anywhere, and the confabulation problems undercut their utility for anything where correspondence to the real world actually matters, leaving them somewhere between hilariously and dangerously inept at many of the functions they're still being pitched for.

[-] Soyweiser@awful.systems 17 points 1 week ago

As more and more browsers are enshittifying, this is a small reminder that Brave is not a great alternative.

[-] blakestacey@awful.systems 16 points 1 week ago

Max Tegmark has taken a break from funding neo-Nazi media to blather about Artificial General Intelligence.

As humanity gets closer to Artificial General Intelligence (AGI)

The first clause of the opening line, and we've already hit a "citation needed".

He goes from there to taking a prediction market seriously. And that Aschenbrenner guy who thinks that Minecraft speedruns are evidence that AI will revolutionize "science, technology, and the economy".

You know, ten or fifteen years ago, I would have disagreed with Tegmark about all sorts of things, but I would have granted him default respect for being a scientist.

[-] o7___o7@awful.systems 15 points 1 week ago

Me, a nuclear engineer reading about "Google restarting six nuclear power plants"

lol, lmao even

[-] swlabr@awful.systems 11 points 1 week ago

Future headline: “Google quietly shuts down six nuclear power plants”

[-] BlueMonday1984@awful.systems 15 points 1 week ago

Zitron's given commentary on PC Gamer's publicly pilloried pro-autoplag piece:

He's also just dropped a thorough teardown of the tech press for their role in enabling Silicon Valley's worst excesses. I don't have a fitting Kendrick Lamar reference for this, but I do know a good companion piece: Devs and the Culture of Tech, which goes into the systemic flaws in tech culture which enable this shit.

[-] swlabr@awful.systems 14 points 1 week ago

I bear news from the other place!

https://www.reddit.com/r/australia/comments/1g3zt5b/hsc_english_exam_using_ai_images/

Post content reproduced here:

autoplag image of some electronics on a table

hello, as a year 12 student who just did the first english exam, i was genuinely baffled seeing one of the stimulus texts u have to analyse is an AI IMAGE. my friend found the image of it online, but that’s what it looked like

for a subject which tells u to “analyse the deeper meaning”, “analyse the composer’s intent”, “appreciate aesthetic and intellectual value” having an AI image in which you physically can’t analyse anything deeper than what it suggests, it’s just extremely ironic 😭 idk, [as an artist who DOESNT use AI]* i might have a different take on this since i’m an artist, what r ur thoughts?

*NB: original post contains the text: "as an artist using AI images" but this was corrected in a later comment:

also i didn’t read over this after typing it out but, meant to say, “as an artist who DOESNT use AI”

[-] bitofhope@awful.systems 10 points 1 week ago

In a twisted way, this makes sense as an exercise for English class. Why would someone go to an autoplag image generator, type in a prompt (perhaps something like "laptop and smartphones on a table at a lakefront") and save this image? It's a question I can't easily answer myself. It's hard to imagine the intention behind wanting to synthesize this particular picture, but it's probably something we'll be asking often in the near future.

I can even understand the shrimp Jesus slop or soldiers with huge bibles stuff to an extent. I can understand what the intended emotional appeal is and at least feel something like bewilderment or amusement about the surreality of them. This one would be just banal even if it were a real photo, so why make this? The AI didn't have intent or imbue meaning in the image but surely someone did.

[-] o7___o7@awful.systems 12 points 1 week ago* (last edited 1 week ago)

v light, only weakly techtakes material, but I'm immature enough to want to share:

I just got a sales email from "Richard at Autodesk" titled "Hear from the probing experts"

Does anyone read these things before or after they're sent?

[-] gerikson@awful.systems 12 points 1 week ago

this demented take on using GenAI to create documentation for open source projects

https://lobste.rs/s/rmbos5/large_language_models_reduce_public#c_j8boat

[-] ICastFist@programming.dev 11 points 1 week ago

Speaking of twitter shit, I'm sad that it's back online in Brazil.

[-] BlueMonday1984@awful.systems 10 points 1 week ago

You know, I can't tell if this is supposed to be "I know you're saying that calling unhoused people vermin is some Nazi shit, but it's more complicated than that" or "I know calling unhoused people vermin is some Nazi shit, and I'm honestly okay with that".

Gonna guess the latter given where it's coming from and the fact that the actual "more complicated" is a salad of non sequiturs.

[-] froztbyte@awful.systems 10 points 1 week ago

eigen is squarely in the tpot crew

it's definitely not coming from a good place

this post was submitted on 13 Oct 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community
