scruiser

joined 2 years ago
[–] scruiser@awful.systems 6 points 6 hours ago

I mean... Democrats making dishonest promises of actual leftist solutions would at least require them to acknowledge actual leftism exists, so I would count that as net progress compared to their current bland status-quo maintenance. But yeah, your overall point is true.

[–] scruiser@awful.systems 5 points 6 hours ago

That sounds like actual leftism, so no, they really don't have the slightest inkling. They still think mainstream Democrats are leftist (and Democrats with some traces of leftism, like Bernie or AOC, are radical extremist leftists).

[–] scruiser@awful.systems 7 points 6 hours ago* (last edited 6 hours ago)

These people need to sit through a college-level class on linguistics or something like it. This is a demonstration of why STEM majors need a general higher education.

[–] scruiser@awful.systems 6 points 6 hours ago

Yeah, if the author had any self-awareness they might consider why the transphobes and racists they have made common cause with are so anti-science, and why pro-science and college-educated people lean progressive. But that would mean admitting their bigotry is opposed to actual scientific understanding and higher education, so they will instead come up with any other rationalization.

[–] scruiser@awful.systems 13 points 6 hours ago (3 children)

Keep in mind the author isn't just (or even primarily) counting the ultra-wealthy and establishment politicians as "elites". They are also including scientists trying to educate the public on their areas of expertise (COVID, global warming, environmentalism, etc.), and sociologists/psychologists explaining problems the author wants to ignore or is outright in favor of (racism/transphobia/homophobia).

 

I am still subscribed to slatestarcodex on reddit, and this piece of garbage popped up in my feed. I didn't actually read the whole thing, but basically the author correctly realizes Trump is ruining everything in the process of going after "DEI" and "wokism", but instead of accepting the blame that rightfully falls on Scott Alexander and the author, deflects it onto the "left" elitists. (I put left in quote marks because the author apparently thinks establishment Democrats are actually leftist; I fucking wish.)

An illustrative quote (of Scott's, which the author agrees with):

We wanted to be able to hold a job without reciting DEI shibboleths or filling in multiple-choice exams about how white people cause earthquakes. Instead we got a thousand scientific studies cancelled because they used the string “trans-” in a sentence on transmembrane proteins.

I don't really follow their subsequent points; they fail to clarify what they mean. Insofar as "left elites" actually refers to centrist Democrats, I do think the establishment Democrats deserve a major piece of the blame: their status-quo neoliberalism has been rejected by the public, yet the Democratic establishment refuses to consider genuinely leftist ideas. But that isn't the point this author is going for. The author is actually upset about Democrats "virtue signaling" and "canceling" and DEI, so they don't actually have a valid point; if anything, the opposite of one.

In case my angry disjointed summary leaves you any doubt the author is a piece of shit:

it feels like Scott has been reading a lot of Richard Hanania, whom I agree with on a lot of points

For reference the ssc discussion: https://www.reddit.com/r/slatestarcodex/comments/1jyjc9z/the_edgelords_were_right_a_response_to_scott/

tl;dr: the author is trying to shift the blame for Trump fucking everything up while keeping up the exact anti-progressive rhetoric that helped propel Trump to victory.

[–] scruiser@awful.systems 7 points 19 hours ago

Yeah, I also worry the slop and spam are here to stay: they're easy enough to make, of passable quality for the garbage uses people want them for, and, if GPUs/compute drop in price, affordable enough for the spammers and account boosters and karma farmers to keep using.

[–] scruiser@awful.systems 9 points 19 hours ago

I think you are much more optimistic than me about the general public's ability to intellectually understand fascism, or to think about copyright, or to give artists their appropriate credit. To most people who know about image gen, it's a fun toy: throw in some words and rapidly get pictures. The most I hope for is that AI image generation becomes unacceptable to use in professional or serious settings and gets relegated to a status similar to clip art.

[–] scruiser@awful.systems 6 points 19 hours ago (1 children)

I don’t think they’d try that hard.

Wow, lol... 2) was my guess at an easy/lazy/fast solution, and you think they are too lazy for even that? (I think a "proper" solution would involve substantial modifications or extensions to the standard LLM architecture; I've seen academic papers with potential approaches, but none of the modelfarmers are seriously trying anything along those lines.)

[–] scruiser@awful.systems 7 points 20 hours ago (4 children)

Serious question: what are people's specific predictions for the coming VC bubble popping/crash/AI winter? (I've seen that prediction here before, and overall I agree, but I'm not sure about specifics...)

For example... I've seen speculation that giving up on the massive training runs could free up compute and cause costs to drop, which the more streamlined and pragmatic GenAI companies could use to pivot to providing their "services" at sustainable rates (and the price of GPUs would drop, to the relief of gamers everywhere). Alternatively, maybe the bubble bursting screws up the GPU producers and cloud service providers as well, and the costs of compute and GPUs don't actually drop much, if at all?

Maybe the bubble bursting makes management stop pushing stuff like vibe coding... but maybe enough programmers have gotten into the habit of using LLMs for boilerplate that it doesn't go away, and LLM tools and plugins persist to keep making code shittery.

[–] scruiser@awful.systems 2 points 20 hours ago

which I estimate is going to slide back out of affordability by the end of 2026.

You don't think the coming crash is going to drive compute costs down? I think the VC money for training runs drying up could drive costs down substantially... but maybe the crash hits other parts of the supply chain, and the cost of GPUs and compute goes back up.

He doubles down on copyright despite building businesses that profit from Free Software. And, most gratingly, he talks about the Pareto principle while ignoring that the typical musician is never able to make a career out of their art.

Yeah this shit grates so much. Copyright is so often a tool of capital to extract rent from other people's labor.

[–] scruiser@awful.systems 10 points 20 hours ago (3 children)

I have two theories on how the modelfarmers (I like that slang, it seems more fitting than "devs" or "programmers") approached this...

  1. Like you theorized, they noticed people doing lots of logic tests, including twists on standard logic tests (which the LLMs were failing hard on), so they paid a bunch of temp workers to write twists on standard logic tests. And here we are, with it able to solve a twist on the duck puzzle, but not really better in general.

  2. There has been a lot of talk of synthetically generated data sets (since they've already robbed the internet of all the text they could). Simple logic puzzles could actually be procedurally generated, including the notation diz noted. The modelfarmers have over-generalized the "bitter lesson" (or maybe they're just lazy/uninspired/looking for a simple solution they can pitch to the VCs and business majors) and think more data, a deeper network, more parameters, and more training will solve anything. So you get the buggy attempt at logic notation from synthetically generated logic notation. (Which still doesn't quite work, lol.)
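For what it's worth, here's a toy sketch of what procedurally generating logic-puzzle twists for option 2 might look like. Everything in it (the river-crossing template, the actors, the conflict scheme) is my own invention for illustration, not anything any lab has actually disclosed:

```python
import itertools
import random

# Hypothetical sketch: procedurally generate "twists" on the classic
# river-crossing puzzle by shuffling which items are present and which
# pairs conflict. A synthetic training set would be thousands of these.
ACTORS = ["a farmer", "a fox", "a goose", "a sack of grain"]

def make_river_puzzle(rng: random.Random) -> dict:
    """Generate one randomized variant of the river-crossing puzzle."""
    items = rng.sample(ACTORS[1:], k=rng.choice([2, 3]))
    # Pick conflicting pairs at random from the chosen items.
    pairs = list(itertools.combinations(items, 2))
    conflicts = rng.sample(pairs, k=min(len(pairs), rng.choice([1, 2])))
    prompt = (
        f"{ACTORS[0]} must ferry {', '.join(items)} across a river, "
        "taking at most one passenger per trip. "
        + " ".join(f"{a} cannot be left alone with {b}." for a, b in conflicts)
        + " How does the farmer get everyone across?"
    )
    return {"prompt": prompt, "items": items, "conflicts": conflicts}

rng = random.Random(42)
dataset = [make_river_puzzle(rng) for _ in range(3)]
for example in dataset:
    print(example["prompt"])
```

Churning out endless variants like this is cheap, which is exactly why I'd expect it to produce puzzles the model can pattern-match rather than any general reasoning ability.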

I don't think either of these approaches will actually work for letting LLMs solve logic puzzles in general; they will just solve individual cases (for 1) and make the hallucinations more convincing (for 2). For all their talk of reaching AGI, the approaches the modelfarmers are taking suggest a mindset of just reaching the next benchmark (to win more VC money, and maybe market share?), not of creating anything genuinely reliable, much less "AGI". (I'm actually on the far optimistic end of sneerclub in that I think something useful might be invented that outlasts the coming AI winter... but if the modelfarmers just keep scaling and throwing more data at the problem, I doubt they'll manage even that much.)

[–] scruiser@awful.systems 9 points 3 days ago

With a name like that and lesswrong to springboard its popularity, BayesCoin should be good for at least one cycle of pump-and-dump/rug-pull.

Do some actual programming work (or at least write a "white paper") on tying it into a prediction market on the blockchain and you've got rationalist catnip. They should be all over it; you could do a few cycles of pumping and dumping before the final rug pull.

 

So despite the nitpicking they did of the Guardian article, it seems blatantly clear now that Manifest 2024 was infested with racists. The post author doesn't even count Scott Alexander as "racist" (although they do at least note his HBD sympathies) and still identifies a full 8 racists. They mention a talk discussing the Holocaust as a eugenics event (and added an edit apologizing for their simplistic framing). The post author is painfully careful and apologetic in distinguishing what they personally experienced, what was "inaccurate" about the Guardian article, how they are using terminology, etc. Despite the author's caution, the comments are full of the classic SSC strategy of trying to reframe the issue (complaining the post uses the word controversial in the title, complaining about the usage of the term racist, complaining about the threat to their freeze peach and open discourse of ideas posed by banning racists, etc.).
