Soyweiser

joined 2 years ago
[–] Soyweiser@awful.systems 1 points 4 minutes ago* (last edited 2 minutes ago)

From your prev post:

There is a “lattice” which connects all consciousnesses

The noosphere, the old cosmists strike again. This sort of stuff and the global consciousness projects (which used random number generators, iirc) etc. are def part of the training data.

[–] Soyweiser@awful.systems 6 points 22 hours ago

And the judge sort of snapped. She said very sternly that this trial was not about whether or not artificial intelligence has damaged humanity.

Someone give the judge an honorable sneerclub account.

[–] Soyweiser@awful.systems 1 points 23 hours ago

Are you saying you alt+27:q!?

[–] Soyweiser@awful.systems 2 points 23 hours ago

Almost like they want to destroy any trust people have in tech companies.

[–] Soyweiser@awful.systems 4 points 5 days ago

This doing-the-work-together thing reminds me of how some teachers at my uni used to teach. It was always more satisfying when your teachers didn't know the answers beforehand and people worked it out together than when it turned out the teacher already knew. Of course these sorts of lessons are way harder to set up.

[–] Soyweiser@awful.systems 4 points 5 days ago

top 1%

So... 1 in 100? That isn't that impressive. I'm ignoring the utter weirdness of what he is even talking about, but you'd expect a billionaire to have at least a better grasp of numbers.

[–] Soyweiser@awful.systems 8 points 5 days ago* (last edited 5 days ago) (6 children)

Basically it’s really obvious that they don’t have a meaningful way to describe exactly what they want it to do and so they’re playing whack-a-mole with undesired behaviors in order to minimize how often it embarrasses them.

The whole 'how many r's in strawberry' sort of stuff already made me suspect that, when the popular one was fixed but other attempts at asking for letter counts still gave the miscounts.

Wonder if the goblin stuff is the start of some model collapse. And if we can all make it worse by talking about goblins more. As goblins are always relevant.

[–] Soyweiser@awful.systems 4 points 5 days ago

Employees have discussed ways to tweak AI models to prioritize sponsored information in ChatGPT’s responses when users ask relevant queries

Hope people realize that this doesn't stop at ads. (Preaching to the choir here.) See Grok.

[–] Soyweiser@awful.systems 3 points 6 days ago* (last edited 6 days ago) (1 children)

That is still quite high, right? Esp considering they think 5% of nul-a is quite high. (For some reason I once had two copies of that.) (I have read nul-a and not metamorp of prime.)

[–] Soyweiser@awful.systems 9 points 6 days ago

The manosphere lingo, the header image with the leather jacket and the fake signing of a boob, the self-dealing, the pretending they pay all their devs/researchers a lot of money, the who-uses-the-most-tokens leaderboard. There is such a high amount of sick desperation in all this.

We were too hard on the previous wave, who like Ballmer were just cringe capitalist overlords.

[–] Soyweiser@awful.systems 3 points 1 week ago

I think that if someone were to be as obssessed with living forever as LW are, it would be seen as a form of mental illness and the Minds would gently try to correct it.

Yeah, I don't think they would care if it was just a few people, or a small group, but Culture people who start to claim others are deathists, and the extreme of whom have all kinds of weird violent thoughts about them, would be concerning. Doubt it would be a huge concern to the Minds however; they prob only really get active when one of them also starts wanting to create an empire or something, but it is hard to amass resources for that in the Culture, esp if no Mind is on your side.

Do wonder why we never see Culture people who worship the Minds as gods.

90
submitted 3 weeks ago* (last edited 3 weeks ago) by Soyweiser@awful.systems to c/sneerclub@awful.systems
 

As we don't have a top level post about this already (nor on reddit) I thought why not make one. Archive.is

Extremely likely the guy was a lesswronger, or at least radicalized by that sort of thinking.

But not much else seems to be known as far as I can tell. Corbin also posted about the HN reactions in the stubsack.

And remember, no fed posting.

Edit: looks like his house also got shot at. Archive (after the speculation in this thread, makes you wonder if this was a follow-up false flag, as the bottle didn't break last time).

 

Via reddits sneerclub. Thanks u/aiworldism.

I have called LW a cult incubator for a while now, and while the term has not caught on, it's nice to see more reporting on the problem that LW makes you more likely to join a cult.

https://www.aipanic.news/p/the-rationality-trap is the original link, for the people who don't like archive.is. I used the archive because I don't like Substack and want to discourage its use.

 

As found by @gerikson here, more from the anti-anti-TESCREAL crowd. How the antis are actually REPRESENTATIONALism. Ottokar expanded on their idea in a blog post.

Original link.

I have not read the bigger blog post yet btw, just assumed it would be sneerable and posted it here for everyone's amusement. Learn about your own true motives today. (This could be a troll of course; boy does he drop a lot of names and seem to think that is enough to link things.)

E: alternative title: Ideological Turing Test, a critical failure

 

Original title: 'What we talk about when we talk about risk'. The article explains medical risk and why the polygenic embryo selection people think about it the wrong way. Includes a mention of one of our Scotts (you know the one). Non-archived link: https://theinfinitesimal.substack.com/p/what-we-talk-about-when-we-talk-about

11
submitted 11 months ago* (last edited 11 months ago) by Soyweiser@awful.systems to c/sneerclub@awful.systems
 

Begrudgingly Yeast (@begrudginglyyeast.bsky.social) on bsky informed me that I should read this short story called 'Death and the Gorgon' by Greg Egan, as he has a good handle on the subjects we talk about. We have talked about Greg before on Reddit.

I was glad I did, so going to suggest that more people do it. The only complaint you can have is that it gives no real 'steelman' airtime to the subjects it is being negative about. But well, he doesn't have to; he isn't the Guardian. Anyway, not going to spoil it, best to just give it a read.

And if you are wondering, did the lesswrongers also read it? Of course: https://www.lesswrong.com/posts/hx5EkHFH5hGzngZDs/comment-on-death-and-the-gorgon (Warning, spoilers for the story)

(Note I'm not sure this pdf was intended to be public, I did find it on Google, but it might not be meant to be accessible this way.)

 

Some light sneerclub content in these dark times.

Eliezer compliments Musk on the creation of Community Notes. (A project which predates the takeover of twitter by a couple of years; see the join date: https://twitter.com/CommunityNotes )

In reaction, Musk admits he never read HPMOR and suggests a watered-down Turing test involving HPMOR.

Eliezer invents HPMOR wireheads in reaction to this.
