swlabr

joined 2 years ago
[–] swlabr@awful.systems 5 points 1 month ago (4 children)

I’ve been seeing some people (not here, I’ve been taking a break) saying that we shouldn’t be mean to clankers by bringing up Kant’s position on being nice to animals. Well. Fuck all that.

[–] swlabr@awful.systems 10 points 2 months ago

ah, yes, we must defend "race science", the heretical political belief that gets millions from techbros that are empowered politically right now due to the rise in fascism. So brave, it's almost like we're on the internet 15 years ago talking about Ron Paul /s

[–] swlabr@awful.systems 4 points 2 months ago

Hey, at least it’s efficiently making number 2 on the side while spitting out user-prompted number 2s.

[–] swlabr@awful.systems 4 points 2 months ago

Derpadoid Burpateria

[–] swlabr@awful.systems 9 points 2 months ago (3 children)

Wow, that highlighting really emphasises the insidious, nefarious behaviour. This is only a hop, skip, and jump away from, what was it again? Rhomboid? Rheumatoid bactothefuture?

[–] swlabr@awful.systems 6 points 2 months ago* (last edited 2 months ago)

I read a review (the one hosted on the ebert site) and it seems like this just falls into one of the patterns we’ve already seen when other people not steeped in the X risk miasma engage with it. As in, what should be a documentary about how the AI industry is a bubble and that all the AI ceos are grifters or deluded or both, is instead a “somehow I managed to fall for Yud’s whole thing and am now spreading the word” type deal. Big sigh!

[–] swlabr@awful.systems 3 points 2 months ago

Nonono if it’s US backed then it’s capitalist and free market and good don’t you see /s

[–] swlabr@awful.systems 5 points 2 months ago (1 children)

Not engaging in debate club remains winning

[–] swlabr@awful.systems 5 points 2 months ago (1 children)

I feel like I nailed my guess

[–] swlabr@awful.systems 4 points 2 months ago

Reads like bad blaseball fanfic

[–] swlabr@awful.systems 6 points 2 months ago (5 children)

Pure speculation: my guess is that an “apocaloptimist” is just someone fully bought into all of the rationalist AI delulu. Specifically:

  • AGI is possible
  • AGI will solve all our current problems
  • A future where AGI ends humanity is possible/probable

and they take the extra belief, steeped in the grand tradition of liberal optimism, that we will solve the alignment problem and everything will be ok. Again, just guessing here.

[–] swlabr@awful.systems 9 points 2 months ago

obligatory: If Books Could Kill did an ep on his big book "Sapiens": https://www.buzzsprout.com/2040953/episodes/18220972-sapiens

 

No link given because it's all over the news. If you ask for proof you're going to have to eat my ass.

A lot of people are going to say it wasn't intended as a Nazi salute, and to that, I say: it doesn't matter. Was it a dog whistle? A variation of a Nazi salute from a South African neo-nazi party? Or just the vanilla salute? Such pontification is a waste of time. Fokker is a Nazi; you didn't need to see him salute. To all the regulars here, Musk being a Nazi is just an axiom of his whole deal. I mean, it's not called technofascism for nothing.

 

Just for my personal pride, I would like to state that the father of my children was the first American druid in Diablo to clear Abattoir of Zir and ended that season as best in the USA. He was also ranked in Polytopia, and beat Felix himself at the game. I did observe these things with my own eyes. There are other witnesses who can verify this. That is all.

 

original link

“If all of this sounds like a libertarian fever dream, I hear you. But as these markets rise, legacy media will continue to slide into irrelevance.”

 

Abstracted abstract:

Frontier models are increasingly trained and deployed as autonomous agents, which significantly increases their potential for risks. One particular safety concern is that AI agents might covertly pursue misaligned goals, hiding their true capabilities and objectives – also known as scheming. We study whether models have the capability to scheme in pursuit of a goal that we provide in-context and instruct the model to strongly follow. We evaluate frontier models on a suite of six agentic evaluations where models are instructed to pursue goals and are placed in environments that incentivize scheming.

I saw this posted here a moment ago and reported it*, and it looks to have been purged. I am reposting it to allow us to sneer at it.

*

 

Didn’t see this news posted but please link previous correspondence if I missed it.

https://archive.is/XwbY0

 

Kind of sharing this because the headline is a little sensationalist and makes it sound like MS is hard right (they are, but not like this) and anti-EU.

I mean, they probably are! Especially if it means MS is barred from monopolies and vertical integration.

 

Uncritically sharing this article with naive hope. Is this just PR for a game? Probably. Indies deserve as much free press as possible though.

 

Followup to part 1, which now has a transcript!

As is tradition, I am posting this link without having listened to it. (too many podcasts)
