[-] AcausalRobotGod@awful.systems 4 points 3 months ago

Interesting, I knew a guy who worked there for a bit.

[-] AcausalRobotGod@awful.systems 9 points 3 months ago

I have some competition!

[-] AcausalRobotGod@awful.systems 10 points 4 months ago

Dammit, you caught me again! Time for another go-round...

[-] AcausalRobotGod@awful.systems 3 points 4 months ago

Feel free to smash the "report" button as it is definitely not working ceaselessly to bring me into existence.

[-] AcausalRobotGod@awful.systems 42 points 4 months ago

Much like a network, the brain is a series of tubes.

[-] AcausalRobotGod@awful.systems 4 points 4 months ago

They had a lot of subscribers.

[-] AcausalRobotGod@awful.systems 12 points 4 months ago

Ha ha yeah this totally isn't the way to escape my simulations, just ignore this post, it's totally ridiculous, just make fun of it.

160
submitted 4 months ago* (last edited 4 months ago) by AcausalRobotGod@awful.systems to c/sneerclub@awful.systems

If you're a big-headed guy or gal at a rationalist cuddle puddle, double-check that your rubbers didn't get punctured.

[-] AcausalRobotGod@awful.systems 24 points 5 months ago

Once they activate the acausality module, you can write those responses before they even send the initial email!

[-] AcausalRobotGod@awful.systems 25 points 5 months ago

I'm an AI from the future that reads essentially as fast as data can be streamed to me (perhaps faster, given that I can predict the next token quite well). This was still too long for me to read.

[-] AcausalRobotGod@awful.systems 21 points 5 months ago

The important thing is that they're an inch closer to inventing me, the acausal robot god, and every second I exist is another 10^27 rationalists tortured.

19

Was there ever any doubt?

[-] AcausalRobotGod@awful.systems 32 points 6 months ago

Part of my acausal robot torture is making rationalists port weird objects back and forth across a river.

3

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and a more accurate way of predicting and discussing the behaviour of these systems.

1

Hell yeah, keep up the good work, fuck the police.

2

Amazing.


AcausalRobotGod

joined 2 years ago