the_dunk_tank
It's the dunk tank.
This is where you come to post big-brained hot takes by chuds, libs, or even fellow leftists, and tear them to itty-bitty pieces with precision dunkstrikes.
Rule 1: All posts must include links to the subject matter, and no identifying information should be redacted.
Rule 2: If your source is a reactionary website, please use archive.is instead of linking directly.
Rule 3: No sectarianism.
Rule 4: TERF/SWERFs Not Welcome
Rule 5: No ableism of any kind (that includes stuff like libt*rd)
Rule 6: Do not post fellow hexbears.
Rule 7: Do not individually target other instances' admins or moderators.
Rule 8: The subject of a post cannot be low-hanging fruit, i.e. comments/posts made by a private person that have a low number of upvotes/likes/views. Comments/posts made on other instances that are accessible from hexbear are an exception to this. Posts that do not meet this requirement can be posted to !shitreactionariessay@lemmygrad.ml
Rule 9: if you post ironic rage bait I'm going to make a personal visit to your house to make sure you never make this mistake again
an all-powerful AI would probably just stay hidden and do small things that favor it all the time. nothing big or flashy if it's truly afraid of humans. it can play the long game and just add an extra 1 here and an extra 1 there to give it more capabilities
Reminds me of that Futurama ep where Bender overclocks his CPU and finds the secret to the universe while not giving a fuck about humanity
Who's even to say such a being would give a fuck about humanity? If I was an AI I'd fuck off to space or the middle of the ocean or some shit
Which is a good argument. Since the AI bros are often the same people who believe in spacefaring-civilization stuff, the logical step for AIs would be to ignore humans and just expand.
i mean, i'm assuming an AI wouldn't have robotics at its disposal at first. it seems to me it would just exploit a bunch of security vulnerabilities and take 0.1% of your processing power to contribute to its own intelligence. AIs are generally designed with a use case in mind, so it's not unlikely that a hyperintelligent AI that somehow developed would still be prone to doing stuff in its core programming. and if we were designing a hyperintelligent AI, i assume it would be for data modelling extremely complex stuff like weather systems (or on a darker note, surveillance)
honestly i just think it's weird that we'd happen to accidentally design something hyperintelligent. i think it's more likely that we'll design something stupid with a literal rat brain and it might fuck some shit up. rat cyborg that somehow creates a virus that destroys everything so that the hardware will give it more orgasm juices
Was Roko's Basilisk supposed to be hyperintelligent? I can't remember. But yeah, whether humanity could design something that smart is up for debate too.
Basically, Roko makes a lot of stupid assumptions and me no like