qbduubdp

joined 2 years ago
[–] qbduubdp@hexbear.net 7 points 20 hours ago (1 children)

Could we not be like this? Those kids didn't vote for anyone.

[–] qbduubdp@hexbear.net 4 points 2 months ago (1 children)

I named my ramshorn Tom Clancy cause all they did was produce an endless stream of shit.

[–] qbduubdp@hexbear.net 3 points 4 months ago

My last time playing, I was able to afford a T10 defender, but I'm currently lacking the funds to really fit it out + rebuy if I was to boom boom. I play with a friend, so it makes stacking missions pretty easy, but I'm just unable to enjoy the gameplay loop long enough to really make money.

[–] qbduubdp@hexbear.net 4 points 4 months ago (3 children)

Recently got a cheap HOTAS and have been getting back into Elite Dangerous over the past week or so. Joined up with Communism Interstellar and have been RPing by increasing the influence of labor unions and other in-game communist parties, even though it just amounts to pew pewing space pirates.

[–] qbduubdp@hexbear.net 1 points 4 months ago (1 children)

Following the asteroid analogy, I view it like this: if there's a 20% chance that an asteroid could hit us in 2050, does that supersede the threat of climate change today?

I'm not trying to say that AI systems won't kill us all, just that they are being used to directly harm entire populations right now, and the appeal to a future danger is being used to minimize that discussion.

Another thing to consider: If an AI system does kill us all, it will still be a human or organization that gave it the ability to do so, whether that be through training practices, or plugging it in to weapons systems. Placing the blame on the AI itself absolves any person or organization of the responsibility, which is in line with how AI is used today (i.e. the promise of algorithmic 'neutrality'). Put another way, do the bombs kill us all in a nuclear armageddon or do the people who pressed the button? Does the gun kill me, or does the person pulling the trigger?

[–] qbduubdp@hexbear.net 2 points 4 months ago (3 children)

Another user already touched on the Bayesian point, so I'm not going to go down that rabbit hole.

> I was on board with this concern since around 2016 or so, long before LLMs, and I don't have a vested interest in AI.

Ok? "AI will become Skynet" is an idea that has permeated society since before I was even born, and I'm guessing before you were, or at least it was something you were exposed to in your early years. It's frankly not an original thought you came up with in 2016, but rather something you and everyone else inherited from popular media. Saying we need to slow AI research to align it with "human values" still allows for the idea that we can control AI so it won't kill us. Moreover, it allows for the idea that only large companies can align AI to human values, and the "human values" they are currently aligning it with have nothing to do with saving humanity. Instead, those values are to reinforce the dominant classes in society, accelerate climate change by forcing scale as the only path forward (at least until deepseek dropped), and spark mass layoffs as white collar work is automated away.

We're not going to create a paper clip machine that kills us all because it simply wants to make paper clips. We're going to make a sophisticated bullshit generator whose primary role is to replace labor. Hopefully, I don't need to spell out what that means in a capitalist society currently free-falling into fascism. We're reaching a point where LLMs have slightly better error rates at scale than human workers, and that's the real danger here.

> a moratorium would advance both goals

Frankly, if people see AI as an existential threat, that should be a great boon for other anti-AI parties, no?

I'm all for a Butlerian jihad, mount up. I'm not going to join you for a Yudkowskian Jihad, though.

In my view, the danger remains that if the only concern being discussed is that AI will kill us all in some fantastical war or apocalyptic scenario, it creates a "hero" (i.e. Sam Altman or some other ghoul) who alone can fix it. The apocalypse argument is not currently pushing anyone toward any moratorium on AI development, but rather just creating a subfield of "alignment" that is more concerned with making sure LLMs don't say mean things, follow the narrative, and don't suggest people use irons to smooth out the wrinkles in their balls.

Global warming made sense to me when I was 8 too, but it's a common talking point among conservatives that it's ludicrous to suggest humans could have an impact on something as large as the planet as a whole.

This part is tangential, but it actually works as an allegory for this issue. Exxon knew in the late 70s the effects its production would have — that climate change was driven by our use of fossil fuels. Rather than act accordingly and pivot away, it protected its profits and muddied the waters by feeding those talking points to media and conservative outlets. Conservatives didn't organically decide this was ridiculous; they were told it was absurd by media empires, and they ate it up and spread it.

I get the feeling you are here in good faith, so if you want to read more about the very real, current, actually happening dangers of AI, I would point you to Atlas of AI, Resisting AI, and the work of Bender and Gebru.

[–] qbduubdp@hexbear.net 8 points 4 months ago (9 children)

Percentages and probability don't work like that. You can't just make up percentages out of nowhere to confidently proclaim the probability of a future outcome.

"AI will kill us all" discourse is silly and is promoted by the people who want more funding to align AI with their goal of creating the modern version of the steam engine to automate away white collar work. Even if it were a likely event, it's an appeal to an unknowable, far-off event, which distracts from the very real impacts AI is already having today.

> One of the most pervasive arguments against global warming is that it sounds absurd

I don't know? It seemed pretty straightforward to me when I was as young as eight.

[–] qbduubdp@hexbear.net 9 points 7 months ago (1 children)

To follow up on this, I just asked my partner, who is a therapist, and they had the same advice. They pointed out that a therapist often knows which resources in your area will be able to help. For example, my partner was able to recommend several resources and therapy groups in our area for animal companion loss. If your former therapist isn't able to help directly, they should be able to point you to resources that can.

I'm so terribly sorry for your loss. I mainly lurk around these parts but your post brought me to tears. meow-hug

[–] qbduubdp@hexbear.net 6 points 2 years ago (1 children)

Pretty sure this post is a honeypot

[–] qbduubdp@hexbear.net 1 points 2 years ago

New user, not really contributing atm, but I found my way here from exploring lemmy and hearing about all the controversies.

What I've enjoyed the most so far is seeing my preconceptions of internet leftism challenged and corrected. I checked out cth and genzedong back before reddit banned both and always thought they were simply full of antagonistic lefties who just wanted to troll libs. Honestly, it left a bad taste in my mouth and made me sad about the state of the left.

Being around here for a few days has shown me how absolutely wonderful everyone is, as well as how deep conversations can go when y'all have your own space free from constant lib attacks. It's easy to see how I misattributed the antagonism, and that it was simply defense against lib brigades. In hindsight, I wish I had just dug a little deeper back then and been partying with y'all ever since.

I've already learned so much lurking and look forward to learning more and interacting with all you beautiful people.