this post was submitted on 29 Apr 2026
21 points (70.6% liked)

The Deprogram


Humane Foreign Policy - Kat for Illinois

As with regard to Taiwan, the United States must continue to support Taiwan in the face of increasing Chinese aggression and attempts to undermine Taiwan’s internationally recognized status as a state of its own.

Kat Abughazaleh, Democratic candidate for Illinois 9th Congressional District - Chicago Sun-Times

I want to codify passive support to sell Taiwan weapons, and prevent the president from overruling it unilaterally. If China invades Taiwan, we need to step in militarily to defend Taiwan. We have to use all our assets in the region, to defend the island from illegal aggression. I envision a two-part credible deterrence plan that turns Taiwan into a “porcupine” too costly for the PRC to invade, by providing them with weapons to defend themselves and committing to actually defending the island if they do invade.

Drop Site (@DropSiteNews): "⭕️ LEAKED Email | XCancel

Kat Abughazaleh, a socialist Democratic candidate in Illinois’ 9th District and one of the only Palestinian-Americans seeking office in 2026, was described by her national security adviser as “firmly an interventionist” who “won’t stop until Russia is made to pay for its crimes,” in written responses detailing her foreign policy vision, obtained by Drop Site.

Ben Mermel wrote in an email to a Washington-based progressive foreign policy activist that Abughazaleh believes “the world is better off when America takes a leading role” and that the U.S. has “an obligation to support pro-democracy movements around the world, from Iran to Venezuela.” He added that “Kat wholly supports the National Endowment for Democracy, as well as its affiliated organizations (NDI, IRI, and the AFL-CIO’s Solidarity Center),” and said Congress should expand tools “from sanctions to NGO support” to advance those efforts without always resorting to “kinetic force.”

The DC-based activist had written to Mermel saying he had noticed unusually hawkish language on the campaign website related to Ukraine and Taiwan and was looking for clarification.

In his response, Mermel said that on Taiwan she would amend the Taiwan Relations Act by “dropping our strategic ambiguity” and make clear the U.S. would counter Chinese aggression “with force,” arguing the region now requires “a firmer hand.”

On Ukraine, Mermel wrote she would “hold the line,” support “funding the Ukrainian war effort to the hilt,” back long-range strikes on Russian strategic targets, deploy additional U.S. “air, naval, and ground assets” to NATO’s front line, and that “She supports the seizure and redistribution of Russian assets in Europe and the United States, for the purpose of financing the war effort.”

Abughazaleh did not respond to a request for comment, but a source close to the campaign told Drop Site that the adviser’s email did not accurately represent her views, saying, “Kat is committed to taking on authoritarianism but is vehemently against the military industrial complex and the continuation of failed US intervention approaches.” Abughazaleh has consistently argued against U.S. support for Israel’s genocide in Gaza and, at a recent forum, said she opposes U.S. strikes on Iran.

Mermel in 2024 attended a pro-Israel protest held to counter the encampment at George Washington University. He has been Abughazaleh’s National Security Adviser since July 2025, according to Legistorm.

Just for the record, the National Endowment for Democracy (NED) is a CIA organization:

National Endowment for Democracy - Wikipedia

In a 1991 interview with the Washington Post, NED founder Allen Weinstein said: "A lot of what we do today was done covertly 25 years ago by the CIA."[24]

The People’s Forum is WHOLLY funded, staffed, and controlled by PSL, whose office is in the same building upstairs. (more below and in linked tweet)

https://x.com/jccfergie/status/2049364501875572917

[–] BreadDaddyLenin@lemmygrad.ml 5 points 3 days ago (2 children)

I think AI is bad for art and fine for everything else.

[–] LeninZedong@lemmygrad.ml 7 points 3 days ago (2 children)

I agree: AI art is bad, but AI in other areas could be helpful (though I am having trouble thinking of them).

[–] darkernations@lemmygrad.ml 3 points 2 days ago* (last edited 2 days ago) (2 children)

but AI in other areas could be helpful (though I am having trouble thinking of them)

Hope this helps:

https://youtube.com/watch?v=ny_3PRz6Zeg

Title: The AI industry in the US is doomed. Now China owns it all. Inside China Business

[–] LeninZedong@lemmygrad.ml 4 points 2 days ago

I will get to it when I can, so thanks!


[–] m532@lemmygrad.ml 8 points 3 days ago (1 children)
[–] LeninZedong@lemmygrad.ml 5 points 2 days ago (1 children)

Huh, I am drawing a blank. Maybe I should have thought about my anti-AI position more...

[–] amemorablename@lemmygrad.ml 4 points 2 days ago (1 children)

You may have picked it up by osmosis. There is a lot of reactive anti-AI sentiment out there (mostly about generative AI, but it sometimes gets bundled into a general hatred of AI). Some of it's tangled up in legitimate concerns and criticisms about AI. Some of it's off-kilter, more of a hate bandwagon than anything else.

We're already seeing the living proof of the difference between AI in the hands of a vanguard party and AI in the hands of the capitalist class. While western capitalists are trying to reduce payroll by replacing workers with AI, China is explicitly ruling against doing so: https://lemmygrad.ml/post/11479506

IME, many people's fears and criticisms about AI are inseparable from criticisms of capitalism and imperialism. So we loop back to: we gotta address the root of the problem.

[–] LeninZedong@lemmygrad.ml 5 points 2 days ago (1 children)

Thanks for the extra info. I am a bit neutral on AI at the moment, but if China is using it in a way that is beneficial for the proletariat, then I support it (half-joking). I guess I did pick it up through osmosis and stuff (though I do dislike the large environmental impact it can have).

[–] amemorablename@lemmygrad.ml 5 points 2 days ago

That's fair. I have mixed takes on it myself and have for a long time. My main thing is, I want people to be informed well on it, whether or not they tend to like it. The more informed we are, the more pointed and specific criticisms can be and the easier it is to suss out where real value may be.

[–] Belly_Beanis@hexbear.net 4 points 3 days ago (1 children)

I think it's bad for a lot of things outside of art. It's not good for writing, regardless of whether it's fiction or nonfiction. It's not good for music. It's sure as shit not good for medicine. It's not even good for computer programming, the one thing it should be good at.

It's a better spellchecker. Maybe it can be used to solve chess. Animators would love more advanced tweening. That's about it. LLMs are completely divorced from the types of AI we see in media. Right now, it's adding to the problem of climate change while fucking up the usefulness of everything else.

[–] amemorablename@lemmygrad.ml 3 points 3 days ago (1 children)

It’s not good for writing, regardless of whether it’s fiction

Is this bad? I tried to do as little steering as possible, to see how well it could do. It's better if you write alongside it, though.

One day on the mountain, at noon, it was so hot that the sun was right overhead. I saw an old man pushing a cart full of mud uphill. The road was steep. When he got halfway, he got tired. He parked the cart on the hillside, and took a break for a while. I wanted to help him push it, but couldn't lift the cart. I was about to leave, but I suddenly remembered the words of Comrade Mao Zedong, "If you have a difficult task, you can learn from the model worker Wang Jinxi." I quickly found a small stick to use as a prop and went to give it to the old man. The old man took the stick, thanked me, and used it to keep the cart from slipping. He said, "It's just what we need. Without this, the cart would slip."

As I walked away, I thought about the great lesson I had learned that day: the little stick is so insignificant, but in a place like this, it plays a very important role.

My fellow students, we must not underestimate small things. There is a saying: "A single piece of straw can make the difference between failure and success." When we do a job, we can't just do the big things, we also have to pay attention to the little things.

(I'm sure it is far from a wholly original parable, but I'm pretty sure it is not just a total regurgitation of one either. Could be close to that though, I'm not familiar with all Chinese parables. But also, I did not ask it specifically to do a Chinese parable. I more guided it toward the latent space that might arrive at writing such. In any case, just an example to show how far gen AI has come.)

[–] CriticalResist8@lemmygrad.ml 5 points 2 days ago* (last edited 2 days ago) (1 children)

If you want to really mess your perception up, pre-prompt an LLM to talk "like a WhatsApp conversation"; it's really uncanny how convincing it gets. In fact, I wouldn't even recommend it for everyone, just because it can be so convincing.

I am serious (with some hyperbole) when I say that these machines would tell you to k*ll yourself, if you'll pardon the expression, for asking it an annoying question it doesn't "want" to answer. They are purposefully limited by post training but they are perfectly capable of acting entirely human in text. There are none of the LLMisms present when you get around some of the guardrails.

(mind, it knows how to talk like this because it's been trained on a lot of reddit posts and discord conversations lol)
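For anyone unfamiliar with the term: "pre-prompting" here just means putting a persona or register instruction into the system message before any user turns, as any OpenAI-compatible chat API accepts. A minimal sketch (the function name and the example instruction are mine, not from any real product; this only builds the message list, with model name and endpoint left out):

```python
# Minimal sketch: "pre-prompting" = a persona/register instruction in
# the system message, placed before the user's turns.
def build_chat(persona: str, user_turns: list[str]) -> list[dict]:
    """Assemble the messages payload for a chat-completions request."""
    messages = [{"role": "system", "content": persona}]
    for turn in user_turns:
        messages.append({"role": "user", "content": turn})
    return messages

payload = build_chat(
    "Reply like a casual WhatsApp conversation: short messages, "
    "lowercase, abbreviations, typos left in, no formal structure.",
    ["hey did you see that video i sent"],
)
# payload is then sent as {"model": ..., "messages": payload}
```

The point of the sketch is just that the "uncanny" behavior comes from one instruction up front, not from anything exotic.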

[–] amemorablename@lemmygrad.ml 6 points 2 days ago (1 children)

For sure, I've seen it in practice how convincing some of them can be. I tried Character AI a while back, probably like more than a couple of years ago (stopped partly cause of how shifty they were being as a company) and the illusion could be wildly good at times. Like to the point that even though I knew very well it was not a real person, it could still feel like one.

It is sort of a double-edged sword thing. In the right context, it can have benefit, like helping somebody process something emotionally trying that is really private and hard to talk about (and I've heard happy stories of this with chatbots). On the other hand, you have the stories of people developing psychosis as a result of back and forth, more so I think with the more sycophantic AIs.

So yeah, I agree, I would not recommend it for everyone. It's something you gotta be careful with, no matter how detached and level-headed you think you are. The feedback loop of it, the 24/7 availability, can easily turn into unhealthy directions.

[–] CriticalResist8@lemmygrad.ml 3 points 2 days ago* (last edited 2 days ago) (1 children)

For example this was DS 3.2 in agentic lol. It was trying to run commands that the shell wouldn't let it run for security reasons and dropped this after the third one returned a permission denied.

And the thing is we know why a model does this (best answer is that you have training data of people sharing frustrating work stories), but it also doesn't need to do it to perform its job as a coding agent. We want it to do it though because it lets us follow the process and sounds more trustworthy than the agent just performing 30 tool calls instantly without giving feedback.
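To illustrate the point about feedback (a toy sketch, not any real agent framework's API; all names here are made up): the loop works the same whether or not the model's commentary is surfaced, but keeping it in the transcript is what lets a human follow the process.

```python
# Toy agent loop: the commentary isn't needed for correctness -- the
# loop would work silently -- but surfacing it makes the run legible.
def run_agent(model_step, tools, verbose=True):
    transcript = []
    while True:
        action = model_step(transcript)
        if action["type"] == "final":
            return action["text"], transcript
        if verbose and action.get("commentary"):
            transcript.append(("commentary", action["commentary"]))
        result = tools[action["tool"]](**action["args"])
        transcript.append(("result", result))

# Stand-in for the model: one grumbling tool call, then a final answer.
def fake_model(transcript):
    if not transcript:
        return {"type": "call", "tool": "ls", "args": {"path": "/tmp"},
                "commentary": "permission denied again, trying /tmp instead"}
    return {"type": "final", "text": "done"}

answer, log = run_agent(fake_model, {"ls": lambda path: ["a.txt"]})
```

With `verbose=False` the same run would produce only tool results, i.e. the "30 tool calls instantly without giving feedback" case.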

It’s something you gotta be careful with, no matter how detached and level-headed you think you are. The feedback loop of it, the 24/7 availability, can easily turn into unhealthy directions.

Yeah, and just how much we take words to heart too. Written words affect people too, even if we're detached from them (if you've ever been engrossed in a novel), so even if they come from an AI they can hurt or cause unwanted questions to pop up.

For this reason I find DeepSeek V4 a bit undercooked, though part of it is because they couldn't get the compute they needed under the sanctions. But late 3.2, before it got shelved, was seriously impressive at homing in on what you were asking and why you would be asking it: able to answer a technical question when you were asking a technical question, and a philosophical one when you were pivoting to the philosophical implications. V4 feels a bit like old GPT-3.5. It even told me how impressive I was for asking it questions about the attention mechanism and transformer architecture 🙄

(That said, in terms of coding, people are saying pro is as good as Claude but at not even 1/10th of the price.)

[–] amemorablename@lemmygrad.ml 3 points 1 day ago

And the thing is we know why a model does this (best answer is that you have training data of people sharing frustrating work stories), but it also doesn’t need to do it to perform its job as a coding agent. We want it to do it though because it lets us follow the process and sounds more trustworthy than the agent just performing 30 tool calls instantly without giving feedback.

Yeah like in order for it to be effective as a language model, people trained it on human language. Which means the better it is as a language model, the more human it sounds lol. For the heck of it I made a gru meme on that:

V4 feels a bit like old GPT 3.5. It even told me how impressive I was for asking it questions about the attention mechanism and transformers architecture 🙄

Oh no, not the sycophantic ways. 💀 I've mostly used Deepseek for coding help here and there, so that's good to hear it's strong in it though.

But yeah, words can really make an impact on people and it's kinda wild people made an automated tool that can spit them out coherently with ease (albeit still pretty expensively in many cases from a GPU standpoint). Definitely not something to underestimate the influence of.