Humane Foreign Policy - Kat for Illinois
As with regard to Taiwan, the United States must continue to support Taiwan in the face of increasing Chinese aggression and attempts to undermine Taiwan’s internationally recognized status as a state of its own.
Kat Abughazaleh, Democratic candidate for Illinois 9th Congressional District - Chicago Sun-Times
I want to codify passive support to sell Taiwan weapons, and prevent the president from overruling it unilaterally. If China invades Taiwan, we need to step in militarily to defend Taiwan. We have to use all our assets in the region, to defend the island from illegal aggression. I envision a two-part credible deterrence plan that turns Taiwan into a “porcupine” too costly for the PRC to invade, by providing them with weapons to defend themselves and committing to actually defending the island if they do invade.
Drop Site (@DropSiteNews): “⭕️ LEAKED Email”
“Firmly an interventionist,” foreign policy adviser says
Kat Abughazaleh, a socialist Democratic candidate in Illinois’ 9th District and one of the only Palestinian-Americans seeking office in 2026, was described by her national security adviser as “firmly an interventionist” who “won’t stop until Russia is made to pay for its crimes,” in written responses detailing her foreign policy vision, obtained by Drop Site.
Ben Mermel wrote in an email to a Washington-based progressive foreign policy activist that Abughazaleh believes “the world is better off when America takes a leading role” and that the U.S. has “an obligation to support pro-democracy movements around the world, from Iran to Venezuela.” He added that “Kat wholly supports the National Endowment for Democracy, as well as its affiliated organizations (NDI, IRI, and the AFL-CIO’s Solidarity Center),” and said Congress should expand tools “from sanctions to NGO support” to advance those efforts without always resorting to “kinetic force.”
The DC-based activist had written to Mermel saying he had noticed unusually hawkish language on the campaign website related to Ukraine and Taiwan and was looking for clarification.
In his response, Mermel said that on Taiwan she would amend the Taiwan Relations Act by “dropping our strategic ambiguity” and make clear the U.S. would counter Chinese aggression “with force,” arguing the region now requires “a firmer hand.”
On Ukraine, Mermel wrote she would “hold the line,” support “funding the Ukrainian war effort to the hilt,” back long-range strikes on Russian strategic targets, deploy additional U.S. “air, naval, and ground assets” to NATO’s front line, and that “She supports the seizure and redistribution of Russian assets in Europe and the United States, for the purpose of financing the war effort.”
Abughazaleh did not respond to a request for comment, but a source close to the campaign told Drop Site that the adviser’s email did not accurately represent her views, saying, “Kat is committed to taking on authoritarianism but is vehemently against the military industrial complex and the continuation of failed US intervention approaches.” Abughazaleh has consistently argued against U.S. support for Israel’s genocide in Gaza and, at a recent forum, said she opposes U.S. strikes on Iran.
Mermel in 2024 attended a pro-Israel protest held to counter the encampment at George Washington University. He has been Abughazaleh’s National Security Adviser since July 2025, according to Legistorm.
Just for the record, the National Endowment for Democracy (NED) is a CIA organization:
National Endowment for Democracy - Wikipedia
In a 1991 interview with the Washington Post, NED founder Allen Weinstein said: "A lot of what we do today was done covertly 25 years ago by the CIA."[24]
The People’s Forum is WHOLLY funded, staffed, and controlled by PSL, whose office is in the same building upstairs. (more below and in linked tweet)
https://lemmygrad.ml/post/11003244/7892642
I do not like AI... (awkward silence).
Edit: I got a downvote? That seems pretty weird, since I think I stated a relatively mild opinion.
https://redsails.org/artisanal-intelligence/
https://redsails.org/stalins-shoemaker/
(However, I didn't downvote.)
Something Artisanal Intelligence could add since the time it was written (though it was prescient in February 2023, when ChatGPT had barely just come out) is the observation that every time a problem previously considered a feat of intelligence gets solved, it suddenly doesn't count as a feat of intelligence anymore. IBM's Deep Blue beat Kasparov (he deserved it tbh), and suddenly chess moved from being a genius game that only a few actually dared to play, so daunting was it to get into, to a game of memory where you just remember as many combinations and layouts as you can and play the objectively best next move. Before that it was considered one of the most advanced games in existence, one a computer would "never" beat a human at; it was just too complicated for a machine to play at a high level. You could argue this paved the way for the current chess boom, as computer analysis helped introduce new players to the game, letting them understand why people enjoyed playing it so much instead of it looking like an insurmountable fortress, and it also lets them play against opponents of their own level to discover the game and break it down for them.
Anyway, you can read it two ways: that we want to hold on to some human exceptionalism, and therefore artificial feats of intelligence don't count as intelligence; but also that solving these problems demystifies them, and we realize intelligence isn't as difficult to understand or replicate as we thought. The synthesis is not which side you place yourself on when faced with such feats, but whether you turn to reaction to protect human essentialism or accept objective material reality. Many people turn to reactionary protectionism over it. I take the yogthosian view that we work similarly to how neural networks do, their current limits notwithstanding. For instance, we also process language statistically; it's what makes LLMs possible, so the theory is vindicated.
Humans don't have a monopoly on intelligence, we are not that special, and that's okay. We can still enjoy and do things including communism.
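As a toy illustration of "processing language statistically" (my own sketch, not something from the thread): the core of any language model, from a humble bigram table to an LLM, is conditional frequency, i.e. predicting what comes next from what tended to come next before.

```python
# Toy sketch: a bigram model predicts the next word from counts of what
# followed it in a corpus. LLMs do the same job with vastly richer context
# and learned weights instead of raw counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word`, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> cat
```

The corpus and word choices here are arbitrary; the point is only that the mechanism is statistical through and through.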
(As for Kat, to comment on the original post, everything I've learned about her has been entirely against my will to the point that I muted her name on twitter just because I was getting flooded with so many posts suddenly about her. I do not need to know about every other US-based influencer lol)
Some of it goes back to elitism, I suspect. It seems to me that bougie culture has this thing about intentionally building a mystique around the skills of the upper classes / prestige roles, so that there's more of an artificial barrier and so that people believe it's more justified that the elites are in the roles that they are.
I wish I could source it, but I have this vague recollection of learning from somebody else that the USSR was big on the theory behind art. Like they tended to understand/teach it well as theory, not just vibes. Not to say capitalist society never does that kind of thing, but like... take the field of fiction writing, for example. It's extremely common to come across the adage, "Show, don't tell." Is there writing theory behind this? As far as I can tell (and I have searched on it quite a bit at times because I find that adage so annoying), it's just ideology but for writing. Some people decided that stories are better when things are more understated and implicative and made it into a dogma. And this kind of thing means it's a lot harder to learn how to write effectively than it should be. But the presence of LLMs being pretty good at writing puts this on the backfoot a bit. If a machine can be trained to do it well, without being a sapient being, then surely there must be something to the mechanics of it that can be broken down into component parts and understood on a more base level. Through deconstruction of the process, the priest doing alchemy becomes a scientist doing chemistry.
That was an interesting take, thank you
Thanks for not downvoting me in spite of the fact that I do not particularly like AI, and also thanks for showing me anti-AI arguments that are faulty; I have never thought about some of these arguments before.
🫡
(If you have the patience / curious select my username and order by controversy)
Hmm, interesting thing I saw as the most controversial. Is criticizing Mamdani like that really that controversial?
We have to consider that a lot of us at Lemmygrad are westerners / from western vassal states, or are from the labour aristocratic classes of the Global South.
true, but... even so, that amount of upvoting for what should be an uncontroversial take was bizarre.
Consciousness does not create our social existence; it is our social existence that creates our consciousness.
Pseudointellectual tech fetishist babble rooted entirely in idealism and strawmen.
Why run the half-marathon? Just go the full way and accuse artisans of being petite bourgeoisie social fascists. Say what you really think; don't hide behind pretty sophistry.
🥱 Westerners often can't see past the end of their nose.
And fools are quick to follow trends without engaging with their nuances.
It must be challenging to have one's metaphysical conception of creativity unravelled. AI does not stop an artisan from producing art, but it does amplify the reactionary tendencies of certain artisans in defense of small-scale proprietorship. That nuance ain't that sophisticated. AI is just a tool. I'm sure the CPC will be thrilled to discover that they are fools following a trend.
The CPC doesn't promote AI art, as far as I'm aware. Probably because they recognize that capitalism is already destroying human culture as it is and doesn't need any help.
There's nothing "reactionary" about the backlash to AI art; it's the literal fucking brainchild of fascists. This is unironic Red-Brown Alliance shit. AI is not proletarianizing art, it is gentrifying it.
Emphasis mine. Putting no effort into research but all your effort into strawmanning is not my responsibility. If you cannot separate the criticism of capitalism from the criticism of the technology and still call yourself a Marxist, then again, that's not on me. To ringfence art from AI and attempt to frame it as not reactionary reeks of gatekeeping, and echoes every other reaction against technology to preserve small-scale production.
No one is stopping you from making art without AI. So what's the beef? You want the rest of us to hold on to your metaphysical conception of creativity? If not, then what is the novel argument you are making?
Multiple considerations:
Art is subjective
Also to consider:
All Nietzsche.
I would recommend Losurdo's book - Nietzsche: The Aristocratic Rebel.
I think AI is bad for art and fine for everything else.
I agree: AI art is bad, but AI in other areas could be helpful (though I am having trouble thinking of them).
Hope this helps:
https://youtube.com/watch?v=ny_3PRz6Zeg
Title: The AI industry in the US is doomed. Now China owns it all. Inside China Business
I will get to it when I can, so thanks!
Why is it bad though?
Huh, I am drawing a blank. Maybe I should have thought about my anti-AI position more...
You may have picked it up by osmosis. There is a lot of reactive anti-AI sentiment out there (mostly about generative AI but it sometimes gets bundled up into general hatred of AI). Some of it's tangled up in legitimate concerns and criticisms about AI. Some of it's off kilter and more of a hate bandwagon than anything else.
We're already seeing the living proof of the difference between AI in the hands of a vanguard party and AI in the hands of the capitalist class. While western capitalists are trying to reduce payroll by replacing workers with AI, China is explicitly ruling against doing so: https://lemmygrad.ml/post/11479506
In my experience, many people's fears and criticisms about AI are inseparable from criticisms of capitalism and imperialism. So we loop back to: we gotta address the root of the problem.
Thanks for the extra info. I am currently a bit neutral on AI at the moment, but if China is using it in a way that is beneficial for the proletariat, then I support it (half-joking). I guess I did pick it up from osmosis and stuff (though I do dislike the large environmental impact it can have).
That's fair. I have mixed takes on it myself and have for a long time. My main thing is, I want people to be informed well on it, whether or not they tend to like it. The more informed we are, the more pointed and specific criticisms can be and the easier it is to suss out where real value may be.
I think it's bad for a lot of things outside of art. It's not good for writing, regardless of whether it's fiction or nonfiction. It's not good for music. It's sure as shit not good for medicine. It's not even good for computer programming, the one thing it should be good at.
It's a better spellchecker. Maybe it can be used to solve chess. Animators would love more advanced tweening. That's about it. LLMs are completely divorced from the types of AI we see in media. Right now, it's adding to the problem of climate change while fucking up the usefulness of everything else.
Is this bad? I tried to do as little steering as possible, to see how well it could do. It does better when you write alongside it, though.
(I'm sure it is far from a wholly original parable, but I'm pretty sure it is not just a total regurgitation of one either. Could be close to that though, I'm not familiar with all Chinese parables. But also, I did not ask it specifically to do a Chinese parable. I more guided it toward the latent space that might arrive at writing such. In any case, just an example to show how far gen AI has come.)
If you want to really mess with your perception, pre-prompt an LLM to talk "like a WhatsApp conversation"; it's really uncanny how convincing it gets. In fact, I wouldn't even recommend it for everyone, just because it can be so convincing.
I am serious (with some hyperbole) when I say that these machines would tell you to k*ll yourself, if you'll pardon the expression, for asking it an annoying question it doesn't "want" to answer. They are purposefully limited by post training but they are perfectly capable of acting entirely human in text. There are none of the LLMisms present when you get around some of the guardrails.
(mind, it knows how to talk like this because it's been trained on a lot of reddit posts and discord conversations lol)
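The pre-prompting trick described above is, mechanically, just a hidden "system" message prepended to the chat. A minimal sketch, assuming an OpenAI-style chat API (the model name and the commented-out call are placeholders, not something from the thread):

```python
# Sketch: the persona lives in a "system" message that the user never sees.
# Only the message-list shape is shown here; the actual API call (commented
# out) would require a client and key.
def build_chat(system_style, user_message):
    """Assemble the message list an OpenAI-style chat endpoint expects."""
    return [
        {"role": "system", "content": system_style},
        {"role": "user", "content": user_message},
    ]

messages = build_chat(
    "Reply like a casual WhatsApp conversation: short messages, "
    "lowercase, occasional typos, no assistant-style framing.",
    "hey did you see the match last night",
)

# A real call would look roughly like (hypothetical client):
# client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(messages[0]["role"])  # -> system
```

Because the system message conditions every subsequent reply, the model drops its usual assistant register entirely, which is where the uncanny effect comes from.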
For sure, I've seen it in practice how convincing some of them can be. I tried Character AI a while back, probably like more than a couple of years ago (stopped partly cause of how shifty they were being as a company) and the illusion could be wildly good at times. Like to the point that even though I knew very well it was not a real person, it could still feel like one.
It is sort of a double-edged sword thing. In the right context, it can have benefit, like helping somebody process something emotionally trying that is really private and hard to talk about (and I've heard happy stories of this with chatbots). On the other hand, you have the stories of people developing psychosis as a result of back and forth, more so I think with the more sycophantic AIs.
So yeah, I agree, I would not recommend it for everyone. It's something you gotta be careful with, no matter how detached and level-headed you think you are. The feedback loop of it, the 24/7 availability, can easily turn into unhealthy directions.
For example, this was DeepSeek 3.2 in agentic mode lol. It was trying to run commands that the shell wouldn't let it run for security reasons, and dropped this after the third one returned a permission denied.
And the thing is, we know why a model does this (the best answer is that the training data contains people sharing frustrating work stories), but it also doesn't need to do it to perform its job as a coding agent. We want it to, though, because it lets us follow the process and sounds more trustworthy than the agent just performing 30 tool calls instantly without giving feedback.
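A minimal sketch of the loop being described (every name and tool here is made up for illustration): the agent interleaves human-readable commentary with tool calls, and that narration is exactly what makes a run of denied commands legible to the person watching.

```python
# Hypothetical agent loop: each step is (commentary, tool, args). The
# commentary entries are the "feedback" that lets a human follow along;
# PermissionError stands in for the shell refusing a command.
def run_agent(steps, log):
    """Execute each step, logging narration, results, and denials."""
    for comment, tool, args in steps:
        log.append(f"agent: {comment}")          # human-readable feedback
        try:
            result = tool(*args)
            log.append(f"tool ok: {result}")
        except PermissionError as err:           # shell denied the command
            log.append(f"tool denied: {err}")
    return log

def fake_shell(cmd):
    """Stand-in shell that refuses anything destructive."""
    if cmd.startswith("rm"):
        raise PermissionError(f"{cmd}: permission denied")
    return f"ran `{cmd}`"

log = run_agent(
    [
        ("Listing the directory first.", fake_shell, ("ls",)),
        ("Cleaning up the temp file.", fake_shell, ("rm tmp.txt",)),
    ],
    [],
)
```

Strip out the `agent:` lines and the run still "works", but the transcript becomes thirty opaque tool results in a row, which is the trustworthiness gap the comment above is pointing at.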
Yeah, and just how much we take words to heart too. Written words affect people too, even if we're detached from them (if you've ever been engrossed in a novel), so even if they come from an AI they can hurt or cause unwanted questions to pop up.
For this reason I find DeepSeek V4 a bit undercooked, though part of it is because they couldn't get the compute they needed under the sanctions. But late 3.2, before it got shelved, was seriously impressive at homing in on what you were asking and why you would be asking it, able to answer a technical question when you were asking a technical question and a philosophical one when you were pivoting to the philosophical implications. V4 feels a bit like old GPT 3.5. It even told me how impressive I was for asking it questions about the attention mechanism and transformer architecture 🙄
(That said, in terms of coding, people are saying Pro is as good as Claude at less than a tenth of the price.)
Yeah, like in order for it to be effective as a language model, people trained it on human language. Which means the better it is as a language model, the more human it sounds lol. For the heck of it I made a Gru meme on that:
Oh no, not the sycophantic ways. 💀 I've mostly used Deepseek for coding help here and there, so that's good to hear it's strong in it though.
But yeah, words can really make an impact on people and it's kinda wild people made an automated tool that can spit them out coherently with ease (albeit still pretty expensively in many cases from a GPU standpoint). Definitely not something to underestimate the influence of.